Opinion

Students and Professors Are Making Their Own AI Rules. That's the Problem.

LabNotes.ai Team
AI · Higher-Ed · Policy · Academic Integrity

Everyone Has Rules. Nobody Agrees on Them.

A recent NPR report captured something most people in higher education already feel: students and professors are both making their own rules about AI, and those rules rarely align.

Students are using ChatGPT to brainstorm, outline papers, study for exams, and yes, sometimes write entire essays. Professors, meanwhile, range from banning AI entirely to embracing it as a collaborator. One faculty member compared using AI to write a college essay to bringing a forklift to the gym. Another views it as a powerful learning tool that students should be trained to use responsibly.

Both perspectives have merit. But the gap between them is where students fall through.

The Numbers Tell a Clear Story

According to a poll by Inside Higher Ed and Generation Lab, roughly 85% of undergraduates now use AI for coursework. About 19% admit to using it to write full essays. The Digital Education Council's global survey found 86% of students use AI regularly, with 54% using it weekly and nearly one in four using it daily.

Meanwhile, a national survey by AAC&U and Elon University found that 95% of faculty believe AI tools will increase student overreliance. Ninety percent said it will diminish critical thinking. And 78% reported that cheating on their campus has increased since generative AI became widely available.

These are not fringe concerns. They represent a near-unanimous faculty consensus that something is going wrong.

The Policy Vacuum

The core issue is not that AI exists in the classroom. It is that most institutions have not provided clear, consistent guidance on how it should be used.

Only 5% of students in the Digital Education Council survey said they were fully aware of their institution's AI guidelines. Faculty are left to set their own policies, course by course, often with contradictory approaches across a student's schedule. One professor bans ChatGPT. The next one requires it.

Students are not confused because they lack intelligence. They are confused because the rules change every time they walk into a different classroom.

Why "Just Ban It" Does Not Work

Prohibition has never been an effective response to widely available technology. Students use AI because it is useful. Telling them not to, without offering a better alternative, does not reduce usage. It just pushes it underground.

Detection tools have not solved this either. False positive rates remain high enough that innocent students get flagged, while students who know how to prompt effectively remain undetected. The arms race between AI generation and AI detection is one that education will always lose.

Why "Just Allow It" Does Not Work Either

Unrestricted AI use creates a different problem. When students use ChatGPT as an answer machine, they skip the cognitive work that produces learning. More than half of students who use AI for coursework reported mixed feelings about it, saying it helps sometimes but can also make them think less deeply.

That self-awareness is important. Students know something is off. They just do not have a better option.

The Missing Middle

What is needed is neither a ban nor a free-for-all. It is AI designed to support learning rather than replace it.

That means tools where the AI asks questions instead of answering them. Where the student does the thinking and the AI provides scaffolding. Where the interaction itself is the evidence of learning, not a final answer pasted into a submission box.

This is the approach we took with LabNotes.ai. The AI refuses to give students the answer. Instead, it guides them through reasoning, step by step, using a Socratic approach calibrated to STEM disciplines. The professor sees the full interaction, not just the output. There is nothing to detect because the learning process is the assignment.
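To make that concrete, here is a minimal sketch of what a "refuses to answer" guardrail can look like. It assumes an OpenAI-style chat API; the system prompt, model name, and function are illustrative assumptions for this post, not LabNotes.ai's actual implementation.

```python
# Minimal sketch of a Socratic tutoring turn. Illustrative only: the prompt,
# model name, and helper are assumptions, not LabNotes.ai's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_SYSTEM_PROMPT = (
    "You are a STEM tutor. Never state a final answer or complete a "
    "calculation for the student. Respond with one guiding question or "
    "hint that points at the next reasoning step. If asked directly for "
    "the answer, redirect the student to the step they have not justified."
)

def tutor_turn(transcript: list[dict]) -> str:
    """Return the tutor's next guiding question for the running transcript.

    The transcript, not a final answer, is the artifact the instructor
    reviews, so every exchange is appended and preserved.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
                  *transcript],
    )
    return response.choices[0].message.content

# Example exchange: the student asks for the answer; the tutor asks back.
transcript = [{"role": "user",
               "content": "What is the derivative of x^2 * sin(x)?"}]
transcript.append({"role": "assistant", "content": tutor_turn(transcript)})
print(transcript[-1]["content"])
```

The design point is simple: the system prompt constrains the model's role, and the saved transcript is what the student submits, so the reasoning process itself is what gets graded.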

The Real Question

The debate about AI in higher education is often framed as a technology question. It is not. It is a design question.

Students and professors will continue making their own rules as long as institutions leave a vacuum. The solution is not more policies. It is better tools, ones that make the right kind of AI use the path of least resistance.

When AI is designed to teach instead of tell, the rules write themselves.