The Problem with Being Afraid of AI in Education

A Room Full of Fear
We recently attended an academic talk about AI and technology in education. The speaker made a philosophical argument that resonated with us in principle: that new technologies are innovations, not inevitabilities, and we should ask whether they actually improve things before adopting them. That's a reasonable position.
But what struck us wasn't the speaker's thesis. It was the audience.
One person argued that data centers are consuming farmland and harming the environment, so they refuse to use AI at all. Another said they believe AI is fundamentally unethical. Others nodded along. The room had the energy of a support group for people who wanted permission to reject technology entirely.
We sat there thinking: while this room debates whether AI belongs in the classroom, our co-founder's students are already using ChatGPT to complete their homework.
The Gap Between Fear and Reality
The fear is real, and it comes from a legitimate place. Educators feel like technology is being imposed on them by administrations, by vendors, by the culture at large. The narrative is always the same: "AI is here, adapt or be left behind." That framing strips educators of agency, and it's understandably infuriating.
But the response of "therefore I will not engage with it" doesn't protect anyone. It just creates a vacuum that students fill unsupervised.
Our co-founder, Rebecca Coates, is an Associate Professor of Chemistry at Carroll College in Helena, Montana. A few years ago, she started noticing something strange. Homework assignments were being completed faster than she could complete them herself. Students who scored well on homework were bombing exams. At first it was only a couple of students she suspected were using ChatGPT. Now it's a large handful.
She tried different approaches: new assignment formats, stricter proctoring, honor code conversations. But most students weren't using AI maliciously. They were using it the way they use everything else: as a shortcut. The problem isn't bad intentions. It's that the tools they have access to are designed to give answers, not build understanding.
Skepticism Is Healthy. Paralysis Is Not.
We agree with the philosophical point from the talk: we should treat new technologies as choices, not foregone conclusions. We should ask whether they improve things. We should demand that the burden of proof fall on the innovators, not on the people being asked to adopt them.
But that framework should lead to better tools, not to no tools at all.
Arguments like "AI is unethical" or "data centers are bad for the environment" point to real conversations worth having. But they are not arguments against building AI tools that serve educators. They are arguments for building them thoughtfully.
A professor who refuses to engage with AI in 2026 is not making a principled stand. They are making a choice to let their students use uncontrolled AI with no guardrails, no visibility, and no pedagogical framework. The students are not waiting for the philosophy department to reach consensus.
"AI" Is Not One Thing
One of the most frustrating parts of this conversation is how "AI" gets treated as a single, monolithic technology. The speaker cited studies from MIT showing that AI hinders learning. Those studies are real. But look at what they actually measured: students using general-purpose chatbots like ChatGPT to complete assignments. Of course that hinders learning. It's the digital equivalent of copying from the back of the textbook.
But concluding that "AI hinders learning" from those studies is like concluding that "the internet hinders research" because students copy from Wikipedia. The problem isn't the underlying technology. It's how the tool is designed and how students interact with it.
AI that gives you the answer when you ask is a fundamentally different tool than AI that asks you a question when you're stuck. AI that lets students bypass thinking is a different tool than AI that requires students to explain their reasoning before it will help them move forward. AI with no educator visibility is a different tool than AI where every conversation is transparent to the professor.
These are not subtle differences. They are entirely different pedagogical approaches that happen to share the same two-letter acronym.
We know from firsthand experience that AI can be an extraordinarily effective learning tool when it's structured correctly. When students have to earn milestones one at a time, when the AI guides rather than tells, when struggling is part of the design rather than something to be eliminated, students actually learn more deeply. The key is that the AI has to be built for learning, not for convenience.
Treating all AI as the same thing and citing blanket studies to dismiss the entire technology is not rigorous thinking. It's the kind of shortcut that, ironically, these same academics would never accept in a student's research paper.
A Third Option
The conversation we keep hearing has two sides: ban AI or surrender to it. Both positions share the same assumption: that AI is something that happens to educators rather than something educators can shape.
We rejected both.
When Rebecca saw what was happening in her classes, she didn't ban AI and she didn't accept it as inevitable. She asked a different question: what if the AI worked the way she teaches? What if it guided students through problems step by step without giving away the answer? What if she could see exactly how every student was thinking, not just what they scored? What if she controlled the AI rather than the other way around?
That's what we built with LabNotes.ai. It's a complete AI-powered homework and tutoring platform where the educator sets the rules. You decide the tutoring level per assignment. You choose what course materials the AI can reference. You turn it off entirely for exams. Every student-AI conversation is fully visible to you. The AI requires students to explain their reasoning and refuses to give direct answers.
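To make "the educator sets the rules" concrete, here is a minimal sketch of what a per-assignment AI policy could look like. This is illustrative only: the class names, fields, and values below are invented for this post, not LabNotes.ai's actual API or data model.

```python
# Hypothetical sketch of an educator-controlled, per-assignment AI policy.
# All names here are invented for illustration, not a real API.
from dataclasses import dataclass, field
from enum import Enum

class TutoringLevel(Enum):
    OFF = "off"                # e.g. exams: no AI assistance at all
    HINTS_ONLY = "hints_only"  # AI may ask guiding questions, never answer
    GUIDED = "guided"          # step-by-step help, gated on student reasoning

@dataclass
class AssignmentPolicy:
    tutoring_level: TutoringLevel
    # Which course materials the AI is allowed to reference.
    allowed_materials: list[str] = field(default_factory=list)
    # Student must explain their reasoning before the AI helps them move forward.
    require_reasoning: bool = True
    # Every student-AI conversation is visible to the professor.
    log_conversations: bool = True

# The educator, not the vendor, decides how the AI behaves per assignment.
problem_set_3 = AssignmentPolicy(
    tutoring_level=TutoringLevel.GUIDED,
    allowed_materials=["lecture_7_notes.pdf", "chapter_4_reading.pdf"],
)
midterm = AssignmentPolicy(tutoring_level=TutoringLevel.OFF)
```

The point of the sketch is the shape of the control surface: the defaults favor transparency and reasoning, and turning the AI off is just another setting the educator owns.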
This isn't AI imposed on education. It's AI shaped by an educator for educators.
The Real Risk
The risk in education right now isn't that we'll adopt AI too quickly. It's that the educators who care most about teaching will opt out of the conversation entirely, ceding the design of educational AI to people who have never taught a class.
The skeptics in that room had good instincts. They sensed that something was wrong with how AI is being sold to education. They were right about that. But the answer isn't to retreat. It's to build the thing you wish existed.
Every innovation is a choice. We chose to build one that puts educators in control. If you're a professor who is skeptical of AI in education, you're exactly the person who should be shaping how it's used. Not from the sidelines. From the driver's seat.
LabNotes.ai is a complete AI-powered homework and tutoring platform for STEM education. Built by a chemistry professor. Controlled by educators. Learn more.