Leaning In: Teaching Responsible AI Use in Undergraduate Scientific Writing

When I first taught our senior Chemistry capstone course in Spring 2024, conversations among faculty about AI use in an advanced writing course were dominated by fear. How do we catch students using AI? What guardrails do we need? How do we prevent this?
The horror stories circulated freely. Detection tools were debated endlessly. And yet, when we actually talked to students, most were aware of these tools but were not using them in the ways we imagined. At least, they were not using them in ways that matched our anxieties.
Two years later, the landscape has transformed. AI use is no longer a question of if, but how, and adoption keeps accelerating. The American Chemical Society has updated its guidance on AI use in journal publications, becoming notably less restrictive over time.
Our students are using these tools. The real question is whether we teach them to use AI responsibly — or leave them to figure it out on their own.
My colleague and I decided to lean in.
Two Instructors, Two Starting Points
I came into this as a moderately active AI user: curious, cautiously optimistic, and already experimenting. My co-instructor was not. He was skeptical and unfamiliar with the tools, but genuinely interested in learning more.
Interested enough, in fact, that he attended today's workshop not just as an instructor, but as a learner.
That dynamic matters. Our students are entering a profession where some collaborators will readily embrace AI, while others will approach it with healthy skepticism. Modeling both perspectives — and showing that curiosity can coexist with caution — felt like an important part of what we were teaching.
Three Sections, Three Readers
Our capstone course is not just a writing class. It is a research experience.
My co-instructor and I design open-ended research projects, and students carry out the work: collecting data, analyzing results, and writing up their findings as journal articles of the sort one might submit to the Journal of the American Chemical Society. Throughout the semester, we run workshops on the conventions of scientific writing.
Today's workshop focused on the Abstract, Introduction, and Experimental sections. We frame these as being written for three different readers:
- The Abstract — for the time-pressed reader who needs the complete story in miniature
- The Introduction — for the informed colleague who needs an argument, not a textbook chapter
- The Experimental section — for the replicating scientist who needs enough detail to reproduce the work exactly
As we planned the session, my co-instructor and I realized something important. AI assistance is not equally useful across all three sections.
The Experimental section is precise, methodological, and deeply specific to what each student actually did in the lab. AI cannot know your procedure, your instrumentation, your conditions, or your choices. That section needs to come from the student.
The Introduction and Abstract, however, are places where AI can genuinely help. Brainstorming how to frame a research question. Synthesizing literature to identify where a project fits. Drafting and refining language to meet disciplinary conventions. These are areas where AI tools can accelerate the process — provided students know how to use them critically.
Rules Before Tools
Before we opened a single AI application, we walked through the ACS guidelines on AI use. This was deliberate. I wanted students to understand professional expectations before they started experimenting.
The policy is straightforward. AI cannot be an author. All AI use must be disclosed in the Acknowledgments, describing when and how it was used. Substantial use should be detailed in the Methods section.
And this is the part I emphasized most: you are responsible for every word. If AI generates an inaccurate citation, a false claim, or a fabricated reference, that responsibility rests with you.
I put it to them simply. You, the human, need to be able to defend every sentence in your journal article. If you cannot, you are no longer the author.
What We Learned in the Workshop
I started with a simple question: Have you ever used generative AI, like ChatGPT?
Six of the eight students raised their hands immediately. The two who had not drew visible disbelief from their peers. Eventually, almost sheepishly, they admitted they had been using Google's AI, just never ChatGPT or anything else.
Every student in the room was using generative AI. The only variable was which tool.
So we got to work. We explored different tools for different purposes: NotebookLM for source-grounded synthesis (since it only knows what you upload and does not hallucinate outside references), Claude for writing and argument structure, and ChatGPT for brainstorming and drafting.
Each tool has strengths. None is a replacement for a scientist's judgment.
Rather than telling students to simply "use AI," we gave them a prompt menu with specific starting points for different tasks:
- What are the main findings of each article?
- What gap do these articles collectively point toward?
- Here is my gap sentence. Does it follow logically from the background I have described?
The goal was to teach students how to have a productive conversation with these tools, not how to accept the first output uncritically.
What struck me was the range of responses. Some students embraced the tools immediately and experimented freely. Others struggled with the interface or the logic, but still recognized their utility.
I encouraged them to use what they were comfortable with. There is no single correct way to use these tools, and skepticism is reasonable. My co-instructor started there, and his caution served him well.
What is dangerous is not skepticism. What is dangerous is being too trusting.
The Moment It Clicked
The highlight of today's workshop came when we used NotebookLM to synthesize students' literature sources and surface knowledge gaps — the conceptual foundation for any strong Introduction.
As the AI identified what was missing from current understanding, students began recognizing something familiar. The knowledge gaps aligned almost exactly with the research project objectives they had been given at the beginning of the semester.
They were stunned. How did the AI know?
I let the moment land before explaining the answer. It wasn't actually surprising, because my co-instructor and I had done our due diligence. We designed their research projects intentionally, grounded in our own expertise and targeted toward gaps we knew existed in the literature. The AI simply helped students see what we had already seen when we built the course.
That is not a threat to scientific training. It is a teaching moment. The AI did not replace the intellectual work — it illuminated it.
And now those students have a clearer understanding of why their projects matter, which will make their Introductions stronger.
Modeling What We Teach
One small detail I am proud of: the slides I used for this workshop include an AI acknowledgment at the bottom. I used Claude.ai to help brainstorm the structure, design the slide deck, and check for errors in the final draft — and I disclosed that.
It was an intentional choice. If I am asking students to be transparent about their AI use, I should be transparent about mine.
The Conversation Has Changed
Two years ago, I sat in meetings where colleagues asked, "How do we stop this?!"
Now I am asking a different question. How do we prepare students for a scientific world where these tools are everywhere?
The answer, I think, is the same one we have always relied on in capstone courses. We teach process. We teach critical thinking. We teach students to interrogate their sources — including AI-generated ones.
We show them that skepticism and curiosity can coexist, and that it is possible to lean in without losing one's footing.
AI is a thinking tool. You are the scientist.
Today, eight chemistry majors saw their research projects reflected back to them through an AI-generated synthesis — and they understood, perhaps for the first time, why those projects matter.
That felt like progress.