What the Latest Survey Data Tells Us About AI in Higher Ed

A Flood of New Data
Over the past year, several large-scale surveys have examined how students and faculty are actually engaging with AI in higher education. The combined picture is striking: student adoption is nearly universal, faculty concern is near-unanimous, and the institutional response has been slow to catch up.
We reviewed data from the Digital Education Council, Inside Higher Ed and Generation Lab, AAC&U and Elon University, Chegg, and a meta-summary compiled by Campbell University's Academic Technology Services. Here is what the numbers say.
Student Adoption Is No Longer a Trend. It Is the Baseline.
The Digital Education Council surveyed 3,839 students across 16 countries and found that 86% are using AI regularly in their studies. More than half use it weekly. Nearly one in four use it daily.
An Inside Higher Ed and Generation Lab poll of 1,047 undergraduates at 166 U.S. institutions found a similar figure: 85% of students use AI for coursework. The most common uses are brainstorming, outlining papers, and studying for exams.
A Chegg survey of 11,706 undergraduates reported 80% adoption. A UK-based study found that 92% of students use AI tools, up from 66% the previous year.
These numbers have converged. Regardless of geography, institution type, or survey methodology, the finding is consistent: the vast majority of students are using AI, and usage is increasing.
But Usage Does Not Mean Confidence
High adoption numbers obscure an important nuance. Students are using AI, but many are not confident in what they are doing with it.
The Digital Education Council found that 58% of students reported insufficient AI knowledge and skills, and nearly half (48%) said they do not feel adequately prepared for an AI-enabled workplace. Only 5% said they were fully aware of their institution's AI policies.
The Inside Higher Ed data adds another layer: more than half of students who use AI for coursework reported mixed feelings, acknowledging that it helps sometimes but can make them think less deeply.
Students are not blindly enthusiastic about AI. Many recognize the trade-off between convenience and learning. What they lack is guidance on how to navigate it.
Faculty Concern Is Nearly Universal
The AAC&U and Elon University survey of 1,057 faculty members produced some of the most striking numbers in the dataset.
Among respondents, 95% said AI will increase student overreliance on it, 90% said it will diminish critical thinking, and 83% said it will decrease student attention spans. Meanwhile, 78% reported that cheating on their campus has already increased since generative AI became widely available, with 57% saying it has increased significantly.
These are not the concerns of a resistant minority. They represent a broad, deeply felt faculty consensus.
Faculty are not anti-technology. Many already engage students in discussions about AI's limitations. But they are watching students outsource reasoning to tools that optimize for answers, not understanding, and they are worried about what that means for the value of the education they provide.
The Gap Between Use and Usefulness
The most important finding across all of these surveys is not any single number. It is the gap between how students use AI and how it could be used.
Students primarily use AI to get things done faster. They use it to generate drafts, summarize readings, and produce answers. This is rational behavior. AI is genuinely efficient at these tasks.
But efficiency and learning are not the same thing. When AI handles the cognitive work, the student misses the struggle that produces understanding. The generation effect, one of the most replicated findings in learning science, shows that information students produce themselves is retained 30 to 50% better than information they receive passively.
Current AI tools are optimized for the opposite. They are answer machines. They do the thinking so the student does not have to.
What the Data Suggests We Should Build
If the surveys tell us anything clearly, it is that the status quo is not working. Students are using AI without guidance. Faculty are worried but lack tools to respond. And institutions are moving too slowly to bridge the gap.
The path forward is not to restrict AI use. The adoption numbers make that unrealistic. It is to redesign AI tools so that using them well and learning deeply are the same thing.
That means AI that asks questions instead of answering them. AI that scaffolds reasoning instead of replacing it. AI that gives faculty visibility into the learning process, not just the final output.
This is the design philosophy behind LabNotes.ai. Our AI tutor refuses to give students the answer. Instead, it guides them through problems using a Socratic approach, producing a full interaction transcript that shows exactly where a student struggled, what misconceptions they held, and how their thinking developed.
The survey data confirms what we have believed from the start: the problem is not that students are using AI. It is that the AI they are using was not designed for learning.
Sources
The data in this post draws from the following surveys and summaries:
- Digital Education Council Global AI Student Survey, 2024 (3,839 students, 16 countries)
- Inside Higher Ed and Generation Lab flash survey, 2025 (1,047 students, 166 U.S. institutions)
- AAC&U and Elon University faculty AI survey, November 2025 (1,057 faculty)
- Chegg undergraduate AI survey, 2024 (11,706 students)
- Campbell University Academic Technology Services meta-summary, March 2025