Deeper insight into how students actually learn
Current ed-tech platforms capture grades and completion rates. LabNotes.ai captures the learning process itself — every hint path, every misconception trigger, every shift in student confidence. Structured, anonymized, and exportable for rigorous research.
{
  "student_id": "anon_hash_7x9k2",
  "concept": "limiting_reagent",
  "session": {
    "duration_sec": 847,
    "messages": 12,
    "hint_level_reached": 3,
    "self_corrections": 2
  },
  "milestones": {
    "total": 5,
    "completed": 3,
    "first_attempt_correct": [1, 3],
    "required_hints": [2]
  },
  "misconceptions_triggered": [
    "atomic_vs_molecular_mass",
    "mole_ratio_inversion"
  ],
  "confidence_trajectory": [0.3, 0.4, 0.6]
}

Structured data at every step of the learning process
Beyond quiz scores and completion rates, LabNotes.ai will capture how students think through problems — which solution paths they attempt, where they get stuck, which misconceptions they trigger, and how their confidence evolves across a semester. All data is anonymized and exportable in standard formats.
Concept-level confidence trajectories across entire cohorts
Misconception frequency and clustering by topic, section, and semester
Built-in infrastructure for controlled A/B pedagogical experiments
Export to CSV or JSON, or pull data via API, for analysis in R, Python, SPSS, or any tool
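To make that export concrete, here is a minimal Python sketch that parses one session record in the shape shown above and derives two per-session metrics. The JSON-lines file layout and the metric definitions (confidence gain, hint dependency) are illustrative assumptions, not a finalized export spec.

import json

# One exported session record, matching the sample schema shown above.
# In a real export this might be one line of a JSON-lines file
# (the exact file layout is an assumption, not a published spec).
record = json.loads("""
{
  "student_id": "anon_hash_7x9k2",
  "concept": "limiting_reagent",
  "session": {"duration_sec": 847, "messages": 12,
              "hint_level_reached": 3, "self_corrections": 2},
  "milestones": {"total": 5, "completed": 3,
                 "first_attempt_correct": [1, 3], "required_hints": [2]},
  "misconceptions_triggered": ["atomic_vs_molecular_mass",
                               "mole_ratio_inversion"],
  "confidence_trajectory": [0.3, 0.4, 0.6]
}
""")

traj = record["confidence_trajectory"]
milestones = record["milestones"]

# Two simple per-session metrics a researcher might derive:
confidence_gain = traj[-1] - traj[0]                                   # +0.30
hint_dependency = len(milestones["required_hints"]) / milestones["completed"]  # ~0.33

print(f"{record['student_id']}: confidence gain {confidence_gain:+.2f}, "
      f"hint dependency {hint_dependency:.2f}")

The same record loads just as easily into pandas or R for cohort-level aggregation.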
Questions this platform is built to investigate
The data architecture is being designed around these core research areas.
Does guided AI tutoring lead to better exam outcomes than answer-providing AI tools?
Comparing exam performance, concept retention, and long-term understanding between students using Socratic-guided AI versus conventional AI assistance.
Do students become more independent learners over time?
Tracking hint usage, self-correction rates, and time-to-solution across a full semester to measure growing independence.
Does course-specific AI context improve learning transfer?
Controlled comparison of outcomes when AI references instructor-specific materials versus using only general subject knowledge.
Can behavioral data predict struggling students before exams?
Using real-time misconception patterns, hint dependency, and engagement signals to identify at-risk students early enough to intervene.
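To make the last question concrete, here is a toy Python sketch of an early-warning score built from weekly engagement signals. The signal names and weights are invented for illustration; a real study would fit them from outcome data rather than hand-pick them.

from dataclasses import dataclass

@dataclass
class WeeklySignals:
    """Per-student signals aggregated over one week. Field names are
    illustrative, mirroring the data described above, not a finalized
    LabNotes.ai schema."""
    hint_dependency: float     # hints required / milestones completed
    misconception_rate: float  # misconceptions triggered per session
    confidence_slope: float    # trend of confidence_trajectory values
    sessions: int              # sessions started this week

def risk_score(s: WeeklySignals) -> float:
    """Toy linear early-warning score in [0, 1]. The weights are made
    up for illustration; a real study would fit them against outcome
    data (e.g., logistic regression on exam results)."""
    score = (0.35 * min(s.hint_dependency, 1.0)
             + 0.25 * min(s.misconception_rate / 3.0, 1.0)
             + 0.25 * max(-s.confidence_slope, 0.0)   # falling confidence
             + 0.15 * (1.0 if s.sessions == 0 else 0.0))  # disengagement
    return min(score, 1.0)

# A student who needs hints often and whose confidence is slipping:
flagged = risk_score(WeeklySignals(0.8, 2.5, -0.1, sessions=2))
print(f"risk = {flagged:.2f}")  # a score an instructor might threshold for outreach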
Built for institutional and grant requirements
Every infrastructure decision is being made with research rigor and compliance in mind.
FERPA-Ready
Student data handling is designed to support institutional compliance with FERPA
IRB-Friendly
Anonymized, structured data exports designed for institutional review board protocols
Exportable Data
CSV, JSON, and API access for R, Python, SPSS, and other analysis tools
A/B Testing
Infrastructure for randomized, controlled experiments across student populations (a minimal assignment sketch follows below)
Institutional Deployment
SSO, LTI integration, and flexible licensing for campus-wide deployment
Pilot Support
Dedicated onboarding, technical support, and data consultation for research partners
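As a sketch of how the A/B testing infrastructure could assign students to conditions, the hypothetical Python function below uses deterministic hashing so an anonymized ID always lands in the same arm without storing a separate mapping. The function name, experiment label, and arm names are assumptions for illustration, not the platform's actual randomization scheme.

import hashlib

def assign_arm(student_id: str, experiment: str,
               arms=("guided", "direct")) -> str:
    """Deterministically assign an anonymized student ID to an
    experiment arm. Hashing the (experiment, ID) pair keeps the split
    stable across sessions and roughly uniform across arms."""
    digest = hashlib.sha256(f"{experiment}:{student_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % len(arms)
    return arms[bucket]

# The same ID always lands in the same arm for a given experiment:
print(assign_arm("anon_hash_7x9k2", "socratic_vs_direct_2026"))

A side benefit of hash-based assignment is reproducibility: the arm can be recomputed from the anonymized ID in the exported data alone.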
Let's build the evidence together
We're looking for research partners and institutional collaborators who want to study AI-assisted STEM learning with real data. If that's you, let's talk about the Fall 2026 pilot.
Get in Touch