Description
Large language models (LLMs) are increasingly used as tutors, yet their tendency to hallucinate and their opaque
reasoning undermine trust and pedagogical safety. We present EduTrace, a multimodal evidence-first tutoring
system that enforces retrieval-augmented generation (RAG) with mandatory citation of vetted course materials
before any explanation is produced. EduTrace first surfaces a compact “evidence panel” consisting of textbook,
lecture, and diagram passages selected via dense retrieval and re-ranking, and only then generates the tutoring
response using evidence-constrained decoding that binds each claim to explicit provenance. Beyond question
answering, EduTrace automatically generates practice items aligned to course outcomes and grades free-text student
responses using rubric-guided LLM judging calibrated against human raters, producing actionable coaching
feedback. A distinctive output is an evidence trace report that provides confidence scores, highlights unsupported
claims, and identifies knowledge gaps when retrieval cannot justify an answer. We deploy EduTrace in a real
introductory biology module with analytics for learning gains and engagement. In a controlled study design
(illustrative results shown in this draft), EduTrace yields higher normalized learning gains than a traditional RAG
tutor and rule-based practice, while improving retrieval precision and achieving substantial agreement with expert
grading. These results suggest that transparent, evidence-first tutoring can improve both reliability and learning
outcomes in LLM-based education.
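The evidence-first flow described above (surface an evidence panel, then bind each generated claim to provenance and flag what retrieval cannot support) can be sketched in miniature. This is an illustrative assumption, not EduTrace's implementation: the toy lexical-overlap score stands in for dense retrieval and re-ranking, and the names (`Passage`, `evidence_panel`, `trace_report`, the 0.3 threshold) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # e.g. "textbook", "lecture", "diagram"
    text: str

def score(query: str, passage: Passage) -> float:
    """Toy token-overlap score; a stand-in for dense retrieval + re-ranking."""
    q = set(query.lower().split())
    p = set(passage.text.lower().split())
    return len(q & p) / max(len(q), 1)

def evidence_panel(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Surface the top-k passages shown to the student BEFORE any answer."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def trace_report(claims: list[str], panel: list[Passage],
                 threshold: float = 0.3) -> list[dict]:
    """Bind each claim to its best-supporting passage; flag unsupported claims."""
    report = []
    for claim in claims:
        best = max(panel, key=lambda p: score(claim, p))
        conf = score(claim, best)
        report.append({
            "claim": claim,
            "source": best.source if conf >= threshold else None,  # provenance
            "confidence": round(conf, 2),
            "supported": conf >= threshold,  # knowledge gap if False
        })
    return report

corpus = [
    Passage("textbook", "mitochondria produce ATP through cellular respiration"),
    Passage("lecture", "photosynthesis converts light energy into glucose"),
]
panel = evidence_panel("how do mitochondria produce ATP", corpus)
report = trace_report(["mitochondria produce ATP"], panel)
```

In a full system, the overlap score would be replaced by a dense retriever plus cross-encoder re-ranker, and the trace report would feed the confidence scores and knowledge-gap flags described above.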



