Improving and Scaling Coaching through LLMs
Effective coaching in project-based learning environments is critical for developing students’ self-regulation skills, yet scaling high-quality coaching remains a challenge. Coaches struggle to track students' progress across multiple teams and projects, making it difficult to provide targeted feedback and adapt recommendations based on evolving student needs. Existing AI-based project management tools facilitate task tracking but fail to capture the nuanced ways students approach their work. Large Language Models (LLMs) have shown promise in analyzing text-based interactions and generating structured feedback, but their application to coaching remains underexplored.
This paper presents an LLM-enhanced coaching system designed to support project-based learning in two ways: connecting peers who struggle with the same regulation gap, and helping coaches identify regulation gaps and generate tailored practice suggestions. Our system integrates LLM-driven semantic matching with structured metadata on Context Assessment Plan (CAP) notes. The metadata is derived from our novel codebook, which defines regulation gaps and key terms drawn from the learning sciences literature.
We evaluate our approach by comparing multiple retrieval and classification techniques, including word embeddings, LLM-based tagging, and a hybrid model that incorporates both. Our evaluation thus far shows that the hybrid system retrieves relevant coaching cases most effectively, reducing mentors' cognitive burden while maintaining high-quality, context-aware feedback. We will continue to improve both our user interfaces and the quality of our note matching.
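A hybrid approach like the one described above could be sketched as a blend of two scores: semantic similarity between CAP notes and overlap between their regulation-gap tags. The sketch below is illustrative only; the `embed` function is a toy bag-of-words stand-in for real word embeddings, the example notes and tags are invented, and the `alpha` weight is an assumption, not a parameter from our system.

```python
# Illustrative sketch of hybrid note matching: cosine similarity over
# toy bag-of-words "embeddings" (a stand-in for learned embeddings)
# blended with Jaccard overlap on regulation-gap tags.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words vector; a real system would use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def hybrid_score(query_note, cand_note, query_tags, cand_tags, alpha=0.6):
    """Blend semantic similarity with regulation-gap tag overlap (Jaccard)."""
    sem = cosine(embed(query_note), embed(cand_note))
    union = query_tags | cand_tags
    tags = len(query_tags & cand_tags) / len(union) if union else 0.0
    return alpha * sem + (1 - alpha) * tags

# Hypothetical CAP notes, each with regulation-gap tags from a codebook.
notes = [
    ("Team skipped their weekly plan review again", {"planning"}),
    ("Student unsure how to ask mentor for help", {"help-seeking"}),
]
query_text, query_tags = "Group never revisits its sprint plan", {"planning"}
ranked = sorted(
    notes,
    key=lambda n: hybrid_score(query_text, n[0], query_tags, n[1]),
    reverse=True,
)
print(ranked[0][0])  # the planning-gap note ranks first
```

Pure semantic similarity misses notes that share a regulation gap but use different vocabulary, while pure tag matching ignores context; weighting the two lets each signal compensate for the other.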
Team
Faculty
- None
Ph.D. Students
- None
Master's and Undergraduate Students
- Allyson Lee
- Terry Chen