
Enhancing Teaching Assistant‑Led Pedagogy in Large‑Scale Courses through LLM‑Augmented Tutorials and Student Support

Poster Presentation
AI and Pedagogical Design
Date : 3 Dec 2025 (Wed)
Time : 12:00pm - 1:30pm
Venue : Common Area Outside CPD 3.21-3.41, The Jockey Club Tower, Centennial Campus, HKU
Presenter(s) / Author(s):
  • Mr. Ka Wai Ernest Yip, Student, School of Clinical Medicine (Department of Medicine), Li Ka Shing Faculty of Medicine, HKU
Abstract

    Generative Artificial Intelligence (genAI) is emerging not only as a student aid but as a pedagogical partner for teaching teams. In Life Sciences 1b, Harvard University’s largest pre‑medical course enrolling over 400 undergraduates, the density of an interdisciplinary curriculum, diverse student backgrounds, and variability in teaching team expertise create the sustained challenges to consistent, high‑quality support that are common to large courses. Drawing on experience as a Teaching Assistant (TA) in the course, this presentation examines TA‑centric integrations of Large Language Models (LLMs) in tutorials and student support to enhance instructional effectiveness.

    In large‑enrolment courses like Life Sciences 1b, TAs lead weekly seminars, facilitate active learning, guide problem‑set (PSET) assignments, and prepare students for examinations. The scale and heterogeneity of student needs intensify TAs’ preparation demands, creating conditions where AI can act as a pedagogical amplifier. During tutorials, LLMs were deployed in five capacities with strong pedagogical promise: (1) rapid needs assessment and leveling; (2) anticipatory misconception modelling; (3) Q&A generation that foregrounds “why” reasoning; (4) scenario‑based learning design; and (5) adaptive explanatory frameworks that tune tone, analogies, and depth to varied learner backgrounds. In office hours and revision sessions, LLMs functioned as live co‑facilitators for: (1) targeted content distillation; (2) a misconception bank with counter‑explanations; (3) meta‑learning via reusable checklists distilled from worked solutions; and (4) on‑demand multi‑view explanation (graphical, verbal, symbolic).

    Early use suggests three intertwined benefits: (1) greater teaching efficiency without loss of rigor; (2) more personalized support; and (3) scalable practices that maintain quality across sections in large‑enrolment contexts. However, risks remain – particularly factual inaccuracies, over‑reliance on AI outputs, and the need for transparency and disclosure of AI use in learning interactions. Further empirical research is essential to evaluate the short- and long-term implications of LLMs for learning, student perceptions, and TA pedagogical skill development.

    The integrations offered here do not prescribe a single model so much as a direction for leveraging genAI in large-course instruction – one that helps TAs and student learners press further toward deeper mastery of the material while offering interested instructors a parallel lane for preparation, coordination, and partnership with AI. Under such conditions, LLMs begin to evolve into adaptable, effective partners rather than novelties. Ultimately, this presentation contributes to the pragmatic, evidence‑seeking use of LLMs that supports deeper mastery amongst TAs while preserving instructor judgment and disciplinary rigor.
