
Examples of How Colleagues at HKU Integrate AI and Manage Associated Risks

In the 2024–25 academic year, many HKU teachers have integrated some form of generative AI (GenAI) technology into learning activities in their classrooms. On the one hand, teachers are proactively exploring ways to harness the benefits of GenAI; on the other, they are seeking ways to manage the ethical challenges the technology presents.

AI Ethics Video Series

TALIC invited colleagues in the T&L community to share examples of how they integrate AI technology and manage the associated risks in a series of videos.

Fostering an Ethical and Critical Mind for AI in Higher Education

Brian Tang, Executive Director of the LITE Lab (Law, Innovation, Technology, and Entrepreneurship) at HKU Law School, emphasizes appropriately incorporating AI into coursework and teaching students the “intentional use of AI as a learning companion” to augment critical thinking and problem-solving skills. Brian incorporates diverse assessment methods such as reflections, verbal presentations, and peer evaluations, moving beyond traditional essays that AI can easily generate. Addressing concerns that AI enables students to generate written work too quickly and effortlessly, and the growing pressure students feel when peers use AI undetected, Brian recommends that teachers focus on a mindset shift that persuades students of the benefits of exercising their “mental muscles”.

Brian Wha-li Tang, LITE Lab, HKU Law School
"We should instill the ‘Student Gym of the Mind’ concept. Students cheat themselves by over-reliance and cognitive tasks through inappropriate use of AI."

Building the Grey Line: Guiding Ethical AI Use in the Classroom

Professor Cecilia Chan of the Teaching and Learning Innovation Centre (TALIC) addresses a new kind of academic dishonesty known as “AI-giarism,” which generally refers to using AI to create content without crediting original human or AI sources. She emphasizes that a ‘grey line’—a ‘morality gap’—exists, where students and educators interpret AI ethics differently. Cecilia advocates for transparency of use, urging students to declare AI use just as they would human sources and encouraging open discussions in classrooms. Cecilia stresses the importance of guiding students on responsible AI use rather than just policing it.

Professor Cecilia Ka Yuk Chan, Teaching and Learning Innovation Centre
“Ethics isn’t just about cheating or not cheating. It’s about intention, impact, and accountability. It’s about fairness. It’s about learning.”

Reimagining Assessments in the Age of AI

To help students use AI responsibly, Dr. Tai Chun John Fung, Senior Lecturer in the School of Nursing, recommends practical measures, including requiring students to disclose AI use in work submitted for assessment. John and his colleagues also integrate AI literacy modules into their disciplinary teaching to help students embrace AI, familiarize them with authorized AI use, and remind them of AI’s limitations, such as hallucinations and bias.

John sees the use of AI as an opportunity to innovate and to focus on higher-order skills, including empathy, creativity, and critical thinking, rather than as a “shortcut” for having AI complete written assignments.

Dr. Tai Chun John Fung, School of Nursing
“We should think deeply and creatively about how to adapt our assessments for this new era.”

AI-Resistant Assignments and Human-Centered Learning Tasks

Nicole Lau, a teacher in the Department of Psychiatry at SClinMed, designs “AI-resistant assignments” centered on tasks that are difficult to automate. These may include concept maps, reflective journals, and peer interviews, human-centered tasks that draw on students’ own lived experiences. Nicole underscores the importance of embedding clear AI policies and integrating them into the learning process.

Ka Man Nicole Lau, Department of Psychiatry, SClinMed
“When students understand the rules and the reasons behind them, they're more likely to see themselves as active partners in ethical learning.”

Going forward, how would you define and uphold ethical boundaries in a way that harnesses technological advancement and safeguards academic integrity?

Share your story with us:

EdTech Sharing

References:

  • Awadallah Alkouk, W., & Khlaif, Z. N. (2024). AI-resistant assessments in higher education: Practical insights from faculty training workshops. Frontiers in Education, 9, 1499495.
  • Chan, C. K. Y. (2024). Students’ perceptions of ‘AI-giarism’: Investigating changes in understandings of academic misconduct. Education and Information Technologies, 1-22.
  • Khlaif, Z. N., Hamamra, B., & Hussein, E. T. (2025). AI Paradox in Higher Education: Understanding Over-Reliance, Its Impact, and Sustainable Integration.
  • Overono, A. L., & Ditta, A. S. (2025). The use of AI disclosure statements in teaching: developing skills for psychologists of the future. Teaching of Psychology, 52(3), 273-278.