The Risks and Responsibilities of Generative AI: Empowering Students with Ethical Awareness
Artificial Intelligence (AI) stands as a pivotal force in transforming education, bringing significant benefits alongside notable challenges, as highlighted by UNESCO (2023). One notable opportunity is personalized learning: AI can help cater to the unique needs of each student, ensuring they receive the appropriate level of challenge and support. Through adaptive learning, AI can monitor how a student is doing and adjust the learning material and experience accordingly, supporting a rewarding learning journey. Automation is another significant benefit: AI can handle routine tasks such as grading, freeing teachers to engage in more creative and meaningful educational activities. Furthermore, AI can introduce new ways of learning, such as immersive virtual worlds and interactive simulations, making learning more engaging and fun.
However, there are hurdles on the path. A predominant concern is potential bias in AI systems: if the data used to train an AI is biased, it may favour some students over others, risking discrimination. Data privacy is another major concern, since AI requires a great deal of information about students to function effectively, and it is crucial to keep this data safe. For AI to work well in classrooms, both teachers and students will need training, which demands time and resources. Additionally, significant investment is needed to acquire the right tools and resources for integrating AI into education systems successfully. As we venture further into the realm of AI in education, it is crucial to approach it wisely, making the most of the benefits while tackling the challenges head-on (UNESCO, 2023).
What do our students think of GenAI?
The exploration of university students' perceptions of Generative AI (GenAI) tools such as ChatGPT in higher education, conducted by Chan and Hu (2023), reveals a mix of enthusiasm and concern. The study shows that students generally have a good grasp of GenAI technologies, with their familiarity shaped by their knowledge of GenAI and how often they use it. The findings highlight both the potential benefits and drawbacks of employing GenAI in education, with students' perceptions varying according to their prior experience with these technologies. Overall, participants exhibited a sound understanding of what GenAI technologies can and cannot do, alongside a positive outlook on using these technologies for learning, research, and future career pursuits. Nevertheless, concerns about reliability, privacy, ethical dilemmas, unclear policies, and the possible impact on personal growth, career opportunities, and societal values were also raised. The benefits and concerns regarding the use of GenAI technologies are detailed in the table below, adapted from their paper.
Student Perception of GenAI Technologies

| Benefits related to | Challenges concerning |
| --- | --- |
| 1. Personalized and immediate learning support | 1. Accuracy and transparency |
| 2. Writing and brainstorming support | 2. Privacy and ethical issues |
| 3. Research and analysis support | 3. Holistic competencies |
| 4. Visual and audio multi-media support | 4. Career prospects |
| 5. Administrative support | 5. Human values |
| | 6. Uncertain policies |
AI’s Illusion: The Ethical Maze in Generative Fake Imagery
The increasing misuse of Generative AI (GenAI) has raised significant concerns regarding its potential for harm and societal impact. Of particular concern is the generation of fake images, which presents ethical challenges as it becomes increasingly difficult to distinguish between real and fabricated content. This blurring of lines opens the door to potential misinformation, deception, and manipulation (Murugesan, 2023).
A notable incident reported by The Guardian in April 2023 highlighted the issue when a prize-winning photograph was revealed to have been generated by an AI. Another example involves a false report of an explosion at the Pentagon, accompanied by an apparently AI-generated image, as reported by NPR, the Washington Post, and other media outlets. These instances underscore the importance of addressing the ethical implications of GenAI and the potential threats it poses. Everyone involved, therefore, has a part to play in managing the risks posed by AI and generative AI (Baxter & Schlesinger, 2023).
Building AI Literacy: Five Essential Aspects for Students to Navigate the World of Artificial Intelligence
In order to navigate the world of artificial intelligence effectively, it is crucial for students to develop AI literacy. Here are five key aspects that students should be aware of in the AI Literacy Framework (Chan, 2023b):
Understanding the five aspects of the AI Literacy Framework (Chan, 2023b)

| Aspect | What students should know |
| --- | --- |
| AI Concepts | Familiarity with basic terminology: artificial narrow/general/super intelligence, machine learning, machine intelligence |
| AI Applications | Awareness of common AI applications, APIs and plugins: virtual assistants, recommendation systems, facial recognition |
| AI Hype vs. Reality | Differentiating between the potential of AI and the marketing hype; realistic expectations of what AI can and cannot do |
| AI Safety and Security | Awareness of potential security risks: possible threats to personal data, misuse of technology |
| Responsible AI Usage | Responsible use of AI applications; understanding and addressing the limitations of AI systems: fact-checking information, considering ethical implications, questioning the reliability of AI-generated content |
AI Concepts: Students should familiarize themselves with basic AI terminology and principles to gain a better understanding of how AI systems function.
AI Applications: Awareness of common AI applications and their presence in everyday life is essential. This includes a basic understanding of technologies like virtual assistants and facial recognition.
AI Hype versus Reality: Students should be able to differentiate between the potential of AI and the marketing hype surrounding it. This will help them develop realistic expectations of what AI can and cannot accomplish.
AI Safety and Security: Understanding the potential security risks associated with AI applications is crucial. Students should be aware of threats to personal data and the potential for misuse of AI technology.
Responsible AI Usage: Developing a sense of responsibility when using AI applications is vital. This involves considering the limitations of AI, fact-checking information, addressing ethical implications, and questioning the reliability of AI-generated content.
For more details, see the course “HKU AI Literacy for Education”: https://learning.hku.hk/catalog/course/ai-literacy-for-education-student/
Ensuring Ethical AI: Empowering Students to Evaluate and Navigate the Veracity of AI-Generated Content
GenAI has been observed to produce content of questionable quality or outright misinformation. This raises ethical concerns about the responsible use of AI technology and the potential impact of false or misleading information. To address this, students must acquire the skills needed to discern the reliability of information generated by GenAI technologies. Without such skills, AI tools may be misused for academic dishonesty, and students may struggle to differentiate between AI-generated and human-authored text (Chan, 2023a).
Students should understand how to determine the accuracy, reliability, and biases present in content produced by AI systems. They need to be aware of the potential risks and challenges associated with the misuse or misinterpretation of AI-generated information, as it can have significant consequences on individuals, society, and decision-making processes.
Meanwhile, the more data fed into an algorithm, the more accurate and personalized its generated content becomes. However, this also means that personal data is being used, which can be a cause for concern. Companies often limit access to personal information, but an AI can be queried in dozens of different ways. If this data falls into the wrong hands, it could be used for malicious purposes such as identity theft, cyberattacks, and social engineering scams. Therefore, when making responsible inquiries with generative AI, or when co-working with it, students also need to consider data privacy and security. Students should protect and not disclose personal data when entering inputs, and should ensure the security of the AI applications they use (Chan & Lee, 2023). More discussion of privacy, security, and safety in AI for education can be found in the literature, for example Nguyen et al. (2023).
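In practice, protecting personal data in the input process can mean scrubbing identifying details from a prompt before it is sent to a GenAI service. A minimal sketch of this idea is below; the `redact` helper and its regex patterns are illustrative assumptions for this discussion, not a production-grade privacy filter, which would need far more robust detection.

```python
import re

# Illustrative patterns for two common kinds of personal identifiers.
# Real PII detection requires much more thorough tooling than this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace personal identifiers in a prompt with placeholder tags."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

# The redacted prompt, not the original, is what would be sent to the AI.
print(redact("Contact me at jane.doe@example.com or +852 1234 5678."))
```

Even a simple habit like this reinforces the underlying principle: students remain responsible for what leaves their machine, regardless of how the AI service handles data afterwards.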
Striking the Right Balance: Empowering Students to Harness the Power of AI while Cultivating Essential Human-Centric Skills
While AI can be a powerful tool, excessive reliance on it can hinder students' development of key skills necessary for their studies and careers. Students must strike a balance between AI-aided learning and the cultivation of critical thinking, problem-solving abilities, and other human-centric competencies.
For example, students should understand how AI can be involved in decision-making and problem-solving tasks, and whether it may impede the development of their essential cognitive skills. Furthermore, it is important for students to build human-centric skills, such as empathy, innovation, and the ability to navigate the complexities of the world. Students should recognize how AI technologies can be used to enhance these skills rather than replace or diminish their significance. It is important for students to fully understand practical issues like ethics when integrating AI into their learning for a well-rounded and impactful learning experience (Chan & Tsi, 2023). It is also important for students to maintain their own control and accountability in the context of generative AI (Chan & Lee, 2023).
By fostering AI literacy, understanding the ethical implications, and striking the right balance between AI and human-centric skills, students can effectively navigate the challenges and harness the potential of AI in a responsible and beneficial manner.
References:
- Baxter, K., & Schlesinger, Y. (2023, June). Managing the Risks of Generative AI. Harvard Business Review. https://hbr.org/2023/06/managing-the-risks-of-generative-ai
- Chan, C. K. Y. (2023a). Is AI changing the rules of academic misconduct? An in-depth look at students’ perceptions of ‘AI-giarism’. https://arxiv.org/abs/2306.03358
- Chan, C.K.Y. (2023b). The Roadmap to Responsible AI: Policy, Assessment and Literacy, (Keynote Speech) International Conference on Student Development organized by Institusi Pembangunan Felo, Universiti Teknologi Malaysia 7 Sept 2023.
- Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20, 43.
- Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers? https://arxiv.org/abs/2305.02878
- Chan, C. K. Y., & Tsi, L. H. Y. (2023). The AI Revolution in Education: Will AI Replace or Assist Teachers in Higher Education? [Preprint]. arXiv. https://arxiv.org/abs/2305.01185
- Murugesan, S. (2023). Ethical Concerns on AI Content Creation. IEEE Computer Society. https://www.computer.org/publications/tech-news/trends/ethical-concerns-on-ai-content-creation
- Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241.
- UNESCO. (2023). Generative Artificial Intelligence in education: What are the opportunities and challenges? Retrieved from https://www.unesco.org/en/articles/generative-artificial-intelligence-education-what-are-opportunities-and-challenges