Reflection
.: Why this news matters for education
This week’s most important Australian news in AI for education is The Australian Framework for Generative Artificial Intelligence (AI) in Schools.
The government publication is only six pages long, with the framework itself covering just two. It seeks to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society.
In many ways, tools and AI systems like ChatGPT do not facilitate this. When we use them without awareness, we amplify bias and discrimination.
In today’s Promptcraft, I have shared two stories of research and reporting about cultural and gender bias, and this is just the tip of the iceberg.
.: ChatGPT Replicates Gender Bias in Recommendation Letters
.: GPT’s cultural values resemble English-speaking and Protestant European countries
Let me show you the principles and guiding statements from the framework related to this.
2. Human and Social Wellbeing
Generative AI tools are used to benefit all members of the school community.
2.2 Diversity of perspectives: generative AI tools are used in ways that expose users to diverse ideas and perspectives and avoid the reinforcement of biases.
4. Fairness
Generative AI tools are used in ways that are accessible, fair, and respectful.
4.1 Accessibility and inclusivity: generative AI tools are used in ways that enhance opportunities, and are inclusive, accessible, and equitable for people with disability and from diverse backgrounds.
4.3 Non-discrimination: generative AI tools are used in ways that support inclusivity, minimising opportunities for, and countering unfair discrimination against individuals, communities, or groups.
4.4 Cultural and intellectual property: generative AI tools are used in ways that respect the cultural rights of various cultural groups, including Indigenous Cultural and Intellectual Property (ICIP) rights.
At the moment, none of these principles is upheld without mitigation.
For example, the silent cultural alignment with English-speaking and Protestant European countries does not “expose users to diverse ideas and perspectives and avoid the reinforcement of biases.”
One potential future is that large language models and chatbots are sidelined by education systems in favour of walled-garden versions that are heavily guard-railed.
For me, elevating the AI literacy of educators is a crucial way to mitigate this, and it starts with raising awareness of the types of stories I share today – not just the time-savers and practical applications.
Powerful tools like these can leave us ‘asleep at the wheel’; the risk is that high utility masks the need for discernment and critical reflection.
For some time now, I have been concerned that these AI systems have arrived just when time-strapped teachers most need support. That support might well come from using these tools – but at what cost?
.:
~ Tom