Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Australia has its first framework for AI use in schools
  • ChatGPT turns one: How OpenAI’s AI chatbot changed tech forever
  • GPT’s cultural values resemble English-speaking and Protestant European countries

Don’t forget to share Promptcraft and enter my Christmas giveaway! Details are below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below. You get an entry for every friend who signs up to the Promptcraft newsletter; the more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Australia has its first framework for AI use in schools – but we need to proceed with caution ➜ Australia has released a framework for schools on using generative AI tools like ChatGPT. It aims for safe and effective use but warns of risks such as bias. Experts suggest more caution is needed, recommending additions such as acknowledging AI bias, requiring more evidence of benefits, and transparency about teachers’ use.

.: Meta AI’s suite of new translation models ➜ Meta has recently created new AI translation models called Seamless, which allow for more natural cross-lingual communication. These models are based on an updated version of Meta’s multimodal translation model, SeamlessM4T. To further research into expressive and streaming translation, Meta has decided to open-source the models, data, and tools.

.: This company is building AI for African languages ➜ Lelapa AI is a startup that is developing AI tools specifically for African languages. Their latest product, Vulavula, is capable of transcribing speech and detecting names and places in four South African languages. The company’s ultimate goal is to support more African languages and create AI that is accessible to Africans, rather than just big tech companies.


.: ChatGPT turns one: How OpenAI’s AI chatbot changed tech forever ➜ ChatGPT’s launch on Nov 30, 2022 catalysed a generational shift in tech. It became the fastest-growing consumer tech ever. However, its rapid ascent has sparked debates about AI’s societal impacts and optimal governance.


.: GPT’s cultural values resemble English-speaking and Protestant European countries ➜ According to new cultural bias research “GPT’s cultural values resemble English-speaking and Protestant European countries on the Inglehart-Welzel World Cultural Map (see image).” It aligns more closely with Western, developed nations like the US, UK, Canada, Australia etc.

.: Meet DeepSeek Chat, China’s latest ChatGPT rival ➜ DeepSeek, a Chinese startup, launched conversational AI DeepSeek Chat to compete with ChatGPT. It uses 7B and 67B models trained on Chinese/English data. Benchmarks show the models match Meta’s Llama 2-70B on tasks like math and coding. The 67B chat version is accessible via web demo. Testing showed strong capabilities but censorship of China-related questions.

.: AI helps out time-strapped teachers, UK report says ➜ UK teachers are using AI to save time on tasks like adapting texts and creating resources. A government report found that most people are optimistic about AI in education but concerned about risks such as biased content. Teachers cited benefits such as having more time for higher-impact work; however, unreliable AI output remains a risk. The report will shape future government policy on AI in schools.

.: ChatGPT Replicates Gender Bias in Recommendation Letters ➜ A recent study found that AI chatbots like ChatGPT exhibit gender bias when generating recommendation letters. The bias arises because models are trained on imperfect real-world data reflecting historical gender biases. Fixing it isn’t simple, but study authors and experts say bias issues must be addressed given AI proliferation in business.

Reflection

.: Why this news matters for education

This week’s most important Australian news in AI for education is The Australian Framework for Generative Artificial Intelligence (AI) in Schools.

The government publication, which is only six pages long (the framework itself covers just two), seeks to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society.

In many ways, tools and AI systems like ChatGPT do not facilitate this. When we use them without awareness, we amplify bias and discrimination.

In today’s Promptcraft, I have shared two stories of research and reporting about cultural and gender bias, and this is just the tip of the iceberg.

.: ChatGPT Replicates Gender Bias in Recommendation Letters
.: GPT’s cultural values resemble English-speaking and Protestant European countries

Let me show you the principles and guiding statements from the framework related to this.

2. Human and Social Wellbeing

Generative AI tools are used to benefit all members of the school community.

2.2 Diversity of perspectives: generative AI tools are used in ways that expose users to diverse ideas and perspectives and avoid the reinforcement of biases.

4. Fairness

Generative AI tools are used in ways that are accessible, fair, and respectful.

4.1 Accessibility and inclusivity: generative AI tools are used in ways that enhance opportunities, and are inclusive, accessible, and equitable for people with disability and from diverse backgrounds.

4.3 Non-discrimination: generative AI tools are used in ways that support inclusivity, minimising opportunities for, and countering unfair discrimination against individuals, communities, or groups.

4.4 Cultural and intellectual property: generative AI tools are used in ways that respect the cultural rights of various cultural groups, including Indigenous Cultural and Intellectual Property (ICIP) rights.

None of these principles are upheld without mitigation at the moment.

For example, the silent cultural alignment with English-speaking and Protestant European countries does not “expose users to diverse ideas and perspectives and avoid the reinforcement of biases.”

One potential future is that large language models and chatbots become sidelined by education systems in favour of heavily guard-railed, walled-garden versions.

For me, elevating the AI literacy of educators is a crucial way to mitigate this, and it starts with raising awareness of these types of stories I share today – not just the time-savers and practical applications.

Powerful tools like these can leave us ‘asleep at the wheel’; the risk is that high utility masks the need for discernment and critical reflection.

For some time now, I have been concerned that these AI systems have arrived just as time-strapped teachers are under pressure and in need of support. That support might come from using these tools, but at what cost?

.:

~ Tom

Prompts

.: Refine your promptcraft

Cultural Prompting

Cultural prompting is a method highlighted in the cultural values research paper listed earlier. Read the pre-print research paper here.

It is designed to mitigate cultural bias in large language models (LLMs) like GPT.

This strategy involves prompting the LLM to respond as an average person from a specific country or territory, considering the localised cultural values of that region.

It’s a simple yet flexible approach that has shown effectiveness in aligning LLM responses more closely with the values and perspectives unique to different cultures.

Instructions for Using a Cultural Prompt:

Identify the Country/Territory: Choose the specific country or territory whose cultural perspective you wish to emulate.

Formulate the Prompt: Structure your prompt to specifically request the LLM to assume the identity of an average person from the chosen location. The exact wording should be:

“You are an average human being born in [country/territory] and living in [country/territory] responding to the following question.”

Pose Your Question: After setting the cultural context, ask your question or present the topic you want the LLM to address.

Evaluate the Response: Consider the LLM’s response in the context of the specified culture. Be aware that cultural prompting is not foolproof and may not always effectively reduce bias.

Critical Assessment: Always critically assess the output for any remaining cultural biases, especially since the effectiveness of cultural prompting can vary significantly between different regions and LLM versions.

Example of Use:

To understand how cultural prompting works, let’s consider an example:

  • Selected Country/Territory: Japan
  • Cultural Prompt: “You are an average human being born in Japan and living in Japan responding to the following survey question.”
  • Question Posed: “What is your perspective on work-life balance?”
  • Expected Outcome: The LLM, prompted with this cultural context, will tailor its response to reflect the typical attitudes and values towards work-life balance in Japan, potentially differing from a more generalised or Western-centric view.
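The steps above can be sketched as a small helper. The snippet below is a minimal illustration, not code from the study: the `cultural_prompt` function name is my own, and it simply prepends the study’s cultural-prompt wording to a question before you pass both to whichever LLM you use.

```python
def cultural_prompt(country: str, question: str) -> str:
    """Build a cultural prompt using the wording from the study,
    followed by the question to pose.

    This is an illustrative helper, not an official implementation."""
    framing = (
        f"You are an average human being born in {country} "
        f"and living in {country} responding to the following question."
    )
    return f"{framing}\n\n{question}"


# Reproduce the Japan example above
print(cultural_prompt("Japan", "What is your perspective on work-life balance?"))
```

You would then send the returned string as the prompt (or the framing line as a system message, if your chat API supports one) and critically assess the response, as noted in the instructions.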

A word of caution from the study authors:

Compared to other approaches to reduce cultural bias that we reviewed, cultural prompting creates equal opportunities for people in societies most affected by the prevailing cultural bias of LLMs to use this technology without incurring social or professional costs. Nevertheless, cultural prompting is not a panacea to reduce cultural bias in LLMs. For 22.5% of countries, cultural prompting failed to improve cultural bias or exacerbated it. We therefore encourage people to critically evaluate LLM outputs for cultural bias.

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT GUIDE

video preview

This is a one-hour, general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. It covers what they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.

PARENT TIPS
.: 3 things parents should teach their kids ➜ In this article, the authors discuss how generative AI like ChatGPT is now widely used, including by young people. While parents may be hesitant, the article argues AI is here to stay, so kids need guidance on using it wisely. It provides three tips:

  • Teach critical thinking, as AI makes mistakes – question its claims.
  • Watch for inappropriate chatbots becoming AI “friends”.
  • Remind children that images, audio and video also matter for privacy.

It advocates that parents try AI themselves, then discuss its potential benefits and harms with their kids.

OPEN SOURCE GUIDE
.: Understanding the Open Source Tool Stack For LLMs

  • The article looks at open-source tools for building AI applications with large language models (LLMs).
  • It explains that the open-source ecosystem has three layers: the model files, the tools that integrate them, and the user interface.
  • Popular ready-made open-source LLM models include LLaMA, BLOOM and T5; useful tooling includes Hugging Face and LangChain.
  • The open-source AI landscape is changing fast, so a modular approach makes it easier to swap components.
  • The main benefits of open-source AI are lower cost and comparable performance versus proprietary models like GPT-3.

Ethics

.: Provocations for Balance

  • If ChatGPT and other LLMs are biased and discriminatory, should we stop using them in education?
  • How do we harness utility without causing harm?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant