Hello Reader,

Promptcraft is a curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • Apple teased AI improvements at their recent event
  • Meta AI’s image tool lacks diversity in representing different cultures
  • A teacher is accused of using AI to make his school principal appear racist

Let’s get started!

~ Tom Barrett

HARDWARE

.: Apple teased AI improvements, including the M4’s neural engine, at its iPad event

Summary ➜ Apple highlighted AI features, including the M4 neural engine, at its recent iPad event. The company showcased AI-powered tools like visual lookup and live text capture on the new iPad Air and Pro models. Apple hinted at future AI advancements for developers in iPadOS.

Why this matters for education ➜ Apple has yet to reveal its hand on AI strategy, and by all accounts we will hear more at its developer event in June. When you consider these device upgrades, chip improvements and the push towards devices dedicated to AI, perhaps mobile phones and tablets will see a new wave of AI-driven development.

Running AI tools on-device, instead of via cloud-based services, is likely to offer performance benefits and greater flexibility, as well as improved standards of privacy and safety, which are key components for implementation in education.

At the very least, I think we will see more personal control and new data privacy standards, which the AI ecosystem will have to engage with.

In 2023, Apple shipped 234.6 million iPhones, capturing 20.1% market share

BIAS

.: Meta AI’s image tool lacks diversity in representing different cultures

Summary ➜ Meta AI’s image generator shows a strong bias by consistently adding turbans to images of Indian men, which does not accurately reflect the diversity of the population. Despite being rolled out in various countries, including India, the tool lacks diversity in representing different cultures and professions.

Why this matters for education

Bias in AI image generators is a well-studied and well-reported phenomenon, but consumer tools continue to exhibit glaring cultural biases. The latest culprit in this area is Meta’s AI chatbot, which, for some reason, really wants to add turbans to any image of an Indian man.

These failings remind us that we need a more nuanced understanding of the limitations and biases present in current AI systems. However, I am not sure adding these examples to the collection of “learning opportunities” is much consolation for the harm caused.

(Image generated with Midjourney)

DEEPFAKE

.: A teacher is accused of using AI to make his school principal appear racist

Summary ➜ A teacher in Baltimore is accused of using AI to create fake recordings of his school principal saying racist things. The principal faced threats and disruption after the false recordings spread online. The incident highlights the dangers of AI misuse and the need for better regulations.

Why this matters for education ➜ It is clearly not a great situation that the latest deepfake incident has occurred within the education ecosystem. There is a connection here to Apple’s advances in on-device AI capability, which might bring stronger safety and data privacy. Perhaps stronger regulation and control over voice and identity cloning in the cloud could help prevent these incidents.

The story reminds us of the work we have to do.

“This is not Taylor Swift. It’s not Joe Biden. It’s not Elon Musk. It’s just some guy trying to get through his day,” he said. “It shows you the vulnerability. How anybody can create this stuff and they can weaponize it against anybody.”

.: Other News In Brief

📸 OpenAI working on new AI image detection tools

🕵️‍♂️ Microsoft launches AI chatbot for spies

🔍 OpenAI to steer content authentication group C2PA

📚 Audible deploys AI-narrated audiobooks

🐋 Sperm whale ‘alphabet’ discovered, thanks to machine learning

🛡️ How VISA is using generative AI to battle account fraud attacks

🤖 Apple poaches AI experts from Google, creates secretive European AI lab

📲 Siri for iOS 18 to gain massive AI upgrade via Apple’s Ajax LLM

📱 Anthropic finally releases a Claude mobile app

💬 Google adds AI conversation practice for English language learners

:. .:

What’s on my mind?

.: US-Centric Bias and its Impact

My recent collaboration with teachers from across Scandinavia – Norway, Denmark, Sweden, and Finland – reminded me of a critical concern within the growing use of AI in education. The issue? The potential for bias and cultural insensitivity within AI tools, particularly large language models (LLMs).

Many leading AI companies and the datasets used to train their AI systems are rooted in the United States. This US-centric origin can create limitations – the AI may lack a nuanced understanding of cultural differences, leading to biases in its output. It highlights the need for a broader, more inclusive approach to AI development.

This issue reminds me of the “mirrors, windows, and doors” model often used in education. This concept emphasises the importance of the following for students:

  • Mirrors: Seeing themselves reflected in the learning materials.
  • Windows: Offering insights into different perspectives and cultures.
  • Doors: Opening up opportunities for engagement with the world on a larger scale.

In the same way, the AI tools used in our classrooms should also embrace these principles.

A recent example of this bias can be seen in image generation tools. Meta AI, a widely used platform, came under fire for consistently depicting Indian men in turbans. (See above for the story)

While turbans are a significant part of Indian culture, their overwhelming presence in the AI’s output ignores the vast diversity of clothing and ethnicities within India. This highlights the need for AI developers to incorporate more geographically and culturally diverse datasets during training.

Educators have a vital role in driving change. We need to champion the development of more inclusive, culturally sensitive AI.

~ Tom

Prompts

.: Refine your promptcraft

During my current visit to Sweden, where I am working with teachers, I have found it fascinating to learn about the various ways they have been incorporating AI tools into their work.

One particular example that seems to strike a chord with educators across different countries is the use of AI tools to refine, adapt, and improve email communication with parents.

Although I never personally experienced the need to email parents during my teaching career, many teachers I collaborate with have expressed the pressure and anxiety they feel when communicating via email.

They often worry about striking the right tone, being clear and concise, and maintaining a professional yet approachable demeanour.

A helpful promptcraft technique to address this challenge is to develop a short style guide based on your own written content.

By analysing your previous emails and identifying the key elements of your communication style, you can create a set of guidelines that reflect your unique voice and approach.

Then, when crafting prompts for AI tools, you can incorporate these style guidelines to ensure that the generated content aligns with your personal communication style.

To give you an example, here’s a glimpse into my email writing style:

To create your own writing style guide, just use a prompt similar to the example below:

Carefully analyse the example email text below to generate a writing style guide. Include a description of the tone, voice, style and approach you identify from the examples.

By providing the AI tool with this style guide as part of the prompt, you can maintain consistency in your communication and reduce the time and effort required to compose emails.
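If you prefer to work in code, the same idea can be sketched in a few lines of Python. This is a minimal, hypothetical example (the style guide text and function names are illustrative, not from a specific tool): it simply combines your saved style guide with an email request into one prompt you can paste into any AI chat tool or pass to a model API.

```python
# A minimal sketch: combine a personal style guide with an email request
# into a single prompt for a language model. All names are illustrative.

STYLE_GUIDE = """\
Tone: warm and professional.
Voice: first person, plain language, short sentences.
Approach: open with context, state the key point early, close with a clear next step."""

def build_email_prompt(style_guide: str, request: str) -> str:
    """Return a prompt asking the model to draft an email in your own style."""
    return (
        "You are helping me draft an email to a parent.\n"
        "Follow this writing style guide exactly:\n\n"
        f"{style_guide}\n\n"
        f"Task: {request}\n"
        "Keep the email clear, concise and approachable."
    )

prompt = build_email_prompt(
    STYLE_GUIDE,
    "Let a parent know their child has settled in well after moving classes.",
)
print(prompt)
```

The benefit of keeping the style guide as a reusable constant is that every email request is wrapped in the same guidance, so the tone stays consistent across messages and models.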

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

GLOSSARY

.: The A-Z of AI: 30 terms you need to understand artificial intelligence

Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.
… understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose.

HIGHER-ED
.: University Students Concerned About AI Skills

University-bound students are worried about how AI usage by others may affect their academic and career opportunities. A study of 1,300 students shows that many see AI as both helpful and concerning, with concerns about ethics and competitive disadvantages.

“I’m struck that they’re evidencing a fear that others are using this to gain a leg up and conclude they have to do the same thing,” said Art & Science Group principal David Strauss.

FACES
.: Can you spot which of these faces is generated by AI? Probably not — here’s why

Experts say it’s becoming harder to tell AI-generated faces from real ones. People often mistake AI faces as real due to advancements in technology. Media literacy and awareness are crucial to navigate this new landscape.

Ethics

.: Provocations for Balance

Scenario 1: The “All-American” Student

A school adopts an AI-powered “virtual tutor” advertised to provide personalised learning paths. Soon, students from immigrant families and international students report getting recommendations heavily biased towards Western history, US-centric examples, and subtly promoting American cultural norms and ideals over their native ones.

Does responsible AI development demand cultural advisors and diversity audits for educational tools, even for seemingly neutral subjects?

Scenario 2: The “Perfect” Uni Application

A new AI tool goes viral, promising to “optimise” university essays, suggesting not just edits but rewriting sentences to appeal to what it claims are admissions officers’ preferences. Counsellors find that AI-driven revisions favour stories of overcoming hardship that conform to American narratives of “grit”, potentially erasing the nuanced experiences of marginalised students.

If AI tools shape and standardise how students present themselves, is this a new form of inequality? Can educators fight AI with AI, designing tools that help preserve student authenticity?

Scenario 3: When Translation Goes Wrong

To better communicate with parents, a school adopts an AI-powered translation tool for emails and newsletters. Immigrant parents soon complain that the translations are not just inaccurate, but convey disrespect or perpetuate stereotypes about their cultures. It turns out the AI model wasn’t trained with a nuanced understanding of cultural idioms.

Is it ever ethical to rely on AI for translation in situations where cultural sensitivity and accuracy are crucial to building trust? Are there alternatives?

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett