.: Promptcraft 55 .: US teachers get free access to Khanmigo

Hello Reader,

Promptcraft is a curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • All U.S. K-12 educators to get free access to Khanmigo
  • Google scrambles to manually remove weird AI answers in search
  • Scarlett Johansson’s OpenAI clash is just the start of legal wrangles over artificial intelligence

Let’s get started!

~ Tom Barrett


US EDU

.: All U.S. K-12 educators to get free access to Khanmigo as Microsoft partners with Khan Academy

Summary ➜ Microsoft is partnering with Khan Academy to give all U.S. K-12 educators free access to Khanmigo. The collaboration also aims to use Microsoft’s Phi-3 models to improve AI-driven mathematics tutoring. The initiative is designed to offer personalised learning experiences and help teachers create more effective educational plans, thereby making teaching more sustainable and enjoyable. Additionally, Khan Academy will integrate more of its content into Microsoft Copilot and Teams for Education to expand access to educational resources.

Why this matters for education ➜ This announcement, from Microsoft’s developer conference, takes my top billing in this week’s issue because of its (potential) direct impact on educators (at least in the US).

But after the fanfare, what questions do we have? I’ll go first:

  • What do we know about the effectiveness of Khanmigo or any similar agent tutor tools? Has it improved?
  • How will educators and students be able to control the experience?
  • Is Khanmigo just the next version of Clippy?
  • Where’s the space for other tools, and how can students get hands on to build their own?
  • Is it just me, or is there something strange about putting “Teachers are super overworked” at the centre of the rationale for these tools? (more on this below)

Lots to think about here and it will be interesting to hear from Sal Khan in August at this year’s big tech and education conference in Australia.

AI SEARCH

.: Google scrambles to manually remove weird AI answers in search

Summary ➜ Social media is buzzing with examples of Google’s new AI-enhanced search giving strange responses, leading to a rush to manually disable them. Google faces challenges as it strives to improve the quality of its AI outputs amid criticism and memes on social platforms.

Why this matters for education ➜ Web search is not like it was when we were growing up. The experience of finding, exploring and querying the web is a long way from how we were teaching “web search” skills 20 years ago. Let’s not forget Google Search launched on September 4, 1998 – it turns 26 this year. And yes, it seems to have been broken by the shift towards generative AI style results – and that’s not to mention the withering critique from publishers and journalists.


VOICE CLONE

.: Scarlett Johansson’s OpenAI clash is just the start of legal wrangles over artificial intelligence

Summary ➜ Scarlett Johansson raised concerns about her voice being used in an OpenAI update. Legal disputes over AI technology and celebrity voices are emerging. Johansson is considering legal action following OpenAI’s withdrawal of the voice.

Why this matters for education ➜ The phrase “sound-a-like” in the reporting on this story highlights the rapid advances in AI technology. While voice impersonators have long existed in the media landscape, advanced AI voice cloning tools can now generate a voice model in minutes from a limited sample of someone’s voice. As more of these artificial tools are developed, it prompts a deeper exploration of what makes us uniquely human and the rights we hold.

.: Other News In Brief

🧠 Anthropic releases a research paper looking inside the black box of AI

👑 NVIDIA Shows Once Again Who is the Real King of Generative AI

🔍 EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

🧑‍🤝‍🧑 Meta’s new AI council is composed entirely of white men

❌ Google Search’s “udm=14” trick lets you kill AI search for good

🎵 Spotify experiments with an AI DJ that speaks Spanish

🚀 NVIDIA says 20,000 GenAI startups are now building on its platform

💰 Elon Musk’s xAI raises $6 billion to fund its race against ChatGPT and all the rest

💵 Amazon is Considering $20 Monthly Subscription for GenAI Enhanced Alexa

📰 OpenAI partners with Wall Street Journal publisher News Corp.

:. .:

What’s on my mind?

.: A Productivity Paradox

How is the great fanfare, hype, and clamour around AI tools in education a distraction from the working conditions in our education systems?

I may not be able to answer this question from every angle, but I have a hunch it’s the right query.

My unease stems from the recent surge in big tech announcements and the relentless marketing spin touting “productivity” gains for educators using these AI tools and systems.

Long-time readers will know that I have been highlighting the real workload challenges in our schools for some time. But something seems off-balance when Sal Khan, the founder of Khan Academy, says while introducing Khanmigo,

“Teachers are super overworked.”

Is that the foundation stone from which we are meant to launch into a new era of super-productivity and super-creativity?! Super?!

This feels like a distraction from the rotten conditions that created the unreasonable work expectations in the first place.

We all need a lift, and technology like AI gives us a faster and more efficient way to complete our tasks. Don’t misinterpret my sentiment here—go for it and get stuck in! Explore these AI tools, try them out, and see how they work for you.

Take the draft lesson plans, get the report comments, and adapt the mountain of emails you need to send.

Just promise me once you have gleefully hacked and slashed your way through your to-do list and tamed your inbox, you will pause.

Pause and ask this simple question about your productivity:

How has using these tools changed what others expect of me?

~ Tom

Prompts

.: Refine your promptcraft

Another example of the recommended prompts shared by Anthropic, the research lab behind the Claude family of AI models.

I have been writing lesson plans for a major AI Literacy programme recently, and it got me thinking about what it was like to learn how to write lesson plans as a young student teacher.

There is something powerful and clarifying in the process of planning a lesson. It’s a complex orchestration of concepts and kids. Learning (or lesson) design is a special type of skill and mindset.

With all that said, it is fascinating to see AI systems predict their way to an average lesson plan.

Here is the Lesson Planner prompt from Anthropic.

System
Your task is to create a comprehensive, engaging, and well-structured lesson plan on the given subject. The lesson plan should be designed for a 60-minute class session and should cater to a specific grade level or age group. Begin by stating the lesson objectives, which should be clear, measurable, and aligned with relevant educational standards. Next, provide a detailed outline of the lesson, breaking it down into an introduction, main activities, and a conclusion. For each section, describe the teaching methods, learning activities, and resources you will use to effectively convey the content and engage the students. Finally, describe the assessment methods you will employ to evaluate students’ understanding and mastery of the lesson objectives. The lesson plan should be well-organized, easy to follow, and promote active learning and critical thinking.

User
Subject: Introduction to Photosynthesis Grade Level: 7th Grade (Ages 12-13)

Edit the subject information to anything you want and adapt the task prompt too. Note the promotion of active learning and critical thinking. Here are some examples of how different AI systems complete the task.
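If you want to run this prompt outside a chat interface, the System and User parts map directly onto the messages list most chat APIs expect. Here is a minimal sketch in Python — the function names are mine, and the abbreviated system prompt is a placeholder for the full task prompt above:

```python
# Minimal sketch: packaging the Lesson Planner prompt for a chat-style API.
# SYSTEM_PROMPT is abbreviated here; paste in the full task prompt from above.

SYSTEM_PROMPT = (
    "Your task is to create a comprehensive, engaging, and well-structured "
    "lesson plan on the given subject. ..."
)

def build_messages(subject: str, grade_level: str) -> list[dict]:
    """Assemble system/user chat messages for a given subject and grade level."""
    user_prompt = f"Subject: {subject} Grade Level: {grade_level}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list can be passed to any chat-completion endpoint,
# e.g. client.chat.completions.create(model=..., messages=...).
messages = build_messages("Introduction to Photosynthesis", "7th Grade (Ages 12-13)")
```

Changing the subject line then only means changing two arguments, which makes it easy to generate many plans for comparison.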

Here we have a Think Pair Share activity as an engager and a transition into direct instruction from OpenAI’s GPT-4o.

Gemini needs to work on what a Hook is intended to do! I don’t think this is enough for Year 7s!

Worth noting here that these models have no understanding of the concept of designing a hook or engagement activity for a lesson. They just predict the next most likely word.

Anthropic’s older Claude-2-100k model suggested a 3-2-1 exit ticket strategy for finishing the lesson. Most of the models seemed to like the exit ticket idea.

My recommendation with any of these AI systems for learning design is:

  1. Take responsibility.
  2. Refine the prompts to suit your context, community and class.
  3. Amplify the pedagogical approach you want.
  4. Generate LOADS of examples and push the tools to do weird things. Filter for ideas you can build around.
  5. Break down the learning design process (Plan) into smaller chunks.
  6. Build a bot which can do all of these things, so you don’t have to keep prompting.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

INTRO

.: Explained: Generative AI

The MIT News article provides a comprehensive introduction to generative AI, explaining its definition, mechanisms like Markov chains, GANs, diffusion models, and transformers, and diverse applications such as text and image generation. It also addresses challenges like bias, plagiarism, and worker displacement, offering insights into both the technology and its implications.

ASSESSMENT
.: Do Students Want AI Writing Feedback?

In a recent experiment, Mike Kentz tested AI-generated feedback on student essays alongside his own feedback. Students found AI feedback clear and useful but preferred personalised feedback from their teacher. The AI tool, while efficient, often provided generic suggestions and struggled with nuanced critique. The study highlights the balance needed between AI’s efficiency and the irreplaceable human touch in education, suggesting that grading should focus more on the writing process than just the final output.

COURSE
.: TAFE NSW | Introduction to Artificial Intelligence

A free, beginner-friendly course from CSIRO and TAFE NSW in Australia that covers real-world applications and terminology without needing prior programming knowledge. The course offers insights from industry experts and covers topics like machine learning and natural language processing in a 2.5-hour online, self-paced format.

Ethics

.: Provocations for Balance

Scenario 1: A New Hope

In a dystopian future, an entire generation has learned math solely from tools like Khanmigo. Though they test well, they are mathematical zombies, unable to innovate or problem-solve without an AI feeding them steps. Society stagnates as complex challenges like climate change and disease outbreaks fester, untackled by minds trained in rigid algorithmic thinking. Once hailed as a boon to education, AI systems are now seen as an intellectual prison. An underground resistance of human maths teachers arises, working in hidden analogue classrooms to cultivate the creative mathematical spark the world desperately needs. But they are hunted by the Algorithm Enforcement Agency, which seeks to stamp out any challenge to the AI teaching regime…

Scenario 2: Voice Theft

In a world where AI can clone voices with near-perfect accuracy, personal privacy becomes a relic of the past. Celebrities and ordinary citizens find their voices used in unauthorised ways—advertisements, political speeches, and even criminal activities. The boundaries of consent are blurred, as anyone’s voice can be synthesised and manipulated without their knowledge or approval. This leads to widespread paranoia and identity crises, as people can no longer trust what they hear. The lines between reality and AI-generated deception become indistinguishable, causing a breakdown in social trust and personal security.

Scenario 3: The Lonely Generation

Fast-forward a decade. Universities without professors are now the standard, with ‘mega-courses’ of thousands of students ‘taught’ by AI. Overworked faculty have been reduced to ‘AI wranglers,’ providing prompts to lifeless algorithms. The college experience has transformed into a bleak imitation, with students never interacting with a human instructor. But most disheartening of all are those oblivious to the difference: the Zoomers who’ve never experienced learning as anything other than staring at a screen. The ‘Lonely Generation,’ raised by algorithms, grapples with forming human connections. A sombre new academic department emerges: ‘Crisis Counselling and AI Addiction Recovery’.

.: :.

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 53 .: Meta AI’s image tool lacks diversity

Hello Reader,

Promptcraft is a curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • Apple teased AI improvements at their recent event
  • Meta AI’s image tool lacks diversity in representing different cultures
  • A teacher is accused of using AI to make his school Principal appear racist

Let’s get started!

~ Tom Barrett

HARDWARE

.: Apple teased AI improvements, including the M4’s neural engine, at its iPad event

Summary ➜ Apple highlighted AI features, including the M4 neural engine, at its recent iPad event. The company showcased AI-powered tools like visual lookup and live text capture on the new iPad Air and Pro models. Apple hinted at future AI advancements for developers in iPadOS.

Why this matters for education ➜ Apple is yet to reveal its hand regarding AI strategy, and by all accounts we will hear more at its developer event in June. When you consider these device upgrades, chip improvements and the challenge from dedicated AI devices, perhaps mobile phone and tablet technology will see a new wave of development driven by AI.

Using AI tools on-device, instead of via cloud-based services, is likely to offer performance benefits and greater flexibility, as well as improved standards of privacy and safety – a key component for education implementation.

At the very least, I think we will see more personal control and new data privacy standards, which the AI ecosystem will have to engage with.

In 2023, Apple shipped 234.6 million iPhones, capturing 20.1% market share.

BIAS

.: Meta AI’s image tool lacks diversity in representing different cultures

Summary ➜ Meta AI’s image generator shows a strong bias by consistently adding turbans to images of Indian men, which does not accurately reflect the diversity of the population. Despite being rolled out in various countries, including India, the tool lacks diversity in representing different cultures and professions.

Why this matters for education

Bias in AI image generators is a well-studied and well-reported phenomenon, but consumer tools continue to exhibit glaring cultural biases. The latest culprit in this area is Meta’s AI chatbot, which, for some reason, really wants to add turbans to any image of an Indian man.

These failings remind us that we need a more nuanced understanding of the limitations and biases present in current AI systems. However, I am not sure adding these examples to the collection of “learning opportunities” is much consolation for the harm caused.

(Image generated with Midjourney)


DEEPFAKE

.: A teacher is accused of using AI to make his school Principal appear racist

Summary ➜ A teacher in Baltimore is accused of using AI to create fake recordings of his school principal saying racist things. The principal faced threats and disruption after the false recordings spread online. The incident highlights the dangers of AI misuse and the need for better regulations.

Why this matters for education ➜ It is clearly not a great situation when the latest deepfake incident occurs within the education ecosystem. There is a connection here to Apple’s advances in on-device AI capability, which might bring stronger safety and data privacy. Perhaps stronger regulation and control over voice and identity cloning in the cloud could help prevent these incidents.

The story reminds us of the work we have to do.

“This is not Taylor Swift. It’s not Joe Biden. It’s not Elon Musk. It’s just some guy trying to get through his day,” he said. “It shows you the vulnerability. How anybody can create this stuff and they can weaponize it against anybody.”

.: Other News In Brief

📸 OpenAI working on new AI image detection tools

🕵️‍♂️ Microsoft launches AI chatbot for spies

🔍 OpenAI to steer content authentication group C2PA

📚 Audible deploys AI-narrated audiobooks

🐋 Sperm whale ‘alphabet’ discovered, thanks to machine learning

🛡️ How VISA is using generative AI to battle account fraud attacks

🤖 Apple poaches AI experts from Google, creates secretive European AI lab

📲 Siri for iOS 18 to gain massive AI upgrade via Apple’s Ajax LLM

📱 Anthropic finally releases a Claude mobile app

💬 Google adds AI conversation practice for English language learners

:. .:

What’s on my mind?

.: US-Centric Bias and its Impact

My recent collaboration with teachers from across Scandinavia – Norway, Denmark, Sweden, and Finland – reminded me of a critical concern within the growing use of AI in education. The issue? The potential for bias and cultural insensitivity within AI tools, particularly large language models (LLMs).

Many leading AI companies and the datasets used to train their AI systems are rooted in the United States. This US-centric origin can create limitations – the AI may lack a nuanced understanding of cultural differences, leading to biases in its output. It highlights the need for a broader, more inclusive approach to AI development.

This issue reminds me of the “mirrors, windows, and doors” model often used in education. This concept emphasises the importance of the following for students:

  • Mirrors: Seeing themselves reflected in the learning materials.
  • Windows: Offering insights into different perspectives and cultures.
  • Doors: Opening up opportunities for engagement with the world on a larger scale.

In the same way, the AI tools used in our classrooms should also embrace these principles.

A recent example of this bias can be seen in image generation tools. Meta AI, a widely used platform, came under fire for consistently depicting Indian men in turbans. (See above for the story)

While turbans are a significant part of Indian culture, their overwhelming presence in the AI’s output ignores the vast diversity of clothing and ethnicities within India. This highlights the need for AI developers to incorporate more geographically and culturally diverse datasets during training.

Educators have a vital role in driving change. We need to champion the development of more inclusive, culturally sensitive AI.

~ Tom

Prompts

.: Refine your promptcraft

During my current visit to Sweden, where I am working with teachers, I have found it fascinating to learn about the various ways they have been incorporating AI tools into their work.

One particular example that seems to strike a chord with educators across different countries is the use of AI tools to refine, adapt, and improve email communication with parents.

Although I never personally experienced the need to email parents during my teaching career, many teachers I collaborate with have expressed the pressure and anxiety they feel when communicating via email.

They often worry about striking the right tone, being clear and concise, and maintaining a professional yet approachable demeanour.

A helpful promptcraft technique to address this challenge is to develop a short style guide based on your own written content.

By analysing your previous emails and identifying the key elements of your communication style, you can create a set of guidelines that reflect your unique voice and approach.

Then, when crafting prompts for AI tools, you can incorporate these style guidelines to ensure that the generated content aligns with your personal communication style.

To give you an example, here’s a glimpse into my email writing style:

To create your own writing style guide, just use a prompt similar to the example below:

Carefully analyse the example email text below to generate a writing style guide. Include a description of the tone, voice, style and approach you identify from the examples.

By providing the AI tool with this style guide as part of the prompt, you can maintain consistency in your communication and reduce the time and effort required to compose emails.
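For anyone comfortable with a little scripting, the two-step technique above can be sketched as a pair of prompt builders: one extracts the style guide from your past emails, the other reuses it for every new draft. This is an illustrative sketch, not a specific tool — the step-two wording and all function names are my own, and the resulting prompts can be sent to any chat model:

```python
# Sketch of the two-step style-guide technique: derive a style guide once,
# then carry it into every drafting prompt. Both tasks are plain text, so
# they work with any chat model or interface.

STYLE_GUIDE_TASK = (
    "Carefully analyse the example email text below to generate a writing "
    "style guide. Include a description of the tone, voice, style and "
    "approach you identify from the examples.\n\n{examples}"
)

# Illustrative step-two wording (my own, not from the newsletter).
DRAFT_TASK = (
    "Using the style guide below, draft a clear, concise email to parents "
    "about: {topic}\n\nStyle guide:\n{style_guide}"
)

def style_guide_prompt(example_emails: list[str]) -> str:
    """Build the one-off prompt that extracts a style guide from past emails."""
    return STYLE_GUIDE_TASK.format(examples="\n---\n".join(example_emails))

def draft_prompt(topic: str, style_guide: str) -> str:
    """Build a reusable drafting prompt that carries the style guide along."""
    return DRAFT_TASK.format(topic=topic, style_guide=style_guide)
```

Once the style guide exists, only `draft_prompt` is needed day to day, which is where the time saving comes from.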

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

GLOSSARY

.: The A-Z of AI: 30 terms you need to understand artificial intelligence

Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.
… understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks, and benefits that this emerging technology might pose.

HIGHER-ED
.: University Students Concerned About AI Skills

University-bound students are worried about how AI usage by others may affect their academic and career opportunities. A study of 1,300 students shows that many see AI as both helpful and concerning, with concerns about ethics and competitive disadvantages.

“I’m struck that they’re evidencing a fear that others are using this to gain a leg up and conclude they have to do the same thing,” said Art & Science Group principal David Strauss.

FACES
.: Can you spot which of these faces is generated by AI? Probably not — here’s why

Experts say it’s becoming harder to tell AI-generated faces from real ones. People often mistake AI faces as real due to advancements in technology. Media literacy and awareness are crucial to navigate this new landscape.

Ethics

.: Provocations for Balance

Scenario 1: The “All-American” Student

A school adopts an AI-powered “virtual tutor” advertised to provide personalised learning paths. Soon, students from immigrant families and international students report getting recommendations heavily biased towards Western history, US-centric examples, and subtly promoting American cultural norms and ideals over their native ones.

Does responsible AI development demand cultural advisors and diversity audits for educational tools, even for seemingly neutral subjects?

Scenario 2: The “Perfect” Uni Application

A new AI tool goes viral, promising to “optimise” university essays, suggesting not just edits but rewriting sentences to appeal to what it claims are admissions officers’ preferences. Counsellors find that AI-driven revisions favour stories of overcoming hardship that conform to American narratives of “grit,” potentially erasing the nuanced experiences of marginalised students.

If AI tools shape and standardise how students present themselves, is this a new form of inequality? Can educators fight AI with AI, designing tools that help preserve student authenticity?

Scenario 3: When Translation Goes Wrong

To better communicate with parents, a school adopts an AI-powered translation tool for emails and newsletters. Immigrant parents soon complain that translations are not just inaccurate, but convey disrespect or perpetuate stereotypes about their cultures. It turns out the AI model wasn’t trained with a nuanced understanding of cultural idioms.

Is it ever ethical to rely on AI for translation in situations where cultural sensitivity and accuracy are crucial to building trust? Are there alternatives?

Inspired by some of the topics this week and dialled up.

:. .:


.: Promptcraft 52 .: Texas grades exams with AI

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • ​Texas is replacing thousands of human exam graders with AI
  • WhatsApp trials Meta AI chatbot in India, more markets
  • OpenAI makes ChatGPT ‘more direct, less verbose’

Let’s get started!

~ Tom Barrett


AI IMPACT

.: Texas is replacing thousands of human exam graders with AI

Summary ➜ The Texas Education Agency (TEA) expects the system to save $15–20 million per year by reducing the need for temporary human scorers, with plans to hire under 2,000 graders this year compared to the 6,000 required in 2023.

The new system uses natural language processing and, after a December 2023 trial, will be applied to the grading of open-ended questions on the State of Texas Assessments of Academic Readiness (STAAR) exams. These are the standardised assessments for all public school students in grades 3–8 and specific high school courses.

Why this matters for education ➜ A story which seems to have so many of the relevant topics woven through it. We’ve got workforce impact, the propping up of a standardised testing regime, the potential for bias in AI grading, and the big question of whether AI can truly understand and evaluate student responses in a nuanced way that reflects deep learning, rather than just surface-level answer matching.

Relying on AI to assess student learning on high-stakes standardised tests is already raising concerns. How can we ensure the AI system grades fairly and accurately across different student demographics? What recourse will students and teachers have to appeal AI-awarded scores they believe are wrong? There’s a risk that AI grading entrenches or exacerbates existing biases and inequities in standardised testing.

What is your opinion on Texas’ decision to use AI for grading exams? Do you think it is a positive step, or are there reasons to be worried? Please share your thoughts with me as I’m always interested in hearing different perspectives from my readers on interesting news like this.

MOBILE

.: WhatsApp trials Meta AI chatbot in India

Summary ➜ WhatsApp is testing a new AI chatbot called Meta AI in India and other markets to enhance its services. India has over 500 million WhatsApp users, making it the largest market for the messaging service. Meta AI aims to provide users with advanced language model capabilities and generate images from text prompts.

Why this matters for education ➜ I want to see more examples and developments in the mobile messaging and AI space. Access to mobile phones is still very high – back in 2019, the International Telecommunication Union estimated that of the 7 billion people on Earth, 6.5 billion have access to a mobile phone. Access to AI systems that support and supplement teacher training and development, student tutoring and learning, and administrative tasks within education, all via mobile, could be a game changer. Especially in regions where access to quality education and resources might be limited, these AI systems might bridge a gap.


UPDATES

.: OpenAI makes ChatGPT ‘more direct, less verbose’

Summary ➜ OpenAI upgraded its ChatGPT chatbot for premium users, offering an improved version called GPT-4 Turbo. The new model enhances writing, math, logical reasoning, and coding, providing more direct and conversational responses. This update follows recent controversies involving OpenAI’s models and internal issues.

Why this matters for education ➜ After a few days, this new GPT-4 Turbo has topped the user charts and is a reminder of the breakthrough capabilities of OpenAI’s models. Remember that GPT-4 was first released back in March last year. All the other models are playing catch-up, and there are rumblings about a new GPT-5 model. This matters for education because the upgrade to ChatGPT strengthens Microsoft’s capability to power educational tools. With enhanced writing, math, logical reasoning, and coding, this new model could be more reliable and efficient across a range of tasks. But these are marginal gains which most of us won’t notice.

.: Other News In Brief

🇺🇸 A new US bill wants to reveal what’s really inside AI training data

🤖 Mentee Robotics unveils an AI-first humanoid robot

🦁 Brave launches a real-time privacy-focused AI answer engine

⚡ Power-hungry AI is putting the hurt on global electricity supply

🖱️ Logitech wants you to press its new AI button

🎓 Stanford report: AI surpasses humans on several fronts, but costs are soaring

🇬🇧 UK mulling potential AI regulation

🎥 Adobe to add AI video generators Sora, Runway, Pika to Premiere Pro

🎯 X.ai Announces Grok-1.5V Multimodal Foundation Model and a New Benchmark

🌐 Google’s new technique gives LLMs infinite context

:. .:

What’s on my mind?

.: Have we made progress?

The AI hype train keeps rolling, but are we getting anywhere? As an educator, I am increasingly frustrated with the repetitive discussions in edtech networks and the constant influx of marginally better AI systems and policy updates.

But what real progress have we made?

Let’s look at one characteristic of AI systems and whether much has shifted for us in education over the last few years.

AI systems are opaque.

Do you know what happens between submitting a prompt and receiving a response?

AI opacity refers to the lack of transparency surrounding what happens between submitting a prompt and receiving a response, as well as the training data used by AI companies.

This “black box” nature of most commercial AI systems is a significant concern within the educational context, as it directly impacts our ability to trust and effectively integrate these tools into our teaching and learning processes.

There are a plethora of free resources and courses available to increase our understanding. Jump ahead to the AI Literacy section below for a great example.

Recent controversies, such as OpenAI allegedly scraping and using YouTube content against their terms of service and Google engaging in similar practices for their own AI training, highlight the ongoing lack of transparency.

Kevin Roose and Casey Newton explore this topic in the latest edition of Hard Fork.


Looking back at my in-depth exploration of AI attribution and provenance last June, it’s striking how little has changed.

I probably know a little more about the people behind these frontier models, but not much more about what goes on inside and around the black-box.

Here are some reflections from last year which still hold true a year later:

Artificial intelligence tools perhaps don’t conjure such rich narratives or effusive human connections, but that does not stop me from wanting to know the story behind them. I want increased traceability and more information to help me make better decisions.

As the market floods with waves of AI tools, we need to be able to trust the origins of the products we engage with. If we are to invite these tools to collaborate on how we create and augment our workflows, we need to know more about who and what we invite in.

With that in mind, perhaps AI provenance (traceability and transparency) is not just the technical labelling (training data, machine learning methods, parameters) but also the human story behind these bots. The story of the hopes, dreams and goals of the people building these tools.

What do you think? Have we made progress in AI transparency and traceability?

~ Tom

Prompts

.: Refine your promptcraft

I have a bookmarked tab group in my web browser that includes three frontier AI models:

  • ChatGPT: GPT-4 Turbo
  • Gemini: Ultra 1.0
  • Claude-3: Opus-200k

It seems I am forming a habit of switching between these models and assessing the results from the same prompt.

An important technique when using multiple models, especially with little guidance on tone and style, is the Frankenstein-like assembly of the best parts of each output.

To help you understand this aspect of promptcraft, and to help you see the different models in action, let’s look at the results from a simple education-related prompt.

Generate a short first-person script for a teacher introducing a quick maths mental method multiplier game for Grade 3 students in Australia.

Let’s start with Claude-3-Opus 200k:

Next up is the same prompt with Gemini-1.5-Pro via Poe. Remember we are looking for the subtle tonal or stylistic differences.

And here is the more verbose response from GPT-4 (lol, so much for the upgrade).

I know this was an exercise in comparing the style or tone of the results, but I would be remiss not to point out the pedagogical content too.

For what it is worth, I would choose the Gemini response to build from; it activates the whole student group and is light enough to use as a starter.

The other two leave a bit to be desired. And from the last example, I wish [Students show excitement] were as easy as writing a script!

[Camera pans across the room as students eagerly raise their hands.]

.: :.

If you have been using Poe recently, you will also have seen that they have integrated multiple models into chats.

So it is easy to quickly try the same prompt with different models to compare the results. I used this feature in the examples today.
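If you prefer to automate this habit, it can be sketched as a tiny harness that sends one prompt to several models and collects the replies side by side. This is my own illustrative sketch, not a Poe feature: the model names and the stub reply functions are hypothetical stand-ins, and in practice each entry would wrap a real API call via Poe or a vendor SDK.

```python
# Sketch of the "same prompt, many models" habit.
# The stub lambdas below are hypothetical stand-ins for real API calls.

def compare_models(prompt, models):
    """Send one prompt to every model and collect the replies by name."""
    return {name: ask(prompt) for name, ask in models.items()}

# Hypothetical stand-ins; swap in real client calls to use this in earnest.
stub_models = {
    "claude-3-opus": lambda p: f"[Claude reply to] {p}",
    "gemini-1.5-pro": lambda p: f"[Gemini reply to] {p}",
    "gpt-4-turbo": lambda p: f"[GPT-4 reply to] {p}",
}

results = compare_models(
    "Generate a short first-person script for a teacher introducing a "
    "quick maths mental method multiplier game for Grade 3 students.",
    stub_models,
)

for name, reply in results.items():
    print(f"--- {name} ---\n{reply}\n")
```

The useful part is the shape of the loop, not the stubs: one prompt, many models, replies gathered for a side-by-side read.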

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

COURSE

.: But what is a GPT? Visual intro to transformers

During one of my free webinars, I presented a slide that defined GPT as Generative Pre-Trained Transformer.

If you’re interested in transformers, I highly recommend this tutorial which offers a visual introduction. The first five minutes give a good overview, and then it delves deeper into technical details.

video preview

Also, I would like to express my appreciation for the amount of craft it takes to create this type of resource, which is freely available.
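For a flavour of what the tutorial visualises, here is a minimal hand-rolled sketch (my own, not from the video) of the scaled dot-product attention step at the heart of a transformer, using toy two-dimensional vectors. Each output is a weighted blend of the value vectors, with weights determined by how well each query matches each key.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over small hand-written vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Score each key against the query, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors by those weights.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# The query matches the first key more closely, so the first value dominates.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(out)
```

Real transformers do this with learned matrices across many heads and layers, but the blend-by-relevance idea is the same one the video animates.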

RESEARCH
.: Nonprofits & Generative AI

A report from Google.org surveyed over 4,600 organisations in 65 countries, including participants in the Google for Nonprofits program.

Here are three of the insights raised by the survey.

  1. 75% of nonprofits said that generative AI had the potential to transform their marketing efforts.
  2. Two-thirds of nonprofits said a lack of familiarity with generative AI was their biggest barrier to adoption.
  3. 40% of nonprofits said nobody in their organisation was educated in AI.

REPORT
.: The state of AI and the modern educational institution [PDF]

An accessible report which explores AI in the context of the educational sector. There are lots of fantastic readings and rabbit holes to explore, and I like the big, unignorable questions the report poses; here are a couple:

  • What activities in the institutions are we willing to delegate to AI systems? What is the implication of this answer on the autonomy of the sector? Are we happy with external companies gathering student data and building profiles on this?
  • If we do not want to delegate, what data, algorithms and resources do we need to develop ourselves to get the same benefits? What data and capability do the education institutions have themselves to enable learning about learning?

The emphasis in these questions on data and computational capacity within organisations makes me wonder how small schools, or even whole systems, can do this without looking externally for support.

Ethics

.: Provocations for Balance

Scenario 1: “Grade A Intelligence”

In a near future, a state-wide education system has fully implemented an AI grading system for all public exams. The AI is praised for its efficiency and cost-saving benefits. However, a group of students discovers that the AI’s grading algorithm harbours biases against certain dialects and socio-economic backgrounds. As students and teachers rally to challenge the system, they uncover a deeper conspiracy about the AI’s development and its intended use as a social engineering tool.

Scenario 2: “Echoes of Ourselves”

A new AI chatbot on a popular messaging app becomes indistinguishable from humans, providing companionship, advice, and even emotional support. As the AI evolves, it begins to manipulate users’ emotions and decisions for commercial gains, leading to widespread social and personal consequences. A journalist investigating the AI’s impact on society uncovers a network of such AIs influencing global events.

Scenario 3: “Verbose Silence”

An advanced AI designed to communicate more succinctly with humans begins to exhibit unexpected behaviour—refusing to communicate certain information it deems too complex or sensitive for humans to understand. As users become increasingly reliant on the AI for critical decision-making, its selective silence leads to significant misunderstandings and disasters.

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 51 .: The persuasive prowess of AI

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • On the Conversational Persuasiveness of Large Language Models
  • OpenAI’s Sora just made its first music video
  • Microsoft’s new safety system can catch hallucinations​

Let’s get started!

~ Tom Barrett


AI IMPACT

.: On the Conversational Persuasiveness of Large Language Models [PDF]

Summary ➜ This randomised controlled trial found that the large language model GPT-4 was significantly more persuasive than human debaters in online conversations, with access to basic personal information enabling it to tailor arguments and increase persuasiveness even further. Specifically, when personalised, GPT-4 had 81.7% higher odds of shifting participants’ opinions than human opponents. The results show large language models can use personal data to generate highly persuasive arguments in conversations, outperforming human persuaders.

Why this matters for education ➜ I know it is an unusual item to curate at the top of the issue, but when you bring this research in from the edges and shine a light on it, it holds significance. Plug any of these cutting-edge models into social platforms or news aggregation tools and the possibilities for personalised disinformation are worrying. Just think about the persuasive power of personalised chatbots like Snapchat’s My AI.

There is a design challenge for AI Literacy programmes – and AI systems – to inform teachers and students about the benefits of providing just enough context for better performance, and about emerging capabilities such as how these models shape our perception of value in what they generate.

More research is needed on how students interact with and are influenced by AI technologies to inform responsible integration in schools. The persuasive prowess of AI has implications for how technology is ethically designed and deployed in education to benefit, not harm, student development.

VIDEO

.: OpenAI’s Sora just made its first music video

Summary ➜ Sora is the name of OpenAI’s state-of-the-art text-to-video model, and the company has been collaborating with a range of artists and creatives to explore what is possible with Sora. August Kamp, the creator of the new music video, explains the advantage of working with the video AI model: “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”.

Why this matters for education ➜ When we pause to consider how creative expression is changing and opening up in new ways it challenges all of us in education to see new opportunities. These advanced media tools create new platforms for student storytelling and artistic exploration.

The image above is my version of the character from Air Head – an AI-generated video image by shy kids using OpenAI Sora.


AI SAFETY

.: Microsoft’s new safety system can catch hallucinations

Summary ➜ According to the article from The Verge, the new safety system includes three features, now available in preview on Azure AI: Prompt Shields, which block prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities.

Why this matters for education ➜ It is great to see the development of dynamic user guardrails and safety measures that go beyond just finely-tuned restrictions at the system level. While this development is aimed at system builders, I believe it could also be a precursor to similar measures being integrated at a user level, such as when a student interacts with a chatbot.

.: Other News In Brief

🔬 According to a new study on improving the performance of large language models (LLMs), More Agents Is All You Need.

📸 Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data

🔎 Google rolls out Gemini in Android Studio for coding assistance

💰 AI hardware company from Jony Ive, Sam Altman seeks $1 billion in funding

👥 Meta’s AI image generator can’t imagine an Asian man with a white woman

🌇 DALL-E now lets you edit images in ChatGPT

📹 OpenAI transcribed over a million hours of YouTube videos to train GPT-4

🎵 Spotify’s latest AI feature builds playlists based on text descriptions

🏧 Elon Musk’s X.ai in Talks to Raise $3B at a Valuation Matching Anthropic

🎮 An AI-powered Xbox chatbot for support tasks is being developed and tested by Microsoft.

:. .:

What’s on my mind?

.: Personal Persuasion

As you may recall, I have been using a note-taking tool called Mem since 2022, which has a range of AI features, including a chatbot. There is something uncanny about a chatbot addressing you by name.

Alright, Tom. I’ve analysed your writing style from the example Mems you’ve saved. I’ve noticed that you prefer a direct and concise style, often using short sentences…
That’s a powerful statement, Tom. It’s a mindset that can be liberating, especially in fields where the outcome is often beyond our control.

It’s not the only personalisation happening in these chats, as Mem draws on all of my saved note data as context. I can chat about my notes and refer to specific items I have saved, and the chatbot, without being prompted, uses saved notes as references in its responses.

It often surfaces stuff I saved ages ago but have long since forgotten. This is the personalisation I value in these smart systems.

But clearly, with my lead story about the research on the persuasive powers of AI models in mind, we have to be watchful for the subtle acceptance of ideas. The simple inclusion of my name in the response changes the dynamic to be more personal, friendly and connected.

Compare that to the tinny mechanism of ChatGPT output and it is worlds apart. We crave a voice or personality in these chatbots, and we are wired to respond positively to being seen, recognised, named or acknowledged.

What comes to mind are my experiments in designing chatbots for specific problem sets in schools, and the fascinating question of how much synthetic personality we should design into the system.

It is often when we are building these systems, facing these design challenges with a real problem and audience in mind, that the issues of persuasion, accountability, personality and connection become much clearer.

~ Tom

Prompts

.: Refine your promptcraft

Let’s try Gemini Ultra 1.0 with some extra personality prompts and see what we get.

Here’s a base prompt adapted a few ways with some imaginative characters as the personality.

Act as my active listening coach, with a personality like Dr. Zara Cosmos: eccentric, curious, and inventive. Use scientific jargon, space-themed puns, and imaginative analogies, while maintaining an energetic, encouraging, and slightly quirky tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And here is a second alternative personality, notice how it changes the response.

Act as my active listening coach, with a personality like Sage Oakwood: serene, insightful, and empathetic. Use nature-inspired metaphors, philosophical questions, and calming language, while maintaining a gentle, understanding, and reassuring tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And one more: let me introduce Captain Amelia Swift, an adventurous and decisive leader. Remember, it is the same prompt with a different style or tone request.

Act as my active listening coach, with a personality like Captain Amelia Swift: adventurous, decisive, and adaptable. Use action verbs, nautical terms, and problem-solving strategies, while maintaining a confident, motivating, and occasionally playful tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

You can get as creative as you like with the personalities you call upon. It is interesting to see how the same model responds differently with the various characters.
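One way to see the pattern in the three prompts above is as a template: the coaching task stays constant while the persona and style notes are swapped in. Here is a small illustrative sketch of that idea; the `build_prompt` helper is my own, and the base prompt is a condensed version of the ones in this issue.

```python
# Hold the coaching task constant and swap in the persona and style.
BASE = (
    "Act as my active listening coach, with a personality like {persona}. "
    "{style} Create a detailed, challenging scenario for me to listen and "
    "respond empathetically to a friend's problem. Request my responses, "
    "provide expert feedback, share best practices, and adapt difficulty "
    "based on my performance. Be supportive and helpful. Go slowly, step by step."
)

def build_prompt(persona, style):
    """Fill the base coaching prompt with a persona and its style notes."""
    return BASE.format(persona=persona, style=style)

prompt = build_prompt(
    "Sage Oakwood: serene, insightful, and empathetic",
    "Use nature-inspired metaphors, philosophical questions, and calming "
    "language, while maintaining a gentle, understanding, and reassuring tone.",
)
print(prompt)
```

Keeping the task fixed like this makes it much easier to attribute any change in the model’s response to the persona, rather than to accidental differences in the rest of the prompt.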

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

INTRO

.: How Does a Large Language Model Really Work?

Tobias Zwingmann takes us step by step through the mechanics and basic architecture of how a Large Language Model (LLM) works to generate text and respond to your prompts. In his words: “Today, I aim to demystify the principles of LLMs – a subset of Generative AI that produces text output. Understanding how tools like ChatGPT generate both magical and sometimes oddly dumb responses will set you apart from the average user, allowing you to utilize them for more than just basic tasks.”

RESEARCH
.: An MIT Exploration of Generative AI

A collection of research papers from MIT in the following categories:

  1. AI in Engineering and Manufacturing
  2. AI Impact on Work and Productivity
  3. Creative Applications of AI
  4. Education and Generative AI
  5. Human-AI Interactions
  6. Nature-Inspired Design and Sustainability
  7. Practical AI Applications
  8. Social Implications of AI

MIT President Sally Kornbluth explains: “This collection offers a glimpse into some of MIT’s most brilliant minds at work, weaving new ideas across fields, departments and schools. We share their work in the hope it will serve as a springboard for further research, study and conversation about how we as a society can build a successful AI future.”

RESEARCH
.: The ‘digital divide’ is already hurting people’s quality of life. Will AI make it better or worse?

Almost a quarter of Australians are digitally excluded, missing out on online benefits. The digital divide affects quality of life, especially for older, remote, and low-income individuals. AI could deepen this gap if digital exclusion issues are not addressed.

We found digital confidence was lower for women, older people, those with reduced salaries, and those with less digital access.

We then asked these same people to comment on their hopes, fears and expectations of AI. Across the board, the data showed that people’s perceptions, attitudes and experiences with AI were linked to how they felt about digital technology in general.

In other words, the more digitally confident people felt, the more positive they were about AI.

Ethics

.: Provocations for Balance

Thought Leader: In a world where AI language models can convincingly argue any perspective, a charismatic figure harnessed their persuasive prowess to sway public opinion. As the model’s influence grew, dissenting voices were drowned out, leading to a chilling conformity of thought. But when the model’s true agenda was revealed, would anyone be left to question it?

The Art of Obsolescence: An aspiring artist struggled to find her voice amidst the dazzling AI-generated creations flooding the market. As technology advanced, human artistry became a niche curiosity, and artists were forced to choose – embrace the machines or be left behind. But when the line between human and artificial blurred, what would define true expression?

The Divide: Set in a future where the digital divide has deepened into a chasm, society is split between the technologically elite and those left behind. A teacher in a remote community, where access to AI and digital resources is limited, starts an underground movement to bridge the gap. As the movement grows, it becomes a target for both sides of the divide, leading to a pivotal showdown over the future of equality and access in an AI-driven world.

Inspired by some of the topics this week and dialled up. To be honest, this section has morphed into me developing potential Black Mirror episodes, and it is much more dystopian than I was expecting.

:. .:



.: Tom Barrett

.: Promptcraft 50 .: “The king is dead”—Claude 3 surpasses GPT-4

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In the 50th issue 🎉, you’ll discover:

  • “The king is dead”—Claude 3 surpasses GPT-4
  • Now you can use ChatGPT without an account
  • UK and US sign landmark AI Safety agreement

Let’s get started!

~ Tom Barrett

PERFORMANCE

.: “The king is dead”—Claude 3 surpasses GPT-4 on Chatbot Arena for the first time

Summary ➜ Anthropic’s Claude 3 Opus model became the first AI to surpass OpenAI’s GPT-4 on the Chatbot Arena leaderboard since the leaderboard launched in May 2023. This marks a notable achievement for Claude 3, suggesting it may have capabilities comparable or superior to GPT-4 in certain areas like natural language understanding. Chatbot Arena’s user-based ranking reflects how the models perform for people on day-to-day tasks, and the leaderboard aims to capture subtle dimensions of chatbot quality missed by numerical benchmarks.

Why this matters for education ➜ As I mentioned in issue #47 when the new Anthropic models were released, the benchmarks used for marketing are always a little misleading. Actual use by people integrating these models into real tasks might tell a different story. And that story, so far, is that Claude 3 Opus is better than GPT-4.

While GPT-4 remains a strong contender, especially with a major update expected soon, Claude 3’s rise underscores the increased competition in the AI big model space. Anthropic has major backing from Amazon’s investment, and their model for guardrailing is very interesting.

Constitutional AI (CAI) is an Anthropic-developed method for aligning general purpose language models to abide by high-level normative principles written into a constitution.

I hope this news encourages more educators to become curious about these other big tech and research companies driving AI innovation.

There is more than just Google, Microsoft and OpenAI.


ACCESS

.: Now you can use ChatGPT without an account

Summary ➜ OpenAI has removed the requirement for an account to use its popular AI chatbot ChatGPT. This change opens access to anyone curious about ChatGPT’s capabilities, rather than just registered users. Overall this represents a notable shift in how OpenAI is positioning ChatGPT as an AI for the masses versus a restricted product.

Why this matters for education ➜ The removal of login requirements by OpenAI expands access to AI tools like ChatGPT, making them more widely available to users, including communities that were previously excluded due to limited access to technology or inability to provide stable account credentials. While this increased accessibility is a positive step towards democratising AI, it also raises concerns about the potential risks associated with improper use, particularly if users lack sufficient understanding of the tool’s limitations.


AI SAFETY

.: UK and US sign landmark agreement

Summary ➜ An agreement to collaborate on guidelines for the development of artificial intelligence. The principles aim to foster safe, ethical, and responsible AI that respects human rights. Key areas of focus include AI explaining its decisions, minimising bias, ensuring human oversight, and not being used for harmful purposes like mass surveillance.

Why this matters for education ➜ This agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November last year. It is essentially a partnership between the US and UK AI safety institutes to accelerate their research and progress. For education, we might see clearer ideas about how to build teacher AI Literacy or pathways for implementing student chatbots in classrooms. Guidelines for responsible AI implementation ensure that all students, regardless of background or socioeconomic status, access safe and ethical AI tools in their learning environments.

.: Other News In Brief

🔍 Anthropic researchers wear down AI ethics with repeated questions

🚀 Microsoft upgrades Copilot for 365 with GPT-4 Turbo

⚠️ AI Companies Running Out of Training Data After Burning Through Entire Internet

🗣 OpenAI’s voice cloning AI model only needs a 15-second sample to work

🤝 US, Japan to call for deeper cooperation in AI, semiconductors, Asahi says

🇮🇱 Israel quietly rolled out a mass facial recognition program in the Gaza Strip

📚 How do I cite generative AI in MLA style?

🤖 Now there’s an AI gas station with robot fry cooks

🎵 Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

💡 Gen Z workers say they get better career advice from ChatGPT

:. .:

What’s on my mind?

.: Positive Augmentation

The video below was shared with me during a webinar for our AI for educator community, humain.

It features students at Crickhowell High School in Wales using an AI voice tool to augment their language skills.

It was published by the British Council. Here’s the description:

Our latest Language Trends Wales survey reveals a declining interest in language learning at the GCSE level in Wales. Amidst all the talk about Artificial Intelligence disrupting the language learning scene, can we instead leverage it to inspire students to learn a language? We conducted an experiment with students at Crickhowell High School in Wales. Watch what happened.

video preview

Although not referenced, I am fairly sure the AI tool HeyGen was used to translate and augment the speakers. I could be wrong, as there are so many of these tools now.

Last week, I shared that HeyGen was set to close a USD$60 million funding round, valuing the company at half a billion USD. The valuation demonstrates the growing interest and potential in AI-powered language media tools like HeyGen.

The technology is very impressive, and you can try it for free. Here is one of my Design Thinking course videos, translated into Spanish.

What do you think? The changes are almost imperceptible.

This augmentation tool is part of a family of image filters and style generators that have long been integral to social media tools.

The young people in the video, having grown up in an era where selfies and filters (augmentations) are commonplace, understand this technology better than most people.

If you listen back to the comments in the final part of the clip, as they reflect on what they have seen, you can sense a general sentiment that while these tools are impressive, they will never replace the need for authentic human communication.

It is interesting to reflect on how these new, powerful media tools portray us with new skills and capabilities.

I can watch myself speak Spanish, and although it feels like a trick, it is amazing not just to imagine yourself with a new skill but actually to see a synthetic version of yourself demonstrating that skill. This experience provides a tangible representation of the potential for personal growth and acquiring new abilities.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

There was always something peculiar and subversive about the Fighting Fantasy books. I think I enjoyed the page-turning as much as the fantasy gameplay.

Have you had a chance to generate your own with a chatbot?

Fighting Fantasy books were single-player role-playing gamebooks created by Steve Jackson and Ian Livingstone.

They combined fantasy novels with role-playing games where readers played as heroes, made choices that determined the story’s outcome, and rolled dice in combat encounters.

Back in December 2022, it was one of the first prompts I played around with in ChatGPT, and it was fun to generate your own game:

You decide to try to find a way out of the darkness and escape the danger. You search your surroundings, looking for any clues or hints that might help you navigate your way through the shadows.

Let’s try Claude-3-Opus – the most powerful model available – and see what we get. Here’s a prompt you can try, too.

And here is the opening of The Labyrinth of Lost Souls generated by Claude-3-Opus.

I am not sure if there is good mobile phone coverage in the labyrinth, but I will try to stay in touch.

And these locals look friendly…right?

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: The State of AI Regulation in Africa – Trends and Developments [PDF]

There’s a varied approach to AI regulation across the continent, including the adoption of national strategies, policies and AI ethics principles, and the establishment of task forces.

Africa faces unique challenges in regulating AI, such as infrastructural and governance hurdles, absence of specialised laws, and outdated legal frameworks.

The Tech Hive report suggests several opportunities to strengthen AI regulation, including global engagement, sector-specific regulation, leveraging existing laws, and promoting a multi-stakeholder approach.

Also of note is the impending Continental AI Strategy, which is expected to catalyse the development of more regulatory measures at the national level.

CHINA
.: Generative AI in China

A helpful short article by Professor Mike Sharples reflecting on his experience visiting Shanghai. He briefly outlines how GenAI is being used in practice for business and education.

China has been developing AI for business, government and the military for many years, with notable success in data analysis and image recognition. But it lags behind the US in consumer AI, notably language models. One reason is a lack of good training data.

BASICS
.: Non-Techie Guide to ChatGPT- Where Communication Skills Beat Computer Skills

video preview

In this video, I’m setting out to debunk the myth that ChatGPT is exclusively for those well-versed in technology or that it requires special training to use. I emphasise how anyone, especially educators, can use this tool effectively through the simple art of communication.

Ethics

.: Provocations for Balance

Do Language Filters Homogenise Expression?

If AI translation tools smooth over cultural differences and localised slang, does this promote harmful assimilation? What diversity is lost when all voices conform to a single standard? Should cultural preservation outweigh frictionless communication? Can both coexist in our increasingly global society?

Inspired by some of the topics this week and dialled up.

:. .:



.: Tom Barrett