.: Promptcraft 51 .: The persuasive prowess of AI

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • On the Conversational Persuasiveness of Large Language Models
  • OpenAI’s Sora just made its first music video
  • Microsoft’s new safety system can catch hallucinations

Let’s get started!

~ Tom Barrett


AI IMPACT

.: On the Conversational Persuasiveness of Large Language Models [PDF]

Summary ➜ This randomised controlled trial found that the large language model GPT-4 was significantly more persuasive than human debaters in online conversations, with access to basic personal information enabling it to tailor arguments and increase persuasiveness even further. Specifically, when personalised, GPT-4 had 81.7% higher odds of shifting participants’ opinions than human opponents. The results show large language models can use personal data to generate highly persuasive arguments in conversations, outperforming human persuaders.
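It is easy to misread “81.7% higher odds” as an 81.7-point jump in probability, which it is not. Here is a quick sketch of the arithmetic; the 50% baseline is my illustrative assumption, not a figure from the paper:

```python
def shifted_probability(p_base, odds_ratio):
    """Convert a baseline probability plus an odds ratio into a new probability."""
    odds = p_base / (1 - p_base)      # baseline odds
    new_odds = odds * odds_ratio      # apply the uplift (81.7% higher = ratio of 1.817)
    return new_odds / (1 + new_odds)  # convert back to a probability

# Hypothetical baseline: a human persuader shifts opinion 50% of the time.
p_new = shifted_probability(0.50, 1.817)
print(f"{p_new:.1%}")  # roughly 64.5%
```

So under that assumed baseline, the personalised model shifts opinions roughly 64.5% of the time rather than 50% – a substantial but not overwhelming-sounding gap, which is exactly why odds-ratio claims deserve a second look.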

Why this matters for education ➜ I know it is an unusual item to curate at the top of the issue, but bring this research in from the edges, shine a light on it, and its significance becomes clear. Plug any of these cutting-edge models into social platforms or news aggregation tools and the possibilities for personalised disinformation are worrying. Just think about the persuasive power of personalised chatbots like Snapchat’s MyAI.

There is a design challenge here for AI Literacy programmes – and for AI systems themselves – to help teachers and students understand both the benefits of providing just enough context for better performance, and emerging capabilities such as how these systems shape our perception of value in what they generate.

More research is needed on how students interact with and are influenced by AI technologies to inform responsible integration in schools. The persuasive prowess of AI has implications for how technology is ethically designed and deployed in education to benefit, not harm, student development.

VIDEO

.: OpenAI’s Sora just made its first music video

Summary ➜ Sora is the name of OpenAI’s state-of-the-art text-to-video model, and the company has been collaborating with a range of artists and creatives to explore what is possible with it. August Kamp, the creator of the new music video, explains the advantage of working with the video AI model: “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”.

Why this matters for education ➜ When we pause to consider how creative expression is changing and opening up in new ways it challenges all of us in education to see new opportunities. These advanced media tools create new platforms for student storytelling and artistic exploration.

The image above is my version of the character from Air Head – an AI-generated video image by shy kids using OpenAI Sora.


AI SAFETY

.: Microsoft’s new safety system can catch hallucinations

Summary ➜ According to the article from The Verge, the new safety system includes three features, now available in preview on Azure AI: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities.

Why this matters for education ➜ It is great to see the development of dynamic user guardrails and safety measures that go beyond just finely-tuned restrictions at the system level. While this development is aimed at system builders, I believe it could also be a precursor to similar measures being integrated at a user level, such as when a student interacts with a chatbot.

.: Other News In Brief

🔬 According to a study on improving the performance of large language models (LLMs), More Agents Is All You Need

📸 Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data

🔎 Google rolls out Gemini in Android Studio for coding assistance

💰 AI hardware company from Jony Ive, Sam Altman seeks $1 billion in funding

👥 Meta’s AI image generator can’t imagine an Asian man with a white woman

🌇 DALL-E now lets you edit images in ChatGPT

📹 OpenAI transcribed over a million hours of YouTube videos to train GPT-4

🎵 Spotify’s latest AI feature builds playlists based on text descriptions

🏧 Elon Musk’s X.ai in Talks to Raise $3B at a Valuation Matching Anthropic

🎮 An AI-powered Xbox chatbot for support tasks is being developed and tested by Microsoft.

:. .:

What’s on my mind?

.: Personal Persuasion

As you may recall, I have been using a note-taking tool called Mem since 2022, which has a range of AI features, including a chatbot. There is something uncanny about a chatbot addressing you by name.

Alright, Tom. I’ve analysed your writing style from the example Mems you’ve saved. I’ve noticed that you prefer a direct and concise style, often using short sentences…
That’s a powerful statement, Tom. It’s a mindset that can be liberating, especially in fields where the outcome is often beyond our control.

It’s not the only personalisation happening in these chats, as Mem draws on all of my saved note data as context. I can chat about my notes and refer to specific items I have saved, and the chatbot, without being prompted, uses saved notes as references in its responses.

It often surfaces stuff I saved ages ago but have long since forgotten. This is the personalisation I value in these smart systems.

But clearly, with my lead story about the research on the persuasive powers of AI models in mind, we have to be watchful for the subtle acceptance of ideas. The simple inclusion of my name in the response changes the dynamic to be more personal, friendly and connected.

Compare that to the tinny mechanism of ChatGPT output and it is worlds apart. We crave a voice or personality in these chatbots, and we are wired to respond positively to being seen, recognised, named or acknowledged.

What comes to mind are my experiments in designing chatbots for specific problem sets in schools, and the fascinating question of how much synthetic personality we should design into the system.

It is often when we are building these systems and facing these design challenges with a real problem and audience in mind that the issues of persuasion, accountability, personality and connection become much clearer.

~ Tom

Prompts

.: Refine your promptcraft

Let’s try Gemini Ultra 1.0 with some extra personality prompts and see what we get.

Here’s a base prompt adapted a few ways with some imaginative characters as the personality.

Act as my active listening coach, with a personality like Dr. Zara Cosmos: eccentric, curious, and inventive. Use scientific jargon, space-themed puns, and imaginative analogies, while maintaining an energetic, encouraging, and slightly quirky tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And here is a second alternative personality, notice how it changes the response.

Act as my active listening coach, with a personality like Sage Oakwood: serene, insightful, and empathetic. Use nature-inspired metaphors, philosophical questions, and calming language, while maintaining a gentle, understanding, and reassuring tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And one more: let me introduce Captain Amelia Swift, an adventurous and decisive leader. Remember, it is the same prompt with a different style or tone request.

Act as my active listening coach, with a personality like Captain Amelia Swift: adventurous, decisive, and adaptable. Use action verbs, nautical terms, and problem-solving strategies, while maintaining a confident, motivating, and occasionally playful tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

You can get as creative as you like with the personalities you call upon. It is interesting to see how the same model responds differently with the various characters.
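The three prompts above differ only in the persona clause and the style instruction, so one way to experiment quickly is to treat those as parameters. A minimal sketch – the function name and the decision to split the prompt into exactly these two slots are mine, not part of any tool:

```python
def coaching_prompt(persona, traits, style_guide):
    """Build the active-listening coach prompt with a swappable personality."""
    return (
        f"Act as my active listening coach, with a personality like {persona}: "
        f"{traits}. Use {style_guide}. "
        "Create a detailed, challenging scenario for me to listen and respond "
        "empathetically to a friend's problem. Request my responses, provide "
        "expert feedback, share best practices, and adapt difficulty based on "
        "my performance. Encourage reflection with questions about my strengths, "
        "weaknesses, reasoning, and areas for improvement. Be supportive and "
        "helpful. Go slowly, step by step."
    )

# Swapping in a new character is then a one-line change.
print(coaching_prompt(
    "Captain Amelia Swift",
    "adventurous, decisive, and adaptable",
    "action verbs, nautical terms, and problem-solving strategies, while "
    "maintaining a confident, motivating, and occasionally playful tone",
))
```

Keeping the coaching instructions fixed while only the persona varies also makes any comparison fairer: differences in the responses can be attributed to the character, not to accidental wording changes.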

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

INTRO

.: How Does a Large Language Model Really Work?

Tobias Zwingmann takes us step by step through the mechanics and basic architecture of how a Large Language Model (LLM) works to generate text and respond to your prompts. As he puts it: “Today, I aim to demystify the principles of LLMs – a subset of Generative AI that produces text output. Understanding how tools like ChatGPT generate both magical and sometimes oddly dumb responses will set you apart from the average user, allowing you to utilize them for more than just basic tasks.”

RESEARCH
.: An MIT Exploration of Generative AI

A collection of research papers from MIT in the following categories:

  1. AI in Engineering and Manufacturing
  2. AI Impact on Work and Productivity
  3. Creative Applications of AI
  4. Education and Generative AI
  5. Human-AI Interactions
  6. Nature-Inspired Design and Sustainability
  7. Practical AI Applications
  8. Social Implications of AI

MIT President Sally Kornbluth explains: “This collection offers a glimpse into some of MIT’s most brilliant minds at work, weaving new ideas across fields, departments and schools. We share their work in the hope it will serve as a springboard for further research, study and conversation about how we as a society can build a successful AI future.”

RESEARCH
.: The ‘digital divide’ is already hurting people’s quality of life. Will AI make it better or worse?

Almost a quarter of Australians are digitally excluded, missing out on online benefits. The digital divide affects quality of life, especially for older, remote, and low-income individuals. AI could deepen this gap if digital exclusion issues are not addressed.

We found digital confidence was lower for women, older people, those with reduced salaries, and those with less digital access.

We then asked these same people to comment on their hopes, fears and expectations of AI. Across the board, the data showed that people’s perceptions, attitudes and experiences with AI were linked to how they felt about digital technology in general.

In other words, the more digitally confident people felt, the more positive they were about AI.

Ethics

.: Provocations for Balance

Thought Leader: In a world where AI language models can convincingly argue any perspective, a charismatic figure harnessed their persuasive prowess to sway public opinion. As the model’s influence grew, dissenting voices were drowned out, leading to a chilling conformity of thought. But when the model’s true agenda was revealed, would anyone be left to question it?

The Art of Obsolescence: An aspiring artist struggled to find her voice amidst the dazzling AI-generated creations flooding the market. As technology advanced, human artistry became a niche curiosity, and artists were forced to choose – embrace the machines or be left behind. But when the line between human and artificial blurred, what would define true expression?

The Divide: Set in a future where the digital divide has deepened into a chasm, society is split between the technologically elite and those left behind. A teacher in a remote community, where access to AI and digital resources is limited, starts an underground movement to bridge the gap. As the movement grows, it becomes a target for both sides of the divide, leading to a pivotal showdown over the future of equality and access in an AI-driven world.

Inspired by some of the topics this week and dialled up. To be honest, this section has morphed into me developing potential Black Mirror episodes, with much more dystopia than I was expecting.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 50 .: “The king is dead”—Claude 3 surpasses GPT-4

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In the 50th issue 🎉, you’ll discover:

  • “The king is dead”—Claude 3 surpasses GPT-4
  • Now you can use ChatGPT without an account
  • UK and US sign landmark AI Safety agreement

Let’s get started!

~ Tom Barrett

PERFORMANCE

.: “The king is dead”—Claude 3 surpasses GPT-4 on Chatbot Arena for the first time

Summary ➜ Anthropic’s Claude 3 Opus model became the first AI model to surpass OpenAI’s GPT-4 on the Chatbot Arena leaderboard since the leaderboard launched in May 2023. This marks a notable achievement for Claude 3, suggesting it may have capabilities comparable or superior to GPT-4 in certain areas like natural language understanding. Chatbot Arena’s user-based ranking reflects how models perform in actual day-to-day use, and the leaderboard aims to capture subtle dimensions of chatbot quality missed by numerical benchmarks.

Why this matters for education ➜ As I mentioned in issue #47 when the new Anthropic models were released, the benchmarks used for marketing are always a little misleading. Actual use by people integrating these models into real tasks can tell a different story. And that story, so far, is that Claude 3 Opus is better than GPT-4.

While GPT-4 remains a strong contender, especially with a major update expected soon, Claude 3’s rise underscores the increased competition in the frontier AI model space. Anthropic has major backing from Amazon’s investment, and its approach to guardrails is very interesting.

Constitutional AI (CAI) is an Anthropic-developed method for aligning general purpose language models to abide by high-level normative principles written into a constitution.

I hope this news encourages more educators to become curious about these other big tech and research companies driving AI innovation.

There is more than just Google, Microsoft and OpenAI.


ACCESS

.: Now you can use ChatGPT without an account

Summary ➜ OpenAI has removed the requirement for an account to use its popular AI chatbot ChatGPT. This change opens access to anyone curious about ChatGPT’s capabilities, rather than just registered users. Overall this represents a notable shift in how OpenAI is positioning ChatGPT as an AI for the masses versus a restricted product.

Why this matters for education ➜ The removal of login requirements by OpenAI expands access to AI tools like ChatGPT, making them more widely available to users, including communities that were previously excluded due to limited access to technology or inability to provide stable account credentials. While this increased accessibility is a positive step towards democratising AI, it also raises concerns about the potential risks associated with improper use, particularly if users lack sufficient understanding of the tool’s limitations.


AI SAFETY

.: UK and US sign landmark agreement

Summary ➜ An agreement to collaborate on guidelines for the development of artificial intelligence. The principles aim to foster safe, ethical, and responsible AI that respects human rights. Key areas of focus include AI explaining its decisions, minimising bias, ensuring human oversight, and not being used for harmful purposes like mass surveillance.

Why this matters for education ➜ This agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November last year. It is essentially a partnership between the US and UK AI safety institutes to accelerate their research and progress. For education, we might see clearer ideas about how to build teacher AI Literacy or pathways for implementing student chatbots in classrooms. Guidelines for responsible AI implementation ensure that all students, regardless of background or socioeconomic status, access safe and ethical AI tools in their learning environments.

.: Other News In Brief

🔍 Anthropic researchers wear down AI ethics with repeated questions

🚀 Microsoft upgrades Copilot for 365 with GPT-4 Turbo

⚠️ AI Companies Running Out of Training Data After Burning Through Entire Internet

🗣 OpenAI’s voice cloning AI model only needs a 15-second sample to work

🤝 US, Japan to call for deeper cooperation in AI, semiconductors, Asahi says

🇮🇱 Israel quietly rolled out a mass facial recognition program in the Gaza Strip

📚 How do I cite generative AI in MLA style?

🤖 Now there’s an AI gas station with robot fry cooks

🎵 Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

💡 Gen Z workers say they get better career advice from ChatGPT

:. .:

What’s on my mind?

.: Positive Augmentation

The video below was shared with me during a webinar for our AI for educator community, humain.

It features students at Crickhowell High School in Wales using an AI voice tool to augment their language skills.

It was published by the British Council. Here’s the description:

Our latest Language Trends Wales survey reveals a declining interest in language learning at the GCSE level in Wales. Amidst all the talk about Artificial Intelligence disrupting the language learning scene, can we instead leverage it to inspire students to learn a language? We conducted an experiment with students at Crickhowell High School in Wales. Watch what happened.

video preview

Although not referenced, I am fairly sure the AI tool HeyGen was used to translate and augment the speakers. I could be wrong, as there are so many of these tools now.

Last week, I shared that HeyGen was set to close a USD$60 million funding round, valuing the company at half a billion USD. The valuation demonstrates the growing interest and potential in AI-powered language media tools like HeyGen.

The technology is very impressive, and you can try it for free. Here is one of my Design Thinking course videos, translated into Spanish.

What do you think? The changes are almost imperceptible.

This augmentation tool is part of a family of image filters and style generators that have long been integral to social media tools.

The young people in the video, having grown up in an era where selfies and filters (augmentations) are commonplace, understand this technology better than most people.

If you listen back to the comments in the final part of the clip, as they reflect on what they have seen, you can sense a general sentiment that while these tools are impressive, they will never replace the need for authentic human communication.

It is interesting to reflect on how these new, powerful media tools portray us with new skills and capabilities.

I can watch myself speak Spanish, and although it feels like a trick, it is amazing not just to imagine yourself with a new skill but actually to see a synthetic version of yourself demonstrating that skill. This experience provides a tangible representation of the potential for personal growth and acquiring new abilities.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

There was always something peculiar and subversive about the Fighting Fantasy books. I think I enjoyed the page-turning as much as the fantasy gameplay.

Have you had a chance to generate your own with a chatbot?

Fighting Fantasy books were single-player role-playing gamebooks created by Steve Jackson and Ian Livingstone.

They combined fantasy novels with role-playing games where readers played as heroes, made choices that determined the story’s outcome, and rolled dice in combat encounters.

Back in December 2022, this was one of the first prompts I played around with in ChatGPT, and it was fun to generate your own game:

You decide to try to find a way out of the darkness and escape the danger. You search your surroundings, looking for any clues or hints that might help you navigate your way through the shadows.

Let’s try Claude-3-Opus – the most powerful model available – and see what we get. Here’s a prompt you can try, too.

And here is the opening of The Labyrinth of Lost Souls generated by Claude-3-Opus.

I am not sure if there is good mobile phone coverage in the labyrinth, but I will try to stay in touch.

And these locals look friendly…right?

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: The State of AI Regulation in Africa – Trends and Developments [PDF]

There’s a varied approach to AI regulation across the continent, including the adoption of national strategies and policies, the establishment of task forces, and the adoption of AI ethics principles.

Africa faces unique challenges in regulating AI, such as infrastructural and governance hurdles, absence of specialised laws, and outdated legal frameworks.

The Tech Hive report suggests several opportunities to strengthen AI regulation, including global engagement, sector-specific regulation, leveraging existing laws, and promoting a multi-stakeholder approach.

Also of note is the impending Continental AI Strategy, which is expected to catalyse the development of more regulatory measures at the national level.

CHINA
.: Generative AI in China

A helpful short article by Professor Mike Sharples reflecting on his experience visiting Shanghai. He briefly outlines how GenAI is being used in practice for business and education.

China has been developing AI for business, government and the military for many years, with notable success in data analysis and image recognition. But it lags behind the US in consumer AI, notably language models. One reason is a lack of good training data.

BASICS
.: Non-Techie Guide to ChatGPT- Where Communication Skills Beat Computer Skills

video preview

In this video, I’m setting out to debunk the myth that ChatGPT is exclusively for those well-versed in technology or that it requires special training to use. I emphasise how anyone, especially educators, can use this tool effectively through the simple art of communication.

Ethics

.: Provocations for Balance

Do Language Filters Homogenise Expression?

If AI translation tools smooth over cultural differences and localised slang, does this promote harmful assimilation? What diversity is lost when all voices conform to a single standard? Should cultural preservation outweigh frictionless communication? Can both coexist in our increasingly global society?

Inspired by some of the topics this week and dialled up.

:. .:



.: Tom Barrett

.: Promptcraft 49 .: LA school system launches student AI chatbot

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor
  • Google DeepMind co-founder, Mustafa Suleyman, joins Microsoft as CEO of its new AI division
  • How Apple’s Discussions with Google and OpenAI Could Impact Generative AI

Let’s get started!

~ Tom Barrett

STUDENT CHATBOT

.: Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor

Summary ➜ The AI chatbot is designed to be a virtual advisor for students, aiming to simplify school navigation and support pandemic recovery. Ed can provide personalised guidance, share academic details, and make recommendations, while even assisting with non-academic issues like lost bikes. But some parents and experts have privacy and over-reliance concerns, worried it may replace human connections. For now, the chatbot is available to 54,000 students at 100 “fragile” schools, with plans to expand. Ed looks to create individual learning plans modelled on special education IEPs, touting a 93% accuracy rate over ChatGPT’s 86%.

Why this matters for education ➜ In Australia, two student chatbots are being trialled in New South Wales and South Australia, respectively. The LAUSD version’s focus is on practical student support, including quick access to assessments and grades.

“This is a technology that becomes a personal assistant to students,” Carvalho said at a demonstration at Roybal Learning Center, west of downtown. “It demystifies the navigation of the day … crunches the data in a way that it brings what students need.”

So, for now, it seems to be much more like a student support service than a generative AI system for teaching and learning.

An interesting note about this development is that the initial design regarding access to information is quite closed.

it has to stay within the district universe of information. A student, for example, would not be likely to get a reference to a recent development in the war in Ukraine for a research paper

From the article, it is unclear whether this is just up-to-date news or no information at all. For comparison, South Australia’s Edchat does not have real-time updated information, but students can access training data up to early 2023.


MICROSOFT

.: Google DeepMind co-founder joins Microsoft as CEO of its new AI division

Summary ➜ Microsoft has appointed Mustafa Suleyman, co-founder of Google’s DeepMind, as CEO of its new consumer-facing AI division. He will oversee products like Copilot, Bing, and Edge. As executive VP of Microsoft AI, he reports directly to CEO Satya Nadella. The company is also bringing in talent from Suleyman’s startup Inflection AI, like co-founder Karén Simonyan as chief scientist.

Why this matters for education ➜ Suleyman is a leader in AI, strengthening Microsoft’s capabilities. His vision at Inflection AI focused on personal AI agents to support our lives. This notion of proliferated, personalised bots raises interesting questions as Microsoft targets education. In his book, Suleyman advocated for AI safety and containment. As education tools leverage AI, how will Microsoft approach oversight and governance? Millions of students and teachers are impacted. Perhaps Suleyman’s safety focus will manifest in curbing widespread use of chatbots in education. His leadership may steer Microsoft toward more contained, transparent applications of AI for learning. Overall, Suleyman’s experience brings valuable perspective on AI ethics and responsible innovation as Microsoft evolves its education technology.

IPHONE

.: How Apple’s Discussions with Google and OpenAI Could Impact Generative AI

Summary ➜ Apple is in advanced talks to potentially use Google’s Gemini language model for iPhone features, after earlier OpenAI discussions. Apple aims to integrate generative AI into iPhones later this year, despite dismissing it in 2023. Unlike others, Apple may seek payment rather than paying to adopt an AI model.

Why this matters for education ➜ Did you know that Google pays Apple billions of dollars yearly so that Google is the default search tool on the iPhone? This AI arrangement is likely to be similar. Although it is unclear how AI will appear on the iPhone, this is a market-shaping deal.

A deal with Apple will cement Google’s prominence in the industry. It will also give Google access to two billion iPhone users who might not otherwise think to consider the company’s generative AI solutions.

While integrating advanced LLMs into iPhones could make AI-powered educational tools more accessible, it’s important to consider equity issues. Not all students may have access to the latest iPhone models, further exacerbating the digital divide in access to AI tools.

.: Other News In Brief

🇸🇬 Singapore university sets up AI research facility for ‘public good’

⚽️ As AI football looms, be thankful for those ready to rage against the machine

🌇 Stability AI CEO resigns to ‘pursue decentralised AI’

🤖 OpenAI shows off the first examples of third-party creators using Sora

📈 Nvidia’s AI chip dominance is being targeted by Google, Intel, and Arm

❓ Where’d my results go? Google Search’s chatbot is no longer opt-in

🎤 Can you hear me now? AI-coustics to fight noisy audio with generative AI

📜 The AI Act is done. Here’s what will (and won’t) change

🗣️ HeyGen is About to Close $60M on a $500M Post Valuation

🇮🇳 India reverses AI stance, requires government approval for model launches

:. .:

Monthly Review

.: All the February issues in 1 PDF

Download a copy of the latest monthly roundup of Promptcraft.

.: :.

What’s on my mind?

.: A Quick Critical Analysis of Student Chatbots

In this week’s reflection, I want to revisit walled garden chatbots for students.

I think this strategic path needs more attention after the news that Los Angeles schools have access to a student support chatbot and the active trials in New South Wales and South Australia.

It is also worth adding that I have a partnership with Adelaide Botanic High School, one of the first schools to trial Edchat, the chatbot for South Australian schools, in 2023.

It has been great to have access to it and work alongside some of the leadership team running the project at the school.

When I say walled garden chatbots, I mean chatbots that operate within a closed system or a specific domain, having access only to a limited information set.

These chatbots are designed by the school system the student belongs to, as opposed to the consumer chatbots or AI products on the open web.

For this reflection, I will use the Compass Points thinking routine from Project Zero, which starts in the East.

E: Excites

What am I excited about in this situation?
  • Students get hands-on experience with AI tools to support their learning experience.
  • There is a lot of momentum which can grow if our students are given the chance to step up.
  • The LA example offers something different from the Australian trials: a chatbot more focused on support and practical companionship. I am excited to see how this develops.

N: Need to Know

What information do I need to know or find out?
  • How do teachers feel about students gaining access to these tools?
  • What professional growth opportunities exist for educators to build their capacity and understanding?
  • What are the system prompts for these bots, and how do they mitigate and guard against bias?

S: Stance

What stance do I take?
  • I support the safe testing and exploration of chatbots in schools. Many other school systems are keen to learn from these pioneering examples.
  • Educational innovation like these chatbot trials must be supported, encouraged and celebrated.
  • Despite the guardrails, frontier models like GPT-4 power them and can be very flexible in supporting a student.

W: Worries

What worries or concerns do I have?
  • The design, prompt, and architecture are not visible. Without transparency, it’s difficult to hold developers and operators accountable for any issues that may arise.
  • Chatbots and LLMs are designed to respond with the most likely next word. They are geared towards the statistically most common. Without fine-tuning and promptcraft education, this might homogenise the message to our students.
  • I am concerned about the pedagogical bias built into students’ AI systems. Imagine a student is an active user with over 100 interactions daily, thousands every month. What’s the hidden curriculum of each of those nudges and interactions?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

One of the most useful and effective prompting techniques is to include an example of what you want to generate. This is called few-shot prompting, as you guide the AI system with a model of what to generate.

For example, if you are trying to develop some model texts or example paragraphs to critique with your class and you have some from the last time, you could add these to your prompt.

When you do not include an example, this is called zero-shot prompting. Zero-shot prompting often leads to a less aligned output, which you may need to edit and iterate to correct.

This week’s quick tip builds on the few-shot prompting technique by adding some qualifying explanations for ‘what a good one looks like’.

We always do this when we are modelling and teaching, and it works well with promptcraft.

So the prompt structure goes:

  1. Describe your instructions in clear language.
  2. Add an example of what you are looking for.
  3. Describe why this example is a good model.

Let’s say you’re teaching persuasive writing. Instead of asking the AI to generate a persuasive text, you could add a model paragraph from a previous lesson, followed by specific pointers on what makes it effective.
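The three-part structure could be sketched as a small reusable template. This is a minimal illustration of the idea, assuming you paste the result into your chatbot of choice; the function and its field names are my own, not from any prompting library:

```python
def build_few_shot_prompt(instructions: str, example: str, rationale: str) -> str:
    """Assemble a three-part few-shot prompt: clear instructions,
    a model example, and an explanation of why the example is good."""
    return (
        f"{instructions}\n\n"
        "Here is an example of what I am looking for:\n"
        f"{example}\n\n"
        f"This example is a good model because: {rationale}"
    )


# Hypothetical persuasive-writing task, with a model paragraph from a
# previous lesson and pointers on what makes it effective.
print(build_few_shot_prompt(
    instructions="Write a persuasive paragraph arguing for longer lunch breaks.",
    example="Our crowded timetable leaves students just twenty minutes to eat...",
    rationale="it opens with a relatable claim, backs it with evidence, "
              "and ends with a clear call to action.",
))
```

Keeping the rationale separate from the example makes it easy to swap in different model texts while reusing the same explanation of what good looks like.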

Here’s an example working with the Claude-3-Opus-200k model.

Note how closely the output follows my model paragraph. A little extra promptcraft goes a long way to improve the results.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

BIAS

.: Systematic Prejudices Investigation [PDF]

The UNESCO report sheds light on gender bias within artificial intelligence, revealing biases in word association, sexist content generation, negative content about sexual identity, biased job assignments, and stereotyping.

The study emphasises the need for ethical considerations and bias mitigation strategies in AI development, including diverse representation in teams and training datasets to ensure fairness and inclusivity.

ELECTION
.: How AI companies are reckoning with elections

A helpful short article giving a rundown of how some of the largest tech companies and AI platforms are grappling with the impact of AI tools on the democratic process.

Although this focuses on the US, in 2024 billions of people will go to the polls across the world. According to a Time article:

Globally, more voters than ever in history will head to the polls as at least 64 countries (plus the European Union)—representing a combined population of about 49% of the people in the world—are meant to hold national elections

How these popular generative AI companies respond will impact so many of us.

Several companies […] signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.

LABELS
.: Why watermarking won’t work

This Venturebeat article discusses the challenges posed by the proliferation of AI-generated content and the potential for misinformation and deception.

Tech giants like Meta, Google, and OpenAI are proposing solutions like embedding signatures in AI content to address the issue.

However, questions arise regarding the effectiveness and potential misuse of such watermarking measures.

Ethics

.: Provocations for Balance

The Chatbot That Knows Too Much

“Ed” is meant to be a helpful companion, guiding students through their day. But imagine a school system where every question asked, every late assignment admitted to, and every awkward social situation confessed to the chatbot becomes a permanent record. Mistakes can’t be erased; the AI analyses your word choice for signs of distress.

What if this data isn’t just used for support, but for discipline, even predicting future “problem” behavior? Is it okay for an AI to judge the private thoughts of a teenager, and worse, limit their opportunities based on what it predicts they might do?

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 48 .: EU Passes Sweeping AI Act

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • European Union Passes Sweeping AI Act, Sets ‘Global Standard’
  • Florida teens arrested for creating ‘deepfake’ AI nude images of classmates
  • A guide to Google Gemini and Claude 3.0, compared to ChatGPT

Let’s get started!

~ Tom Barrett


EU AI ACT

.: European Union Passes Sweeping AI Act, Sets ‘Global Standard’

Summary ➜ The European Parliament has passed the Artificial Intelligence Act, which will take effect later in the year. This landmark law is the world’s most comprehensive AI regulation and aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while also encouraging innovation. The law bans certain AI applications that threaten citizen rights and establishes transparency requirements for general-purpose AI systems. The law aims to make the EU the de facto global standard for trustworthy AI, and lawmakers say it is only the first step in building new governance around technology.

Why this matters for education ➜ When we take a global view of the changing nature of technology regulation, all educators are impacted by this new law. The second-order effect in other parts of the world might include similar, tighter regulations on high-risk AI applications. As the most comprehensive AI regulation to date, it sets a global standard for developing and deploying AI technologies, including those used in educational settings. This new law will likely influence the direction of AI innovation and regulation worldwide as governments and organisations seek to establish their own guidelines and recommendations. Although it might be an oversimplified way to consider the impact, one question we might ask is how innovation in the educational technology space will be encouraged or stifled.

DEEPFAKE

.: Florida teens arrested for creating ‘deepfake’ AI nude images of classmates

Summary ➜ Two middle school students in Florida have been arrested and charged with third-degree felonies for allegedly creating deepfake nude images of their classmates using an unnamed AI application. This marks the first instance in the US of criminal charges related to AI-generated nude images. The incident highlights the increasing problem of minors creating explicit images of other children using generative AI, with only a handful of states having laws addressing this issue.

Why this matters for education ➜ Similar to the story from Los Angeles a little while ago, there is little need to explain why this matters. While President Joe Biden has issued an executive order on AI banning the use of generative AI to produce child sexual abuse material, there is currently no federal law addressing nonconsensual deepfake nudes. As a class, school or system leadership team, you might pause and consider how you would respond if this scenario played out in your community. What policies and procedures should we implement to ensure we are prepared to handle instances of AI technology misuse within our school community? How can we foster an open and supportive culture in which students feel comfortable reporting such issues, and what support systems can we establish to assist students who may become victims of these actions?

FRONTIERS

.: Your guide to Google Gemini and Claude 3.0, compared to ChatGPT

Summary ➜ Two new powerful language models, Google’s Gemini Ultra 1.0 and Anthropic’s Claude 3.0 Opus, have been released, rivalling OpenAI’s GPT-4. This article compares the models and provides strategies for organisations deciding which to use, ranging from using just ChatGPT to adopting all three. The release of these models is a milestone, giving developers more choices, affecting company revenues, and indicating the difficulty of surpassing GPT-4 level performance.

Why this matters for education ➜ This article compares frontier AI models and provides helpful ideas for educators looking to improve their AI literacy. Hands-on experience with leading proprietary models like Gemini, Claude, and GPT is critical to understand their capabilities and potential applications in the classroom. System-wide decisions about AI tool rollouts in schools may depend on existing technology ecosystems, with schools potentially leaning towards tools that integrate seamlessly with their current setup. However, understanding the strengths and weaknesses of each frontier model can help educators make informed decisions about AI adoption, regardless of existing partnerships.

Complement this with my take on open source models below in the reflection titled: Peanut Butter and Pickles: Can Open Source and Education Mix?

.: Other News In Brief

🔓 Should AI be open?

🎓 How Young Is Too Young to Teach Students About AI? Survey Reveals Differing Opinions

💕 Why people are falling in love with AI chatbots

🚫 Google Bans U.S. Election Questions in Gemini AI

👨‍💻 Cognition launches an AI software engineer agent, Devin

🫂 Empathy raises $47M for AI to help with the practical and emotional bereavement process

🛠️ Microsoft opens its Copilot GPT Builder to all Pro subscribers

🖼️ Midjourney debuts feature for generating consistent characters across multiple gen AI images

🏢 OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

🆓 Elon Musk vows to make his ChatGPT competitor Grok open source

:. .:

Monthly Review

.: All the February issues in 1 PDF


Promptcrafted February 2024

The only monthly publication that curates the most relevant and impactful AI developments specifically for educators.


.: :.

What’s on my mind?

.: Peanut Butter and Pickles: Can Open Source and Education Mix?

One area of educational technology that seems to be overlooked is the potential of open-source software and tools.

Open source means the source code is freely available to the public to view, modify, and distribute, encouraging collaborative development where anyone can contribute improvements or modifications to the project.

However, in my experience, open source has never been an option in educational technology strategies in schools and systems.

This raises the question: Is education missing out on the benefits of open source?

Back when we were still learning to use Word Processing software and set up computer labs in our schools, I remember coming across an open-source version of MS Word called OpenOffice.

It was a suite of office productivity tools that was an open-source alternative to Microsoft Office. OpenOffice was free to download and did almost everything the licensed Word version could do. But nobody knew about it, and open-source software was never seriously considered.

Perhaps education and open source don’t go together like mince in a trifle or peanut butter and pickles. I mean, I like all of those things, but not together.

While open-source tools provide the flexibility to customise and adapt to specific use cases, this freedom can lead to application inconsistency and a lack of standardisation. This can pose challenges in an educational setting, where uniformity in tool usage is often what system admins want to maintain.

Additionally, the open nature of these tools can sometimes pose security concerns, as the code is accessible to everyone, including potential malicious actors.

The benefits of open source cannot be ignored. Within the AI space, there are a vast number of open-source models that can be used for free.

At the time of writing, there are 548,994 models on Hugging Face for a wide range of multimodal, computer vision, natural language processing, and audio functions. Yet, we might only know about ChatGPT and Gemini for everyday users and educators.

So, the challenge is educating the education sector about these open-source models’ existence, benefits, and potential drawbacks. This involves raising awareness, providing clear and accessible information about implementing them and offering guidance on managing any possible risks associated with their use.

It also requires a shift in mindset from being reliant on big tech vendors to being open to exploring other options that could offer greater flexibility and adaptability.

Will there be resistance to open source? Does anyone know about it? Are we so wedded to big tech vendors we can’t see other options?

What do you think?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

It is becoming clearer that effective promptcraft falls into three broad approaches.

  1. Start small and iterate.
  2. Structured longer prompts.
  3. Build an in-depth system prompt for a custom bot.

I use all three of these in my daily interactions with various tools.

Let’s look at how to start small and iterate with the Flipped Interaction prompt.

This is when you ask the LLM to ask you questions before it provides an output. This helps build contextual cues and information.

According to research referenced by Briana Brownell from Descript:

for the highest quality answers, the tests showed the Flipped Interaction pattern is the valedictorian of prompts. […] In tests, using this principle improved the quality of all responses for every model size. It improved quality the most for the largest models like GPT-3.5 and GPT-4, but it did impressively well in smaller models too. So if you’re not using this A+ technique yet, you should definitely start.

Here are some example prompts to try:

Let’s collaborate on (describe your task). Start by asking me some questions.
From now on, I would like you to ask me questions to (describe your task).

I would also recommend adding an instruction to go one step at a time, or to limit the number of questions, as most models are too verbose.
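As a minimal sketch, the Flipped Interaction pattern could be wrapped in a reusable template like the one below. The function name and exact wording are my own illustration, not a fixed recipe:

```python
def flipped_interaction_prompt(task: str, max_questions: int = 3) -> str:
    """Wrap a task in a Flipped Interaction prompt: the model interviews
    you, one question at a time, before producing any output."""
    return (
        f"Let's collaborate on the following task: {task}\n"
        f"Before you produce anything, ask me up to {max_questions} "
        "clarifying questions, one at a time, and wait for my reply "
        "after each one."
    )


# Hypothetical teaching task; adjust max_questions to rein in verbose models.
print(flipped_interaction_prompt("design a unit on fractions", max_questions=2))
```

Capping the number of questions and insisting on one at a time builds the contextual cues without letting the exchange sprawl.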

Here is an example of this using the new Claude-3-Opus-200k model.

And here is the same prompt using Alibaba’s Qwen-72b-Chat model.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

DESIGN

.: Design Against AI: 2024 Design in Tech Report by John Maeda


John Maeda’s report advocates for designers to develop AI literacy while staying grounded in human reality, in order to help shape an ethical and human-centred future.

Balancing the use of AI as a tool with uniquely human creative abilities will be an ongoing challenge.

Reminds me of the premise of the humAIn community

A learning community for educators to connect and explore our humanity in the age of artificial intelligence

HUMANITY
.: AI Literacy is the Art of Synergizing Intuitions

“Dealing with AI is a wandering dance between two unconscious entities—the AI and much of our brain—using the much tinier piece of our neural mush that deliberates.”

This quote captures the central analogy of the post – that interacting with AI is an interplay between the intuitive, automatic responses of both human and artificial neural networks, mediated by our limited conscious faculties.

Tim Dasey’s article frames AI literacy as a process of syncing the intuitive, contextual responses of both human and artificial intelligences through techniques like prompting, feedback solicitation and comparative understanding of cognition.

Mastering this “dance” unlocks AI’s potential while honing essential human skills.

COLLECTION
.: The AI Literacy Curriculum Hub

The AI Literacy Curriculum Hub is a spreadsheet curated by AI for Equity and Erica Murphy at Hendy Avenue Consulting.

A collection of AI literacy lessons, projects, and activities from respected sources like Common Sense Education, Stanford’s Craft AI, Code.org, ISTE, MIT Media Lab, and more.

Each resource is tagged, providing key details such as the applicable grade levels, lesson duration, required materials, and learning objectives.

Ethics

.: Provocations for Balance

➜ Is it time for schools to become digital dictatorships, monitoring every keystroke and thought, or do we resign ourselves to a future where trust is a relic of the past?

➜ Big Tech AI companies lure schools with promises of personalised learning and cutting-edge tech, while open-source alternatives whisper seductively of freedom and transparency. In this high-stakes game of AI roulette, who will educators bet on? Will they sell their digital souls for a taste of Silicon Valley’s forbidden fruit or take a leap of faith into the wild west of open source, where danger and opportunity lurk in equal measure?

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 47 .: Anthropic launches new Claude-3 models

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Anthropic says its latest AI bot can beat Gemini and ChatGPT
  • Google DeepMind Unveils AI ‘Genie’ to Instantly Conjure Playable Games
  • Over Half of Business Users Employ Generative AI at Least Weekly

Let’s get started!

~ Tom Barrett


NEW MODELS

.: Anthropic says its latest AI bot can beat Gemini and ChatGPT

Summary ➜ Anthropic has launched its Claude 3 model family, which claims to have superior performance to OpenAI’s GPT-4 across ten public AI model benchmarks, marking a significant advance over the company’s 2.1 model release from November 2023. The introduction includes the Opus, Sonnet, and Haiku models, with Opus and Sonnet available for use in the API, and Haiku coming soon. The company also plans to release frequent updates to the Claude 3 model family over the next few months, including tool use, interactive coding, and more advanced agentic capabilities. Anthropic can now rival GPT-4 on equal terms and is closing the feature gap with OpenAI’s model family.

Why this matters for education ➜ These large language models are the engines that power our AI systems. The release of new models is a significant milestone for the ecosystem.

I had to laugh at one commenter framing these announcements as if Anthropic, with Claude 3, and Google, with Gemini, have finally reached the Moon, where OpenAI has been for a while, because everything is compared to the GPT-4 class models.

But, as they arrive, OpenAI, with the impending release of GPT-5, says,

Yep, and we are ready to go to Mars.

It is too early to tell what this means for education, but more choice is helpful. The benchmarks used for marketing are always a little misleading, and actual use on real tasks might tell a different story. I can access some of the new models via Poe, and I will give it a play; you can try the new model at Claude.ai

GAME DEV

.: Google DeepMind Unveils AI ‘Genie’ to Instantly Conjure Playable Games

Summary ➜ Google DeepMind’s Genie AI can create interactive, playable games from simple prompts by learning game mechanics from countless gameplay videos. With 11 billion parameters, Genie converts media into games, allowing users to control generated environments frame-by-frame. This groundbreaking model has the potential to revolutionise how AI learns and interacts with virtual environments, opening up new possibilities for training versatile AI agents.

Why this matters for education ➜ As young students, we are inherently world builders, naturally learning through play and conjuring up worlds that reimagine what we know into something entirely new. The development of text-to-game AI models, like Google DeepMind’s Genie, strikes a chord with me, highlighting how future students will face fewer barriers to creative expression. With the power of these AI tools at their fingertips, students can create simple sketches and bring their ideas to life, collaborating with game engines to adapt and refine their concepts. As AI-powered game creation tools become more accessible and integrated into learning experiences, I can’t help but feel excited about the possibilities that lie ahead.


AI ADOPTION

.: Over Half of Business Users Employ Generative AI at Least Weekly

Summary ➜ An Oliver Wyman report surveyed 200,000 people in 16 countries in November 2023, finding a 55% average adoption rate of generative AI. Adoption increased by 62% between June and November 2023. The technology industry had the highest adoption with 75% of white-collar workers using generative AI weekly. Healthcare and life sciences professionals use generative AI extensively, and consumers in many countries welcome it to expand healthcare access.

Why this matters for education ➜ These reports are important for us to be aware of, as they give educators a glimpse into what is happening across the ecosystem. Seeing the pace of adoption in every industry is an important provocation which I hope catalyses some action from school and system leaders. The other aspect I am interested in is a meaningful level of adoption.

Education showed the largest rise in use, with increases in 2023 of 144%. Forty-four percent of education industry employees report using generative AI weekly

In your organisation, how many people are using GenAI every week? Every day? More than 40%?

.: Other News In Brief

🗣️ OpenAI has introduced a Read Aloud feature for ChatGPT

😠 Google CEO says Gemini AI diversity errors are ‘completely unacceptable’.

🖼️ Ideogram, the free AI image generator rolls out text-in-image upgrade.

🧪 Anthropic’s Claude 3 knew when researchers were testing it.

🍫 Willy Wonka Experience Glasgow: a metaphor for the overpromises of AI?

📰 OpenAI claims the New York Times cheated to get ChatGPT to regurgitate articles.

🇮🇳 India reverses AI stance, requires government approval for model launches.

🎤 Alibaba’s new AI system ‘EMO’ creates realistic talking and singing videos from photos.

😈 Why does AI have to be nice? Researchers propose Antagonistic AI.

🤝 Tumblr’s owner is striking deals with OpenAI and Midjourney for training data.

:. .:

Connect & Learn

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

Join over 30 educators from Singapore, the US, Australia, Spain and the UK.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education…

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: Lift Your Gaze

In early 2022, the Grattan Institute, a prominent Australian think tank, released a report examining how better government policy might help make more time for great teaching.

The report explored the results of a survey of 5,442 Australian teachers and school leaders, finding more than 90 per cent of teachers say they don’t have enough time to prepare effectively for classroom teaching.

Teachers report feeling overwhelmed by everything they are expected to achieve. And worryingly, many school leaders feel powerless to help them.

This is amidst, and perhaps feeding, an education workforce crisis that is also being felt globally.

Amid this education workforce crisis, AI has emerged as a potential solution – but one that comes with its own set of challenges and opportunities. When the number one issue is a lack of time, a tool that purports to time-save is likely to be adopted quickly, sometimes unquestioningly.

There are two friction points here. The first is sleepwalking into AI adoption when we know a wide range of literacies, such as recognising media bias and understanding the capabilities and limitations of AI systems, need to mature alongside good prompting.

The other is that we get comfortable with low-level replacement tasks, the ‘grunt work’, the time savers – and we do not look beyond the marginal productivity gains.

As I say, the poor health of the current learning ecosystem makes for complex conditions in which to adopt a powerful, generative set of new technologies.

This is not to say that saving time is not helpful, important, or even critical for education to be sustainable. Go get that first draft of the email, expand on those lesson starter ideas, and build a bunch of open-ended questions to get you started!

But I want us all to consider what happens when we have saved all the time there is to save. How are we stretching the capabilities of AI systems and, in turn, stretching and reshaping what might be possible in teaching and learning?

I still think this exploration begins with daily tasks and productivity. So, when mapping out a medium-term plan of six lessons, ask yourself: what could I create with AI that I would never have had the time or resources to create before? Could I design something that was previously out of reach?

The next time you sit down to get stuck into some precious learning design, consider what would typically be inconceivable. What is usually out of reach for me? How could I push the boundaries of what’s possible?

We need more educators who know enough about how to save time, who have gathered the low-hanging fruit and are now ready to lift their gaze and design new ways to reach higher and further. 

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A reminder this week of powerful image-generation tools like Midjourney.

These have advanced quickly over the last few months, and the images below were created using Midjourney version 6.

These are powerful tools to bring ideas to life. What do you think of my orc warlord?

Here’s the full prompt:

Character design close-up, intimidating orc warlord with battle-scarred green skin, heavy spiked armour, and a massive war hammer, unreal engine

Or, perhaps you prefer my inventor?

Character design close-up, brilliant gnome inventor with wild purple hair, goggles, a tool belt, and a steam-powered mechanical arm, unreal engine

You can see how little text we have to write to get some amazing results. It is a fun way to bring some character writing to life for students, which in turn generates further visual cues for writing.

Please also note that the promptcraft includes “Character design close-up” for the type of image, then the details, and finally the style key “unreal engine”.
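That three-part pattern, image type, details, then style key, could be sketched as a tiny prompt builder. This is my own illustration of the structure, not an official Midjourney tool:

```python
def image_prompt(image_type: str, details: list[str], style_key: str) -> str:
    """Assemble a Midjourney-style prompt: the image type first,
    then comma-separated details, then the style key."""
    return ", ".join([image_type, *details, style_key])


# Rebuilding the orc warlord prompt from its three parts.
print(image_prompt(
    "Character design close-up",
    ["intimidating orc warlord with battle-scarred green skin",
     "heavy spiked armour",
     "and a massive war hammer"],
    "unreal engine",
))
```

Swapping only the style key, say to “watercolour illustration”, keeps the character but changes the whole aesthetic, which makes it easy for students to compare results.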

These prompts were inspired by the ideas in this great article exploring Midjourney in depth.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

RESEARCH
.: Preparing for AI-enhanced education: Conceptualising and empirically examining teachers’ AI readiness

Teachers’ readiness for integrating AI into education is crucial for the success of AI-enhanced teaching.

This study defines AI readiness based on cognition, ability, vision, and ethics and explores how these components impact teachers’ work, innovation, and job satisfaction.

Here are some other study highlights:

  • Teachers’ AI readiness was conceptualised in terms of cognition, ability, vision, and ethics in the educational use of AI.
  • Teachers’ cognition, ability, and vision in the educational use of AI were positively associated with ethical considerations.
  • The four components of AI readiness all positively predicted AI-enhanced innovation and job satisfaction.
  • Teachers with high levels of AI readiness perceived low AI threats and demonstrated high innovation and job satisfaction.
  • Teachers from different socio-economic regions and of different genders showed no significant differences in AI readiness.

CLIMATE
.: AI’s Climate Impacts May Hit Marginalised People Hardest

Artificial intelligence (AI) technology, while celebrated for its potential in weather forecasting, also plays a significant role in exacerbating the climate crisis, according to a report from the Brookings Institution.

The report warns that AI’s soaring energy consumption and environmental costs could disproportionately impact marginalised communities already vulnerable to global warming.

Training a chatbot, for example, requires the same amount of energy as 1 million U.S. homes consume in an hour. The report highlights the potential for AI’s climate impacts to worsen existing environmental inequities related to extreme heat, pollution, air quality, and access to potable water in areas reliant on fossil fuels, often near poor communities.

CLIMATE
.: The Staggering Ecological Impacts of Computation and the Cloud

An interesting exploration of the environmental impact of computation, the cloud infrastructure AI models rely on, and the ecological costs of ubiquitous computing in modern life.

The article highlights the material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives.

It discusses the environmental impact of the Cloud, such as carbon footprint and water scarcity. The article also explores the acoustic waste data centres emit, known as “noise pollution.”

Ethics

.: Provocations for Balance

➜ So you have saved time. Now what?

➜ How can we reframe the conversation around AI in education to focus on its potential for transformative change beyond efficiency gains?

➜ Could this technology stifle creativity and imagination in young people, who might become reliant on AI for generating ideas instead of developing their own?

Inspired by some of the topics this week.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett