Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • On the Conversational Persuasiveness of Large Language Models
  • OpenAI’s Sora just made its first music video
  • Microsoft’s new safety system can catch hallucinations

Let’s get started!

~ Tom Barrett


AI IMPACT

.: On the Conversational Persuasiveness of Large Language Models [PDF]

Summary ➜ This randomised controlled trial found that the large language model GPT-4 was significantly more persuasive than human debaters in online conversations, with access to basic personal information enabling it to tailor arguments and increase persuasiveness even further. Specifically, when personalised, GPT-4 had 81.7% higher odds of shifting participants’ opinions than human opponents. The results show large language models can use personal data to generate highly persuasive arguments in conversations, outperforming human persuaders.

Why this matters for education ➜ I know it is an unusual item to curate at the top of the issue, but bring this research in from the edges, shine a light on it, and its significance is clear. Plug any of these cutting-edge models into social platforms or news aggregation tools, and the possibilities for personalised disinformation are worrying. Just think about the persuasive power of personalised chatbots like Snapchat’s My AI.

There is a design challenge here for AI Literacy programmes – and AI systems – to help teachers and students understand both the benefits of providing just enough context for better performance, and emerging capabilities such as how these models shape our perception of value in what they generate.

More research is needed on how students interact with and are influenced by AI technologies to inform responsible integration in schools. The persuasive prowess of AI has implications for how technology is ethically designed and deployed in education to benefit, not harm, student development.

VIDEO

.: OpenAI’s Sora just made its first music video

Summary ➜ Sora is OpenAI’s state-of-the-art text-to-video model, and the company has been collaborating with a range of artists and creatives to explore what is possible with it. August Kamp, the creator of the new music video, explains the advantage of working with the video AI model: “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”.

Why this matters for education ➜ When we pause to consider how creative expression is changing and opening up in new ways, it challenges all of us in education to look for new opportunities. These advanced media tools create new platforms for student storytelling and artistic exploration.

The image above is my version of the character from Air Head – an AI-generated video image by shy kids using OpenAI Sora.


AI SAFETY

.: Microsoft’s new safety system can catch hallucinations

Summary ➜ According to The Verge, the new safety system includes three features, now available in preview on Azure AI: Prompt Shields, which blocks prompt injections and malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities.

Why this matters for education ➜ It is great to see the development of dynamic user guardrails and safety measures that go beyond just finely-tuned restrictions at the system level. While this development is aimed at system builders, I believe it could also be a precursor to similar measures being integrated at a user level, such as when a student interacts with a chatbot.

.: Other News In Brief

🔬 According to a new study, a simple way to improve the performance of large language models (LLMs) is to add more of them: More Agents Is All You Need.

📸 Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data

🔎 Google rolls out Gemini in Android Studio for coding assistance

💰 AI hardware company from Jony Ive, Sam Altman seeks $1 billion in funding

👥 Meta’s AI image generator can’t imagine an Asian man with a white woman

🌇 DALL-E now lets you edit images in ChatGPT

📹 OpenAI transcribed over a million hours of YouTube videos to train GPT-4

🎵 Spotify’s latest AI feature builds playlists based on text descriptions

🏧 Elon Musk’s X.ai in Talks to Raise $3B at a Valuation Matching Anthropic

🎮 An AI-powered Xbox chatbot for support tasks is being developed and tested by Microsoft.

:. .:

What’s on my mind?

.: Personal Persuasion

As you may recall, I have been using a note-taking tool called Mem since 2022, which has a range of AI features, including a chatbot. There is something uncanny about a chatbot addressing you by name.

Alright, Tom. I’ve analysed your writing style from the example Mems you’ve saved. I’ve noticed that you prefer a direct and concise style, often using short sentences…
That’s a powerful statement, Tom. It’s a mindset that can be liberating, especially in fields where the outcome is often beyond our control.

Naming me is not the only personalisation happening in these chats: Mem draws on all of my saved notes as context. I can chat about my notes and refer to specific items I have saved, and the chatbot, without being prompted, uses my saved notes as references in its responses.

It often surfaces stuff I saved ages ago but have long since forgotten. This is the personalisation I value in these smart systems.

But clearly, with my lead story about the research on the persuasive powers of AI models in mind, we have to be watchful for the subtle acceptance of ideas. The simple inclusion of my name in the response changes the dynamic to be more personal, friendly and connected.

Compare that to the tinny mechanism of ChatGPT output and it is worlds apart. We crave a voice or personality in these chatbots, and we are wired to respond positively to being seen, recognised, named or acknowledged.

What comes to mind are my experiments in designing chatbots for specific problem sets in schools, and the fascinating question of how much synthetic personality we should design into the system.

It is often when we are building these systems and facing these design challenges – with a real problem and audience in mind – that the issues of persuasion, accountability, personality and connection become much clearer.

~ Tom

Prompts

.: Refine your promptcraft

Let’s try Gemini Ultra 1.0 with some extra personality prompts and see what we get.

Here’s a base prompt adapted a few ways with some imaginative characters as the personality.

Act as my active listening coach, with a personality like Dr. Zara Cosmos: eccentric, curious, and inventive. Use scientific jargon, space-themed puns, and imaginative analogies, while maintaining an energetic, encouraging, and slightly quirky tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And here is a second, alternative personality; notice how it changes the response.

Act as my active listening coach, with a personality like Sage Oakwood: serene, insightful, and empathetic. Use nature-inspired metaphors, philosophical questions, and calming language, while maintaining a gentle, understanding, and reassuring tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And one more: let me introduce Captain Amelia Swift, an adventurous and decisive leader. Remember, it is the same prompt with a different style or tone requested.

Act as my active listening coach, with a personality like Captain Amelia Swift: adventurous, decisive, and adaptable. Use action verbs, nautical terms, and problem-solving strategies, while maintaining a confident, motivating, and occasionally playful tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

You can get as creative as you like with the personalities you call upon. It is interesting to see how the same model responds differently with the various characters.
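If you are experimenting through an API rather than a chat interface, the persona and the task separate cleanly. Here is a minimal Python sketch of that idea, assuming the common system/user chat-message convention; the persona descriptions are abbreviated from the prompts above, and the message format is an illustration, not any particular vendor’s API.

```python
# A sketch of how one base task can be wrapped in different personas.
# The personas come from the prompts above (abbreviated); the
# system/user message shape follows the common chat-API convention.

BASE_TASK = (
    "Act as my active listening coach. Create a detailed, challenging "
    "scenario for me to listen and respond empathetically to a friend's "
    "problem. Request my responses, provide expert feedback, and adapt "
    "difficulty based on my performance. Go slowly, step by step."
)

PERSONAS = {
    "Dr. Zara Cosmos": "eccentric, curious and inventive; scientific "
        "jargon, space-themed puns, an energetic and quirky tone",
    "Sage Oakwood": "serene, insightful and empathetic; nature-inspired "
        "metaphors, philosophical questions, calming language",
    "Captain Amelia Swift": "adventurous, decisive and adaptable; action "
        "verbs, nautical terms, a confident and playful tone",
}

def build_messages(persona: str) -> list[dict]:
    """Combine a persona description with the shared base task."""
    system = f"You have a personality like {persona}: {PERSONAS[persona]}."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": BASE_TASK},
    ]

# Same task, three different voices:
for name in PERSONAS:
    print(build_messages(name)[0]["content"])
```

Swapping the persona only changes the system message, which makes it easy to run the same task across characters (or models) and compare the completions side by side.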

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

INTRO

.: How Does a Large Language Model Really Work?

Tobias Zwingmann takes us step by step through the mechanics and basic architecture of how a Large Language Model (LLM) works, generates text and responds to your prompts: “Today, I aim to demystify the principles of LLMs – a subset of Generative AI that produces text output. Understanding how tools like ChatGPT generate both magical and sometimes oddly dumb responses will set you apart from the average user, allowing you to utilize them for more than just basic tasks.”
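To give a taste of the core idea in the explainer – an LLM repeatedly predicts the next token from what came before – here is a toy bigram model in Python. Real LLMs learn a neural network over vast corpora; this sketch merely counts word pairs in one sample sentence, but the generate-one-token-at-a-time loop has the same shape.

```python
import random
from collections import defaultdict, Counter

# Toy next-token prediction: count which word follows which in a tiny
# corpus, then sample a continuation one word at a time.
corpus = ("the model predicts the next word and "
          "the next word follows the model").split()

# Bigram counts: for each word, how often each successor appears.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Sample a sequence by repeatedly picking a weighted next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        words = list(options)
        weights = [options[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(generate("the", 5)))
```

After “the”, this model is equally likely to pick “model” or “next” because both follow it twice in the corpus; an LLM does the same kind of weighted next-token choice, just with billions of learned parameters instead of a count table.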

RESEARCH

.: An MIT Exploration of Generative AI

A collection of research papers from MIT in the following categories:

  1. AI in Engineering and Manufacturing
  2. AI Impact on Work and Productivity
  3. Creative Applications of AI
  4. Education and Generative AI
  5. Human-AI Interactions
  6. Nature-Inspired Design and Sustainability
  7. Practical AI Applications
  8. Social Implications of AI

MIT President Sally Kornbluth explains: “This collection offers a glimpse into some of MIT’s most brilliant minds at work, weaving new ideas across fields, departments and schools. We share their work in the hope it will serve as a springboard for further research, study and conversation about how we as a society can build a successful AI future.”

RESEARCH

.: The ‘digital divide’ is already hurting people’s quality of life. Will AI make it better or worse?

Almost a quarter of Australians are digitally excluded, missing out on online benefits. The digital divide affects quality of life, especially for older, remote, and low-income individuals. AI could deepen this gap if digital exclusion issues are not addressed.

We found digital confidence was lower for women, older people, those with reduced salaries, and those with less digital access.

We then asked these same people to comment on their hopes, fears and expectations of AI. Across the board, the data showed that people’s perceptions, attitudes and experiences with AI were linked to how they felt about digital technology in general.

In other words, the more digitally confident people felt, the more positive they were about AI.

Ethics

.: Provocations for Balance

Thought Leader: In a world where AI language models can convincingly argue any perspective, a charismatic figure harnessed their persuasive prowess to sway public opinion. As the model’s influence grew, dissenting voices were drowned out, leading to a chilling conformity of thought. But when the model’s true agenda was revealed, would anyone be left to question it?

The Art of Obsolescence: An aspiring artist struggled to find her voice amidst the dazzling AI-generated creations flooding the market. As technology advanced, human artistry became a niche curiosity, and artists were forced to choose – embrace the machines or be left behind. But when the line between human and artificial blurred, what would define true expression?

The Divide: Set in a future where the digital divide has deepened into a chasm, society is split between the technologically elite and those left behind. A teacher in a remote community, where access to AI and digital resources is limited, starts an underground movement to bridge the gap. As the movement grows, it becomes a target for both sides of the divide, leading to a pivotal showdown over the future of equality and access in an AI-driven world.

Inspired by some of the topics this week and dialled up. To be honest, this section has morphed into me developing potential Black Mirror episodes, and it is much more dystopian than I was expecting.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett