.: Promptcraft 32 .: Universal sues Anthropic for copyright breach

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. Every week, I curate the latest news, developments and learning resources so you can consider how AI changes how we teach and learn.

In this issue:

  • China proposes AI framework at Belt and Road conference​
  • Baidu claims its new Ernie 4.0 matches capabilities of GPT-4​
  • Universal Music Group sues Anthropic for copyright infringement over song lyrics​

Let’s get started!

.: Tom

Latest News

.: AI Updates & Developments

.: China proposes AI framework at Belt and Road conference ➜ At its Belt and Road forum, China proposed a new AI framework calling for equal rights in development and warning against ideological divides and misuse of AI technologies.

The Belt and Road forum is a major international conference hosted by China. It brings together leaders and representatives from many countries to discuss the Belt and Road Initiative, China’s ambitious plan to improve trade and infrastructure across Asia, Africa, and Europe.

.: Baidu claims its new Ernie 4.0 matches capabilities of GPT-4 ➜ Chinese tech giant Baidu has released version 4.0 of its natural language model Ernie, claiming it matches the capabilities of OpenAI’s recently announced GPT-4 despite lacking comparable hype.

.: Research by BSI finds a global “confidence gap” hindering AI adoption ➜ New research from BSI finds a global confidence gap between interest in AI and trust in adopting it, highlighting the need for greater education to build understanding and close this gap.

.: EU’s AI Act unlikely to pass in 2023 as hoped ➜ The EU’s long-awaited AI Act may not be passed before the end of 2023 as hoped, as lawmakers struggle to agree on rules for regulating foundation models and generative AI systems.

.: Anthropic explores aligning an AI model with principles sourced from public input ➜ Anthropic collaborated with the Collective Intelligence Project to source training principles from 1,000 Americans. They compared training a model on the public principles versus Anthropic’s own principles.

.: Universal Music Group sues Anthropic for copyright infringement over song lyrics ➜ Universal Music Group has filed a lawsuit against AI startup Anthropic, alleging that its natural language model Claude 2 infringes copyright by distributing song lyrics without permission when prompted, including from major pop songs.

.: Stanford researchers develop an index to assess foundation model transparency ➜ Researchers at Stanford’s Institute for Human-Centered AI have developed a new Foundation Model Transparency Index to rate major companies on transparency, finding much room for improvement.

.: Anthropic research explores decomposing language models for better understanding ➜ A new study from AI company Anthropic explores decomposing language models into interpretable features, aiming to move beyond analyzing individual neurons for greater understanding and control.

Reflection

.: Why this news matters for education

A comment by Casey Newton in the latest Hard Fork podcast [linked below in the Learning section] struck a chord with me.

He argued that the future of these AI tools and chatbots is likely to be more personalised: tuned to individual preferences and principles, they will become much more helpful to individuals.

If you believe that these AIs are going to become tutors and teachers to our students of the future in at least some ways, different states have different curricula, right? And there will be some chatbots that believe in evolution, and there will be some that absolutely do not. And it’ll be interesting to see whether students wind up using VPNs just to get a chatbot that’ll tell them the truth about the history of some awful part of our country’s history.

This raises a pressing concern: how do we prevent personalised chatbots and learning models from becoming closed-off filter bubbles, entrenching bias and preferred narratives?

The prospect of students breaking out of localised “truth bubbles” imposed by AI infrastructure is a serious provocation.

These AI systems are not neutral or benign.

It will take concerted investment in AI, digital, data, and media literacy to ask questions about the models we use and to critically evaluate them, rather than sitting back and enjoying their utility while our discernment slowly erodes.

When we zoom out and put this dynamic into the context of the global regulatory space, we see lines drawn and the rapid proliferation of parochial AI systems.

Students will experience many AI models throughout their lives, each with its own signature, limitations, and inbuilt biases and preferences, whether deliberate or unintended.

Just imagine this scenario for a moment and reflect on what it will take for your education system to mobilise and embrace this challenge.

.:

~ Tom

Prompts

.: Refine your promptcraft

Another advanced promptcraft technique today. The Maieutic method, attributed to Socrates, is a form of cooperative argumentative dialogue which is used to stimulate critical thinking and to draw out ideas and underlying presumptions.

You can specifically instruct an LLM to use the Maieutic method to solve the problem.

Here is how you might phrase your prompt:

  1. Begin with the issue: “My back is starting to seize up on my right hand side. It is a mild pain and discomfort.”
  2. Query your LLM with a Maieutic instruction: “As an expert physiotherapist, endurance running coach, and chiropractor, what would you recommend for a mild back pain and discomfort on the right side? Please provide your reasoning and then evaluate the consistency of your own reasoning using the Maieutic method.”

This instruction asks the LLM to not only provide a recommendation and reasoning, but also to assess the consistency of that reasoning.

The LLM’s ability to perform this task effectively will largely depend on the capabilities of the version of the model you use. We always need to remember LLMs hallucinate and might confidently tell you the reasoning is great!

Even the most advanced versions may not fully understand or correctly implement the Maieutic method, as it involves logical consistency checks and iterative questioning.
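
If you want to experiment beyond a chat window, below is a minimal Python sketch of sending the same Maieutic prompt through an LLM API. It assumes the OpenAI Python SDK and an API key set in your environment; the model name and exact wording are illustrative, so adapt them to whichever tool you use and, as always, check the completion critically.

```python
# A minimal sketch of the Maieutic prompt sent via an API.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

issue = ("My back is starting to seize up on my right hand side. "
         "It is a mild pain and discomfort.")

maieutic_prompt = (
    "As an expert physiotherapist, endurance running coach, and chiropractor, "
    f"what would you recommend for this issue: {issue} "
    "Please provide your reasoning and then evaluate the consistency of your "
    "own reasoning using the Maieutic method."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever capable model you have access to
    messages=[{"role": "user", "content": maieutic_prompt}],
)

print(response.choices[0].message.content)
```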

Here is an example response using GPT-4 via Poe

.:

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

.: Peering into AI’s Black Box | Hard Fork Podcast

In the most recent episode of the Hard Fork podcast, Casey Newton and Kevin Roose explore some of the alignment and black-box research announcements I mentioned above.

.: Everything you need to know about the UK’s AI Safety Summit

The UK will host the world’s first major summit on AI safety in November at Bletchley Park. Its goal is to develop international collaboration on managing risks from advanced AI through shared understanding and research cooperation. Invitees include the US, Canada, France, Germany and, controversially, China, as well as tech leaders like Google’s DeepMind, OpenAI and Anthropic.

.: Mind over machine? The psychological barriers to working effectively with AI

While AI models are more accessible and capable than ever before, the latest evidence suggests humans aren’t particularly good at using them.

Overcoming our psychological biases through training, workflows and independent checks can help unlock the benefits.

Ethics

.: Provocations for Balance

Who should decide the rules that govern AI systems – tech companies, governments, or the public?

The Anthropic story about sourcing AI principles from public input suggests that public values should help shape AI development. But tech firms and governments clearly want influence too. There’s a debate over who should determine the ethics and regulations for AI.

How do we balance intellectual property rights with the public interest in AI research and applications?

Universal Music’s lawsuit against Anthropic for using song lyrics raises questions about copyright and legal access to data for training AI models. But there are arguments that strict enforcement impedes innovation and the public benefits of AI. Where is the line between IP protection and the public interest?

Should countries coordinate to develop global guidelines for AI, or take more nationalist approaches?

China argued for equal rights and warned against ideological divides in AI at the Belt and Road forum. Meanwhile, the EU and US take more insular approaches on AI regulation. Is global coordination required to govern shared technologies like AI responsibly?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our educational systems become. Thanks for being part of our growing community!

Please pay it forward by sharing the Promptcraft signup page with your networks or colleagues.

.: Tom Barrett

/Creator /Coach /Consultant

✂️ Cut Through the Noise: 5 Simple Coaching Phrases that Work

Dialogic #336

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

⬩ Simple yet powerful phrases unlock coaching conversations

⬩ Ask “What else?” to widen perspectives; ask “What’s one step?” to spark momentum

⬩ Balance structure with flexibility to meet each person where they are

This week I have been back at my partner schools Adelaide Botanic High School and Casey Fields Primary School in Melbourne.

I shared a presentation with the leadership team at ABHS about Dialogic Coaching and today I thought I would share with you some simple coaching phrases I use that make a big difference.

And what else?

Asking “What else?” allows the other person to explore a situation thoroughly before jumping into problem-solving mode. This question opens up possibilities rather than limiting them to just one perspective. Follow-up questions like “What factors are influencing this?” further the discovery process.

What was the most helpful part of today?

Asking this at the end of a session encourages reflection on insights, progress, or breakthrough moments. Looking at what worked well reinforces growth and creates awareness. It also allows customising support based on what they found impactful.

I have some ideas, but why don’t you start?

Use this when you need to show support and encouragement without taking over accountability; coaching is about empowering the person to take ownership. This phrase is great for exploring options and new ideas in response to the challenge they have identified.

How can I best support you in taking the next step?

Offering support for their next steps makes it a collaborative process. Using “best” invites them to assess what kind of support would serve them rather than a prescribed solution. It keeps their goals and desired direction central.

What’s one thing you could do today to move forward?

This coaching question encourages positive action and momentum. It prompts the person to think about concrete steps rather than feel stuck. Having them identify one thing makes it feel manageable rather than overwhelming. Progress compounds, so small, consistent actions add up over time.

Simple phrases and carefully chosen words make a huge difference. Sometimes it is the simplest and most direct expression that cuts to the heart of what might be needed.

Let me know your favourite powerful phrases which you use regularly in your coaching and leadership conversations.

⏭🎯 Your Next Steps
Commit to action and turn words into works

⬩ Identify 1-2 concrete goals to focus your coaching efforts.

⬩ Send me your favourite phrases or questions and share why they work for you.

⬩ Experiment with incorporating new powerful phrases into your conversations.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

⬩ What resonated with you from these coaching phrases?

⬩ How could a simpler approach elevate your communication?

⬩ What phrases could you adopt or adapt for your team?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

Here are some of my articles about coaching.

⟶ ​Dialogic Coaching — What This Approach Looks Like In Practice (edte.ch)​ In this article, I describe the dialogic coaching approach, where the coach and client participate as equals to generate shared understanding and momentum towards action. Key elements are sharing ideas openly and moving from exploration to commitment.

⟶ ​Transform Your Feedback and Goal Setting Forever With 3 Key Attributes (edte.ch)​ I examine how to improve the impact of feedback and goal setting by focusing on agency, precision, and systems awareness. This balances emergent dialogue with clear next steps anchored in reality.

⟶ ​Your Perspective is Your Truth (edte.ch)​ In this article, I explore the uniqueness of each coaching conversation, as every individual brings their own valid perspective and truth. This requires balancing structure with flexibility and slowing down to truly understand.

Thanks for reading. Drop me a note with any Kind, Specific and Helpful feedback about this issue. I always enjoy hearing from readers.

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe​​​​​

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.

Unsubscribe | Update your profile | Mt Eliza, Melbourne, VIC 3930

.: Promptcraft 31 .: Adobe creates symbol to label AI generated images

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. Every week, I curate the latest news, developments and learning resources so you can consider how AI changes how we teach and learn.

In this issue:

  • Google will Shield AI users from copyright challenges
  • Adobe creates symbol to label AI generated images
  • AI extracts the first legible word from a carbonised scroll

Let’s get started!

.: Tom

Latest News

.: AI Updates & Developments

.: Google will Shield AI Users from Copyright Challenges, Within Limits ➜ Google announced a new policy to defend users of its generative AI systems against intellectual property claims related to both training data and generated output, with some limitations. The policy does not mention Google’s Bard chatbot.

.: ‘Ukuhumusha’—A New Way to Hack OpenAI’s ChatGPT ➜ Researchers have discovered that the safety settings of OpenAI’s GPT-4 are ill-equipped to handle languages not commonly represented in its training data. A reminder about the limitations and bias in the training data.

.: Spotting AI Images and Chatbots with ‘Personalities’ ➜ This episode of ABC’s Download This Show discusses AI images and chatbots developed by Meta and OpenAI that can mimic celebrity personalities. Guests Jessica Sier and Sarah Moran join host Marc Fennell to discuss how to identify AI images and chatbots.

.: Adobe created a symbol to encourage tagging AI-generated content ➜ Adobe and other companies have created a symbol that can be attached to content alongside metadata, establishing its provenance, including whether it was made with AI tools.

.: Wait, where did this image come from? ➜ Critical information about the content you see online is often inaccessible or inaccurate. Content Credentials are a new open technology for revealing answers to your questions about content with a simple click.

.: Researchers Use AI to Read Word on Ancient Scroll Burned by Vesuvius ➜ Computer scientists participating in the “Vesuvius challenge” have used AI to extract the first legible word from a carbonised scroll burned in the eruption of Mount Vesuvius in AD 79. The word is Greek for “purple” and could provide insight into the contents of the unopened scroll from Herculaneum.

.: NZ police are using AI to catch criminals – but the law urgently needs to catch up too ➜ This article discusses New Zealand police’s use of AI tools like SearchX, Cellebrite and BriefCam to investigate crimes. While these tools promise to help predict and prevent crime, they raise significant privacy, bias and legal issues that current laws do not adequately address.

.: Google’s AI-powered search experience can now generate images and written drafts ➜ Google has expanded its Search Generative Experience (SGE) to allow users to generate images and written drafts directly from search queries. The tool uses AI models to produce images or drafts based on text prompts, and gives users options to refine results. Google aims to responsibly roll out these new AI generation capabilities.

Reflection

.: Why this news matters for education

As artificial intelligence tools proliferate in our information ecosystems, determining the provenance of content has become an urgent priority.

Adobe’s proposed AI tagging system is a promising step towards enabling transparency about AI-generated content. However, provenance is more than labelling—it is an opportunity to build digital, media and AI literacy skills and foster responsible innovation.

Students need opportunities to analyse online content critically, question its origins, and assess if bias is baked into the AI models generating it. Understanding provenance allows scrutiny of how training data was sourced. Educators should model evaluating content credibility, not just consuming it passively.

Educational institutions exploring AI are responsible for openly documenting its applications and limitations. Responsible innovation requires acknowledging risks—from chatbots spreading misinformation to generative models blurring authenticity. Ambiguity undermines trust.

Stories are also vital to provenance. As I have argued, we need insights into creators’ hopes and values. Since humans (currently) design AI systems, understanding the human context helps build trust in the technology.

Provenance should connect people and machines, not just tick compliance boxes. If learners grasp an AI tool’s aspirations to augment creativity, they can form realistic expectations about its capabilities and limitations.

Rather than seeing provenance as a burden, educators can reframe it as an opportunity to cultivate critical thinking. As learners encounter increasingly sophisticated AI, provenance equips them to trace content origins, assess biases, and balance benefits and risks.

Our learners will inherit an AI-infused world. Providing them with provenance skills is essential to illuminate AI’s role in our shared future.

.:

~ Tom

Prompts

.: Refine your promptcraft

This week I want to return to an advanced prompt technique called Expert Prompting – in which we ask for multiple perspectives and then a synthesis of those ideas.

There are variations to the structure I include below but this will get you started. Copy in the whole prompt but only edit the highlighted question section. Always remember to expect hallucinations and errors.

~

My Question or Challenge: [add your question here, but leave the rest of the variables]
Act as an expert in the field of [most relevant field to solve my question or challenge]
Suggest 3 named expert people in the field who could provide insights on [my question or challenge]
For each expert, generate a concise answer they would give, wordcount=40

Analyse all of the responses for common elements or recommendations.
Evaluate the recommendations provided by each expert or point of view.
Look for common themes or strategies mentioned by multiple experts.
Consider the expertise, reasoning, and evidence presented in each response.
Loop back to the original question: [my question or challenge]
Present a clear and accessible decision or recommendation based on the collective expertise, wordcount=40

~
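
If you prefer to script the technique rather than paste it into a chatbot, here is a hedged Python sketch: it drops your question into the top line of the template, exactly as described above, and leaves the other bracketed variables for the model to resolve. It assumes the OpenAI Python SDK and an API key in your environment; the helper name and default model are my own illustration, not part of the technique itself.

```python
# A minimal sketch of the Expert Prompting template as a reusable function.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

TEMPLATE = """My Question or Challenge: {question}
Act as an expert in the field of [most relevant field to solve my question or challenge]
Suggest 3 named expert people in the field who could provide insights on [my question or challenge]
For each expert, generate a concise answer they would give, wordcount=40

Analyse all of the responses for common elements or recommendations.
Evaluate the recommendations provided by each expert or point of view.
Look for common themes or strategies mentioned by multiple experts.
Consider the expertise, reasoning, and evidence presented in each response.
Loop back to the original question: [my question or challenge]
Present a clear and accessible decision or recommendation based on the collective expertise, wordcount=40"""


def expert_prompt(question: str, model: str = "gpt-4") -> str:
    """Fill the template with a question and return the model's synthesis."""
    response = client.chat.completions.create(
        model=model,  # illustrative default; use whichever model you have access to
        messages=[{"role": "user", "content": TEMPLATE.format(question=question)}],
    )
    return response.choices[0].message.content


print(expert_prompt("How can a small school introduce AI literacy across all year levels?"))
```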

Here is an example using ChatGPT-4

One of the strengths of AI systems is rapidly generating perspectives we might be missing. New perspectives augment my creativity and thinking, which is exactly how AI tools can amplify our capabilities.

.:

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

.: Google Cloud Skills Boost – Introduction to Image Generation

This intermediate course introduces diffusion models, a family of machine learning models that recently showed promise in the image generation space. Diffusion models underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.

.: Race, statistics, and the persistent cognitive limitations of DALL-E

This article discusses some of the limitations of AI models like DALL-E and ChatGPT in understanding language and images at a deep level. Gary Marcus argues we need AI that can represent and reason about human values, not just mimic past data. Overall it’s a critique of the current statistical approach to AI and a call for systems that embody ethics and common sense.

.: AI Attribution and Provenance

In this post, I want to explore how we might establish AI attribution frameworks and increase the transparency of provenance.
You might have encountered the term provenance in a gallery heist film or read about missing art forgeries. The more relatable application of provenance is traceability. So before we explore what this means for artificial intelligence, let’s start with a snack and a cuppa.

Ethics

.: Provocations for Balance

  • How can we ensure that the use of AI-generated images and metadata respects the rights and privacy of individuals involved?
  • What measures should be put in place to prevent the misuse or manipulation of AI-generated images and metadata for unethical purposes?
  • How can we address the potential biases and ethical implications that may arise from AI algorithms used in generating and analysing metadata for images?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our educational systems become. Thanks for being part of our growing community!

Please pay it forward by sharing the Promptcraft signup page with your networks or colleagues.

.: Tom Barrett

/Creator /Coach /Consultant

🌏 Progress Through Paradox

Dialogic #335

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

⬩ Global-Local Balance: Effective education transformation requires balancing global innovation with local adaptation.

⬩ Beware of Extremes: Overemphasis on global ideas or local insularity can impede progress.

⬩ Adaptable Solutions: Common educational challenges need nuanced, locally-tailored solutions.

Progress Through Paradox: Embracing Tensions in Educational Change

To kick off my series of reflections about my time last week in Japan at the GELP Tokyo conference, I want to explore the tension between a mindset open to ideas and new perspectives and making transformation work locally.

Here is a little bit of information to lay some groundwork. I am on the design and facilitation team running the Global Education Leaders Partnership (GELP) annual convening. Over the last few years, these events have been supported and hosted by Google for Education. In 2022, we were in Singapore, and last week, we ran the two-day convening in Tokyo, Japan.

This year, we had 23 countries represented by over 40 people in Tokyo and more online across the conference. We had a rich programme of excellent speakers, all sharing stories about leading educational transformation. It was an inspiring few days.

GELP Tokyo 2023
The provocations for the conference were around four key ideas:

▸ DISRUPTION

What has been the biggest disruptor to the purpose of education in the past 12 months?

▸ TRANSFORMATION

How is the Asia Pacific education system transforming?

▸ TECHNOLOGY

What role is technology and AI playing in transforming education?

▸ NEW CAPABILITIES

What new leadership capabilities are critical to lead education transformation?

Let’s dig into the tension I am exploring in this week’s newsletter: the slightly contradictory endeavour of privileging the local context over being open to new ideas or solutions.

I’m referring to the tension between two vital but seemingly opposing perspectives on change and transformation.

On the one hand, we have a mindset that is constantly open to new ideas, eager to absorb the latest thoughts, theories, and strategies from around the globe. This mindset thrives on the fresh and innovative, continually seeking improvement in education. It is characterised by a willingness to experiment, venture beyond the known and familiar, test boundaries, and push limits.

On the other hand, we need to make these ambitious ideas work within a specific, local context. This perspective understands that education is deeply embedded in cultural, social, and economic realities that differ significantly from place to place. It recognises that what works in one context may not work in another and that no single methodology or approach can be universally applicable.

Don’t underestimate the power of your vision to change the world. Whether that world is your office, your community, an industry or a global movement, you need to have a core belief that what you contribute can fundamentally change the paradigm or way of thinking about problems.

The Perils of Insularity and Keeping up Appearances

In the first mindset, there is a risk of becoming overly focused on the ‘new’ and ‘exciting’ at the expense of the ‘relevant’ and ‘practical’. Ideas that sound promising in the abstract may prove ineffective or harmful when applied in real-world situations, particularly if imposed without considering local circumstances.

In the second mindset, it is dangerous to become too insular and reject valuable insights and opportunities simply because they originate outside the local context. It is easy to fall into the trap of thinking that ‘we’ve always done it this way’ or ‘that won’t work here’, thereby missing out on the benefits of innovation and change.

Shared Problems, Localised Solutions

One thing is clear from listening to 23 countries talk about educational transformation: we are all pulling in the same direction, even if we do not know it. The problems we share are more common than we might think. From the struggle to integrate technology effectively into our classrooms to the challenges of preparing our students for a rapidly changing future to the imperative of fostering equity and inclusion in our schools, these are issues that transcend borders.

However, the solutions to these shared problems are not one-size-fits-all. They need to be nuanced and adaptable, tailored to local contexts. They need to be rooted in a deep understanding of the cultural, social, and economic realities of the places where they are implemented.

We need to foster a mindset that is open to new ideas but also mindful of the realities on the ground. We need to be willing to learn from each other and adapt and customise those learnings to our contexts.

Saying “Think global, act local” seems too pithy to communicate the struggle to balance these dispositions. Achieving this balance is not easy. It requires humility and empathy, courage and creativity, patience and persistence. But it is essential to make meaningful, sustainable progress in transforming education.

⏭🎯 Your Next Steps
Commit to action and turn words into works

We all have biases that can close us off to new ideas. Make time for self-reflection to identify any biases you might have, and consider how they may be influencing your openness to new perspectives.

Pick a belief you hold strongly and purposefully seek out credible information that contradicts that belief. This exercise can help you become more comfortable with cognitive dissonance and more open to changing your views when presented with new information.

Engage in conversations with your colleagues or peers about the future of education. Share what you’ve learned from your research and listen to their perspectives.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

Balancing Innovation and Practicality: How can we maintain a balance between being open to new and innovative ideas and ensuring they are relevant and practical for our specific context?

Overcoming Insularity: What strategies can we employ to ensure we don’t become too insular in our approach, thereby missing out on valuable insights and innovations from other regions?

Tailored Solutions: Given that the solutions to shared problems in education must be nuanced and adaptable, how can we effectively tailor these solutions to our local contexts without losing the essence of the innovation?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

Here are some further readings about transforming education systems.

Global education: How to transform school systems? | Brookings The authors present an aspirational vision for transforming education systems to better serve all children and youth, especially the most disadvantaged, in a post-pandemic world.

6 stories about scaling change throughout education systems | Brookings Creating and sustaining changes in education systems is often viewed as a technical process. Yet the work of education systems transformation is as much about changing mindsets and everyday ways of working as it is about technical fixes or policy prescriptions.

Change your thinking, change your mindset – Tom Barrett (edte.ch) In this article, I explore the idea that changing our thinking habits can lead to changing our mindset, which is crucial for solving complex problems and innovating.

Thanks for reading. Drop me a note with any Kind, Specific and Helpful feedback about this issue. I always enjoy hearing from readers.

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe​​​​​

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.

Unsubscribe | Update your profile | Mt Eliza, Melbourne, VIC 3930

.: Promptcraft 30 .: Has the attention economy found a new power-up?

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. Every week, I curate the latest news, developments and learning resources so you can consider how AI changes how we teach and learn.

In this issue:

  • Disney faces designer backlash
  • OpenAI warns about limits of new model
  • Google’s Pixel 8 launch was a parade of AI

Let’s get started!

.: Tom

Latest News

.: AI Updates & Developments

.: Disney’s Loki faces backlash over reported use of generative AI ➜ There has been backlash against Disney over the reported use of generative AI in creating a poster for the second season of Loki. Designers flagged that an image of a spiralling clock in the background of the poster appears to be from a Shutterstock image that was likely created by an AI. This could violate Shutterstock’s rules banning AI-generated images unless they use Shutterstock’s own tool.

.: Canva’s new AI tools automate boring, labor-intensive design tasks ➜ Canva has released new AI-powered design tools called Magic Studio to automate tasks like converting designs to different formats and editing images. The tools aim to make content creation more accessible for all users.

.: OpenAI warns folks over GPT-4 Vision’s limits and flaws ➜ OpenAI has upgraded its GPT-4 model to include image processing capabilities, which it calls GPT-4V. This allows users to upload an image and then ask GPT-4 questions about the image via ChatGPT. However, in documentation about GPT-4V, OpenAI warns of several limitations and safety risks.

.: UK data watchdog warns Snap over My AI chatbot privacy issues ➜ The UK Information Commissioner’s Office issued a preliminary enforcement notice to Snap regarding its My AI chatbot for teenagers. Regulators found Snap failed to adequately assess privacy risks to children before launching the product.

.: Google’s Pixel 8 launch was a parade of AI ➜ Google emphasised AI over 50 times during its Pixel 8 launch event, aggressively positioning itself as an AI leader. While useful features ultimately matter most to customers, Google’s frequent mentions of AI may reflect anxiety about keeping pace with competitors in the AI space.

.: Microsoft introduces AI meddling to your files with Copilot in OneDrive ➜ Microsoft plans to overhaul OneDrive by adding Copilot AI capabilities to the cloud storage service. This will allow Copilot to help users find and organise files within OneDrive. Microsoft also wants to steer users towards using the OneDrive web interface, which they are enhancing with new features.

.: Arc browser’s new AI-powered features combine OpenAI and Anthropic’s models ➜ The Arc browser is launching new features called “Arc Max” that integrate AI from OpenAI and Anthropic to provide contextual assistance when browsing, including renaming tabs and files, previewing links, and conversing with ChatGPT. The features aim to boost productivity without requiring extra steps, and user feedback will determine which features remain over time.

.: AI Startup Reka Challenges ChatGPT with Multimodal AI Assistant ‘Yasa-1’ ➜ Reka has announced a new multimodal AI assistant called Yasa-1 that understands text, images, audio and can be customised for businesses. It aims to compete with ChatGPT by providing answers from internet context and supporting 20 languages.

Reflection

.: Why this news matters for education

Amidst all of the rollouts, announcements and hype about new products and AI-powered features, we need to keep the spotlight on what is happening in social media.

The proliferation of AI capabilities in social media is one of the clearest near-term risks we might face. Never mind existential dread; this impacts young people now.

The platforms, networks and apps that we know can cause so much harm are experiencing a surge of new AI-powered tools and features.

Meta has announced a wide range of chatbots across its portfolio of products, including Instagram, WhatsApp and Facebook.

Earlier this year, I wrote about Snap’s integration of My AI, a chatbot available within the Snapchat app. Within a few months, young people had sent over 10 billion messages to the chatbot.

Snap released a rose-tinted set of user data describing the topics young people were discussing with the chatbot. We need to ask what was missing from that list. What were the minority cases, and how were they handled?

In the news this week, Snap has been flagged by the Information Commissioner’s Office (ICO), the UK’s data watchdog, over potential privacy risks to 13 to 17-year-olds using the My AI feature.

The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’.

~ Information Commissioner, John Edwards.

These are early signals and warnings, and no breach of data privacy compliance in the UK has been proven. Yet this issue stretches far beyond the importance of data privacy.

We must grapple with the emergence of synthetic relationships becoming a normal part of our digital lives.

What relational diet are young people experiencing via chats with My AI and other large language models? How do these new synthetic, relational hooks keep people within the toxic confines of social media? Has the attention economy found a new power-up?

We still have much to learn, but the integration of AI into social media is critical for educators to monitor.

.:

~ Tom

Prompts

.: Refine your promptcraft

A simple Promptcraft recommendation for you this week, which I have been using a lot lately:

Make this better

I used Midjourney, the AI image generator, before any of the popular chatbots, and it has a reroll button (🔁) to re-generate a response.

This is one of the most important tips for working with LLMs and other AI tools: re-generate the response if the first attempt is not quite right.

Google’s Bard has a similar button that lets you see different drafts, but if you are using other tools like ChatGPT, just prompt with “make this better”.

You get the added bonus of a reroll with improvements. ✨
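
If you work with an LLM through code, the same reroll can become a short feedback loop. Here is a minimal sketch, assuming the OpenAI Python SDK; the draft task, model name and number of passes are illustrative assumptions rather than a recommendation.

```python
# A minimal sketch of the "make this better" reroll as a feedback loop.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

messages = [{"role": "user",
             "content": "Draft a two-sentence note to parents about our new AI policy."}]

for _ in range(3):  # the initial draft plus two "make this better" rerolls
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    draft = reply.choices[0].message.content
    print(draft, "\n---")
    # Keep the model's own draft in the history so the next pass improves on it
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": "Make this better"})
```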

.:

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

.: How Your Brain Organises Information

My name is Artem, and I’m a computational neuroscience student and researcher. In this video, we talk about cognitive maps: internal models of the outside world that the brain uses to generate flexible behaviour generalised across contexts.

.: What Is Natural Language Processing And How Does It Work?

Ever wondered how we can talk to machines and have them answer back? That is due to the magic of NLP. In this video, we will answer the question ‘What is NLP?’ for you. We will then look at some important steps involved in NLP, all in 5 minutes!

.: Introduction to large language models

Large Language Models (LLMs) and Generative AI intersect and they are both part of deep learning. Watch this video to learn about LLMs, including use cases, Prompt Tuning, and GenAI development tools.

Ethics

.: Provocations for Balance

  • In what ways can humans bond emotionally with AI systems designed to mimic relatability? What needs might synthetic relationships fulfil or exploit in users?
  • How do artificially intelligent chatbots simulate human connection and relationships? What are the limitations of relating to an entity that does not have human consciousness or empathy?
  • How might forming bonds with synthetic entities affect social development during adolescence? What risks and ethical concerns emerge from young people relating to AI chatbots as artificial friends?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our educational systems become. Thanks for being part of our growing community!

Please pay it forward by sharing the Promptcraft signup page with your networks or colleagues.

.: Tom Barrett

/Creator /Coach /Consultant