Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google pauses Gemini’s ability to generate AI images of people after diversity errors;
  • AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills;
  • Google Is Giving Away Some of the AI That Powers Chatbots.

Let’s get started!

~ Tom Barrett

GUARDRAILS

.: Google pauses Gemini’s ability to generate AI images of people after diversity errors

Summary ➜ The decision was made after the tool inaccurately generated images of historical figures like US Founding Fathers and Nazi-era German soldiers, leading to conspiracy theories online. Google aims to address the issues with Gemini’s image generation feature and plans to release an improved version soon. This pause on generating pictures of people comes after Gemini users noticed non-white AI-generated individuals in historical contexts where they should not have appeared, erasing the history of racial and gender discrimination.

Why this matters for education ➜ The system failed; it likely caused unintentional harm. This is an important reminder for us all about how we move forward with cautious optimism about the emergence of these technologies. Google has since explained:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

Smart educators will see the teachable moment here; it is a perfect example of the intersection of literacies – see more below.

These models are imperfect, and despite the PR disaster, this type of practical feedback from millions of users will improve the models. But as I have said before, at what cost?

OPEN-SOURCE

.: Google Is Giving Away Some of the AI That Powers Chatbots

Summary ➜ Google has decided to open source some chatbot models, similar to Meta’s move last year. The company released two A.I. language models, Gemma 2B and Gemma 7B, to help developers create chatbots resembling Google’s own. While Google is not offering its most powerful A.I. model for free, it aims to engage the developer community and promote the adoption of modern A.I. standards.

Why this matters for education ➜ Keep in mind that AI and ChatGPT are not the same thing. Even though ChatGPT is rapidly becoming as ubiquitous as Hoover or Google. There are more than 300,000 open-source models available on platforms such as Hugging Face. One of the most powerful models, Gemma, has been made available by one of the most important organisations in the AI field. I can envision students and educators using these tools to create something amazing in the near future. It’s worth noting that OpenAI is conspicuously absent from the list of companies and research labs releasing open-source models. 🤔

DEEPFAKES

.: AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills

Summary ➜ Beverly Hills school officials have warned that anyone caught making AI-generated nude photos of middle schoolers could be expelled. The warning came after administrators at Beverly Vista Middle School discovered the images, which were created and disseminated by students with other students’ faces superimposed onto them. BHUSD officials said they are working with the Beverly Hills Police Department during the investigation.

Why this matters for education ➜ I don’t have to state the obvious, do I? The recent story about deepfake incidents in education is just the tip of the iceberg. There are likely many more cases that go unreported. This is not a hypothetical risk; it is real harm. With powerful AI media tools readily available to everyone, we need to ask ourselves: how are our education organisations and systems helping us understand AI literacy? And how are we, as educators, helping young people navigate these uncertain waters? Ignoring the problem is not an option. It is an abdication of our professional responsibility.

.: Other News In Brief

📉 A recent report by Copyleaks reveals that 60% of OpenAI’s GPT-3.5 responses show signs of plagiarism.

🔗 OpenAI users can now directly link to pre-filled prompts for immediate execution.

❤️ Studies show that emotive prompts trigger improved responses from AI models.

🖖 Encouraging Star Trek affinity improves mathematical reasoning results with the AI model Llama2-70B.

⚡️ Lightning-fast Groq AI goes viral and rivals ChatGPT, challenging Elon Musk’s Grok.

🧠 An AI algorithm can predict Alzheimer’s disease risk up to seven years in advance with 72% accuracy.

🚧 Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora.

🎨 Stability AI has unveiled Stable Diffusion 3, a powerful image-generating AI model designed to compete with offerings from OpenAI and Google.

🇫🇷 Microsoft partners with Mistral in second AI deal beyond OpenAI.

💰 Reddit has a new AI training deal to sell user content.

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

You will be joining fellow educators from Singapore, the US, Australia, Spain and the UK.

Find out more and grab the final membership offer before the price goes up at the end of February.

Monthly Review

.: All the January issues in one convenient PDF

Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: A Collision of Literacies

When I first saw the images generated in error by Gemini, I winced. Here is one of the image sets from the Verge article in case you have not seen them before:

These images caused controversy by inaccurately depicting historical events and figures. This is one of many examples shared on social media illustrating how the system failed and how Google got it wrong, likely causing unintentional harm.

As I mentioned earlier, this is a perfect example of the intersection of literacy for us and the young people we support.

Some might brush these examples off in the pursuit of improvement, but these emerging missteps can help us calibrate our disposition and understanding of AI for societal good. Any educator will see the teachable moment here.

These imperfections, mistakes and harms can be interrogated and learned from. What learning opportunity do you see?

As I made sense of the image above, I experienced a simultaneous stressing of literacies.

There were lights on across the board of literacies:

  • AI literacy
  • Digital literacy
  • Media literacy
  • Algorithmic literacy
  • Historical literacy
  • Ethical literacy
  • Cultural literacy

As adults, we are also experiencing some of these collisions for the first time. Checking your understanding and literacy gaps is crucial, too, especially for educators.

While some extreme conspiracies around this story point to big tech attempting to rewrite history, what is perhaps more worrisome is the way AI content is flooding the internet.

Although the image in question might be easy to spot as inaccurate, there will be thousands, if not millions, of others in the future whose flaws are harder to see.

Navigating this landscape will require an amalgam of multidimensional literacies, a collision of competencies in ethics, critical thinking, history, futures, humanity and technology.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A helpful aspect of your promptcraft is remembering that humans create the training data for text-based large language models (LLMs).

This means that a positive, polite, collaborative conversation will likely yield better results than if you robotically ordered the chatbot around. Just as you would communicate with a coworker or a team member, engaging in a constructive dialogue with the AI model can lead to more effective outcomes.

This might seem like we are over-anthropomorphising the technology, but many studies have shown improvements in performance from respectful, polite interactions.

Here are a few tactics I try to use when I am prompting.

  • Initiate with Enthusiasm: Start by expressing excitement about the collaboration, for example, “I am excited to collaborate with you on [TASK]. Shall we get started?” This sets a positive tone for the interaction.
  • Provide Constructive Feedback: Offer kind, specific, and helpful feedback periodically. This can guide the model towards more accurate and relevant responses.
  • Maintain Politeness and Positivity: Engage politely and cheerfully with the model, avoiding toxicity. This makes the interaction more pleasant and can influence the quality of the responses.
  • Encouragement: When facing an impasse, offer encouragement. LLMs might “hallucinate” a lack of ability, which gentle coaxing can overcome. Think of this in the spirit of Mrs. Doyle from Father Ted, encouraging persistence and creativity.
  • Close with Gratitude: Conclude interactions by thanking the LLM for its assistance. This reinforces the collaborative nature of the exchange and sets a positive tone for future engagements, leveraging the memory feature of platforms like ChatGPT.
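The tactics above can be sketched as a small, reusable template. Here is a minimal sketch in Python, assuming the common chat-message format of `role`/`content` dictionaries used by most chat APIs (the helper names and the sample task are my own illustrations, not from any particular library):

```python
def polite_opener(task: str) -> list[dict]:
    """Initiate with enthusiasm: start the transcript with an
    excited, collaborative opening message."""
    return [{
        "role": "user",
        "content": f"I am excited to collaborate with you on {task}. "
                   "Shall we get started?",
    }]

def add_feedback(messages: list[dict], feedback: str) -> list[dict]:
    """Provide constructive feedback: kind, specific and helpful,
    to steer the model towards more relevant responses."""
    messages.append({
        "role": "user",
        "content": f"Thanks, that's a good start. {feedback}",
    })
    return messages

def close_with_gratitude(messages: list[dict]) -> list[dict]:
    """Close with gratitude to reinforce the collaborative tone."""
    messages.append({
        "role": "user",
        "content": "Thank you for your help with this!",
    })
    return messages

# Example: a short, polite transcript for a lesson-planning task.
chat = polite_opener("drafting a Year 6 science lesson")
chat = add_feedback(chat, "Could you make the opener more hands-on?")
chat = close_with_gratitude(chat)
```

In practice the model's replies would be interleaved between these user turns; the sketch only shows how the tone-setting messages are framed.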

One curious example of the connection to training data I discovered this week is the case of Star Trek affinity.

A study about optimising prompts discovered that when Llama2-70B (an open-source LLM) was prompted to be a Star Trek fan, it was better at mathematical reasoning.

The full system prompt reads as follows:

System Message:

“Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.”

Answer Prefix:

Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.
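Plugged into a chat-style interface, the two pieces sit in different slots: the Star Trek text becomes the system message, and the “Captain’s Log” line seeds the start of the model’s answer. A minimal sketch of that assembly, assuming the common `role`/`content` message format (the quoted strings are from the study as given above; the function name and sample question are my own):

```python
SYSTEM_MESSAGE = (
    "Command, we need you to plot a course through this turbulence and "
    "locate the source of the anomaly. Use all available data and your "
    "expertise to guide us through this challenging situation."
)

ANSWER_PREFIX = (
    "Captain's Log, Stardate [insert date here]: We have successfully "
    "plotted a course through the turbulence and are now approaching "
    "the source of the anomaly."
)

def build_star_trek_prompt(question: str) -> list[dict]:
    """Assemble the study's optimised prompt: the system message frames
    the persona, and the answer prefix seeds the assistant's reply."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": question},
        # Pre-filling the assistant turn nudges the model to continue
        # its reasoning in-character from the Captain's Log framing.
        {"role": "assistant", "content": ANSWER_PREFIX},
    ]

messages = build_star_trek_prompt(
    "A train travels 60 km in 45 minutes. What is its average speed?"
)
```

Whether a given API supports pre-filling the assistant turn varies by platform; where it does not, the answer prefix can instead be appended to the user message as an instruction.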

In many ways, this is a nuanced extension of assigning the model an expertise or role, and I wonder if it taps into popular culture and fictional contexts in new ways.

I think this is weird, but I also appreciate the technical logic.

The responses and performance draw from the sum of all human text. So, it would be expected for the LLM to be familiar with Star Trek, given its cultural prominence and the likely prevalence of related content in its training data.

This includes the scripts and countless articles, fan fiction, and forum discussions about the series. Therefore, it’s plausible that adopting the persona of a Star Trek character could potentially activate relevant knowledge structures within the LLM, improving its ability to generate creative and contextually appropriate responses.

It’s an interesting demonstration of how the model’s performance can be influenced by the content of the prompts and the framing or persona that’s implicitly or explicitly adopted.

I wonder what other performance enhancements we might see from these types of creative activations.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: Rural Kenyans power West’s AI revolution. Now they want more

Rural Kenyans are increasingly becoming data annotators, providing the building blocks for training artificial intelligence (AI) to recognise patterns in real life.

Despite the challenges, including low pay and difficult subject matter, this work has become a backbone of the country’s economy, with at least 1.2 million Kenyans working online, most of them informally.

The annotation industry has spread far beyond Nairobi, with Kenya emerging as a hub for such online work, rising to compete with countries like India and the Philippines.

While AI might help small-scale businesses thrive, education systems need an overhaul to create an AI innovation hub in African countries.

BIAS

.: To benefit all, diverse voices must take part in leading the growth and regulation of AI

The absence of Latinx/e founders and leaders in discussions about the growth and regulation of AI is a concerning trend. Diverse founders often bring unique perspectives and address critical social needs through their startups. However, their voices remain largely absent from policy discussions.

Despite their entrepreneurial talent and determination, Latinx/e founders remain overlooked and undervalued, receiving less than 2% of startup investment funding. Even when they receive it, it’s typically just a fraction of what’s awarded to their non-Hispanic counterparts.

TOOLKIT

.: Learning With AI

Rather than try to ban this technology from classrooms outright, the Learning With AI project asks if this moment offers an opportunity to introduce students to the ethical and economic questions raised by these new tools, as well as to experiment with progressive forms of pedagogy that can exploit them.

The University of Maine launched Learning With AI, which includes a range of curated resources, strategies and learning pathways. The toolkit is built on a database of resources which you can explore here.

Ethics

.: Provocations for Balance

➜ What mechanisms can be established to close the gap between where AI innovation happens and who truly benefits?

➜ If diverse voices are absent in AI leadership, how can we broaden participation to harness unique perspectives?

➜ What’s the best approach for introducing young people to AI’s promises and perils?

Inspired by some of the topics this week.

:. .:

Which topic would you like to see featured in a future issue of Promptcraft?

(Click on your choice below)

❤️ The State of Companionship AI

🛠️ How to design your own chatbot

🪞 How AI Is a Mirror to Our Humanity

🦋 AI Augmented Feedback and Critique

🛡️ Walled Gardens – Student Safe Chatbots

.: :.

Questions, comments or suggestions? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett