Join 80 educators on the waitlist for my new learning community about AI for education.

Hello Reader,

Promptcraft is a weekly newsletter on AI for education, designed to improve your AI literacy and enhance the learning ecosystem.

In this issue, you’ll discover:

  • How explicit deepfake images of Taylor Swift have sparked calls for new laws;
  • Google showcases new edu AI tools to help teachers save time;
  • Nightshade – like putting hot sauce in your lunch so it doesn’t get stolen.

Let’s get started!

~ Tom Barrett


DEEPFAKE

.: Explicit Deepfake Images of Taylor Swift Spark Calls for New Laws

Summary ➜ Explicit deepfake images of singer Taylor Swift were widely shared online, viewed millions of times. This has led US lawmakers to call for new legislation criminalising deepfake creation. Currently no federal laws exist against deepfakes in the US. The BBC notes the UK recently banned deepfake porn in its Online Safety Act.

Why this matters for education ➜ This story highlights the rapid advancement of deepfake technology, which is being used to target women specifically. Notably, the images were reportedly created not with specialised deepfake tools but with mainstream AI image generators from companies such as Microsoft and Midjourney. In some cases, these tools are freely available.

Over 99% of deepfake pornography depicts women without consent, and there has been a 550% rise in the creation of doctored images since 2019. It’s a reminder that students need guidance on how to evaluate sources and credibility online. Media literacy and critical thinking are the shared territory of AI literacy, and we need to help young people identify manipulated or synthetic media. Discussing these topics provides an opportunity to reflect on ethical issues like consent and privacy in the digital age. We must equip the next generation to navigate an information landscape where technological advances have outpaced regulation.


US ELECTION

.: Fake Biden Robocall Creator Suspended from AI Voice Startup ElevenLabs

Summary ➜ An audio deepfake impersonating President Biden was used to disseminate false information telling New Hampshire voters not to participate in the state’s primary election. The call wrongly claimed citizens’ votes would not make a difference in the primary, in an apparent attempt to suppress voter turnout. ElevenLabs, the AI voice generation startup whose technology was likely used to create the fake Biden audio, has now suspended the account responsible after being alerted to the disinformation campaign.

Why this matters for education ➜ In the past few weeks, I have been sharing articles and links about the threat deepfake technology poses to democratic processes around the world. Unfortunately, this issue is not isolated: it sits in the larger context of non-consensual synthetic explicit media featuring celebrities and other individuals, a trend educators should take note of. Add to this the growing volume of AI-generated articles on the internet, and the question becomes how we develop new guidelines for young learners to navigate this landscape.


GOOGLE AI

.: Google showcases new edu AI tools to help teachers save time and support students

Summary ➜ At the BETT edtech conference in London, Google showcased over 30 upcoming tools for educators in Classroom, Meet, Chromebooks and more. Key highlights include new AI features like Duet in Docs to aid lesson planning, interactive video activities and practice sets in Classroom, data insights for teachers, accessibility upgrades, and strengthened security controls.

Why this matters for education ➜ As I mentioned in previous issues, it’s important to keep an eye on Google’s advancements in AI because of their huge user base. This is a significant update for education, a sector that was not a primary focus in Google’s earlier AI integrations with Bard and other tools. Google has been very active in AI this past week, and it will be interesting to see how their momentum builds. Additionally, based on user evaluations rather than academic benchmarks, the performance of Google’s Bard, now running the Gemini Pro model, has improved significantly: Bard currently ranks second on the LMSYS Chatbot Arena Leaderboard, just behind GPT-4 Turbo.

.: Other News In Brief

Nightshade, the tool that ‘poisons’ data, gives artists a fighting chance against AI

Chrome OS has been updated with a few experimental AI features.

Speaking of web browsers, my preferred choice is Arc, and they just shipped a connection to Perplexity AI as a default search tool.

Google’s Lumiere brings AI video closer to real than unreal

OpenAI has released a new ChatGPT mention feature in beta, which allows a user to connect different GPTs in a single chat.

This feature is on for me, so once I have had a play I will share more with you in the next Promptcraft. TB

Google and Hugging Face have established a partnership to offer affordable supercomputing access for open models.

:. .:

.: Join the community waitlist

On 5 February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on early bird subscriptions.

.: :.

What’s on my mind?

.: Unreal Engine

Last week, while sifting through the latest in media and AI developments, a term caught my attention and refused to let go: the ‘liar’s dividend.’ It’s a concept that feels almost dystopian yet undeniably real in our current digital landscape.

This term refers to a disturbing new trend: the growing ease with which genuine information can be dismissed as fake, thanks to the ever-looming shadow of AI and digital manipulation.

‘Liar’s dividend’ was coined by Hany Farid, a professor at UC Berkeley who specialises in digital forensics, and I discovered it via Casey Newton on the Hard Fork podcast:

because there is so much falseness in the world, it becomes easier for politicians or other bad guys to stand up and say, hey, that’s just another deepfake.

Where AI and digital tools are adept at crafting convincing falsehoods, even the truth can be casually brushed aside as fabrication. It’s a modern twist on gaslighting, but on a global scale, where collective sanity is at stake.

This concept hit home for me this week amidst the flurry of stories about deepfakes, robocalls and synthetic media.

It’s like watching the web transform into a murky pool of half-truths and potential lies. This shift isn’t just about technology; it’s a fundamental change in how we perceive and interact with information and each other.

I can’t ignore the profound challenge this presents. Big tech promotes AI tools as miraculous timesavers, but they also enable new forms of deception. The trade-off has become unsettlingly clear – these tools streamline our lives and distort our reality.

Not long ago, many viewed the risks of AI as distant, almost theoretical concerns. But today, these risks are palpably close. As I see it, the real threat isn’t in the AI itself but in how it erodes our trust in what we see and hear. As AI tools become more sophisticated, the task of discerning truth in the media becomes daunting.

This draws my attention to the shared territory between media literacy, critical thinking and AI literacy efforts. For years, schools have emphasised the importance of the ‘big Cs’ – critical thinking, creativity, curiosity, etc. But now, we must urgently enact and evolve these concepts. Students require a new kind of literacy, a blend of traditional critical thinking with a nuanced understanding of AI and digital manipulation.

Truth has become a fluid concept, shaped by algorithms and artificial voices; how do we prepare students to think critically and exercise discernment in an era of manipulated realities?

They need more than knowledge; they need a toolkit for learning and discerning and the ability to navigate a reality where AI blurs the lines between fact and fiction.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

This week I want you to focus on exploring a structured template for your promptcraft. Last year I shared CREATE as a handy acronym for the elements of good prompting.

Let’s take a look at another helpful framework, CO-STAR, from Sheila Teo and the GovTech Singapore’s Data Science & AI team, the winners of a recent Singaporean prompt engineering competition.

Context :.

Provide specific background information to aid the LLM’s understanding of the scenario, while ensuring data privacy is respected.

Objective :.

Concisely state the specific goal or purpose of the task to provide clear direction to the LLM.

Style :.

Indicate the preferred linguistic register, diction, syntax, or other stylistic choices to guide the LLM’s responses.

Tone :.

Set the desired emotional tone using descriptive words to shape the sentiment and attitude conveyed by the LLM.

Audience :.

Outline relevant attributes of the target audience, such as background knowledge or perspectives, to adapt the LLM’s language appropriately.

Response :.

Specify the expected output format, such as text, a table, formatted with Markdown, or another structured response, to direct the LLM.

Context: The students are 10-11 years old and have a basic understanding of food production and transportation. The project aims to teach about the environmental impacts of imported foods. Privacy should be respected.
Objective: Generate a draft planning outline for a 4-week unit on food miles including learning objectives, activities, and resources. Focus on Science and Tech concepts.
Style: Use clear headings and bullet points. Write in an educational style suitable for teachers.
Tone: The tone should be factual and enthusiastic about student learning.
Audience: The materials are for a Year 5 teacher familiar with the national curriculum.
Response: Return the draft outline formatted in Markdown. Include main headings, sub-headings, and bullet points.
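If you find yourself reusing CO-STAR often, it can help to keep the template in a small script rather than retyping it. Here’s a minimal sketch in Python – the `co_star_prompt` helper is my own illustrative naming, not part of any official framework or API; it simply joins the six labelled sections into one prompt string you can paste into any language model.

```python
def co_star_prompt(context, objective, style, tone, audience, response):
    """Assemble the six CO-STAR elements into a single structured prompt."""
    sections = [
        ("Context", context),
        ("Objective", objective),
        ("Style", style),
        ("Tone", tone),
        ("Audience", audience),
        ("Response", response),
    ]
    # One labelled block per element, separated by blank lines
    return "\n\n".join(f"# {label}\n{text}" for label, text in sections)


prompt = co_star_prompt(
    context="Students are 10-11 years old with a basic understanding of food production.",
    objective="Draft a 4-week unit outline on food miles with objectives and activities.",
    style="Clear headings and bullet points, written for teachers.",
    tone="Factual and enthusiastic about student learning.",
    audience="A Year 5 teacher familiar with the national curriculum.",
    response="Markdown with main headings, sub-headings and bullet points.",
)
print(prompt)
```

Because the elements are just named parameters, you can swap any one of them – say, a different audience or response format – and regenerate the prompt to compare completions across models.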

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

ENERGY
.: Rethinking Concerns About AI’s Energy Use | Center for Data Innovation

many of the early claims about the consumption of energy by AI have proven to be inflated and misleading. This report provides an overview of the debate, including some of the early missteps and how they have already shaped the policy conversation, and sets the record straight about AI’s energy footprint and how it will likely evolve in the coming years.

ESAFETY
.: Deepfake trends and challenges — position statement

The Australian eSafety Commissioner published guidance on the potential risks and challenges posed by deepfake technology.

Their position statement is a helpful introduction, including background details about deepfake technology, recent coverage (though not fully up to date), eSafety’s approach, and advice for dealing with deepfakes.

DIGITAL DECEPTION
.: Deepfakes: How to empower youth to fight the threat of misinformation and disinformation

An extensive exploration of this issue from Nadia Naffi including some highlights from her research into how to counter the proliferation of deepfakes and mitigate the impact:

Youth need to be encouraged in active, yet safe, well-informed and strategic, participation in the fight against malicious deepfakes in digital spaces.

She also offers these helpful guiding strategies, tactics and concrete actions:

  • teaching the detrimental effects of disinformation on society;
  • providing spaces for youth to reflect on and challenge societal norms, inform them about social media policies and outlining permissible and prohibited content;
  • training students in recognizing deepfakes through exposure to the technology behind them;
  • encouraging involvement in meaningful causes while staying alert to disinformation and guiding youth in respectfully and productively countering disinformation.

Ethics

.: Provocations for Balance

  1. How are you increasing your understanding of deepfake technology to effectively educate students about its risks?
  2. What methods have you seen which integrate deepfake recognition into your media literacy curriculum?
  3. How do you facilitate classroom discussions about the ethical implications and societal impacts of deepfakes?
  4. What strategies are you teaching students to identify and respond to deepfake disinformation, especially online?
  5. What measures does your school or system have in place to address incidents involving deepfakes targeting students or staff?

Inspired by all the deepfake news.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett