Join 80 educators on the waitlist for my new learning community about AI for education.
Hello Reader,
Promptcraft is a weekly newsletter about AI for education, designed to improve AI literacy and strengthen the learning ecosystem.
In this issue, you’ll discover:
- How explicit deepfake images of Taylor Swift have sparked calls for new laws;
- Google showcases new edu AI tools to help teachers save time;
- Nightshade – like putting hot sauce in your lunch so it doesn’t get stolen.
Let’s get started!
~ Tom Barrett
DEEPFAKE .: Explicit Deepfake Images of Taylor Swift Spark Calls for New Laws

Summary ➜ Explicit deepfake images of singer Taylor Swift were widely shared online and viewed millions of times. The incident has led US lawmakers to call for new legislation criminalising deepfake creation; there are currently no federal laws against deepfakes in the US. The BBC notes the UK recently banned deepfake porn in its Online Safety Act.

Why this matters for education ➜ This story brings to light how rapidly deepfake technology has advanced, and how often it is used to target women specifically. Importantly, the tools involved are not specialist deepfake software but mainstream AI image generators from companies such as Microsoft and Midjourney, some of which are freely available. Over 99% of deepfake pornography depicts women without consent, and there has been a 550% rise in the creation of doctored images since 2019. It’s a reminder that students need guidance on how to evaluate sources and credibility online. Media literacy and critical thinking are shared territory with AI literacy, and we need to help young people identify manipulated or synthetic media. Discussing these topics also provides an opportunity to reflect on ethical issues like consent and privacy in the digital age. We must equip the next generation to navigate an information landscape where technological advances have outpaced regulation.
US ELECTION .: Fake Biden Robocall Creator Suspended from AI Voice Startup ElevenLabs

Summary ➜ An audio deepfake impersonating President Biden was used to spread false information telling New Hampshire voters not to participate in the state’s primary election. The call wrongly claimed citizens’ votes would not make a difference in the primary, in an apparent attempt to suppress voter turnout. ElevenLabs, the AI voice generation startup whose technology was likely used to create the fake Biden audio, has suspended the account responsible after being alerted to the disinformation campaign.

Why this matters for education ➜ Over the past few weeks I have shared articles and links about the threat deepfake technology poses to democratic processes around the world. This incident is not isolated; it sits in the larger context of the spread of non-consensual synthetic explicit media featuring celebrities and other individuals, and educators should take note of the trend. It is also worth noting that a growing share of online articles are AI-generated, which raises the question of how we develop new guidance to help young learners navigate this landscape.
GOOGLE AI .: Google showcases new edu AI tools to help teachers save time and support students

Summary ➜ At the BETT edtech conference in London, Google showcased over 30 upcoming tools for educators in Classroom, Meet, Chromebooks and more. Key highlights include new AI features like Duet in Docs to aid lesson planning, interactive video activities and practice sets in Classroom, data insights for teachers, accessibility upgrades, and strengthened security controls.

Why this matters for education ➜ As I mentioned in previous issues, Google’s advances in AI are worth watching because of its huge user base. This is a significant update for AI in education, especially given that education has not been a primary focus in Google’s earlier tool integrations with Bard and others. Google has been very active in AI this past week, and it will be interesting to see how that momentum builds. Additionally, based on user evaluations rather than academic benchmarks, the performance of Bard running the Gemini Pro model has improved significantly: Bard currently ranks second on the LMSYS Chatbot Arena Leaderboard, just behind GPT-4 Turbo.
.: Other News In Brief
New interactive video activities in Google Classroom
:. .:
.: Join the community waitlist
Join 80 educators on the waitlist
.: :.
What’s on my mind? .: Unreal Engine

Last week, while sifting through the latest in media and AI developments, a term caught my attention and refused to let go: the ‘liar’s dividend’. It’s a concept that feels almost dystopian yet undeniably real in our current digital landscape. It refers to a disturbing trend: the growing ease with which genuine information can be dismissed as fake, thanks to the ever-looming shadow of AI and digital manipulation.

‘Liar’s dividend’ was coined by Hany Farid, a professor at UC Berkeley who specialises in digital forensics, and I discovered it via Casey Newton on the Hard Fork podcast: because there is so much falseness in the world, it becomes easier for politicians or other bad guys to stand up and say, hey, that’s just another deepfake.

Where AI and digital tools are adept at crafting convincing falsehoods, even the truth can be casually brushed aside as fabrication. It’s a modern twist on gaslighting, but on a global scale, where collective sanity is at stake.

The concept hit home for me this week amidst the flurry of stories about deepfakes, robocalls and synthetic media. It’s like watching the web transform into a murky pool of half-truths and potential lies. This shift isn’t just about technology; it’s a fundamental change in how we perceive and interact with information and each other.

I can’t ignore the profound challenge this presents. Big tech promotes AI tools as miraculous timesavers, but they also enable new forms of deception. Not long ago, many viewed the risks of AI as distant, almost theoretical concerns; today those risks feel palpably close. The trade-off has become unsettlingly clear – these tools streamline our lives and distort our reality. As I see it, the real threat isn’t the AI itself but how it erodes our trust in what we see and hear.

As AI tools become more sophisticated, the task of discerning truth in the media becomes daunting. This draws my attention to the shared territory between media literacy, critical thinking and AI literacy efforts. For years, schools have emphasised the importance of the ‘big Cs’ – critical thinking, creativity, curiosity and so on. Now we must urgently enact and evolve these concepts. Students require a new kind of literacy: a blend of traditional critical thinking with a nuanced understanding of AI and digital manipulation.

Truth has become a fluid concept, shaped by algorithms and artificial voices; how do we prepare students to think critically and exercise discernment in an era of manipulated realities? They need more than knowledge. They need a toolkit for learning and discerning, and the ability to navigate a reality where AI blurs the lines between fact and fiction.

:. .:

~ Tom
Prompts .: Refine your promptcraft

This week I want you to focus on exploring a structured template for your promptcraft. Last year I shared CREATE as a handy acronym for the elements of good prompting. Let’s take a look at another helpful framework, CO-STAR, from Sheila Teo and GovTech Singapore’s Data Science & AI team, winners of a recent Singaporean prompt engineering competition.

- Context :. Provide specific background information to aid the LLM’s understanding of the scenario, while ensuring data privacy is respected.
- Objective :. Concisely state the specific goal or purpose of the task to provide clear direction to the LLM.
- Style :. Indicate the preferred linguistic register, diction, syntax, or other stylistic choices to guide the LLM’s responses.
- Tone :. Set the desired emotional tone using descriptive words to shape the sentiment and attitude conveyed by the LLM.
- Audience :. Outline relevant attributes of the target audience, such as background knowledge or perspectives, to adapt the LLM’s language appropriately.
- Response :. Specify the expected output format, such as text, a table, formatted with Markdown, or another structured response, to direct the LLM.

Here is an example CO-STAR prompt for planning a primary science and technology unit on food miles:

Context: The students are 10-11 years old and have a basic understanding of food production and transportation. The project aims to teach about the environmental impacts of imported foods. Privacy should be respected.
Objective: Generate a draft planning outline for a 4-week unit on food miles including learning objectives, activities, and resources. Focus on Science and Tech concepts.
Style: Use clear headings and bullet points. Write in an educational style suitable for teachers.
Tone: The tone should be factual and enthusiastic about student learning.
Audience: The materials are for a Year 5 teacher familiar with the national curriculum.
Response: Return the draft outline formatted in Markdown. Include main headings, sub-headings, and bullet points.
Remember to make this your own: try different language models and evaluate the completions.
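If you like to tinker, here is a minimal sketch of how the six CO-STAR elements could be assembled into a single prompt in code. It assumes the official OpenAI Python client and a placeholder model name; the build_costar_prompt helper and the example values are illustrative and not part of the CO-STAR framework itself.

```python
# Minimal CO-STAR prompt builder (illustrative sketch).
# Assumes: `pip install openai` and an OPENAI_API_KEY set in the environment.
from openai import OpenAI


def build_costar_prompt(context: str, objective: str, style: str,
                        tone: str, audience: str, response: str) -> str:
    """Assemble the six CO-STAR elements into one clearly labelled prompt."""
    return (
        f"# CONTEXT\n{context}\n\n"
        f"# OBJECTIVE\n{objective}\n\n"
        f"# STYLE\n{style}\n\n"
        f"# TONE\n{tone}\n\n"
        f"# AUDIENCE\n{audience}\n\n"
        f"# RESPONSE\n{response}"
    )


prompt = build_costar_prompt(
    context="Students are 10-11 years old with a basic understanding of food production and transportation.",
    objective="Draft a 4-week unit outline on food miles with learning objectives, activities and resources.",
    style="Clear headings and bullet points, written in an educational style for teachers.",
    tone="Factual and enthusiastic about student learning.",
    audience="A Year 5 teacher familiar with the national curriculum.",
    response="A draft outline formatted in Markdown.",
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # swap in whichever model you are evaluating
    messages=[{"role": "user", "content": prompt}],
)
print(completion.choices[0].message.content)
```

Keeping each element as a separate parameter makes it easy to change just the objective or audience and compare the completions you get from different models.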
Learning .: Boost your AI Literacy
Ethics .: Provocations for Balance
:. .:
.: :.
Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com
The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!