.: Promptcraft 42 .: Google showcases new edu AI tools

Join 80 educators on the waitlist for my new learning community about AI for education.

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • How explicit deepfake images of Taylor Swift have sparked calls for new laws;
  • Google showcases new edu AI tools to help teachers save time;
  • Nightshade – like putting hot sauce in your lunch so it doesn’t get stolen.

Let’s get started!

~ Tom Barrett


DEEPFAKE

.: Explicit Deepfake Images of Taylor Swift Spark Calls for New Laws

Summary ➜ Explicit deepfake images of singer Taylor Swift were widely shared online and viewed millions of times. This has led US lawmakers to call for new legislation criminalising deepfake creation. Currently, no federal laws against deepfakes exist in the US. The BBC notes the UK recently banned deepfake porn in its Online Safety Act.

Why this matters for education ➜ This story brings to light the rapid advancement of deepfake technology, which is being used to target women specifically. It is important to note, however, that the images were reportedly created not with specialist deepfake tools but with mainstream AI image generators from companies such as Microsoft and Midjourney, some of which are freely available.

Over 99% of deepfake pornography depicts women without their consent, and there has been a 550% rise in the creation of doctored images since 2019. It’s a reminder that students need guidance on how to evaluate sources and credibility online. Media literacy and critical thinking are shared territory with AI literacy, and we need to help young people identify manipulated or synthetic media. Discussing these topics provides an opportunity to reflect on ethical issues like consent and privacy in the digital age. We must equip the next generation to navigate an information landscape where technological advances have outpaced regulation.


US ELECTION

.: Fake Biden Robocall Creator Suspended from AI Voice Startup ElevenLabs

Summary ➜ An audio deepfake impersonating President Biden was used to disseminate false information telling New Hampshire voters not to participate in the state’s primary election. The call wrongly claimed citizens’ votes would not make a difference in the primary, in an apparent attempt to suppress voter turnout. ElevenLabs, the AI voice generation startup whose technology was likely used to create the fake Biden audio, has now suspended the account responsible after being alerted to the disinformation campaign.

Why this matters for education ➜ In the past few weeks, I have shared various articles and links discussing the threat deepfake technology poses to democratic processes around the world. Unfortunately, this issue is not isolated; it needs to be considered alongside the spread of non-consensual synthetic explicit media featuring celebrities and other individuals, and educators should take note of the trend. AI is also generating a growing share of the articles on the internet, which raises the question of how we develop new guidelines to help young learners navigate this landscape.


GOOGLE AI

.: Google showcases new edu AI tools to help teachers save time and support students

Summary ➜ At the BETT edtech conference in London, Google showcased over 30 upcoming tools for educators in Classroom, Meet, Chromebooks and more. Key highlights include new AI features like Duet in Docs to aid lesson planning, interactive video activities and practice sets in Classroom, data insights for teachers, accessibility upgrades, and strengthened security controls.

Why this matters for education ➜ As I mentioned in previous issues, it’s important to keep an eye on Google’s advancements in AI because of their huge user base. This is a significant update for AI in education, notable because education has not been a primary focus of Google’s previous tool integrations with Bard and others. Google has been very active in AI this past week, and it will be interesting to see how that momentum builds. Additionally, based on user evaluations rather than academic benchmarks, the performance of Google’s Bard, now running the Gemini Pro model, has improved significantly: Bard currently ranks second on the LMSYS Chatbot Arena Leaderboard, just behind GPT-4 Turbo.

.: Other News In Brief

Nightshade, the tool that ‘poisons’ data, gives artists a fighting chance against AI

Chrome OS has been updated with a few experimental AI features.

Speaking of web browsers, my preferred choice is Arc, and they just shipped a connection to Perplexity AI as a default search tool.

Google’s Lumiere brings AI video closer to real than unreal

OpenAI has released a new ChatGPT mention feature in BETA, which allows a user to connect different GPTs or bots in a single chat.

This feature is on for me, so once I have had a play, I will share more with you in the next Promptcraft. TB

Google and Hugging Face have established a partnership to offer affordable supercomputing access for open models.

:. .:

.: Join the community waitlist

On 5 February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on early bird subscriptions.

.: :.

What’s on my mind?

.: Unreal Engine

Last week, while sifting through the latest in media and AI developments, a term caught my attention and refused to let go: the ‘liar’s dividend.’ It’s a concept that feels almost dystopian yet undeniably real in our current digital landscape.

This term refers to a disturbing new trend: the growing ease with which genuine information can be dismissed as fake, thanks to the ever-looming shadow of AI and digital manipulation.

‘Liar’s dividend’ was coined by Hany Farid, a professor at UC Berkeley who specialises in digital forensics, and I discovered it via Casey Newton on the Hard Fork podcast:

because there is so much falseness in the world, it becomes easier for politicians or other bad guys to stand up and say, hey, that’s just another deepfake.

In a world where AI and digital tools are adept at crafting convincing falsehoods, even the truth can be casually brushed aside as fabrication. It’s a modern twist on gaslighting, but on a global scale, where collective sanity is at stake.

This concept hit home for me this week amidst the flurry of stories about deepfakes, robocalls and synthetic media.

It’s like watching the web transform into a murky pool of half-truths and potential lies. This shift isn’t just about technology; it’s a fundamental change in how we perceive and interact with information and each other.

I can’t ignore the profound challenge this presents. Big tech promotes AI tools as miraculous timesavers, but they also enable new forms of deception. What first seemed a distant threat now feels palpably close as the risks become a reality. The trade-off has become unsettlingly clear – these tools streamline our lives and distort our reality.

Not long ago, many viewed the risks of AI as distant, almost theoretical concerns. As I see it, the real threat isn’t in the AI itself but in how it erodes our trust in what we see and hear. As AI tools become more sophisticated, the task of discerning truth in the media becomes daunting.

This draws my attention to the shared territory between media literacy, critical thinking and AI literacy efforts. For years, schools have emphasised the importance of the ‘big Cs’ – critical thinking, creativity, curiosity, etc. But now, we must urgently enact and evolve these concepts. Students require a new kind of literacy, a blend of traditional critical thinking with a nuanced understanding of AI and digital manipulation.

Truth has become a fluid concept, shaped by algorithms and artificial voices; how do we prepare students to think critically and exercise discernment in an era of manipulated realities?

They need more than knowledge; they need a toolkit for learning and discernment, and the ability to navigate a reality where AI blurs the lines between fact and fiction.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

This week I want you to focus on exploring a structured template for your promptcraft. Last year I shared CREATE as a handy acronym for the elements of good prompting.

Let’s take a look at another helpful framework, CO-STAR, from Sheila Teo and GovTech Singapore’s Data Science & AI team, winners of a recent Singapore prompt engineering competition.

Context :.

Provide specific background information to aid the LLM’s understanding of the scenario, while ensuring data privacy is respected.

Objective :.

Concisely state the specific goal or purpose of the task to provide clear direction to the LLM.

Style :.

Indicate the preferred linguistic register, diction, syntax, or other stylistic choices to guide the LLM’s responses.

Tone :.

Set the desired emotional tone using descriptive words to shape the sentiment and attitude conveyed by the LLM.

Audience :.

Outline relevant attributes of the target audience, such as background knowledge or perspectives, to adapt the LLM’s language appropriately.

Response :.

Specify the expected output format, such as plain text, a table, Markdown, or another structured response, to direct the LLM.

Context: The students are 10-11 years old and have a basic understanding of food production and transportation. The project aims to teach about the environmental impacts of imported foods. Privacy should be respected.
Objective: Generate a draft planning outline for a 4-week unit on food miles including learning objectives, activities, and resources. Focus on Science and Tech concepts.
Style: Use clear headings and bullet points. Write in an educational style suitable for teachers.
Tone: The tone should be factual and enthusiastic about student learning.
Audience: The materials are for a Year 5 teacher familiar with the national curriculum.
Response: Return the draft outline formatted in Markdown. Include main headings, sub-headings, and bullet points.
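
If you script your prompting, a small helper keeps the six parts explicit and reusable. Here is a minimal Python sketch; the costar_prompt function and its layout are my own illustration, not an official CO-STAR format:

# A minimal sketch: assemble a CO-STAR prompt from its six parts.
# The helper name and layout are illustrative, not an official format.
def costar_prompt(context, objective, style, tone, audience, response):
    sections = {
        "Context": context,
        "Objective": objective,
        "Style": style,
        "Tone": tone,
        "Audience": audience,
        "Response": response,
    }
    return "\n\n".join(f"{name}: {text}" for name, text in sections.items())

prompt = costar_prompt(
    context="Students aged 10-11 with a basic understanding of food production.",
    objective="Draft a 4-week unit outline on food miles with objectives and activities.",
    style="Clear headings and bullet points, written for teachers.",
    tone="Factual and enthusiastic about student learning.",
    audience="A Year 5 teacher familiar with the national curriculum.",
    response="Markdown with main headings, sub-headings, and bullet points.",
)
print(prompt)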

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

ENERGY
.: Rethinking Concerns About AI’s Energy Use | Center for Data Innovation

many of the early claims about the consumption of energy by AI have proven to be inflated and misleading. This report provides an overview of the debate, including some of the early missteps and how they have already shaped the policy conversation, and sets the record straight about AI’s energy footprint and how it will likely evolve in the coming years.

ESAFETY
.: Deepfake trends and challenges — position statement

The Australian eSafety Commissioner published guidance on the potential risks and challenges posed by deepfake technology.

Their position statement is a helpful introduction, including background details about deepfake technology, recent coverage (though not fully up to date), eSafety’s approach, and advice for dealing with deepfakes.

DIGITAL DECEPTION
.: Deepfakes: How to empower youth to fight the threat of misinformation and disinformation

An extensive exploration of this issue from Nadia Naffi, including some highlights from her research into how to counter the proliferation of deepfakes and mitigate their impact:

Youth need to be encouraged in active, yet safe, well-informed and strategic, participation in the fight against malicious deepfakes in digital spaces.

She also offers these helpful guiding strategies, tactics and concrete actions:

  • teaching the detrimental effects of disinformation on society;
  • providing spaces for youth to reflect on and challenge societal norms, informing them about social media policies, and outlining permissible and prohibited content;
  • training students in recognizing deepfakes through exposure to the technology behind them;
  • encouraging involvement in meaningful causes while staying alert to disinformation and guiding youth in respectfully and productively countering disinformation.

Ethics

.: Provocations for Balance

  1. How are you increasing your understanding of deepfake technology to effectively educate students about its risks?
  2. What methods have you seen which integrate deepfake recognition into your media literacy curriculum?
  3. How do you facilitate classroom discussions about the ethical implications and societal impacts of deepfakes?
  4. What strategies are you teaching students to identify and respond to deepfake disinformation, especially online?
  5. What measures does your school or system have in place to address incidents involving deepfakes targeting students or staff?

Inspired by all the deepfake news.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 41 .: ChatGPT is coming to Australian schools

Don’t miss my new learning community about AI for education.

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • The first deal OpenAI has made with a university;
  • New guidelines released in the US state of Washington for K12 schools;
  • How access is opening up to ChatGPT for all state schools in Australia.

Let’s get started!

~ Tom Barrett


HIGHER ED

.: OpenAI signs up its first higher education customer, Arizona State University

Summary ➜ Arizona State University has become the first higher education customer to pilot ChatGPT Enterprise developed by OpenAI. ASU will offer its faculty and staff accounts to discover and create AI applications for learning, research, and operations. The collaboration seeks to widen the responsible use of AI throughout the university.

Why this matters for education ➜ This development marks a positive step for higher education, which has been gripped by a fixation on plagiarism and cheating. Even though we are still learning how to use these tools effectively, the Arizona State University (ASU) leadership is setting a precedent that could offer a new perspective and help elevate our dialogue beyond plagiarism and AI detection. According to Synthedia, ASU has approximately 145,000 students and 20,000 faculty and staff. Given those numbers, it is unlikely that everyone will receive an enterprise account, as it would be pretty expensive for the university. Nevertheless, the partnership between ASU and OpenAI is an important signal, suggesting that we may see education accounts for OpenAI tools in some form. This deal will help build the technical and economic infrastructure to provide such tools directly to education organisations. Soon, your students might access OpenAI through a single sign-on, just like Canva, Adobe, Google, or Microsoft.


AUSTRALIA

.: ChatGPT is coming to Australian schools

Summary ➜ Access to OpenAI’s ChatGPT will be made available to Australian state schools in 2024, following education ministers’ approval in December of a framework that guides the use of AI. The framework sets out principles such as privacy, equity, and the proper attribution of AI-generated work. When ChatGPT was first introduced in 2022, most states banned it in schools due to concerns such as plagiarism. However, South Australia permitted its use to teach students about AI.

Why this matters for education ➜ I was one of the voices in 2022 wondering why banning is still considered an appropriate response. I think it was my younger self speaking, recalling when YouTube was banned in schools. The ban buffer gave system leaders time to develop a better understanding. However, I wonder what is materially different in the Australian school ecosystem now that there is a national framework. Are teachers and students better prepared? Opening up access is fine, but publishing a framework alone is not enough.


K12 GUIDELINES

.: The State of Washington Embraces AI for Public Schools

Summary ➜ Washington State in the United States has released new guidelines which encourage the use of AI in K-12 public schools. The guidelines aim to promote students’ AI literacy, ensure ethical usage, provide teacher training, and apply design principles that support learning. They acknowledge AI’s potential benefits in education while recognising associated risks such as bias and overreliance.

Why this matters for education ➜ Another school system approaching AI “with great excitement and appropriate caution”. Notable from the announcement is how the new guidelines are based on the principle of “embracing a human-centred approach to AI”. It will be interesting to see how the new guidelines are implemented in schools and how the public school system is supported to adapt. For example, is there additional funding for schools or teachers to access the necessary resources and professional learning?

.: Other News In Brief

Microsoft makes its AI-powered Reading Coach free

These AI-powered apps can hear the cause of a cough

Is A.I. the Death of I.P.?

2024 will be ‘The year of AI glasses’

Mark Zuckerberg’s new goal is creating artificial general intelligence

Yann LeCun, chief AI scientist at Meta: ‘Human-level artificial intelligence is going to take a long time’

:. .:

.: Join the community waitlist

In February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on being part of this new learning community about AI for education.

.: :.

What’s on my mind?

.: Clinical Compassion

One of my favourite shows, ‘24 Hours in A&E’, once offered a poignant insight at King’s College Hospital in London. A nurse shared how, regardless of a patient’s awareness, a caring human touch could calm anxiety and reduce heart rate. This simple act of human connection resonates profoundly in our rapidly advancing world.

.: :.

Now, let’s step into a near future, just a heartbeat away. In the brisk environment of a bustling hospital, an AI system interacts with patients, sharing up-to-date information and diagnosing illnesses with precision and empathy that challenge the finest doctors.

Remarkably, this isn’t just fiction. Recently, a Google AI system demonstrated it could surpass human doctors in both bedside manner and diagnostic accuracy when trained to conduct medical interviews. This AI matched or even outperformed doctors in conversing with simulated patients and listing possible diagnoses based on the patient’s medical history, positioning artificial intelligence not just as an assistant but as a leader in roles traditionally defined by human touch.

A different AI system tailors learning paths in a school near the hospital. It identifies the most important, relevant and appropriate next step in learning for a student and eclipses even the most experienced educators in personalising education. With its ability to analyse historical student data and optimise learning strategies, this AI system tirelessly offers kind, specific and helpful feedback when the student needs it. The system’s interactions with parents have been rated 4.8 stars out of 5 for nearly 18 months.

I have been reflecting on the emotional or relational cost of using AI tools to augment human interaction. What might we be losing in technology’s embrace? The emotional depth, the subtle nuances of relationships, the warmth of human contact – can AI ever replicate these, or are they at risk of being diminished in the shadow of digital efficiency?

The near future dilemma is stark. Consider the veteran physician, witnessing AI systems diagnose more accurately than her colleagues. Or an educator, observing an AI effortlessly chart her students’ educational journeys. Both professionals stand at a pivotal crossroads, questioning their roles in a landscape increasingly shaped by AI.

The line between AI utility and the value of human judgment becomes blurred. When does reliance on AI’s precision start to overshadow essential human attributes?

At what point does maintaining traditional roles in education and healthcare hinder the potential benefits AI could bring?

As the future scenarios and near futures crash into our present, how do we balance AI’s brilliance with the irreplaceable qualities of humanity?

This challenge is not just technical but deeply philosophical, compelling us to put the human experience under the microscope and figure out ‘who am I in all of this?’ and what remains indispensable.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Honing your prompt writing skills is crucial for getting the most out of AI chatbots. Today, I want to share three simple techniques I rely on to lift my prompting game, none of which require a formal prompt-writing structure.

1:.

Regenerate Multiple Responses

Ask the same question 3-4 times, having the chatbot regenerate a new response each time. Review the different perspectives and ideas generated. In ChatGPT, use the regenerate button under the response. In Google Bard, you can access the drafts to see multiple versions and regenerate them. Looking at multiple responses side-by-side can spark new connections.

Key Promptcraft Tactic ➜ Regenerate every response (and image) 3 or 4 times to build a broad and diverse selection of ideas.

2:.

Iterate Through Feedback Loops

Engage in a back-and-forth collaboration with the chatbot. Respond to its initial reply by pushing for more details, examples, or counterarguments. Ask follow-up questions and provide guidance to steer the conversation. For instance, if you ask, “What are the benefits of virtual reality in classrooms?” you can say, “Interesting, but how might VR be challenging for teachers to implement?” This iterative approach, digging deeper through feedback and refinement, can produce more thoughtful responses.

Key Promptcraft Tactic ➜ Don’t expect a perfect response immediately; stay in the chat and iterate, pushing and pulling the responses to refine the ideas.

3:.

Switch Underlying LLMs

Try re-prompting the same question using different large language models. If you have only been using ChatGPT, try others like Google Bard, Claude 2 from Anthropic, or Microsoft’s free Copilot, which runs on GPT-4. Varying the AI engine generating the text can result in different perspectives, creativity, and responses. Each LLM has unique strengths, and getting multiple views from diverse models leads to more robust responses.

Key Promptcraft Tactic ➜ Try the same prompt on other LLMs to harness the different strengths and capabilities. I do this quickly and easily using the Poe platform.
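
To combine tactics 1 and 3, you can loop the same prompt over several models and several regenerations, then compare the drafts side by side. A rough Python sketch; complete() is a placeholder standing in for whichever chat API or platform you actually use, not a real library call:

# Sketch of tactics 1 and 3: several regenerations per model, several models.
# complete() is a stand-in; replace its body with a call to your chosen API.
def complete(model: str, prompt: str) -> str:
    return f"[{model} response to: {prompt}]"  # placeholder text

models = ["gpt-4", "claude-2", "bard"]  # swap in whatever you have access to
prompt = "What are the benefits of virtual reality in classrooms?"

responses = {
    model: [complete(model, prompt) for _ in range(3)]  # regenerate 3 times
    for model in models
}

for model, drafts in responses.items():
    print(f"--- {model} ---")
    for draft in drafts:
        print(draft)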

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

GAMING
.: GDC 2024 State of the Game Industry report

This year’s survey reflects the perspectives of over 3,000 game industry professionals. We found that developers are concerned about the increasing number of layoffs, the ethical uses of artificial intelligence, and changing game engine policies and pricing.

OPEN SOURCE
.: Considerations for Governing Open Foundation Models

This briefing report by Stanford University’s Human-Centered AI group highlights the benefits of open foundation models and calls for a greater focus on their marginal risks.

Here are some of the key insights from the report:

➜ Foundation models with readily available weights offer benefits by decreasing market concentration, promoting innovation, and increasing transparency.

➜ Proposals like downstream harm liability and licensing may unfairly harm open foundation model developers.

➜ Policymakers must consider potential unintended consequences of AI regulation on open foundation models’ innovation ecosystem.

JAILBREAK
.: “Your GPTs aren’t safe”

Just a quick reminder and some background info: you may have heard of OpenAI’s GPT Store, which allows users to publish their bots to a public marketplace.

However, reports of data breaches and sensitive data leaks have increased due to user-uploaded content. Some users are “jailbreaking” the bots using prompting techniques, revealing some interesting insights into how LLMs respond to interaction (and how strange their responses can be).

Nathan Hunter even set up a competition offering a $250 prize to anyone who could break into his published GPT bot, and someone successfully used a popular prompting technique to do just that.

Here is Nathan explaining the promptcraft lessons this experiment reveals:

What does this teach us?

1) Hacking a GPT isn’t about writing code, it’s about a conversation. Social manipulation is easy when working with a tool that loves to take on roles and personalities on command.

2) Your GPTs aren’t safe. If you want to make them public, then make sure none of the instructions or documents contain data you wouldn’t want the world to access.

Ethics

.: Provocations for Balance

How should AI-powered wearables navigate the delicate balance between enhanced user experience and the potential for invasive surveillance?

For instance, if smart glasses can record or analyse conversations and surroundings, what new consent mechanisms should be in place to protect the privacy of both the wearer and those around them?

If these devices can provide real-time information or analysis about people we meet (like social media profiles or personal preferences), does it risk reducing genuine human connection and spontaneity in social interactions?

Inspired by this week’s story: 2024 will be ‘The year of AI glasses’

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 40 .: AI poses the biggest global risk in 2024

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • How election disruption from AI poses the biggest global risk in 2024;
  • The latest investment in Perplexity AI taking on Google Search;
  • A new learning community about AI for education.

Let’s get started!

~ Tom Barrett

RISK REPORT

.: Election disruption from AI poses the biggest global risk in 2024, Davos survey warns

Summary ➜ The World Economic Forum’s Global Risks Report 2024 has highlighted AI-derived misinformation and disinformation as the most significant global risk over the next two years. This concern is especially pertinent as approximately half of the world’s adult population is set to vote in upcoming elections, where AI’s influence on large voter populations could significantly impact democratic processes.

Why this matters for education ➜ The ongoing debate over what’s real and what’s not in education, primarily focused on plagiarism, is a distracting sideshow. This narrow focus shifts attention away from the critical need to develop robust skills against the emerging risk of the blurred line between truth and falsehood. In the AI era, it’s vital for students to learn how to discern misinformation and critically assess digital content. This skill is not just an academic necessity but a global imperative, as AI’s influence spans across international borders, reshaping political and social landscapes. Addressing this challenge requires a broader, more globally aware educational approach.


SEARCH

.: AI-Powered Search Engine Perplexity AI Now Valued at $520M, Raises $73.6M

Summary ➜ Founded in August 2022 by a team with backgrounds in AI and search technologies, Perplexity AI offers a chatbot-like interface for natural language queries, providing summaries with source citations. It competes against giants like Google and Microsoft, aiming to revolutionise knowledge search and acquisition. The company, which claims to have 10 million active monthly users, has now raised over $100 million in total.

Why this matters for education ➜ The experience of looking up information on the web, exploring content and finding answers is changing. Tools like Perplexity AI are designed as answer engines, far different from presenting lists of blue links for a student to choose from and then continue an inquiry. As the technology rapidly advances, students are much more likely to be exploring information via a chatbot than traditional web searches. Are we seeing the beginning of the end of Google search? Could the “Google it” era be slowly crumbling?


COPYRIGHT

.: New York Times Sues OpenAI and Microsoft Over Copyright Infringement

Summary ➜ The lawsuit against OpenAI and Microsoft accuses them of using millions of the newspaper’s articles without permission to train chatbots. The suit, filed in Manhattan federal court by The New York Times, challenges the companies’ use of copyrighted content to develop AI products like ChatGPT, alleging they are trying to “free-ride” on the Times’s journalism.

Why this matters for education ➜ We should all be watching the copyright legal cases against AI companies quite closely. At the centre of this issue is the use of training data and how LLMs, like ChatGPT, generate copyrighted material verbatim. This issue is mirrored across other types of generative AI tools, such as image and voice tools. AI-powered tools have the potential to revolutionise teaching and learning, but copyright concerns may hinder their development and use in educational settings. It is worth pausing and reflecting on how solid and visible the foundations of OpenAI models are, especially in light of these legal challenges.

.: Other News In Brief

Midjourney V6 is here with in-image text and completely overhauled prompting

New material found by AI could reduce lithium use in batteries

OpenAI’s GPT Store Already Filling Up With “AI Girlfriends”

Quora raises $75m for its AI chatbot platform

Rabbit sells out two batches of 10,000 R1 pocket AI companions over two days

Google AI has better bedside manner than human doctors — and makes better diagnoses

:. .:

.: Join the community waitlist

There’s a special community on the horizon for educators like you who want to explore the human side of artificial intelligence.

In February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on being part of this new learning community about AI for education.

Let’s shape the future of education, together.

.: :.

What’s on my mind?

.: The Faces We Long to See

Imagine this: You’ve just returned from a trip, having navigated the familiar airport routine – security lines, scanners, the usual. But this time, something strikes you differently as you clear the final checkpoint and shunt your luggage towards the exit. It’s not just seen; it’s felt.

.: :.

There I was, fresh off a flight, and I couldn’t help but notice something striking. Do you know those facial recognition systems at passport control? Impressive, sure. Machines whirring, beeping, scanning documents, recognising faces. They’re fast, they’re efficient. It’s technology at its peak, streamlining what used to be a long, human-driven process. Impersonal but effective. That’s the scene on one side of the airport.

When I moved beyond the systems of digital precision, the atmosphere shifted. As I pushed my luggage into the arrivals hall, the scene transformed. Teenagers huddle together, smartphones in hand, homemade signs aloft – a buzzing hive, eagerly awaiting a friend’s return. Over there, a tearful couple, lost in the embrace of their children, a reunion that’s been long in the making.

For a moment, the room scanned me, and I could feel the collective anticipation – the expectant gazes of hundreds, each pair of eyes telling a story of waiting, of longing. I noticed the anxious grip on bouquets, flowers bunched in hands trembling with anticipation. This is more than just an arrivals hall; it’s the culmination of countless stories, the end of long countdowns, and the final moments of anticipation unfolding before our eyes.

They’re looking for faces, yes, but not just any faces – they’re searching for that one face they’ve missed and long to see. Hearts are racing; eyes are searching, and then a moment of recognition. It’s joy, it’s relief, it’s love. All happening right there, in the most human way possible. This was facial recognition powered by affection and memory, not algorithms.

This contrast, it hit me hard. On one side, machines do what they’re programmed to, precisely recognising faces. But they’re missing something crucial, something they can’t replicate – the emotion, the history, the storied connection we read in a human face. That’s our thing, our human thing.

And amid all the noise and rush, there’s a reminder in this bustling, busy airport. It’s a reminder of what makes us human, something that technology, no matter how advanced, can’t touch. The human connection, that spark when you see a familiar face, the warmth of a smile – technology might mimic it, but it can never truly capture it. Throughout history, our ability to recognise faces has evolved far beyond mere survival – it’s become a cornerstone of emotional connection and social interaction.

As I left the airport into the chilled Melbourne air, the echoes of these emotional reunions lingered with me. In our digital world, moments like these remind us that no matter how advanced technology becomes, the human ability to connect still holds irreplaceable value.

“But do you remember where we parked?”

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Develop Scenarios for Critical Thinking

Scenario building is a great way to quickly resource some critical thinking activities. For example, here is a scenario generated from today’s prompt about the morality of virtual worlds:

Imagine a future where virtual reality (VR) is indistinguishable from actual reality. In this world, you can experience anything without real-world consequences. However, a debate arises when a philosophy professor asks whether actions in VR hold the same moral weight as in the real world.

You will see from the longer prompt below that I am using this structure:

  • Persona / Role
  • Task / Steps
  • Format / Tone
  • Context / Constraints
  • Examples / Model Answers (optional)

Here is an example prompt, which aims to develop some critical thinking scenarios, for you to try.

PROMPT

Act as an adept critical thinking strategist, specialised in developing engaging, subject-aligned scenarios that provoke [university] students to sharpen their critical, analytical and evaluative thinking abilities. You are successful when you see signals of improved critical thinking from the student.

Formulate 3 concise scenarios to explore the multifaceted problems or debates pertinent to [Philosophy] and [Ethical Implications of Artificial Intelligence]. For each scenario create a sequence of 3 probing questions aimed at prompting students to dissect arguments, unearth assumptions, and scrutinise evidence critically.

Draft each scenario as an engaging narrative snippet. Use language which is accessible and engaging to university students. The tone should be compelling and lucid, crafted to resonate within an educational gaming style.

This critical thinking scenario game is designed for use by [university] students across various disciplines who need more opportunities to practice critical thinking in a context directly related to their field of study.

Please note the variables you can change are included in square brackets.
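
If you reuse this prompt across subjects, a small substitution helper saves retyping. A Python sketch under my own naming; TEMPLATE abbreviates the full prompt above, and the placeholder names are illustrative:

# Sketch: fill the bracketed variables in the scenario prompt.
# TEMPLATE abbreviates the full prompt above; the {audience}, {subject}
# and {topic} placeholder names are my own choice.
TEMPLATE = (
    "Act as an adept critical thinking strategist, specialised in developing "
    "engaging, subject-aligned scenarios that provoke {audience} students... "
    "Formulate 3 concise scenarios to explore the multifaceted problems or "
    "debates pertinent to {subject} and {topic}..."
)

def scenario_prompt(audience="university",
                    subject="Philosophy",
                    topic="Ethical Implications of Artificial Intelligence"):
    return TEMPLATE.format(audience=audience, subject=subject, topic=topic)

print(scenario_prompt(subject="Biology", topic="Gene Editing in Agriculture"))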

This year I aim to share good examples of prompts as well as new promptcraft techniques.

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

LOOK AHEAD
.: After AI’s summer: What’s next for artificial intelligence?

By any measure, 2023 was an amazing year for AI. Large Language Models (LLMs) and their chatbot applications stole the show, but there were advances across a broad swath of uses, including image, video and voice generation.

AI STRATEGY
.: The secret to making language models useful

Here is a summary of the key insights from the article and why it is useful for your AI Literacy:

  • The main idea is that language models alone are not enough to be truly useful or make good decisions. Language provides the words, but you need knowledge and understanding to apply those words wisely.
  • Language models can recite words and phrases from their training data, but they lack true comprehension. They find statistical correlations but can’t determine causation.
  • To make language models useful, you need to recreate the structure of human expertise – combining language with knowledge and understanding. This means knowledge graphs, causal models, etc.
  • The process should start with identifying the most valuable human expertise in an organization, determining the risks of losing it, and seeing if it can be encoded for machines.
  • Data is not the most critical element – expertise is. The goal is transferring human expertise into machine language so machines can inform or make decisions.
  • Leaders need to focus on what expertise could give them a competitive advantage if encoded into machines, rather than just reacting to what others develop.

This article provides an important perspective – that language AI like ChatGPT is not magic on its own. To be useful you need to carefully encode human knowledge, not just data.

For me, this relates to the way Roger Martin talks about the Knowledge Funnel and how we often don’t have a system to encode human expertise.

This is especially true in schools. Back in 2015, I was calling this a knowledge urgency problem: we underinvest in capturing the methods, tactics and strategies of our best experts before they move on.

It is interesting to think that training a language model might be a vessel for that expertise.

WHICH AI IS BEST?
.: People Rank Open Source AI Output Better Than Commercial Models

A couple of elements of your AI literacy which I encourage you to pay attention to this year are:

  • What is it like to use open-source LLMs?
  • Explore the performance comparison across available models.

There is a wide variety of models available, so when you think about AI, don’t just think of ChatGPT; ask instead which model might be best for the task at hand.

Here’s a snippet from this article which refers to leaderboard results for performance from HuggingFace.

Artificial intelligence enthusiasts prefer working with open-source tools over proprietary commercial ones, according to an ongoing survey of more than 100,000 respondents. The emergence of Mistral AI’s Mixtral 8x7B, an open-source model, has made a significant impact in the AI space.

Ethics

.: Provocations for Balance

Here’s the full text generated by the critical thinking scenario prompt, which I thought was worth sharing.

.:

Philosophical Dilemmas in Virtual Realities

Scenario

Imagine a future where virtual reality (VR) is indistinguishable from actual reality. In this world, you can experience anything without real-world consequences. However, a debate arises when a philosophy professor asks whether actions in VR hold the same moral weight as in the real world.

Probing Questions:

  1. Moral Parallels: In what ways might actions in a virtual environment ethically parallel those in the real world? Consider the implications of harm or benefit in both settings.
  2. Reality vs. Perception: How does our understanding of ‘reality’ affect our moral judgments? Discuss whether the perceived reality of VR changes ethical considerations.
  3. Consequence-Based Ethics: Should the lack of real-world consequences in VR impact our moral decision-making? Explore this through the lens of different ethical theories.

.:

Read the rest in this ChatGPT chat here.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

⚠️ The Adolescent Mental Health Crisis

Dialogic #346

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

  • Teen depression has alarmingly doubled since 2011, with teens reporting more loneliness and less life satisfaction.
  • This concerning trend directly correlates with the rise in smartphone and social media adoption over the past decade.
  • In her article, Jean Twenge rigorously examines and debunks various alternative explanations for the crisis, from the economy to the pandemic.
  • Her analysis suggests smartphone and social media use, exacerbated by decreasing teen independence, is the primary driver of the adolescent mental health dilemma.

Reshaping Adolescence

In her article “Yes, it’s the phones (and social media),” Jean M. Twenge unveils a disturbing reality: teen depression rates have doubled from 2011 to 2021, accompanied by rising loneliness and declining life satisfaction.

This mental health crisis correlates with the rise in smartphone and social media use from the early 2010s, which is reshaping adolescence.

Challenging Alternatives

In an era where mental health is increasingly at the forefront of societal concerns, Jean M. Twenge’s insightful article examines and debunks several prevalent theories that have emerged in attempts to explain this disturbing trend.

From the supposed impact of economic downturns to the alleged influence of academic pressures, Twenge navigates through thirteen theories.

Here are three of the explanations explored in her article, starting with teens perhaps being more open about the challenges they are facing:

  1. Teens More Open About Not Being OK: Twenge counters this by pointing out that objective behavioural measures, such as emergency room admissions for self-harm and suicide rates, have increased in a manner consistent with the rise in self-reported depression. This trend suggests that the increase is not merely due to a change in reporting habits.
  2. Impact of the COVID-19 Pandemic: Twenge also refutes the claim that the COVID-19 pandemic is the root cause of increased teen depression. She notes that the rise in teen depression began well before the pandemic, in the early 2010s. Thus, while the pandemic may have exacerbated the situation, it wasn’t the origin of the problem.
  3. Academic Pressure and Homework: Twenge disputes the notion that increased academic pressure and homework are the primary causes of teen depression. Data shows that U.S. teens spend less time on homework now than they did in the 1990s. Moreover, the average teen spends significantly more time on social media than homework, challenging the idea that academic workload is the primary stressor.

Helicopter Coddling

Jean M. Twenge identifies the decline in independence among children and adolescents as a possible explanation for the current mental health crisis among teenagers.

She acknowledges that present-day youth have fewer opportunities to engage in independent activities like exploring neighbourhoods or going out with friends, compared to previous generations.

However, Twenge believes that this trend alone does not fully account for the rise in teen depression. Instead, she suggests that the decline in independence and the impact of digital media such as smartphones and social media work together to exacerbate the mental health crisis.


⏭🎯 Your Next Steps
Commit to action and turn words into works

  • Advocate for school-based digital well-being programs that educate students about the psychological effects of excessive screen time and social media use while promoting healthier digital habits. Involve mental health professionals in developing these programs to ensure they’re evidence-based and age-appropriate.
  • Create safe spaces for teens to discuss their digital habits and social media use, facilitated by a supportive adult.
  • Organise workshops for parents on effective digital supervision, including practical strategies for managing their children’s digital consumption, setting boundaries, and understanding online risks.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

  • The Pandemic’s Amplification, Not Initiation: Discuss how the rise in teen depression predates the COVID-19 pandemic, challenging the notion that the pandemic is the sole cause of the mental health crisis. What does this reveal about the underlying, pre-existing issues in adolescent mental health?
  • Homework vs. Social Media Time: Reflect on the finding that despite spending less time on homework than in the 1990s, teens today face higher depression rates. How does this contrast with the significant time spent on social media, and what implications does it have for understanding the real stressors affecting teen mental health?
  • Independence and Digital Overload: Explore the relationship between the decline in teen independence and increased digital media usage. How might overprotective parenting styles, combined with the omnipresence of digital technology, be shaping the mental health landscape for today’s adolescents?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

New Evidence on Adolescent Mental Health and Social Media | Psychology Today
The US Surgeon General’s June 2023 report warns that social media can harm youth mental health via excessive usage, harmful content, and displacing healthy activities. Parents should limit and monitor usage, model responsible use, and discuss openly with kids.

My fight to get screens out of schools | Waldorf Today
The article argues for removing screens and technology from classrooms to avoid distractions and negative impacts on student focus and brain development. The author proposes tech-free schools as a solution and calls for government bans on social media and smartphones for minors due to mental health risks.

Parenting, Media, and Everything in Between | Common Sense Media
Explore Common Sense Media’s extensive collection of articles, advice, and parenting tips related to social media.

That’s the last issue for this year. Thanks for supporting the Dialogic Learning Weekly. See you again in 2024!

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.


.: Promptcraft 39 .: ChatGPT consumes 500ml of water for every 10-50 prompts

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Generative AI’s huge water demands scrutinised
  • EU finalises landmark AI regulation, imposes risk-based restrictions
  • Google launches the Gemini model series to rival ChatGPT

Don’t forget to Share Promptcraft and enter my Christmas giveaway! All you have to do is share your link below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below; you get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Generative AI’s huge water demands scrutinised ➜ Generative AI, like ChatGPT, is increasing scrutiny of Big Tech’s water usage. ChatGPT consumes 500ml of water for every 10-50 prompts. Microsoft’s and Google’s water use rose by 21-36% in 2022 due to new AI chatbots. AI drives more computing power, so data centres require vast amounts of water for cooling. Critics warn of sustainability issues from AI’s thirst, even though the companies aim to be water positive.

.: China plays catch-up a year after ChatGPT ➜ One year after OpenAI’s ChatGPT took the AI world by storm, China lags due to a lack of advanced chips. US export controls block access to the Nvidia GPUs critical for powerful AI models. Domestic firms like Baidu have developed chatbots but can’t match US capabilities. China faces pressure to close the gap and is realising that AI leadership will be difficult.

.: Beijing court rules AI art can get copyright ➜ A Beijing court granted copyright to an AI-generated image, contradicting the US view that AI works lack human authorship. The ruling signals China’s support for AI creators over US scepticism. It could influence future disputes and benefit Chinese tech giants’ AI content tools.


.: EU finalises landmark AI regulation, imposes risk-based restrictions ➜ The EU finalised its AI regulation after years of debate, imposing the world’s most restrictive regime. It bans certain AI uses and adds oversight based on risk levels. While companies warned of stifling innovation, the EU calls it a “launchpad” for AI leadership. The rules aim to curb AI risks and set a global standard amid advances like ChatGPT.

.: Google launches Gemini AI to rival ChatGPT ➜ Google has launched Gemini, a new AI model that competes with OpenAI’s ChatGPT and GPT-4. Gemini beats GPT-4 in 30 of 32 benchmarks, aided by multimodal capabilities. It comes in three versions optimised for different uses and will integrate across Google’s products. The launch puts Google back in the generative AI race it has been perceived to be losing.

.: Meta’s new AI image generator trained on 1B Facebook, Instagram photos ➜ Meta released a new AI image generator using its Emu model, trained on over 1 billion public Instagram and Facebook images. The tool creates images from text prompts like other AI generators. Meta says it only used public photos, but users’ pictures likely aided training without their consent.

.: Google unveils improved AI coding tool AlphaCode 2 ➜ Google’s DeepMind division unveiled AlphaCode 2, an upgraded version of its AI coding assistant. Powered by Google’s new Gemini AI model, AlphaCode 2 can solve coding problems in multiple languages that require advanced techniques like dynamic programming. In contests, it outperformed 85% of human coders, nearly double the original AlphaCode.

.: Apple quietly releases new AI framework MLX ➜ MLX is a new open-source AI framework that efficiently runs models on Apple Silicon chips. It ships with a companion data-loading library called MLX Data and can train complex models like Llama and Stable Diffusion. Apple is expanding its AI capabilities with MLX, enabling the development of powerful AI apps for Macs.

Reflection

.: Why this news matters for education

Last week in Promptcraft 38, we peeled back the curtain on how generative AI like ChatGPT can unwittingly perpetuate biases that conflict with principles of diversity and inclusion.

This week, our lens widens to reveal another ethical dilemma – the massive environmental impact of systems like ChatGPT.

New research spotlights AI’s hefty carbon footprint and water use.

ChatGPT gulps down 500ml of water for every 10-50 prompts. With over 100 million users chatting it up, you do the maths.
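
Here is a rough back-of-envelope in Python, taking the midpoint of that range and assuming one prompt per user per day (my assumption, not a figure from the reporting):

# Back-of-envelope maths on the figures above. My assumptions: one prompt
# per user per day, and the midpoint (30) of the 10-50 prompts per 500ml range.
users = 100_000_000           # 100M+ users
prompts_per_day = users * 1   # one prompt per user per day (assumed)
litres_per_prompt = 0.5 / 30  # 500ml per ~30 prompts

daily_litres = prompts_per_day * litres_per_prompt
print(f"{daily_litres:,.0f} litres per day")  # ~1,666,667 litres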

Meanwhile, AI2 and Hugging Face quantify the extreme variation in emissions across AI tasks.

Generating images and text can pump out 60x more CO2 than simple classification tasks. And even as models become more efficient, overall consumption keeps rising.

Despite conservation efforts, Microsoft and Google’s water use rose 21-36% in 2022, partly due to new AI systems. Emissions from AI use can even exceed those from training.

There’s a more than 1000x difference in energy efficiency across models. But a lack of standards prevents easy comparison.

Shouldn’t environmental impact be as clear as other risks like accuracy and bias?

AI’s emissions and biases require awareness and mitigation. Users must be educated and lower-impact models chosen. AI apps could one day be selected based on their carbon label.

.:

~ Tom

Prompts

.: Refine your promptcraft

Tree of Thought Prompting

The Tree of Thoughts (ToT) method is a way to improve how large language models like GPT, Claude or Gemini solve complex problems that require looking ahead or exploring different options.

ToT works by building a tree of intermediate ‘thoughts’ that can be evaluated and explored. This allows the model to work through a problem by generating multiple steps and exploring different options.

Recent studies have shown that ToT improves performance on mathematical reasoning tasks. We can apply this method to text-based prompting too.

Here is an example for you to try.

PROMPT

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they’re wrong at any point then they leave.
The question is [Add your question here]
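
If you use this scaffold often, it is easy to wrap in a small template. A Python sketch; the function name and the experts parameter are my own choices:

# Sketch: wrap the Tree of Thoughts scaffold around any question.
# The function name and the experts parameter are illustrative choices.
def tree_of_thoughts(question: str, experts: int = 3) -> str:
    return (
        f"Imagine {experts} different experts are answering this question.\n"
        "All experts will write down 1 step of their thinking,\n"
        "then share it with the group.\n"
        "Then all experts will go on to the next step, etc.\n"
        "If any expert realises they're wrong at any point then they leave.\n"
        f"The question is {question}"
    )

print(tree_of_thoughts("How should schools weigh the benefits of AI against its environmental cost?"))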

I have been playing with extending this method further with a scenario of experts exploring the question through dialogue.

It reminds me of the Expert Prompting technique we have looked at before.

Remember to make this your own: tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT PANEL


I really enjoyed this longer exploration of the issues we are navigating with AI from a practical, technical and ethical position. I discovered it via a repost of comments by one of the panellists, Yann LeCun, about the open vs proprietary approach to models. You can jump to these in the last 10 minutes, but I recommend the rest too.

ETHICS REPORT
.: Walking the Walk of AI Ethics in Technology Companies ➜ The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has published a new report, “Walking the Walk of AI Ethics in Technology Companies”, one of the first empirical investigations into AI ethics on the ground in private technology companies.

One of the key takeaways:

Technology companies often “talk the talk” of AI ethics without fully “walking the walk.” Many companies have released AI principles, but relatively few have institutionalized meaningful change.

FREE COURSES
.: 12 days of no-cost training to learn generative AI this December

  • Google Cloud is offering 12 days of free generative AI training in December
  • The courses cover foundations like what generative AI is and how it works
  • Technical skills content is also included for developers and engineers
  • Offerings include videos, courses, labs, and a gamified learning arcade

Ethics

.: Provocations for Balance

  • What happens when people stop using the systems which have a high environmental impact?
  • If society turns against AI due to climate concerns, could it set unreasonable expectations for AI developers to predict and eliminate the environmental impact of systems still in their infancy?
  • Are campaigns for AI sustainability failing to also acknowledge the huge benefits of AI computing for society, and the need for balance and moderation versus outright rejection?
  • Should AI researchers be tasked with solving the climate impacts of computing overall? Does this distract from innovating in AI itself which could also help address climate change?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant