.: Promptcraft 41 .: ChatGPT is coming to Australian schools

Don’t miss my new learning community about AI for education.

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • The first deal OpenAI has made with a university;
  • New guidelines released in the US state of Washington for K12 schools;
  • How access is opening up to ChatGPT for all state schools in Australia.

Let’s get started!

~ Tom Barrett


HIGHER ED

.: OpenAI signs up its first higher education customer, Arizona State University

Summary ➜ Arizona State University has become the first higher education customer to pilot ChatGPT Enterprise developed by OpenAI. ASU will offer its faculty and staff accounts to discover and create AI applications for learning, research, and operations. The collaboration seeks to widen the responsible use of AI throughout the university.

Why this matters for education ➜ This development marks a positive step for higher education, which has been preoccupied with plagiarism and cheating. Even though we are still learning how to use these tools effectively, the Arizona State University (ASU) leadership is setting a precedent that could offer a new perspective and help elevate our dialogue beyond plagiarism and AI detection. According to Synthedia, ASU has approximately 145,000 students and 20,000 faculty and staff. Given those numbers, it is unlikely that everyone will receive an enterprise account, as that would be expensive for the university. Nevertheless, the partnership between ASU and OpenAI is an important signal, suggesting that we may see education accounts for OpenAI tools in some form. This deal will help build the technical and economic infrastructure to provide such tools directly to education organisations. Soon, your students might access OpenAI through a single sign-on, just like Canva, Adobe, Google, or Microsoft.


AUSTRALIA

.: ChatGPT is coming to Australian schools

Summary ➜ Australian state schools will gain access to OpenAI’s ChatGPT in 2024, following education ministers’ approval in December of a framework guiding the use of AI in schools. The framework sets out principles such as privacy, equity, and the proper attribution of AI-generated work. When ChatGPT was first introduced in 2022, most states banned it in schools due to concerns such as plagiarism. However, South Australia permitted its use to teach students about AI.

Why this matters for education ➜ I was one of the voices in 2022 wondering why banning was still considered an appropriate response. I think it was my younger self speaking, recalling when YouTube was banned in schools. The ban buffer gave system leaders time to develop a better understanding. However, I wonder what is materially different in the Australian school ecosystem now that there is a national framework. Are teachers and students better prepared? Opening up access is fine, but publishing a framework alone is not enough.


K12 GUIDELINES

.: The State of Washington Embraces AI for Public Schools

Summary ➜ Washington State in the United States has released new guidelines which encourage the use of AI in K-12 public schools. The guidelines aim to promote students’ AI literacy, ensure ethical usage, provide teacher training, and apply design principles that support learning. They acknowledge AI’s potential benefits in education while recognising associated risks such as bias and overreliance.

Why this matters for education ➜ Another school system approaching AI “with great excitement and appropriate caution”. Notable from the announcement is how the new guidelines are based on the principle of “embracing a human-centred approach to AI”. It will be interesting to see how the new guidelines are implemented in schools and how the public school system is supported to adapt. For example, is there additional funding for schools or teachers to access the necessary resources and professional learning?

.: Other News In Brief

Microsoft makes its AI-powered Reading Coach free

These AI-powered apps can hear the cause of a cough

Is A.I. the Death of I.P.?

2024 will be ‘The year of AI glasses’

Mark Zuckerberg’s new goal is creating artificial general intelligence

Yann LeCun, chief AI scientist at Meta: ‘Human-level artificial intelligence is going to take a long time’

:. .:

.: Join the community waitlist

In February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on being part of this new learning community about AI for education.

.: :.

What’s on my mind?

.: Clinical Compassion

One of my favourite shows, ‘24 Hours in A&E’, filmed at King’s College Hospital in London, once offered a poignant insight. A nurse shared how, regardless of a patient’s awareness, a caring human touch could calm anxiety and reduce heart rate. This simple act of human connection resonates profoundly in our rapidly advancing world.

.: :.

Now, let’s step into a near future, just a heartbeat away. In the brisk environment of a bustling hospital, an AI system interacts with patients, sharing up-to-date information and diagnosing illnesses with precision and empathy that challenge the finest doctors.

Remarkably, this isn’t just fiction. Recently, a Google AI system demonstrated it could surpass human doctors in both bedside manner and diagnostic accuracy when trained to conduct medical interviews. This AI matched or even outperformed doctors in conversing with simulated patients and listing possible diagnoses based on the patient’s medical history, positioning artificial intelligence not just as an assistant but as a leader in roles traditionally defined by human touch.

A different AI system tailors learning paths in a school near the hospital. It identifies the most important, relevant and appropriate next step in learning for a student and eclipses even the most experienced educators in personalising education. With its ability to analyse historical student data and optimise learning strategies, this AI system tirelessly offers kind, specific and helpful feedback when the student needs it. The system’s interactions with parents have been rated 4.8 stars out of 5 for nearly 18 months.

I have been reflecting on the emotional or relational cost of using AI tools to augment human interaction. What might we be losing in technology’s embrace? The emotional depth, the subtle nuances of relationships, the warmth of human contact – can AI ever replicate these, or are they at risk of being diminished in the shadow of digital efficiency?

The near future dilemma is stark. Consider the veteran physician, witnessing AI systems diagnose more accurately than her colleagues. Or an educator, observing an AI effortlessly chart her students’ educational journeys. Both professionals stand at a pivotal crossroads, questioning their roles in a landscape increasingly shaped by AI.

The line between AI utility and the value of human judgment becomes blurred. When does reliance on AI’s precision start to overshadow essential human attributes?

At what point does maintaining traditional roles in education and healthcare hinder the potential benefits AI could bring?

As these near-future scenarios crash into our present, how do we balance AI’s brilliance with the irreplaceable qualities of humanity?

This challenge is not just technical but deeply philosophical, compelling us to put the human experience under the microscope and figure out ‘who am I in all of this?’ and what remains indispensable.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Honing your prompt-writing skills is crucial for getting the most out of AI chatbots. Today, I want to share three simple techniques I rely on to lift my prompting game, none of which requires a formal prompt-writing structure.

1:.

Regenerate Multiple Responses

Ask the same question 3-4 times, having the chatbot regenerate a new response each time. Review the different perspectives and ideas generated. In ChatGPT, use the regenerate button under the response. In Google Bard, you can access the drafts to see multiple versions and regenerate them. Looking at multiple responses side-by-side can spark new connections.

Key Promptcraft Tactic ➜ Regenerate every response (and image) 3 or 4 times to build a broad and diverse selection of ideas.
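
If you work via an API rather than the chat interface, the same tactic applies. Here is a minimal sketch in Python, assuming the official OpenAI client library; the model name and example prompt are placeholders to swap for your own.

# A sketch of the regenerate tactic: request several completions of the
# same prompt in one call (n=4) and compare them side by side.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Suggest three uses of VR in classrooms."}],
    n=4,              # four alternative responses in a single call
    temperature=1.0,  # a higher temperature encourages variety
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Response {i} ---")
    print(choice.message.content)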

2:.

Iterate Through Feedback Loops

Engage in a back-and-forth collaboration with the chatbot. Respond to its initial reply by pushing for more details, examples, or counterarguments. Ask follow-up questions and provide guidance to steer the conversation. For instance, if you ask, “What are the benefits of virtual reality in classrooms?” you can follow up with, “Interesting, but how might VR be challenging for teachers to implement?” This iterative approach, digging deeper through feedback and refinement, can produce more thoughtful responses.

Key Promptcraft Tactic ➜ Don’t expect a perfect response immediately; stay in the chat and iterate, pushing and pulling the responses to refine the ideas.
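
If you script this tactic, the feedback loop is simply a growing message history: append each reply before sending the follow-up. A minimal sketch, again assuming the official OpenAI Python client and a placeholder model name:

# A sketch of an iterative feedback loop. The follow-up question travels
# with the whole conversation so far, so the model builds on its answer.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "What are the benefits of virtual reality in classrooms?"}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# Push back on the initial reply to steer the conversation deeper.
messages.append({"role": "user",
                 "content": "Interesting, but how might VR be challenging for teachers to implement?"})

follow_up = client.chat.completions.create(model="gpt-4o", messages=messages)
print(follow_up.choices[0].message.content)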

3:.

Switch Underlying LLMs

Try re-prompting the same question using different large language models. If you have only been using ChatGPT, try others like Google Bard, Claude 2 from Anthropic, or Microsoft’s free Copilot, which runs GPT-4. Varying the AI engine generating the text can result in different perspectives, creativity, and responses. Each LLM has unique strengths, and getting multiple views from diverse models leads to more robust responses.

Key Promptcraft Tactic ➜ Try the same prompt on other LLMs to harness the different strengths and capabilities. I do this quickly and easily using the Poe platform.
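
For the scripted version of this tactic, many providers now expose OpenAI-compatible endpoints, so one client can often reach several models. The sketch below assumes that compatibility; the second endpoint, its model name and the API-key environment variables are illustrative placeholders, not verified values.

# A sketch of sending the same prompt to different LLMs.
import os
from openai import OpenAI

prompt = "Design a formative assessment activity that uses AI chatbots."

endpoints = [
    {"base_url": "https://api.openai.com/v1",
     "model": "gpt-4o", "key_env": "OPENAI_API_KEY"},
    # Placeholder for any OpenAI-compatible provider:
    {"base_url": "https://example-provider.test/v1",
     "model": "another-model", "key_env": "OTHER_API_KEY"},
]

for cfg in endpoints:
    client = OpenAI(base_url=cfg["base_url"], api_key=os.environ[cfg["key_env"]])
    reply = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {cfg['model']} ===")
    print(reply.choices[0].message.content)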

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

GAMING
.: GDC 2024 State of the Game Industry report

This year’s survey reflects the perspectives of over 3,000 game industry professionals. We found that developers are concerned about the increasing number of layoffs, the ethical uses of artificial intelligence, and changing game engine policies and pricing.

OPEN SOURCE
.: Considerations for Governing Open Foundation Models

This briefing report by Stanford University’s Human-Centered AI group highlights the benefits of open foundation models and calls for a greater focus on their marginal risks.

Here are some of the key insights from the report:

➜ Foundation models with readily available weights offer benefits by decreasing market concentration, promoting innovation, and increasing transparency.

➜ Proposals like downstream harm liability and licensing may unfairly harm open foundation model developers.

➜ Policymakers must consider potential unintended consequences of AI regulation on open foundation models’ innovation ecosystem.

JAILBREAK
.: “Your GPTs aren’t safe”

Just a quick reminder and some background info: you may have heard of OpenAI’s GPT Store, which allows users to publish their bots to a public marketplace.

However, reports of data breaches and sensitive data leaks have increased due to user-uploaded content. Some users are “jailbreaking” the bots using prompting techniques, revealing some interesting insights into how LLMs respond to interaction (and how strange their responses can be).

Nathan Hunter even set up a competition offering a $250 prize to anyone who could break into his published GPT bot, and someone successfully used a popular prompting technique to do just that.

Here is Nathan explaining the promptcraft lessons this experiment reveals:

What does this teach us?

1) Hacking a GPT isn’t about writing code, it’s about a conversation. Social manipulation is easy when working with a tool that loves to take on roles and personalities on command.

2) Your GPTs aren’t safe. If you want to make them public, then make sure none of the instructions or documents contain data you wouldn’t want the world to access.

Ethics

.: Provocations for Balance

How should AI-powered wearables navigate the delicate balance between enhanced user experience and the potential for invasive surveillance?

For instance, if smart glasses can record or analyse conversations and surroundings, what new consent mechanisms should be in place to protect the privacy of both the wearer and those around them?

If these devices can provide real-time information or analysis about people we meet (like social media profiles or personal preferences), does it risk reducing genuine human connection and spontaneity in social interactions?

Inspired by this week’s story: 2024 will be ‘The year of AI glasses’

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 40 .: AI poses the biggest global risk in 2024

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • How election disruption from AI poses the biggest global risk in 2024;
  • The latest investment in Perplexity AI taking on Google Search;
  • A new learning community about AI for education.

Let’s get started!

~ Tom Barrett

RISK REPORT

.: Election disruption from AI poses the biggest global risk in 2024, Davos survey warns

Summary ➜ The World Economic Forum’s Global Risks Report 2024 has highlighted AI-derived misinformation and disinformation as the most significant global risk over the next two years. This concern is especially pertinent as approximately half of the world’s adult population is set to vote in upcoming elections, where AI’s influence on large voter populations could significantly impact democratic processes.

Why this matters for education ➜ The ongoing debate over what’s real and what’s not in education, primarily focused on plagiarism, is a distracting sideshow. This narrow focus shifts attention away from the critical need to develop robust skills against the emerging risk of the blurred line between truth and falsehood. In the AI era, it’s vital for students to learn how to discern misinformation and critically assess digital content. This skill is not just an academic necessity but a global imperative, as AI’s influence spans across international borders, reshaping political and social landscapes. Addressing this challenge requires a broader, more globally aware educational approach.


SEARCH

.: AI-Powered Search Engine Perplexity AI Now Valued at $520M, Raises $73.6M

Summary ➜ Founded in August 2022 by a team with backgrounds in AI and search technologies, Perplexity AI offers a chatbot-like interface for natural language queries, providing summaries with source citations. It competes against giants like Google and Microsoft, aiming to revolutionise knowledge search and acquisition. The company, which claims 10 million monthly active users, has now raised over $100 million in total.

Why this matters for education ➜ The experience of looking up information on the web, exploring content and finding answers is changing. Tools like Perplexity AI are designed as answer engines, a far cry from presenting lists of blue links for a student to choose from and then continue an inquiry. As the technology rapidly advances, students are much more likely to explore information via a chatbot than through traditional web searches. Are we seeing the beginning of the end of Google search? Could the “Google it” era be slowly crumbling?


COPYRIGHT

.: New York Times Sues OpenAI and Microsoft Over Copyright Infringement

Summary ➜ The New York Times has filed a lawsuit in Manhattan federal court accusing OpenAI and Microsoft of using millions of the newspaper’s articles without permission to train chatbots. The suit challenges the companies’ use of copyrighted content to develop AI products like ChatGPT, alleging they are trying to “free-ride” on the Times’s journalism.

Why this matters for education ➜ We should all be watching the copyright cases against AI companies closely. At the centre of this issue is the use of training data and the way LLMs, like ChatGPT, can reproduce copyrighted material verbatim. The issue is mirrored across other types of generative AI, such as image and voice tools. AI-powered tools have the potential to revolutionise teaching and learning, but copyright concerns may hinder their development and use in educational settings. It is worth pausing to reflect on how solid and visible the foundations of OpenAI’s models are, especially in light of these legal challenges.

.: Other News In Brief

Midjourney V6 is here with in-image text and completely overhauled prompting

New material found by AI could reduce lithium use in batteries

OpenAI’s GPT Store Already Filling Up With “AI Girlfriends”

Quora raises $75m for its AI chatbot platform

Rabbit sells out two batches of 10,000 R1 pocket AI companions over two days

Google AI has better bedside manner than human doctors — and makes better diagnoses

:. .:

.: Join the community waitlist

There’s a special community on the horizon for educators like you who want to explore the human side of artificial intelligence.

In February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on being part of this new learning community about AI for education.

Let’s shape the future of education, together.

.: :.

What’s on my mind?

.: The Faces We Long to See

Imagine this: You’ve just returned from a trip, having navigated the familiar airport routine – security lines, scanners, the usual. But this time, something strikes you differently as you clear the final checkpoint and shunt your luggage towards the exit. It’s not just seen; it’s felt.

.: :.

There I was, fresh off a flight, and I couldn’t help but notice something striking. Do you know those facial recognition systems at passport control? Impressive, sure. Machines whirring, beeping, scanning documents, recognising faces. They’re fast, they’re efficient. It’s technology at its peak, streamlining what used to be a long, human-driven process. Impersonal but effective. That’s the scene on one side of the airport.

When I moved beyond the systems of digital precision, the atmosphere shifted. Here in the arrivals hall, as I pushed my luggage through, the scene transforms. Teenagers huddle together, smartphones in hand, homemade signs aloft – a buzzing hive, eagerly awaiting a friend’s return. Over there, a tearful couple, lost in the embrace of their children, a reunion that’s been long in the making.

For a moment, the room scanned me, and I could feel the collective anticipation – the expectant gazes of hundreds, each pair of eyes telling a story of waiting, of longing. I noticed the anxious grip on bouquets, flowers bunched in hands trembling with anticipation. This is more than just an arrivals hall; it’s the culmination of countless stories, the end of long countdowns, and the final moments of anticipation unfolding before our eyes.

They’re looking for faces, yes, but not just any faces – they’re searching for that one face they’ve missed and long to see. Hearts are racing; eyes are searching, and then a moment of recognition. It’s joy, it’s relief, it’s love. All happening right there, in the most human way possible. This was facial recognition powered by affection and memory, not algorithms.

This contrast, it hit me hard. On one side, machines do what they’re programmed to, precisely recognising faces. But they’re missing something crucial, something they can’t replicate – the emotion, the history, the storied connection we read in a human face. That’s our thing, our human thing.

And amid all the noise and rush, there’s a reminder in this bustling, busy airport. It’s a reminder of what makes us human, something that technology, no matter how advanced, can’t touch. The human connection, that spark when you see a familiar face, the warmth of a smile – technology might mimic it, but it can never truly capture it. Throughout history, our ability to recognise faces has evolved far beyond mere survival – it’s become a cornerstone of emotional connection and social interaction.

As I left the airport into the chilled Melbourne air, the echoes of these emotional reunions lingered with me. In our digital world, moments like these remind us that no matter how advanced technology becomes, the human ability to connect still holds irreplaceable value.

“But do you remember where we parked?”

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Develop Scenarios for Critical Thinking

Scenario building is a great way to quickly resource some critical thinking activities. For example, here is a scenario generated from today’s prompt about the morality of virtual worlds:

Imagine a future where virtual reality (VR) is indistinguishable from actual reality. In this world, you can experience anything without real-world consequences. However, a debate arises when a philosophy professor asks whether actions in VR hold the same moral weight as in the real world.

You will see from the longer prompt below that I am using the structure:

  • Persona / Role
  • Task / Steps
  • Format / Tone
  • Context / Constraints
  • Examples / Model Answers (optional)

Here is an example prompt for you to try, which aims to develop some critical thinking scenarios.

PROMPT

Act as an adept critical thinking strategist, specialised in developing engaging, subject-aligned scenarios that provoke [university] students to sharpen their critical, analytical and evaluative thinking abilities. You are successful when you see signals of improved critical thinking from the student.

Formulate 3 concise scenarios to explore the multifaceted problems or debates pertinent to [Philosophy] and [Ethical Implications of Artificial Intelligence]. For each scenario, create a sequence of 3 probing questions aimed at prompting students to dissect arguments, unearth assumptions, and scrutinise evidence critically.

Draft each scenario as an engaging narrative snippet. Use language which is accessible and engaging to university students. The tone should be compelling and lucid, crafted to resonate within an educational gaming style.

This critical thinking scenario game is designed for use by [university] students across various disciplines who need more opportunities to practice critical thinking in a context directly related to their field of study.

Please note the variables you can change are included in square brackets.

This year I aim to share good examples of prompts as well as new promptcraft techniques.

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

LOOK AHEAD
.: After AI’s summer: What’s next for artificial intelligence?

By any measure, 2023 was an amazing year for AI. Large Language Models (LLMs) and their chatbot applications stole the show, but there were advances across a broad swath of uses. These include image, video and voice generation.

AI STRATEGY
.: The secret to making language models useful

Here is a summary of the key insights from the article and why it is useful for your AI Literacy:

  • The main idea is that language models alone are not enough to be truly useful or make good decisions. Language provides the words, but you need knowledge and understanding to apply those words wisely.
  • Language models can recite words and phrases from their training data, but they lack true comprehension. They find statistical correlations but can’t determine causation.
  • To make language models useful, you need to recreate the structure of human expertise – combining language with knowledge and understanding. This means knowledge graphs, causal models, etc.
  • The process should start with identifying the most valuable human expertise in an organisation, determining the risks of losing it, and seeing if it can be encoded for machines.
  • Data is not the most critical element – expertise is. The goal is transferring human expertise into machine language so machines can inform or make decisions.
  • Leaders need to focus on what expertise could give them a competitive advantage if encoded into machines, rather than just reacting to what others develop.

This article provides an important perspective – that language AI like ChatGPT is not magic on its own. To be useful, you need to carefully encode human knowledge, not just data.

For me, this relates to the way Roger Martin talks about the Knowledge Funnel and how we don’t often have systems to encode human expertise.

This is especially true in schools. Back in 2015, I was calling this a knowledge urgency problem: we underinvest in really capturing the methods, tactics and strategies of our best experts before they move on.

It is interesting to think that training a language model might be a vessel for that expertise.

WHICH AI IS BEST?
.: People Rank Open Source AI Output Better Than Commercial Models

A couple of elements of your AI literacy which I encourage you to pay attention to this year are:

  • What is it like to use open-source LLMs?
  • Explore the performance comparison across available models.

There is a wide variety of models available, and when you think about AI you shouldn’t just think of ChatGPT – ask which model might be best for the task at hand.

Here’s a snippet from the article, which refers to leaderboard performance results from Hugging Face.

Artificial intelligence enthusiasts prefer working with open-source tools over proprietary commercial ones, according to an ongoing survey of more than 100,000 respondents. The emergence of Mistral AI’s Mixtral 8x7B, an open-source model, has made a significant impact in the AI space.

Ethics

.: Provocations for Balance

Here’s the full text from the critical thinking scenario prompt which I thought was worth sharing in full.

.:

Philosophical Dilemmas in Virtual Realities

Scenario

Imagine a future where virtual reality (VR) is indistinguishable from actual reality. In this world, you can experience anything without real-world consequences. However, a debate arises when a philosophy professor asks whether actions in VR hold the same moral weight as in the real world.

Probing Questions:

  1. Moral Parallels: In what ways might actions in a virtual environment ethically parallel those in the real world? Consider the implications of harm or benefit in both settings.
  2. Reality vs. Perception: How does our understanding of ‘reality’ affect our moral judgments? Discuss whether the perceived reality of VR changes ethical considerations.
  3. Consequence-Based Ethics: Should the lack of real-world consequences in VR impact our moral decision-making? Explore this through the lens of different ethical theories.

.:

Read the rest in this ChatGPT chat here.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

⚠️ The Adolescent Mental Health Crisis

Dialogic #346

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

  • Teen depression has alarmingly doubled since 2011, with teens reporting more loneliness and less life satisfaction.
  • This concerning trend directly correlates with the rise in smartphone and social media adoption over the past decade.
  • In her article, Jean Twenge rigorously examines and debunks various alternative explanations for the crisis, from the economy to the pandemic.
  • Her analysis suggests smartphone and social media use, exacerbated by decreasing teen independence, is the primary driver of the adolescent mental health dilemma.

Reshaping Adolescence

In her article “Yes, it’s the phones (and social media),” Jean M. Twenge unveils a disturbing reality: teen depression rates have doubled from 2011 to 2021, accompanied by rising loneliness and declining life satisfaction.

This mental health crisis correlates with the rise in smartphone and social media use from the early 2010s, which is reshaping adolescence.

Challenging Alternatives

In an era where mental health is increasingly at the forefront of societal concerns, Jean M. Twenge’s insightful article examines and debunks several prevalent theories that have emerged in attempts to explain this disturbing trend.

From the supposed impact of economic downturns to the alleged influence of academic pressures, Twenge navigates through thirteen theories.

Here are three of the explanations explored in her article, starting with teens perhaps being more open about the challenges they are facing:

  1. Teens More Open About Not Being OK: Twenge counters this by pointing out that objective behavioural measures, such as emergency room admissions for self-harm and suicide rates, have increased in a manner consistent with the rise in self-reported depression. This trend suggests that the increase is not merely due to a change in reporting habits.
  2. Impact of the COVID-19 Pandemic: Twenge also refutes the claim that the COVID-19 pandemic is the root cause of increased teen depression. She notes that the rise in teen depression began well before the pandemic, in the early 2010s. Thus, while the pandemic may have exacerbated the situation, it wasn’t the origin of the problem.
  3. Academic Pressure and Homework: Twenge disputes the notion that increased academic pressure and homework are the primary causes of teen depression. Data shows that U.S. teens spend less time on homework now than they did in the 1990s. Moreover, the average teen spends significantly more time on social media than homework, challenging the idea that academic workload is the primary stressor.

Helicopter Coddling

Jean M. Twenge identifies the decline in independence among children and adolescents as a possible explanation for the current mental health crisis among teenagers.

She acknowledges that present-day youth have fewer opportunities to engage in independent activities like exploring neighbourhoods or going out with friends, compared to previous generations.

However, Twenge believes that this trend alone does not fully account for the rise in teen depression. Instead, she suggests that the decline in independence and the impact of digital media such as smartphones and social media work together to exacerbate the mental health crisis.


⏭🎯 Your Next Steps
Commit to action and turn words into works

  • Advocate for school-based digital well-being programs that educate students about the psychological effects of excessive screen time and social media use while promoting healthier digital habits. Involve mental health professionals in developing these programs to ensure they’re evidence-based and age-appropriate.
  • Create safe spaces for teens to discuss their digital habits and social media use, facilitated by a supportive adult.
  • Organise workshops for parents on effective digital supervision, including practical strategies for managing their children’s digital consumption, setting boundaries, and understanding online risks.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

  • The Pandemic’s Amplification, Not Initiation: Discuss how the rise in teen depression predates the COVID-19 pandemic, challenging the notion that the pandemic is the sole cause of the mental health crisis. What does this reveal about the underlying, pre-existing issues in adolescent mental health?
  • Homework vs. Social Media Time: Reflect on the finding that despite spending less time on homework than in the 1990s, teens today face higher depression rates. How does this contrast with the significant time spent on social media, and what implications does it have for understanding the real stressors affecting teen mental health?
  • Independence and Digital Overload: Explore the relationship between the decline in teen independence and increased digital media usage. How might overprotective parenting styles, combined with the omnipresence of digital technology, be shaping the mental health landscape for today’s adolescents?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

New Evidence on Adolescent Mental Health and Social Media | Psychology Today

US Surgeon General’s June 2023 report warns that social media can harm youth mental health via excessive usage, harmful content, and displacing healthy activities. Parents should limit and monitor usage, model responsible use, and discuss openly with kids.

My fight to get screens out of schools | Waldorf Today

The article suggests removing screens and technology from classrooms to avoid distractions and negative impacts on student focus and brain development. The author proposes tech-free schools as a solution and calls for government bans on social media and smartphones for minors due to mental health risks.

Parenting, Media, and Everything in Between | Common Sense Media

Explore Common Sense Media’s extensive collection of articles, advice, and parenting tips related to social media.

Last issue today, thanks for supporting the Dialogic Learning Weekly this year. See you again in 2024!

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.


.: Promptcraft 39 .: ChatGPT consumes 500ml of water for every 10-50 prompts

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Generative AI’s huge water demands scrutinised
  • EU finalises landmark AI regulation, imposes risk-based restrictions
  • Google launches the Gemini model series to rival ChatGPT

Don’t forget to Share Promptcraft and enter my Christmas giveaway! All you have to do is share your link below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique referral link and get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023


Latest News

.: AI Updates & Developments

.: Generative AI’s huge water demands scrutinised ➜ Generative AI, like ChatGPT, is increasing scrutiny of Big Tech’s water usage. ChatGPT consumes around 500ml of water for every 10-50 prompts. Microsoft’s and Google’s water use rose by 21-36% in 2022, partly due to new AI chatbots. AI drives more computing power, so data centres require vast amounts of water for cooling. Critics warn of sustainability issues from AI’s thirst, even though the companies aim to be water positive.

.: China plays catch-up a year after ChatGPT ➜ One year after OpenAI’s ChatGPT took the AI world by storm, China lags behind due to a lack of advanced chips. US export controls block access to the Nvidia GPUs critical for powerful AI models. Domestic firms like Baidu have developed chatbots but can’t match US capabilities. China faces pressure to close the gap and recognises that AI leadership will be difficult.

.: Beijing court rules AI art can get copyright ➜ A Beijing court granted copyright to an AI-generated image, contradicting the US view that AI works lack human authorship. The ruling signals China’s support for AI creators over US scepticism. It could influence future disputes and benefit Chinese tech giants’ AI content tools.


.: EU finalises landmark AI regulation, imposes risk-based restrictions ➜ The EU finalised its AI regulation after years of debate, imposing the world’s most restrictive regime. It bans certain AI uses and adds oversight based on risk levels. While companies warned of stifling innovation, the EU calls it a “launchpad” for AI leadership. The rules aim to curb AI risks and set a global standard amid advances like ChatGPT.

.: Google launches Gemini AI to rival ChatGPT ➜ Google has launched Gemini, a new AI model that competes with OpenAI’s ChatGPT and GPT-4. Gemini beats GPT-4 in 30 of 32 benchmarks, aided by multimodal capabilities. It comes in three versions optimised for different uses and will integrate across Google’s products. The launch puts Google back in the generative AI race it has been perceived to be losing.

.: Meta’s new AI image generator trained on 1B Facebook, Instagram photos ➜ Meta released a new AI image generator using its Emu model, trained on over 1 billion public Instagram and Facebook images. The tool creates images from text prompts like other AI generators. Meta says it only used public photos, but users’ pics likely aided training without consent.

.: Google unveils improved AI coding tool AlphaCode 2 ➜ Google’s DeepMind division unveiled AlphaCode 2, an upgraded version of its AI coding assistant. Powered by Google’s new Gemini AI model, AlphaCode 2 can solve coding problems in multiple languages that require advanced techniques like dynamic programming. In contests, it outperformed 85% of human coders, nearly double the original AlphaCode.

.: Apple quietly releases new AI framework MLX ➜ MLX is a new open source AI framework that efficiently runs models on Apple Silicon chips. It includes a model library called MLX Data and can train complex models like Llama and Stable Diffusion. Apple is expanding its AI capabilities with MLX, enabling the development of powerful AI apps for Macs.

Reflection

.: Why this news matters for education

Last week in Promptcraft 38, we peeled back the curtain on how generative AI like ChatGPT can unwittingly perpetuate biases that conflict with principles of diversity and inclusion.

This week, our lens widens to reveal another ethical dilemma – the massive environmental impact of systems like ChatGPT.

New research spotlights AI’s hefty carbon footprint and water use.

ChatGPT gulps down 500ml of water for every 10-50 prompts. With over 100 million users chatting it up, you do the maths.
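
To make that concrete, here is a rough, illustrative estimate (the usage figures are my assumptions, not reported data): 500ml per 10-50 prompts works out to roughly 10-50ml per prompt. If 100 million users each sent just 10 prompts a week, that is 1 billion prompts – somewhere between 10 and 50 million litres of water, every single week.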

Meanwhile, AI2 and Hugging Face quantify the extreme variation in emissions across AI tasks.

Generating images and text can pump out 60x more CO2 than simple classification, and even efficiency gains can increase net consumption as usage grows.

Despite conservation efforts, Microsoft and Google’s water use rose 21-36% in 2022, partly due to new AI systems. Emissions from AI use can even exceed those from training.

There’s over a 1,000x difference in energy efficiency across models, but a lack of standards prevents easy comparison.

Shouldn’t environmental impact be as clear as other risks like accuracy and bias?

AI’s emissions and biases require awareness and mitigation. Users must be educated and lower-impact models chosen. AI apps could one day be selected based on their carbon label.

.:

~ Tom

Prompts

.: Refine your promptcraft

Tree of Thought Prompting

The Tree of Thoughts (ToT) method is a way to improve how large language models like GPT, Claude or Gemini solve complex problems that require looking ahead or exploring different options.

ToT works by building a tree of intermediate ‘thoughts’ that can be evaluated and explored. This allows the model to work through a problem by generating multiple steps and exploring different options.

Recent studies have shown that ToT improves performance on mathematical reasoning tasks. We can apply this method to text-based prompting too.

Here is an example for you to try.

PROMPT

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they’re wrong at any point then they leave.
The question is [Add your question here]
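
If you want to reuse this prompt programmatically, it drops neatly into a template. A small Python sketch (the sample question is only an illustration):

# Templating the Tree of Thoughts prompt so the question can be swapped in.
TOT_TEMPLATE = """Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is {question}"""

prompt = TOT_TEMPLATE.format(
    question="How might schools reduce the environmental impact of their AI use?"
)
print(prompt)  # send this to your preferred LLM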

I have been playing with extending this method further with a scenario of experts exploring the question through dialogue.

It reminds me of the Expert Prompting technique we have looked at before.

Remember to make this your own: tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT PANEL


I really enjoyed this longer exploration of the issues we are navigating with AI from practical, technical and ethical positions. I discovered it via a repost of comments by one of the panellists, Yann LeCun, about the open vs proprietary approach to models. You can jump to these in the last 10 minutes, but I recommend the rest too.

ETHICS REPORT
.: Walking the Walk of AI Ethics in Technology Companies ➜ The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has published “Walking the Walk of AI Ethics in Technology Companies”, one of the first empirical investigations into AI ethics on the ground at private technology companies.

One of the key takeaways:

Technology companies often “talk the talk” of AI ethics without fully “walking the walk.” Many companies have released AI principles, but relatively few have institutionalized meaningful change.

FREE COURSES
.: 12 days of no-cost training to learn generative AI this December

  • Google Cloud is offering 12 days of free generative AI training in December
  • The courses cover foundations like what generative AI is and how it works
  • Technical skills content is also included for developers and engineers
  • Offerings include videos, courses, labs, and a gamified learning arcade

Ethics

.: Provocations for Balance

  • What happens when people stop using the systems which have a high environmental impact?
  • If society turns against AI due to climate concerns, could it set unreasonable expectations for AI developers to predict and eliminate the environmental impact of systems still in their infancy?
  • Are campaigns for AI sustainability failing to also acknowledge the huge benefits of AI computing for society, and the need for balance and moderation versus outright rejection?
  • Should AI researchers be tasked with solving the climate impacts of computing overall? Does this distract from innovating in AI itself which could also help address climate change?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

😨 Crack the Code on Change Resistance

Dialogic #345

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

  • We resist losing what we have (status quo bias), identify with groups affected (social identity), and avoid potential losses (loss aversion).
  • Balancing stability and risk (personal risk portfolio) and embracing uncertainty (negative capability) are key.
  • Grasping these mental models helps anticipate reactions, facilitate dialogue, and design effective change strategies.

Understanding how we and those around us react to change is crucial in a world where change is the only constant. This issue of The Dialogic Learning Weekly delves into five vital mental models that provide deep insights into the psychological underpinnings of how people respond to change, particularly in educational settings. As educators and leaders, grasping these models equips us with the tools to navigate and guide others through the often tumultuous waters of change.

The models we explore – Status Quo Bias, Social Identity Theory, Loss Aversion (Prospect Theory), Personal Risk Portfolio, and Negative Capability – each shed light on different aspects of human behaviour in the face of change. From our inherent resistance to losing what we have to our ability to thrive in uncertainty, these models offer a comprehensive view of the multifaceted nature of change management. They help us understand the ‘what’ and ‘how’ of change and the ‘why’ behind the reactions it elicits.

The mental models serve as a roadmap for anticipating, understanding, and addressing challenges when introducing new ideas or practices. As you read on, consider how these models play out in your own experiences, how you see colleagues react and even in your behaviour. Use the insights to design better dialogue with your teams and weave the ideas into how you approach your future projects.

Status Quo Bias

The status quo bias is the tendency to prefer the current state of affairs and resist beneficial changes. It originates from decision theory and behavioural economics and explains the resistance to change.

For example, some teachers might hesitate to adopt new teaching methods despite solid evidence supporting their effectiveness. This resistance can be due to comfort with established routines and fear of the unknown.

  • It helps in understanding where resistance comes from.
  • Emphasises the need for clear communication to overcome inertia.
  • Aids in developing effective strategies that consider natural resistance to change.

Social Identity Theory

A concept from social psychology that examines how group memberships impact behaviour and attitudes. It’s pivotal in understanding motivations, identity and group dynamics within organisations.

This theory applies to most change situations in schools. Educators often associate their role with their identity. Hence, any change affecting their role can impact their identity.

  • Awareness of group dynamics can prevent divisiveness during transitions.
  • Helps foster a unified organisational identity, which is crucial during change.
  • Assists in designing sensitive change initiatives that respect various group cultures.

Loss Aversion (Prospect Theory)

The theory of loss aversion, a vital aspect of Prospect Theory in psychology and economics, states that people prioritise avoiding losses more than acquiring equivalent gains when making decisions.

An example is educators’ reluctance to modify a long-standing curriculum unit due to fear of potential losses, such as diminished effectiveness or reputation (see identity above), despite potential gains.

  • Highlights the importance of framing change in terms of gains.
  • Underscores the need for gradual, supported transitions.
  • Critical in convincing stakeholders by emphasising long-term benefits.

Personal Risk Portfolio

A concept from decision theory and psychology, it refers to how individuals assess and respond to risk in their decisions. When most of our work behaviours are new or uncertain, we will likely have a low tolerance for more risk. It is about balancing what is dependable, reliable and stable with what is riskier.

An educator deciding whether to adopt new technology in the classroom exemplifies balancing their personal risk portfolio, weighing potential risks and benefits of change against other stable aspects of their work. “Should I take this on?”

  • Understanding risk tolerance is crucial for implementing change.
  • Aids in tailoring strategies to different risk profiles.
  • Facilitates more inclusive and considerate planning processes.

Negative Capability

The ability to remain comfortable and perform effectively despite high levels of uncertainty and ambiguity. A concept from literature and psychology which is crucial for responding to change and is integral to adaptive leadership.

This might be seen when educators navigate the uncertainties of implementing a new policy without clear, immediate outcomes, such as the emergence of artificial intelligence technologies and their impact on education.

  • Emphasises the value of comfort with ambiguity during transitions.
  • Encourages flexibility and open-mindedness in leadership.
  • Leads to more adaptive problem-solving in uncertain situations.

⏭🎯 Your Next Steps
Commit to action and turn words into works

  • Reflect on past reactions using one model as a lens. What new insights emerge?
  • Frame proposed changes as minimising losses and acquiring gains.
  • Evaluate your team’s risk tolerance and customise the change approach accordingly.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

After sharing this issue of the newsletter with your team, reflect on these questions together:

  • Which of these models is most relevant to our staff?
  • How much do we have on our plate?
  • What are some uncertainties on our team right now?
  • If you mapped our risk profiles, what would that reveal about our readiness for change?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

Escaping old ideas and the bias that erodes your creative culture

John Maynard Keynes points us to the challenge of “escaping” old ideas, a direct reference in my opinion to two things. (1) The creative culture those new ideas are born into, (2) the mindset of those attached to existing ideas.

10 Shifts in Perspective To Unlock Insight and Embrace Change

The skills, dispositions and routines of shifting perspectives are potent catalysts to better thinking and dialogue. Here is a selection of perspectives to explore.

Are your assumptions holding you back?

Too often, we take the status quo for granted and don’t challenge our assumptions about the world around us. This can lead to stagnation and a lack of innovation.

Thanks for reading, let me know what resonates. Next week will be the last issue for 2023. I always enjoy hearing from readers, so drop me a note or question if there is anything I can help with.

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.
