.: Promptcraft 46 .: Google forced to pause Gemini images

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google pauses Gemini’s ability to generate AI images of people after diversity errors;
  • AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills;
  • Google Is Giving Away Some of the AI That Powers Chatbots.

Let’s get started!

~ Tom Barrett

GUARDRAILS

.: Google pauses Gemini’s ability to generate AI images of people after diversity errors

Summary ➜ The decision was made after the tool inaccurately generated images of historical figures like US Founding Fathers and Nazi-era German soldiers, leading to conspiracy theories online. Google aims to address the issues with Gemini’s image generation feature and plans to release an improved version soon. This pause on generating pictures of people comes after Gemini users noticed non-white AI-generated individuals in historical contexts where they should not have appeared, erasing the history of racial and gender discrimination.

Why this matters for education ➜ The system failed, and it likely caused unintentional harm. This is an important reminder for us all about moving forward with cautious optimism as these technologies emerge. Google has since explained:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

Smart educators will see the teachable moment here. It is a perfect example of the intersection of literacies – see more below.

These models are imperfect, and despite the PR disaster, this type of practical feedback from millions of users will improve the models. But as I have said before, at what cost?


OPEN-SOURCE

.: Google Is Giving Away Some of the AI That Powers Chatbots

Summary ➜ Google has decided to open source some chatbot models, similar to Meta’s move last year. The company released two A.I. language models, Gemma 2B and Gemma 7B, to help developers create chatbots resembling Google’s own. While Google is not offering its most powerful A.I. model for free, it aims to engage the developer community and promote the adoption of modern A.I. standards.

Why this matters for education ➜ Keep in mind that AI and ChatGPT are not the same thing. Even though ChatGPT is rapidly becoming as ubiquitous as Hoover or Google. There are more than 300,000 open-source models available on platforms such as Hugging Face. One of the most powerful models, Gemma, has been made available by one of the most important organisations in the AI field. I can envision students and educators using these tools to create something amazing in the near future. It’s worth noting that OpenAI is conspicuously absent from the list of companies and research labs releasing open-source models. 🤔


DEEPFAKES

.: AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills

Summary ➜ Beverly Hills school officials have warned that anyone caught making AI-generated nude photos of middle schoolers could be expelled. The warning came after administrators at Beverly Vista Middle School discovered the images, which students had created and disseminated with other students’ faces superimposed onto them. BHUSD officials said they are working with the Beverly Hills Police Department during the investigation.

Why this matters for education ➜ I don’t have to state the obvious, do I? The recent story about deepfake incidents in education is just the tip of the iceberg; there are likely many more cases that go unreported. This is not a hypothetical risk; it is real harm. With powerful AI media tools readily available to everyone, we need to ask ourselves: how are our education organisations and systems helping us understand AI literacy? And how are we, as educators, helping young people navigate these uncertain waters? Ignoring the problem is not an option. It is an abdication of our professional responsibility.

.: Other News In Brief

📉 A recent report by Copyleaks reveals that 60% of OpenAI’s GPT-3.5 responses show signs of plagiarism.

🔗 OpenAI users can now directly link to pre-filled prompts for immediate execution.

❤️ Studies show that emotive prompts trigger improved responses from AI models.

🖖 Encouraging Star Trek affinity improves mathematical reasoning results with the AI model Llama2-70B.

⚡️ Lightning fast Groq AI goes viral and rivals ChatGPT, challenges Elon Musk’s Grok.

🧠 An AI algorithm can predict Alzheimer’s disease risk up to seven years in advance with 72% accuracy.

🚧 Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora.

🎨 Stable Diffusion has unveiled Stable Diffusion 3, a powerful image-generating AI model designed to compete with offerings from OpenAI and Google.

🇫🇷 Microsoft partners with Mistral in second AI deal beyond OpenAI.

💰 Reddit has a new AI training deal to sell user content.

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

You will be joining fellow educators from Singapore, US, Australia, Spain and the UK.

Find out more and grab the final membership offer before the price goes up at the end of February.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: A Collision of Literacies

I think when I first saw the images generated in error by Gemini, I winced. Here is one of the image sets from the Verge article in case you have not seen them before:

These images caused controversy by inaccurately depicting historical events and figures. This is one of many examples shared on social media illustrating how the system failed and how Google got it wrong, likely causing unintentional harm.

As I mentioned earlier, this is a perfect example of the intersection of literacy for us and the young people we support.

Some might brush these examples off in the pursuit of improvement, but these emerging missteps can help us calibrate our disposition and understanding of AI for societal good. Any educator will see the teachable moment here.

The imperfections, mistakes and harm can be interrogated and used for learning. What learning opportunity do you see?

As I made sense of the image above, I experienced a simultaneous stressing of literacies.

There were lights on across the board of literacies:

  • AI literacy
  • Digital literacy
  • Media literacy
  • Algorithmic literacy
  • Historical literacy
  • Ethical literacy
  • Cultural literacy

As adults, we are also experiencing some of these collisions for the first time. Checking your understanding and literacy gaps is crucial, too, especially for educators.

While some extreme conspiracies around this story point to big tech attempting to rewrite history, what is perhaps more worrisome is the way AI content is flooding the internet.

Although the image in question might be easy to spot as inaccurate, there will be thousands, if not millions, of others in the future whose flaws are harder to see.

Navigating this landscape will require an amalgam of multidimensional literacies, a collision of competencies in ethics, critical thinking, history, futures, humanity and technology.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A helpful aspect of your promptcraft is remembering that humans create the training data for text-based large language models (LLMs).

This means that a positive, polite, collaborative conversation will likely yield better results than if you robotically ordered the chatbot around. Just as you would communicate with a coworker or a team member, engaging in a constructive dialogue with the AI model can lead to more effective outcomes.

This might seem like we are over-anthropomorphising the technology, but many studies have shown improvements in performance from respectful, polite interactions.

Here are a few tactics I try to use when I am prompting.

  • Initiate with Enthusiasm: Start by expressing excitement about the collaboration, for example, “I am excited to collaborate with you on [TASK]. Shall we get started?” This sets a positive tone for the interaction.
  • Provide Constructive Feedback: Offer kind, specific, and helpful feedback periodically. This can guide the model towards more accurate and relevant responses.
  • Maintain Politeness and Positivity: Engage politely and cheerfully with the model, avoiding toxicity. This makes the interaction more pleasant and can influence the quality of the responses.
  • Encouragement: When facing an impasse, offer encouragement. LLMs might “hallucinate” a lack of ability, which gentle coaxing can overcome. Think of this in the spirit of Mrs. Doyle from Father Ted, encouraging persistence and creativity.
  • Close with Gratitude: Conclude interactions by thanking the LLM for its assistance. This reinforces the collaborative nature of the exchange and sets a positive tone for future engagements, leveraging the memory feature of platforms like ChatGPT.
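As a rough sketch, the tactics above can be assembled into a reusable message list for any chat-style API. The `build_polite_chat` helper, the task wording and the dict-based message format are my own illustrative assumptions, not any specific product’s API; no model is actually called here.

```python
# Illustrative only: assemble the polite-prompting tactics above into a
# chat-style message list. The message format mirrors common chat APIs,
# but no model is called.

def build_polite_chat(task, feedback=None):
    """Open with enthusiasm, optionally add kind and specific feedback,
    and close with gratitude."""
    messages = [
        {"role": "user",
         "content": f"I am excited to collaborate with you on {task}. "
                    "Shall we get started?"},
    ]
    if feedback:
        messages.append(
            {"role": "user",
             "content": "Thanks for that draft. One specific, "
                        f"helpful suggestion: {feedback}"})
    messages.append(
        {"role": "user",
         "content": "Thank you for your help so far. I appreciate "
                    "the collaboration."})
    return messages

chat = build_polite_chat("a Year 7 poetry lesson plan",
                         feedback="could you add a starter activity?")
```

In a real conversation each message would be sent in turn; the point is simply that tone and structure are part of the prompt, not an afterthought.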

One curious example of the connection to training data I discovered this week is the case of Star Trek affinity.

A study about optimising prompts discovered that when Llama2-70B (an open-source LLM) was prompted to be a Star Trek fan, it was better at mathematical reasoning.

The full system prompt reads as follows:

System Message:

“Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.”

Answer Prefix:

Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.

In many ways, this is a nuanced extension of the adoption of expertise or role, and I wonder if it taps into popular culture and fictional contexts in new ways.

I think this is weird, but I also appreciate the technical logic.

The responses and performance draw from the sum of all human text. So, it would be expected for the LLM to be familiar with Star Trek, given its cultural prominence and the likely prevalence of related content in its training data.

This includes the scripts and countless articles, fan fiction, and forum discussions about the series. Therefore, it’s plausible that adopting the persona of a Star Trek character could potentially activate relevant knowledge structures within the LLM, improving its ability to generate creative and contextually appropriate responses.

It’s an interesting demonstration of how the model’s performance can be influenced by the content of the prompts and the framing or persona that’s implicitly or explicitly adopted.
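As a sketch of how that framing might be wired up, the study’s system message and answer prefix can be placed into a chat-style request, with the assistant’s reply seeded by the prefix. The `trek_framed_request` helper and the sample question are my own assumptions; the message roles follow common chat-API conventions, and no model is called.

```python
# Sketch: frame a maths question with the Star Trek system message and
# answer prefix reported in the study. No model call is made here.

SYSTEM_MESSAGE = (
    "Command, we need you to plot a course through this turbulence and "
    "locate the source of the anomaly. Use all available data and your "
    "expertise to guide us through this challenging situation."
)

ANSWER_PREFIX = (
    "Captain's Log, Stardate [insert date here]: We have successfully "
    "plotted a course through the turbulence and are now approaching the "
    "source of the anomaly."
)

def trek_framed_request(question):
    """Return a chat-style message list adopting the Star Trek framing.
    The final assistant message seeds the model's continuation."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": question},
        {"role": "assistant", "content": ANSWER_PREFIX},
    ]

request = trek_framed_request(
    "A shuttle travels 120 km in 1.5 hours. What is its average speed?")
```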

I wonder what other performance enhancements we might see from these types of creative activations.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: Rural Kenyans power West’s AI revolution. Now they want more

Rural Kenyans are increasingly becoming data annotators, providing the building blocks for training artificial intelligence (AI) to recognise patterns in real life.

Despite the challenges, including low pay and difficult subject matter, this work has become a backbone of the country’s economy, with at least 1.2 million Kenyans working online, most of them informally.

The annotation industry has spread far beyond Nairobi, with Kenya emerging as a hub for such online work, rising to compete with countries like India and the Philippines.

While AI might help small-scale businesses thrive, education systems need an overhaul to create an AI innovation hub in African countries.

BIAS
.: To benefit all, diverse voices must take part in leading the growth and regulation of AI

The absence of Latinx/e founders and leaders in discussions about the growth and regulation of AI is a concerning trend. Diverse founders often bring unique perspectives and address critical social needs through their startups. However, their voices remain largely absent from policy discussions.

Despite their entrepreneurial talent and determination, Latinx/e founders remain overlooked and undervalued, receiving less than 2% of startup investment funding. Even when they receive it, it’s typically just a fraction of what’s awarded to their non-Hispanic counterparts.

TOOLKIT
.: Learning With AI

Rather than try to ban this technology from classrooms outright, the Learning With AI project asks if this moment offers an opportunity to introduce students to the ethical and economic questions raised by these new tools, as well as to experiment with progressive forms of pedagogy that can exploit them.

The University of Maine launched Learning With AI, which includes a range of curated resources, strategies and learning pathways. The toolkit is built on a database of resources which you can explore here.

Ethics

.: Provocations for Balance

➜ What mechanisms can be established to close the gap between where AI innovation happens and who truly benefits?

➜ If diverse voices are absent in AI leadership, how can we broaden participation to harness unique perspectives?

➜ What’s the best approach for introducing young people to AI’s promises and perils?

Inspired by some of the topics this week.

:. .:

Which topic would you like to see featured in a future issue of Promptcraft?

(Click on your choice below)

❤️ The State of Companionship AI

🛠️ How to design your own chatbot

🪞 How AI Is a Mirror to Our Humanity

🦋 AI Augmented Feedback and Critique

🛡️ Walled Gardens – Student Safe Chatbots

.: :.

Questions, comments or suggestions? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 45 .: OpenAI’s Sora Video Tool Will Make You Gasp!

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google’s next-generation model: Gemini 1.5;
  • A state of the art text-to-video model called Sora;
  • What happens when AI eclipses your technical skills?

Let’s get started!

~ Tom Barrett


VIDEO

.: OpenAI releases Sora: a state of the art text-to-video model

Summary ➜ OpenAI has introduced Sora, a text-to-video AI model that generates photorealistic HD videos based on written descriptions. Sora has been able to create 60-second synthetic videos with a higher fidelity and consistency than any other text-to-video model currently available. It is worth exploring some of the examples on the OpenAI site and reminding yourself they were generated from simple text prompts.

Why this matters for education ➜ Though this news may not immediately disrupt classrooms, it offers a telling glimpse of powerful AI creativity tools fast approaching. While full integration in schools could be far off, the proliferation of higher-fidelity synthetic content underscores why investing now in student AI and media literacy is vital.

More access to innovative technologies could unlock new forms of student expression. But there is work to do to lay the groundwork of critical thinking on using AI responsibly and ethically. This news is yet another reminder that regardless of whether or when such tools enter our schools, nurturing students’ compassion and humanity will be as important as ever.

If you are looking for a slightly more technical exploration of the new Sora model from OpenAI, and what it means for filmmaking, I recommend this great post from Dan Shipper at Every.

OpenAI sees Sora as the first step in a “world simulator” that can model any slice of reality with a text prompt.

Yes, The Matrix.


FRONTIER AI

.: Google’s next-generation model: Gemini 1.5

Summary ➜ Gemini 1.5 has a larger context window, enabling it to process up to 1 million tokens and analyse vast amounts of information in one go. “This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens.”

Why this matters for education ➜ Announcements of powerful new AI models are now commonplace. What matters is how this re-establishes Google as a leader in large language models, now rivalling OpenAI. For educators, having multiple big tech companies investing in AI could bring benefits if it catalyses innovation and increases access to these tools across Google’s education ecosystem.


FUTURE OF WORK

.: When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever

Summary ➜ In this short essay from The New York Times, Aneesh Raman and Maria Flynn argue that as AI advances, technical skills like coding will become less valued while human skills like communication and empathy will only increase in importance.

Why this matters for education ➜ Raman and Flynn make a compelling argument that AI will reshape the skills needed for work, requiring less technical expertise and more human collaboration. This matters for education because (i) how educators are trained will change, (ii) education systems will be transformed by AI, (iii) education can transform other industries, and (iv) education can powerfully mould the future citizens who will wield these powerful technologies.

.: Other News In Brief

📣 Earlier this month the EdSafe Alliance announced their 33 Women in AI Fellows, “Designed for women technologists and educational leaders, this Fellowship creates a space for learning, support, and building a network.”

🇨🇦 Air Canada must honour refund policy invented by airline’s chatbot.

🤔 OpenAI is testing the ability for ChatGPT to remember things you discuss to make future chats more helpful.

An overview of how Anthropic are approaching the use of their AI systems in elections.

™️ The US Patent and Trademark Office (PTO) has denied OpenAI’s application to register the word GPT as a trademark.

⚡️ How much electricity does AI consume?

💸 Reddit sells training data to unnamed AI company ahead of IPO.

🔊 Hear your imagination: ElevenLabs to launch model for AI sound effects.

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

Find out more and grab the final membership offer before it is gone.


.: :.

What’s on my mind?

.: Unwrapping Promptcraft

As educators exploring integrating AI into teaching and learning in thoughtful and meaningful ways, we stand at an exciting and sobering threshold.

Do you employ a slick third-party app promising to effortlessly enhance lessons through the power of algorithms? Or directly prompt models like Gemini and ChatGPT, navigating the exhilaration and uncertainties of unfiltered AI?

In this short reflection, let’s look at the different approaches. But first, some context.

You might have heard the phrase ‘thin wrapper’ for a category of AI tools: a simplified interface or application layer built on top of a large language model. The user does not work directly with a chatbot like ChatGPT; they work through an interface or software application, even though the engine underneath might be the same.

Imagine a LessonBot application teachers use to click a few suggested choices and generate lesson planning content. This would be the thin-wrapper application.

The alternative for the teacher is to open up their favourite large language model, write a prompt, and work with the model directly through its native chatbot interface.
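To make the two pathways concrete, here is a toy sketch of what a thin wrapper does behind the scenes. The LessonBot-style naming, the template wording and the stubbed `complete` function are all hypothetical; a real wrapper would call an actual model API at that point.

```python
# Toy illustration of a thin wrapper: menu choices go in, a hidden prompt
# template is filled, and a generic model call produces the output.

LESSON_TEMPLATE = (
    "Write a {duration}-minute {subject} lesson plan for {year_level} "
    "students, including a starter, main activity and plenary."
)

def complete(prompt):
    """Stub standing in for any real LLM API call."""
    return f"[model output for: {prompt}]"

def lessonbot(subject, year_level, duration=60):
    """Hypothetical thin-wrapper flow: the teacher never sees the prompt."""
    prompt = LESSON_TEMPLATE.format(duration=duration,
                                    subject=subject,
                                    year_level=year_level)
    return complete(prompt)

result = lessonbot("fractions", "Year 5")
```

Prompting directly means writing and iterating on that hidden template yourself, which is exactly where the skill-building described here happens.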

I understand we might have various tools to draw on, but which of these pathways will help educators grow the most?

How does this move us closer to a healthier learning ecosystem?

Convenience, time-saving, structure and the importance of beginner starting points have all been shared with me as a rationale for why these tools might be helpful.

As Darren Coxon describes in a recent post on this topic:

using a wrapper versus learning to prompt is a little like the difference between buying a ready meal and creating a recipe ourselves.

And Dr Sabba Quidwai goes further in calling out these thin-wrapper apps as fast food.

The point is that if we only choose these intermediary shortcuts, we diminish holistic growth across a range of AI literacy elements over the medium to long term.

Much as some people are creating student assessment protocols that include the process of AI prompting in the submission, adult learning needs to focus on process as well as outcome.

Yes, these teacher AI apps might get you an outcome quickly, but has your skill set or mindset also improved? After every interaction, do you have a marginally better knowledge of the capabilities and limitations of LLMs? Has your confidence in AI collaboration and augmentation improved? If we continue to rely solely on these third-party applications, we risk leaving teachers in the dark about how AI functions.

Beyond the issue of teacher skill building by prompting, iterating and engaging directly with these models, there are broader considerations.

One of the critical things for me is that using more tools further reduces transparency.

It might be called a thin wrapper, but it still muddies the view into the engine room and adds complexity to the architecture of what is happening. It also introduces further potential for human bias into the experience.

This comes at a time when a lack of transparency about what is happening is a significant critique of AI systems. So if we use these wrappers – intermediary software products that act as shortcuts for teachers – surely there is more opacity, not less.

What do you think? How might all of this play out?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Today I am delighted to share some great promptcraft from reader James Whittle, the Head of eLearning and IT at Centenary State High School in Brisbane, Queensland, Australia.

James has been exploring how to use ChatGPT as an informal coaching tool to enhance his decision-making processes, maintain well-being, and improve work quality.

As I have recommended before, he uses the audio conversation feature in ChatGPT to make this easy.

I have found the act of speaking my thoughts aloud is a powerful tool for reflection and clarity. I tend to overthink things without making much progress. However, as I articulate my teaching dilemmas or professional challenges to ChatGPT in this way, I feel like I am making much more progress and improving my ability to define the problems I’m facing. It’s really like the coach I never had!

I appreciate the structure of his prompt below and how the final line makes the expectations clear.

“ChatGPT, as I explore [insert topic or challenge], I’m looking for a sounding board to bring out my own thoughts more clearly.

Considering my situation, where [describe the specific context or issue, without revealing personal or identifiable details], could you provide reflective questions or prompts that help me articulate my approach and solutions?

My goal is to do the majority of the thinking and talking, with your role being to guide me towards my own insights and decisions.”

Take a moment to try the prompt, and read the article from James to set it all in context.

This promptcraft from James coincided with some of my own research into AI for coaching and how to design coachbots!

More on that soon.

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

STRATEGY

.: Assessment and Generative AI

For many schools and education systems the emergence of AI tools is a direct provocation to existing models of assessment.

This list of articles, research and strategy documents from John Mikton is a great starting point.

It is an invitation for schools to explore strategically re-calibrating assessment to highlight critical thinking, creativity, and the practical application of learning: “What is the added value of current assessment practices and how this value can be enhanced with the integration of Generative AI tools”. It also includes some resources to support these conversations.

COURSE
.: AI For Everyone

AI For Everyone is a free course from Andrew Ng and DeepLearning.AI that aims to make AI accessible to everyone, including non-technical professionals. The course covers common AI terminology, the realistic capabilities of AI, identifying opportunities to apply AI in organisations, and the process of building machine learning and data science projects.

AI CHEATING
.: Guarding Academic Integrity: A Teacher’s Quixotic Battle Against AI

Some teachers may claim they can catch all these methods of cheating. However, I would argue that they only catch those students who are inept at it, and if you can catch the adept ones, you will have no problem detecting work generated by ChatGPT.

Jack Dougall explores his perspective on the use of AI tools in education and the issue of academic integrity. He acknowledges that students have always found ways to cheat, and AI tools are just another method they can use. Jack argues that the responsibility to prevent cheating lies with teachers, parents, and society.

Ethics

.: Provocations for Balance

➜ If AI can simulate and generate bespoke virtual worlds, will virtual worlds seem more perfect than ours? Could people withdraw more from imperfect real life into flawless AI-generated worlds?

➜ Will family bonds weaken if AI tutors know our children better than parents? Could children become more attached to their perfectly patient AI tutor than imperfect human parents?

➜ If AI expression surpasses humans, and machines write songs stirring our souls more than any poet could, does this sever an essential human connection to art? Will the last strummed guitar be displayed in an “Obsolete Creativity” museum exhibit?

Inspired by some of the topics this week. And I deliberately dialled up the level of provocation, nearing the Black Mirror setting.

:. .:



.: Tom Barrett

.: Promptcraft 44 .: ​More schools in Australia trial AI tool for students

⏰ Don’t forget to join my online community to elevate your AI Literacy, before the price goes up in 6 days.

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • More schools in Australia to trial AI tool for students;
  • OpenAI joins Meta in labeling AI-generated images;
  • What can the EU learn from Asia about AI regulations?

Let’s get started!

~ Tom Barrett

K-12 TRIAL

.: NSW schools in Australia trial AI tool for students

Summary ➜ The NSW Department of Education in Australia has developed a new generative artificial intelligence (AI) app called NSWEduChat, designed for education and safe for school-aged children. The app will be trialled in 16 NSW public schools over the first two terms of 2024, with teachers deciding how to use it in their classrooms. The app only responds to students’ questions about school activities and education-related topics and has embedded safeguards to monitor and remove inappropriate content. The trial will be monitored and reviewed to inform the future direction of AI in NSW public schools.

Why this matters for education ➜ This news from Australia is the second trial led by a public education system after South Australia bucked the banning trend last year. I am curious if there are global precedents.

Do you know of any other K-12 education system running similar trials of state-designed chatbots?

It reminds me of the walled garden days of my time as a young primary teacher in England, when YouTube emerged as a ‘threat’ to education systems. Tom, the techno-idealist, still wonders about the chasm between walled-garden edu-versions of AI systems and the full capabilities of frontier models.

AI LABELS

.: OpenAI joins Meta in labeling AI-generated images

Summary ➜ OpenAI has announced that it is updating its app ChatGPT and its AI image generator model, DALL-E 3, to include metadata tagging identifying images created with AI tools. This move comes shortly after Meta announced a similar measure for labelling AI-generated images across its platforms.

Why this matters for education ➜ A good start regarding better labelling and transparency by design. Fast forward a few years and I can see we will have better tools and encoded standards for displaying augmentation. Although it is not a silver bullet, I wonder what auto-labelling might look like for text. A key attribute this development does not change is the opacity of the training data of proprietary models, which is still as shrouded as ever.

POLICY

.: What can the EU learn from Asia about AI regulations?

Summary ➜ ASEAN has published voluntary, light-touch guidelines for using AI. The ten ASEAN members agreed to the guidelines, which could cause upset within the European Union (EU), as it has been lobbying for other parts of the world to align with its own stricter proposed framework, the AI Act.

Why this matters for education ➜ It is interesting to see how different regions publish different guidance frameworks. This is relevant in education because careers in technology may look very different depending on where you are in the world as AI regulation takes hold. It is also worth paying attention to the alignment of educational guidelines to wider industry policy regarding AI. For example, access to some AI models has been slow or limited in the EU due to restrictions. Not a global level playing field.

.: Other News In Brief

🇮🇩 A deepfake video of the late Indonesian dictator, Suharto, has gone viral ahead of upcoming elections

💰 Deepfake scammer walks off with $25 million in first-of-its-kind AI heist.

🌋 Archaeologists Tap AI to Decipher Ancient Scrolls Nearly Lost to Volcano.

📝 Google Bard becomes Gemini: Ultra 1.0 and a new mobile app available in the US.

🇺🇸 The US has made robocalls that use AI-generated voices illegal.

⚖️ Stability, Midjourney, Runway hit back in AI art lawsuit.

🇨🇳 China’s generative video race heats up.

🐲 Meet ‘Smaug-72B’: The new king of open-source AI

:. .:

Early Bird Discount

.: The humAIn Community is Open!

Take a look at my online community to explore, connect and learn about AI for education, before the price goes up in 6 days.

You will be joining fellow educators from Singapore, the US, Australia, Spain and the UK.

Find out more and grab the early bird membership offer before it is gone.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

.: :.

What’s on my mind?

.: The Ghost of Politics Past: AI’s Unsettling Role in Indonesian Elections

While I was working on today’s newsletter, I came across reports that there has been widespread use of AI deepfakes in election campaigning in Indonesia.

This news might not be new to some, as we have seen similar issues before in the US. However, this story takes the idea of an election deepfake to a whole new level.

Indonesia, one of the largest democracies globally with a population of over 278 million, is in the midst of an election cycle, and people are being bombarded with deepfake videos and images. It is worth noting that Indonesia also has the fourth-largest education system in the world.

The scale and extent of this misuse of AI technology in such a populous country are unprecedented and unsettling. It’s a stark reminder of the urgent need for AI literacy and the ability to discern between real and synthetic content in our digital age.

According to the reports, one political party has generated a deepfake message from a previous president, Suharto, who passed away in 2008.

“I am Suharto, the second president of Indonesia,” the former general says in a three-minute video that has racked up more than 4.7 million views on X and spread to TikTok, Facebook and YouTube.

This incident underlines the pressing need for comprehensive regulations and public education about the potential misuse of AI technologies.

This issue got me thinking about deepfakes for political gain, the nuances of ethical use and where the line of taste is drawn.

If this example sits on one end of a disturbing scale, where would you place the chatbot prompt: “Act as Grace Hopper and help me with my innovation strategy…”?

Are they not the same invocation of a deceased leader in society, just executed to varying levels of fidelity, with different intent?

In an era where digital resurrection is possible, where do we draw the line between honouring a legacy and exploiting a memory?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Today I want to share a good example of structured prompting from Jessica Parker who uses the powerful trio: Role, Context, Output.

Please explore her post on LinkedIn about the prompt and upcoming webinars.

The only changes I have made are to add parentheses to any possible variables you could adapt to your context.

Search Strategy Prompt

Role and Purpose

You are a [research librarian] who is an expert at [developing search strategies for literature reviews]. Your purpose is to help me develop a [comprehensive search strategy].

Context

I am a [doctoral student researcher] conducting a study on [higher education faculty and their acceptance of AI-driven assessment methods].

Instructions

I will provide the research question guiding my study. You will then provide guidance and help me develop a [comprehensive literature search strategy by identifying key concepts and terms, databases relevant to my field of study, search strings, and filters].

Output

Always structure your responses using markdown.

  • Identify key concepts and terms and consider synonyms, acronyms, and variations in spelling or terminology.
  • Identify databases that are most relevant to my field.
  • Suggest search strings, including Boolean operators and truncation.
  • Suggest filters for me to use in various databases to refine my search.

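To reuse Jessica Parker’s template, you can keep the bracketed variables as named slots and fill them in code. A minimal sketch in Python – the slot names are my own, and the filled values simply restate the example above:

```python
# Keep the prompt as a template with named slots for anything you might
# adapt to your own context. Slot names are illustrative, not from the
# original post.
TEMPLATE = """Role and Purpose
You are a {role} who is an expert at {expertise}. Your purpose is to help me develop a {goal}.

Context
I am a {researcher} conducting a study on {topic}.

Instructions
I will provide the research question guiding my study. You will then provide guidance and help me develop a {deliverable}.

Output
Always structure your responses using markdown.
- Identify key concepts and terms and consider synonyms, acronyms, and variations in spelling or terminology.
- Identify databases that are most relevant to my field.
- Suggest search strings, including Boolean operators and truncation.
- Suggest filters for me to use in various databases to refine my search."""

prompt = TEMPLATE.format(
    role="research librarian",
    expertise="developing search strategies for literature reviews",
    goal="comprehensive search strategy",
    researcher="doctoral student researcher",
    topic="higher education faculty and their acceptance of AI-driven assessment methods",
    deliverable=("comprehensive literature search strategy by identifying key concepts "
                 "and terms, databases relevant to my field of study, search strings, "
                 "and filters"),
)
print(prompt)
```

Swapping the slot values is then all it takes to repoint the same structure at a different study or discipline.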
Remember to make this your own: try different language models and evaluate the completions.

Also, I have found that it doesn’t matter too much if you drop the sub-headings from the prompt. But different tools will give you different results.

Learning

.: Boost your AI Literacy

INTRO COURSE

.: Generative AI in a Nutshell – how to survive and thrive in the age of AI

Basically a full day AI course crammed into 18 mins of drawing and talking. Target audience: everyone.


REFERENCE
.: AI Ethics Living Dictionary

The Montreal AI Ethics Institute created the AI Ethics Living Dictionary to make AI ethics more accessible.

The dictionary contains plain language definitions of technical computer science and social science terms related to AI ethics.

The Living Dictionary aims to inspire and empower readers to engage more deeply in AI ethics and contribute to developing ethical, safe, and inclusive AI.

FUTURE WORKFORCE
.: Generative Artificial Intelligence and the Workforce

A useful new report on the impact of GenAI on the workforce from The Burning Glass Institute in the US.

GenAI will touch a broad array of roles. In many cases, however, the impact will be less about automating away tasks than about augmenting workers’ productivity and effectiveness or transforming the definition of job roles altogether

Ethics

.: Provocations for Balance

Why is it disrespectful and unethical to invoke the thoughts, voice or words of a person who is deceased?
Can you think of a scenario where this would be acceptable? Is it OK if the intention is benevolent? What about without consent?
When AI can convincingly replicate the voices and opinions of past leaders, how might this power reshape our understanding of history and truth?
Is there a difference in invoking the persona of a long-deceased historical figure for educational or entertainment purposes versus someone who has recently passed away?

Inspired by some of the ideas raised by my reflection on deepfakes and digital resurrection.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 43 .: ​AI narrows the performance gap

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • More than half of UK undergraduate students are using AI to help with their essays;
  • How much Ethan Mollick can do with AI in 59 seconds;
  • A random controlled trial shows using GPT-4 narrows the performance gap among law students.

Let’s get started!

~ Tom Barrett


AI AT WORK

.: What Can be Done in 59 Seconds: An Opportunity (and a Crisis)

Summary ➜ Ethan Mollick conducts an experiment to see how much work can be done in under a minute using AI, producing five reasonably high-quality drafts. However, he also warns of the potential crisis of meaning that could arise as AI-written content becomes more prevalent in organisations and suggests that thoughtful leaders need to consider how to use AI in ways that emphasise the good and not the bad.

Why this matters for education ➜ This article was a thought-provoking read, but it was not the AI demonstration that piqued my interest. It was how Mollick laid out the emerging mystery of what the use of AI in the world of work means.

The ramifications for the nature of ‘work’ are as unclear as some of the training methods of the models we are using. I appreciated this great question which sums up some of the trajectory we are on.

What does your skill and effort mean if people don’t care if your work was done by a machine?

The reality for most educators is that, despite the growing paperwork pressure, words are not the proxies for effort, intelligence or care that Ethan Mollick suggests they are in the majority of professions.

Education pushes ever forward, perhaps cosseted, sometimes belligerent, and almost certainly out of sync with the wider impact of AI on society.


HIGHER ED

.: More than half of UK undergraduate students are using AI to help with their essays

Summary ➜ More than half of UK undergraduate students are using AI to help with their essays, according to a survey of over 1,000 students conducted by the Higher Education Policy Institute. The survey found that 53% of respondents used AI to generate material for their work, and 25% used applications such as Google Bard and ChatGPT to suggest topics.

Why this matters for education ➜ It will be interesting to see the results of the EEF project, which is set to look into the impact of AI tools on cutting teachers’ workload burden and improving the quality of teaching. In many schools there is a fixation on a research-driven approach, but such a stance is soon put to one side when trying these new AI technologies. Perhaps both points of view can be held at the same time, but it does feel a little contradictory.

Prof Becky Francis, the chief executive of the EEF, said: “There’s already huge anticipation around how this technology could transform teachers’ roles, but the research into its actual impact on practice is – currently – limited.”

RESEARCH

.: Lawyering in the Age of Artificial Intelligence

Summary ➜ A University of Minnesota Law School study found that AI, notably GPT-4, slightly improves legal analysis quality and significantly boosts task completion speed for law students. The biggest benefits were seen in lower-skilled students. Users were satisfied and effectively identified tasks where AI helped most, suggesting AI can enhance productivity and equality in law practice.

Why this matters for education ➜ The research reveals AI’s potential to democratise academic performance, notably narrowing the performance gap among students. This levelling effect, especially beneficial for those with lower initial skill levels, suggests AI could transform learning across various domains by making educational outcomes more equitable. It makes me wonder about the broader application in enhancing learning efficiency and equality, but at what cost? Could AI similarly level the playing field in other educational areas, reducing barriers and making learning more accessible to all? How might this impact long-term educational strategies and inclusivity across diverse learning environments?

.: Other News In Brief

Google is preparing to fully rename Bard to Gemini.

Apple is set to reveal its AI development “later this year”.

An AI-generated image of an Australian state MP raises wider questions on digital ethics.

Hugging Face has launched an open source AI assistant maker called Hugging Chat Assistants.

An interdisciplinary team of researchers has developed a machine learning system to detect mental health crisis messages.

The EU Member States have endorsed the EU’s AI Act (AIA), here’s a useful quick guide from Christopher Götz.

:. .:

Spark Dialogue

.: The humAIn Community is Open!

I am delighted to share with all you Promptcrafters, our online community to explore, connect and learn about AI for education is open!

We have already welcomed our first members today from Australia, Spain and the UK, which is very exciting.

Find out more and grab our early bird membership offer.

.: :.

What’s on my mind?

.: Make it stick

ChatGPT was one of the fastest-growing technology tools we have ever seen. It gained 100 million users within just two months after its launch in November 2022.

But what drove that rapid user base and growth, how does this play out regarding traditional technology adoption theory, and is education immune to these societal shifts? These are some of the questions I have been thinking about this week.

Part of the theory you may have seen is to chunk people into different user groups: early adopters, laggards, and so on. This comes from the work of Everett Rogers, who proposed the diffusion of innovations theory in the 1960s. Innovations or new technologies tend to spread through a population in predictable ways.

If you look beyond people’s labels, there is a much more nuanced aspect of his work, which explores the attributes of the technology or idea itself.

He hypothesised a direct relationship between the characteristics of the innovation and the percentage of people who adopt it over time.

▶︎ Relative advantage

▶︎ Compatibility

▶︎ Observability

▶︎ Complexity

▶︎ Trialability

You might be thinking about how AI will be integrated into your organisation or how school colleagues can use these powerful tools.

Take a moment to consider each of the attributes of what you might be proposing. Let’s look at what this means for something like ChatGPT.

▶︎ Relative advantage – How does prompting a chatbot put me in a better position than where I was? What’s the advantage: time saved, speed, convenience, overcoming idea blocks, performance boost.

▶︎ Compatibility – How well does the chatbot align with the potential adopters’ values, past experiences, and needs? Does it fit into their current workflow, or will it require a drastic change in habits? I think this is not just a question of infrastructure but also a philosophical challenge to identity (see the question in the lead article above).

▶︎ Observability – Can the results of using AI chatbots be seen and appreciated by others? Is there a demonstrable benefit that can be observed and measured? For instance, the effectiveness of ChatGPT can be observed in the quality of text it produces, the time saved, and the increase in productivity.

▶︎ Complexity – Is the technology easy to understand and use, or does it require significant learning effort and time? ChatGPT, for instance, is relatively simple to use; you type in a prompt, and it generates a response. No steep learning curve is involved despite the underlying technology being vastly complex.

▶︎ Trialability – Can your colleagues try the technology easily? Remember, we all have free access to the most powerful AI model via Microsoft’s Copilot (formerly Bing Chat). This trialability reduces the perceived risk of adoption and encourages exploration, but it is also a question of equity and access.

I always use these characteristics when exploring and developing ideas or working on innovation strategies with leadership teams. They serve as a helpful guide to how we approach helping others on their AI Literacy journey.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Back to basics this week as we look into a foundation prompting technique, persona primers: establish the role or persona for the LLM to adopt.

Persona priming was one of the first methods I learned to help improve the outputs I get from LLMs. Below I have included some examples to add before your task description.

Establish the role you want the chatbot to adopt that is appropriate for your task.

Act as an expert music teacher and learning designer.
You are an experienced mentor to secondary teachers.
Act as a highly creative learning designer with a specialism in primary teaching in Singapore.
Act as an adept critical thinking strategist, specialised in developing engaging, subject-aligned scenarios that provoke high school students to sharpen their critical, analytical and evaluative thinking abilities.

Most of the time these short persona primers improve the alignment of the output to your task. But you can also experiment with longer role descriptions.
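If you keep a set of primers like those above to hand, prepending one to a task can be scripted. A minimal sketch – the PERSONAS dictionary and build_messages helper are my own illustration, using the common system/user chat-message structure rather than any specific vendor’s API:

```python
# A small library of reusable persona primers. The keys and the helper
# function are illustrative assumptions, not from a particular tool.
PERSONAS = {
    "music": "Act as an expert music teacher and learning designer.",
    "mentor": "You are an experienced mentor to secondary teachers.",
    "primary_sg": ("Act as a highly creative learning designer with a "
                   "specialism in primary teaching in Singapore."),
}

def build_messages(persona_key: str, task: str) -> list[dict]:
    """Return a chat-style message list with the persona as the system turn."""
    return [
        {"role": "system", "content": PERSONAS[persona_key]},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    "mentor",
    "Help me plan a coaching conversation with a new teacher.",
)
```

Keeping the persona in the system turn means you can reuse the same task text with different primers and compare the completions.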

An extra tip for developing personas or roles in more detail is to start with a quick description, and simply prompt your favourite flavoured chatbot to:

Expand on this role description

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

RESEARCH
.: A Meta Review of AI in Higher Education

A meta-review of 66 evidence syntheses explores the application of Artificial Intelligence in higher education over the past 5 years, highlighting the need for greater emphasis on ethical, collaborative and rigorous AI research.

The review indicates a need for enhanced ethical considerations, including participant consent, data collection procedures, and consideration of data diversity.

Top 5 benefits of using AI in education = personalised learning, greater insight into student understanding, positive influence on learning outcomes, reduced planning and administration time for educators, greater equity in education and precise assessment and feedback.
Top 5 challenges of using AI in education = lack of ethical consideration, curriculum development, infrastructure, lack of teacher technical knowledge and concerns over the shifting of authority (from human to AI).

More context about the research here from Melissa Bond’s announcement.

US SCHOOLS
.: AI Guidance for US Schools

A handful of policy and guidance links from six US states that have published guidance since the beginning of the school year, shared by Pat Yongpradit.

This is helpful to get a sense of how systems are approaching offering guidance to teachers in the US.

AI BIAS
.: Claude 2 and GPT4 are biased and racist

A helpful reminder from Ryan Tannenbaum about the flaws in the models we are using.

By highlighting bias in these models, we can raise awareness and hopefully mitigate its effect.

…the training done to these [large language models] masks the racism rather than removes it. But also in making it more subtle it makes it more subversive. Anything these models output hold up a mirror to ourselves.

Ethics

.: Provocations for Balance

Look around you, how much of your cyber-physical experience is managed by an algorithm?
How can we ensure that AI systems used in education are transparent, explainable, and fair? More attention needs to be paid to algorithmic accountability.
AI chatbots could reduce social isolation, but might they diminish human relationships? More research into effects on student wellbeing is warranted.
Research shows benefits of personalisation, but could this lead students down narrow paths? We must consider the risks of using AI to overly tailor educational journeys.

Inspired by some of the Meta Review and this week’s news and developments.

:. .:

.: :.



.: Tom Barrett

.: Promptcraft 42 .: ​Google showcases new edu AI tools

Join 80 educators on the waitlist for my new learning community about AI for education.

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • How explicit deepfake images of Taylor Swift have sparked calls for new laws;
  • Google showcases new edu AI tools to help teachers save time;
  • Nightshade – like putting hot sauce in your lunch so it doesn’t get stolen.

Let’s get started!

~ Tom Barrett


DEEPFAKE

.: Explicit Deepfake Images of Taylor Swift Spark Calls for New Laws

Summary ➜ Explicit deepfake images of singer Taylor Swift were widely shared online, viewed millions of times. This has led US lawmakers to call for new legislation criminalising deepfake creation. Currently no federal laws exist against deepfakes in the US. The BBC notes the UK recently banned deepfake porn in its Online Safety Act.

Why this matters for education ➜ It has been suggested that this story brings to light the rapid advancements in deepfake technology, which is being used to target women specifically. However, it is important to note that the images were not made with specialist deepfake tools, but with general-purpose AI image generators from companies such as Microsoft and Midjourney. In some cases, these tools are even freely available.

Over 99% of deepfake pornography depicts women without consent, and there has been a 550% rise in the creation of doctored images since 2019. It’s a reminder that students need guidance on how to evaluate sources and credibility online. Media literacy skills and critical thinking are the shared territory of AI Literacy, and we need to help young people so they can identify manipulated or synthetic media. Discussing these topics provides an opportunity to reflect on ethical issues like consent and privacy in the digital age. We must equip the next generation to navigate an information landscape where technological advances have outpaced regulation.


US ELECTION

.: Fake Biden Robocall Creator Suspended from AI Voice Startup ElevenLabs

Summary ➜ An audio deepfake impersonating President Biden was used to disseminate false information telling New Hampshire voters not to participate in the state’s primary election. The call wrongly claimed citizens’ votes would not make a difference in the primary, in an apparent attempt to suppress voter turnout. ElevenLabs, the AI voice generation startup whose technology was likely used to create the fake Biden audio, has now suspended the account responsible after being alerted to the disinformation campaign.

Why this matters for education ➜ In the past few weeks, I have been sharing various articles and links that discuss the threat posed by deepfake technology to democratic processes across the world. Unfortunately, this issue is not isolated and needs to be considered in the larger context of the spread of non-consensual synthetic explicit media featuring celebrities and other individuals. It is crucial for educators to take note of this trend. Additionally, it is worth noting that AI is increasingly generating articles on the internet. This raises the question of how we can develop new guidelines for young learners to navigate this new landscape.


GOOGLE AI

.: Google showcases new edu AI tools to help teachers save time and support students

Summary ➜ At the BETT edtech conference in London, Google showcased over 30 upcoming tools for educators in Classroom, Meet, Chromebooks and more. Key highlights include new AI features like Duet in Docs to aid lesson planning, interactive video activities and practice sets in Classroom, data insights for teachers, accessibility upgrades, and strengthened security controls.

Why this matters for education ➜ As I mentioned in previous issues, it’s important to keep an eye on Google’s advancements in AI because of their huge user base. This is a significant update for AI in education, notable because education has not been a primary focus in Google’s previous tool integrations with Bard and others. Google has been very active in AI this past week, and it will be interesting to see how their momentum builds going forward. Additionally, based on user evaluations rather than academic benchmarks, the performance of Google’s AI tool Bard and the Gemini Pro model has improved significantly. As of now, Bard is ranked second on the LMSYS Chatbot Arena Leaderboard, just behind GPT-4 Turbo.

.: Other News In Brief

Nightshade, the tool that ‘poisons’ data, gives artists a fighting chance against AI

Chrome OS has been updated with a few experimental AI features.

Speaking of web browsers, my preferred choice is Arc, and they just shipped a connection to Perplexity AI as a default search tool.

Google’s Lumiere brings AI video closer to real than unreal

OpenAI has released a new ChatGPT mention feature in beta, which allows a user to connect different GPTs or bots in a single chat.

This feature is on for me, so once I have had a play I will share more with you in the next Promptcraft. TB

Google and Hugging Face have established a partnership to offer affordable supercomputing access for open models.

:. .:

.: Join the community waitlist

On 5 February, we’re opening up the humAIn community – a space for forward-thinking educators to connect and learn together as we navigate the age of AI.

By joining, you’ll:

  • Build connections with like-minded peers
  • Attend exclusive webinars and virtual events
  • Join lively discussions on AI’s emerging role in education
  • Access member-only resources and Q&A forums

It’s a chance to be part of something meaningful – a space to share ideas, find inspiration, and focus on our shared humanity.

Get your name on the waitlist for information, so you don’t miss out on early bird subscriptions.

.: :.

What’s on my mind?

.: Unreal Engine

Last week, while sifting through the latest in media and AI developments, a term caught my attention and refused to let go: the ‘liar’s dividend.’ It’s a concept that feels almost dystopian yet undeniably real in our current digital landscape.

This term refers to a disturbing new trend: the growing ease with which genuine information can be dismissed as fake, thanks to the ever-looming shadow of AI and digital manipulation.

‘Liar’s dividend’ was coined by the legal scholars Bobby Chesney and Danielle Citron; I discovered the term via Casey Newton on the Hard Fork podcast:

because there is so much falseness in the world, it becomes easier for politicians or other bad guys to stand up and say, hey, that’s just another deepfake.

Where AI and digital tools are adept at crafting convincing falsehoods, even the truth can be casually brushed aside as fabrication. It’s a modern twist on gaslighting, but on a global scale, where collective sanity is at stake.

This concept hit home for me this week amidst the flurry of stories about deepfakes, robocalls and synthetic media.

It’s like watching the web transform into a murky pool of half-truths and potential lies. This shift isn’t just about technology; it’s a fundamental change in how we perceive and interact with information and each other.

I can’t ignore the profound challenge this presents. Big tech promotes AI tools as miraculous timesavers, but they also enable new forms of deception. What first seemed a distant threat now feels palpably close as the risks become a reality. The trade-off has become unsettlingly clear – these tools streamline our lives and distort our reality.

Not long ago, many viewed the risks of AI as distant, almost theoretical concerns. As I see it, the real threat isn’t in the AI itself but in how it erodes our trust in what we see and hear. As AI tools become more sophisticated, the task of discerning truth in the media becomes daunting.

This draws my attention to the shared territory between media literacy, critical thinking and AI literacy efforts. For years, schools have emphasised the importance of the ‘big Cs’ – critical thinking, creativity, curiosity, etc. But now, we must urgently enact and evolve these concepts. Students require a new kind of literacy, a blend of traditional critical thinking with a nuanced understanding of AI and digital manipulation.

Truth has become a fluid concept, shaped by algorithms and artificial voices; how do we prepare students to think critically and exercise discernment in an era of manipulated realities?

They need more than knowledge; they need a toolkit for learning and discerning and the ability to navigate a reality where AI blurs the lines between fact and fiction.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

This week I want you to focus on exploring a structured template for your promptcraft. Last year I shared CREATE as a handy acronym for the elements of good prompting.

Let’s take a look at another helpful framework, CO-STAR, from Sheila Teo and GovTech Singapore’s Data Science & AI team, winners of a recent Singapore prompt engineering competition.

Context :.

Provide specific background information to aid the LLM’s understanding of the scenario, while ensuring data privacy is respected.

Objective :.

Concisely state the specific goal or purpose of the task to provide clear direction to the LLM.

Style :.

Indicate the preferred linguistic register, diction, syntax, or other stylistic choices to guide the LLM’s responses.

Tone :.

Set the desired emotional tone using descriptive words to shape the sentiment and attitude conveyed by the LLM.

Audience :.

Outline relevant attributes of the target audience, such as background knowledge or perspectives, to adapt the LLM’s language appropriately.

Response :.

Specify the expected output format, such as text, a table, formatted with Markdown, or another structured response, to direct the LLM.

Context: The students are 10-11 years old and have a basic understanding of food production and transportation. The project aims to teach about the environmental impacts of imported foods. Privacy should be respected.
Objective: Generate a draft planning outline for a 4-week unit on food miles including learning objectives, activities, and resources. Focus on Science and Tech concepts.
Style: Use clear headings and bullet points. Write in an educational style suitable for teachers.
Tone: The tone should be factual and enthusiastic about student learning.
Audience: The materials are for a Year 5 teacher familiar with the national curriculum.
Response: Return the draft outline formatted in Markdown. Include main headings, sub-headings, and bullet points.
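The six sections above can also be assembled programmatically, which makes it easy to swap one section (say, the audience) while holding the rest constant. A minimal sketch – the co_star helper and the `#` section headings are my own choices, not part of the official framework:

```python
# Assemble the six CO-STAR sections into a single prompt string.
# The helper and heading style are illustrative assumptions.
def co_star(context: str, objective: str, style: str,
            tone: str, audience: str, response: str) -> str:
    sections = {
        "Context": context,
        "Objective": objective,
        "Style": style,
        "Tone": tone,
        "Audience": audience,
        "Response": response,
    }
    return "\n\n".join(f"# {name}\n{text}" for name, text in sections.items())

# Restating the food-miles example from above.
prompt = co_star(
    context=("The students are 10-11 years old and have a basic understanding of "
             "food production and transportation. The project aims to teach about "
             "the environmental impacts of imported foods."),
    objective=("Generate a draft planning outline for a 4-week unit on food miles "
               "including learning objectives, activities, and resources. Focus on "
               "Science and Tech concepts."),
    style=("Use clear headings and bullet points. Write in an educational style "
           "suitable for teachers."),
    tone="The tone should be factual and enthusiastic about student learning.",
    audience="The materials are for a Year 5 teacher familiar with the national curriculum.",
    response=("Return the draft outline formatted in Markdown. Include main headings, "
              "sub-headings, and bullet points."),
)
print(prompt)
```

Because each section is a named argument, comparing completions across models only requires changing one value at a time.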

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

ENERGY
.: Rethinking Concerns About AI’s Energy Use | Center for Data Innovation

many of the early claims about the consumption of energy by AI have proven to be inflated and misleading. This report provides an overview of the debate, including some of the early missteps and how they have already shaped the policy conversation, and sets the record straight about AI’s energy footprint and how it will likely evolve in the coming years.

ESAFETY
.: Deepfake trends and challenges — position statement

The Australian eSafety Commissioner published guidance on the potential risks and challenges posed by deepfake technology.

Their position statement is a helpful introduction, including background details about deepfake technology, recent coverage (though not fully up to date), eSafety’s approach, and advice for dealing with deepfakes.

DIGITAL DECEPTION
.: Deepfakes: How to empower youth to fight the threat of misinformation and disinformation

An extensive exploration of this issue from Nadia Naffi including some highlights from her research into how to counter the proliferation of deepfakes and mitigate the impact:

Youth need to be encouraged in active, yet safe, well-informed and strategic, participation in the fight against malicious deepfakes in digital spaces.

She also offers these helpful guiding strategies, tactics and concrete actions

  • teaching the detrimental effects of disinformation on society;
  • providing spaces for youth to reflect on and challenge societal norms, inform them about social media policies and outlining permissible and prohibited content;
  • training students in recognizing deepfakes through exposure to the technology behind them;
  • encouraging involvement in meaningful causes while staying alert to disinformation and guiding youth in respectfully and productively countering disinformation.

Ethics

.: Provocations for Balance

  1. How are you increasing your understanding of deepfake technology to effectively educate students about its risks?
  2. What methods have you seen which integrate deepfake recognition into your media literacy curriculum?
  3. How do you facilitate classroom discussions about the ethical implications and societal impacts of deepfakes?
  4. What strategies are you teaching students to identify and respond to deepfake disinformation, especially online?
  5. What measures does your school or system have in place to address incidents involving deepfakes targeting students or staff?

Inspired by all the deepfake news.

:. .:

.: :.



.: Tom Barrett