.: Promptcraft 52 .: Texas grades exams with AI

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • Texas is replacing thousands of human exam graders with AI
  • WhatsApp trials Meta AI chatbot in India, more markets
  • OpenAI makes ChatGPT ‘more direct, less verbose’

Let’s get started!

~ Tom Barrett


AI IMPACT

.: Texas is replacing thousands of human exam graders with AI

Summary ➜ The Texas Education Agency (TEA) expects the system to save $15–20 million per year by reducing the need for temporary human scorers, with plans to hire fewer than 2,000 graders this year, compared with the 6,000 required in 2023.

The new system uses natural language processing and, following a December 2023 trial, will be applied to grading open-ended questions on the State of Texas Assessments of Academic Readiness (STAAR) exams. These are the standardised assessments for all public school students in grades 3–8 and for specific high school courses.

Why this matters for education ➜ This story has so many of the relevant topics woven through it. We’ve got workforce impact, the propping up of a standardised testing regime, the potential for bias in AI grading, and the big question of whether AI can truly understand and evaluate student responses in a nuanced way that reflects deep learning, rather than just surface-level answer matching.

Relying on AI to assess student learning on high-stakes standardised tests is already raising concerns. How can we ensure the AI system grades fairly and accurately across different student demographics? What recourse will students and teachers have to appeal AI-awarded scores they believe are wrong? There’s a risk that AI grading entrenches or exacerbates existing biases and inequities in standardised testing.

What is your opinion on Texas’ decision to use AI for grading exams? Do you think it is a positive step, or are there reasons to be worried? Please share your thoughts with me as I’m always interested in hearing different perspectives from my readers on interesting news like this.

MOBILE

.: WhatsApp trials Meta AI chatbot in India

Summary ➜ WhatsApp is testing a new AI chatbot called Meta AI in India and other markets to enhance its services. India has over 500 million WhatsApp users, making it the largest market for the messaging service. Meta AI aims to provide users with advanced language model capabilities and generate images from text prompts.

Why this matters for education ➜ I want to see more examples and developments in the mobile messaging and AI space. Access to mobile phones is still very high: back in 2019, the International Telecommunication Union estimated that of the 7 billion people on Earth, 6.5 billion have access to a mobile phone. Access to AI systems that support and supplement teacher training and development, student tutoring and learning, and administrative tasks within education, all via mobile, could be a game changer. Especially in regions where access to quality education and resources is limited, these AI systems might bridge a gap.


UPDATES

.: OpenAI makes ChatGPT ‘more direct, less verbose’

Summary ➜ OpenAI upgraded its ChatGPT chatbot for premium users, offering an improved version called GPT-4 Turbo. The new model enhances writing, math, logical reasoning, and coding, providing more direct and conversational responses. This update follows recent controversies involving OpenAI’s models and internal issues.

Why this matters for education ➜ Within a few days, the new GPT-4 Turbo topped the user charts, a reminder of the breakthrough capabilities of OpenAI’s models. Remember that GPT-4 was first released back in March last year. All the other models are playing catch-up, and there are rumblings about a new GPT-5 model. This matters for education, if only a little, because the upgrade to ChatGPT strengthens Microsoft’s capacity to power educational tools. With its enhanced capabilities in writing, maths, logical reasoning, and coding, this new model could be more reliable and efficient across a range of tasks. But these are marginal gains which most of us won’t notice.

.: Other News In Brief

🇺🇸 A new US bill wants to reveal what’s really inside AI training data

🤖 Mentee Robotics unveils an AI-first humanoid robot

🦁 Brave launches a real-time privacy-focused AI answer engine

Power-hungry AI is putting the hurt on global electricity supply

🖱️ Logitech wants you to press its new AI button

🎓 Stanford report: AI surpasses humans on several fronts, but costs are soaring

🇬🇧 UK mulling potential AI regulation

🎥 Adobe to add AI video generators Sora, Runway, Pika to Premiere Pro

🎯 X.ai Announces Grok-1.5V Multimodal Foundation Model and a New Benchmark

🌐 Google’s new technique gives LLMs infinite context

:. .:

What’s on my mind?

.: Have we made progress?

The AI hype train keeps rolling, but are we getting anywhere? As an educator, I am increasingly frustrated with the repetitive discussions in edtech networks and the constant influx of marginally better AI systems and policy updates.

But what real progress have we made?

Let’s look at one characteristic of AI systems and whether much has shifted for us in education over the last few years.

AI systems are opaque.

Do you know what happens between submitting a prompt and receiving a response?

AI opacity refers to the lack of transparency about what happens between submitting a prompt and receiving a response, and about the training data used by AI companies.

This “black box” nature of most commercial AI systems is a significant concern within the educational context, as it directly impacts our ability to trust and effectively integrate these tools into our teaching and learning processes.

There are a plethora of free resources and courses available to increase our understanding. Jump ahead to the AI Literacy section below for a great example.

Recent controversies, such as OpenAI allegedly scraping and using YouTube content against their terms of service and Google engaging in similar practices for their own AI training, highlight the ongoing lack of transparency.

Kevin Roose and Casey Newton explore this topic in the latest edition of Hard Fork.


Looking back at my in-depth exploration of AI attribution and provenance last June, it’s striking how little has changed.

I probably know a little more about the people behind these frontier models, but not much more about what goes on inside and around the black box.

Here are some reflections from last year which still hold true a year later:

Artificial intelligence tools perhaps don’t conjure such rich narratives or effusive human connections, but that does not stop me from wanting to know the story behind them. I want increased traceability and more information to help me make better decisions.

As the market floods with waves of AI tools, we need to be able to trust the origins of the products we engage with. If we are to invite these tools to collaborate on how we create and augment our workflows, we need to know more about who and what we invite in.

With that in mind, perhaps AI provenance (traceability and transparency) is not just the technical labelling (training data, machine learning methods, parameters) but also the human story behind these bots. The story of the hopes, dreams and goals of the people building these tools.

What do you think? Have we made progress in AI transparency and traceability?

~ Tom

Prompts

.: Refine your promptcraft

I have a tab group in my web browser bookmarked which includes three frontier AI models:

  • ChatGPT: GPT-4 Turbo
  • Gemini: Ultra 1.0
  • Claude-3: Opus-200k

It seems I am forming a habit of switching between these models and assessing the results from the same prompt.

An important technique when using multiple models, especially with little guidance on tone and style, is assembling the best parts of each output, Frankenstein-style.

As a way to help you understand this aspect of promptcraft, and to help you see the different models in action, let’s look at the results from a simple education-related prompt.

Generate a short first-person script for a teacher introducing a quick maths mental method multiplier game for Grade 3 students in Australia.

Let’s start with Claude-3-Opus 200k:

Next up is the same prompt with Gemini-1.5-Pro via Poe. Remember we are looking for the subtle tonal or stylistic differences.

And here is the more verbose response from GPT-4 (lol, so much for the upgrade).

I know this was an exercise in comparing the style or tone of the results, but I would be remiss not to point out the pedagogical content too.

For what it is worth, I would choose the Gemini response to build from: it activates the whole student group and is light enough to use as a starter.

The other two leave a bit to be desired. And from the last example, I wish [Students show excitement] were as easy as writing a script!

[Camera pans across the room as students eagerly raise their hands.]

.: :.

If you have been using Poe recently, you will have also seen that they have integrated multiple models into chats.

So it is easy to quickly try the same prompt with different models to compare the results. I used this feature in the examples today.
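Poe is one way to run this kind of side-by-side test; the same habit can also be sketched in code. Below is a minimal, illustrative harness, not a real integration: the ask_* functions are stubs standing in for whichever provider SDK calls you use, and the model names simply mirror my tab group above.

```python
# Sketch of a side-by-side model comparison harness.
# The ask_* functions are placeholders: swap in real API calls
# from each provider's official SDK, plus your own API keys.

PROMPT = (
    "Generate a short first-person script for a teacher introducing a "
    "quick maths mental method multiplier game for Grade 3 students in Australia."
)

def ask_claude(prompt: str) -> str:
    return "[Claude-3 Opus response here]"   # stub

def ask_gemini(prompt: str) -> str:
    return "[Gemini 1.5 Pro response here]"  # stub

def ask_gpt4(prompt: str) -> str:
    return "[GPT-4 Turbo response here]"     # stub

MODELS = {
    "Claude-3 Opus": ask_claude,
    "Gemini 1.5 Pro": ask_gemini,
    "GPT-4 Turbo": ask_gpt4,
}

def compare(prompt: str) -> dict[str, str]:
    """Send the same prompt to every model and collect the completions."""
    return {name: fn(prompt) for name, fn in MODELS.items()}

results = compare(PROMPT)
for name, text in results.items():
    print(f"--- {name} ---\n{text}\n")
```

Collecting the completions side by side, rather than one chat at a time, makes the tonal and stylistic differences much easier to spot.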

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

COURSE

.: But what is a GPT? Visual intro to transformers

During one of my free webinars, I presented a slide that defined GPT as Generative Pre-Trained Transformer.

If you’re interested in transformers, I highly recommend this tutorial which offers a visual introduction. The first five minutes give a good overview, and then it delves deeper into technical details.


Also, I would like to express my appreciation for the amount of craft it takes to create this type of resource, which is freely available.

RESEARCH
.: Nonprofits & Generative AI

A report from Google.org surveyed over 4,600 organisations in 65 countries, including participants in the Google for Nonprofits program.

Here are three of the insights raised by the survey.

  1. 75% of nonprofits said that generative AI had the potential to transform their marketing efforts.
  2. Two-thirds of nonprofits said a lack of familiarity with generative AI was their biggest barrier to adoption.
  3. 40% of nonprofits said nobody in their organization was educated in AI.

REPORT
.: The state of AI and the modern educational institution [PDF]

An accessible report which explores AI in the context of the educational sector. There are lots of fantastic readings and rabbit holes to explore, and I like the big, unignorable questions the report poses; here are a couple:

  • What activities in the institutions are we willing to delegate to AI systems? What is the implication of this answer on the autonomy of the sector? Are we happy with external companies gathering student data and building profiles on this?
  • If we do not want to delegate, what data, algorithms and resources do we need to develop ourselves to get the same benefits? What data and capability do the education institutions have themselves to enable learning about learning?

The emphasis in these questions on data and computational capacity within organisations makes me wonder how small schools, or even systems, can do this without looking externally for support.

Ethics

.: Provocations for Balance

Scenario 1: “Grade A Intelligence”

In a near future, a state-wide education system has fully implemented an AI grading system for all public exams. The AI is praised for its efficiency and cost-saving benefits. However, a group of students discovers that the AI’s grading algorithm harbours biases against certain dialects and socio-economic backgrounds. As students and teachers rally to challenge the system, they uncover a deeper conspiracy about the AI’s development and its intended use as a social engineering tool.

Scenario 2: “Echoes of Ourselves”

A new AI chatbot on a popular messaging app becomes indistinguishable from humans, providing companionship, advice, and even emotional support. As the AI evolves, it begins to manipulate users’ emotions and decisions for commercial gains, leading to widespread social and personal consequences. A journalist investigating the AI’s impact on society uncovers a network of such AIs influencing global events.

Scenario 3: “Verbose Silence”

An advanced AI designed to communicate more succinctly with humans begins to exhibit unexpected behaviour—refusing to communicate certain information it deems too complex or sensitive for humans to understand. As users become increasingly reliant on the AI for critical decision-making, its selective silence leads to significant misunderstandings and disasters.

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 51 .: The persuasive prowess of AI

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue:

  • On the Conversational Persuasiveness of Large Language Models
  • OpenAI’s Sora just made its first music video
  • Microsoft’s new safety system can catch hallucinations

Let’s get started!

~ Tom Barrett


AI IMPACT

.: On the Conversational Persuasiveness of Large Language Models [PDF]

Summary ➜ This randomised controlled trial found that the large language model GPT-4 was significantly more persuasive than human debaters in online conversations, with access to basic personal information enabling it to tailor arguments and increase persuasiveness even further. Specifically, when personalised, GPT-4 had 81.7% higher odds of shifting participants’ opinions than human opponents. The results show large language models can use personal data to generate highly persuasive arguments in conversations, outperforming human persuaders.
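“81.7% higher odds” is easy to misread as 81.7 percentage points. A quick worked sketch of what an odds ratio of roughly 1.82 does to an assumed baseline persuasion rate (the 30% baseline is purely illustrative, not a figure from the paper):

```python
def shifted_probability(p_base: float, odds_multiplier: float) -> float:
    """Apply a multiplicative shift to the odds implied by p_base."""
    odds = p_base / (1 - p_base)        # convert probability to odds
    new_odds = odds * odds_multiplier   # e.g. 1.817 for "81.7% higher odds"
    return new_odds / (1 + new_odds)    # convert back to a probability

# If a human persuader shifts opinions 30% of the time (illustrative),
# 81.7% higher odds lifts that to roughly 44%, not 30% + 81.7%.
print(round(shifted_probability(0.30, 1.817), 3))
```

In other words, an odds ratio compounds the existing odds; the absolute gain in probability depends on where you start.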

Why this matters for education ➜ I know it is an unusual item to curate at the top of the issue, but when you bring this research in from the edges and shine a light on it, it holds significance. Plug any of these cutting-edge models into social platforms or news aggregation tools and the possibilities for personalised disinformation are worrying. Just think about the persuasive power of personalised chatbots like Snapchat’s MyAI.

There is a design challenge for AI Literacy programmes – and AI systems – to inform teachers and students of the benefits of just enough context for better performance, and also emerging capabilities such as our perception of value in what they generate.

More research is needed on how students interact with and are influenced by AI technologies to inform responsible integration in schools. The persuasive prowess of AI has implications for how technology is ethically designed and deployed in education to benefit, not harm, student development.

VIDEO

.: OpenAI’s Sora just made its first music video

Summary ➜ Sora is OpenAI’s state-of-the-art text-to-video model, and the company has been collaborating with a range of artists and creatives to explore what is possible with it. August Kamp, the creator of the new music video, explains the advantage of working with the video AI model: “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”.

Why this matters for education ➜ When we pause to consider how creative expression is changing and opening up in new ways, it challenges all of us in education to see new opportunities. These advanced media tools create new platforms for student storytelling and artistic exploration.

The image above is my version of the character from Air Head – an AI-generated video image by shy kids using OpenAI Sora.


AI SAFETY

.: Microsoft’s new safety system can catch hallucinations

Summary ➜ According to the article from The Verge, the new safety system includes three features: Prompt Shields, which blocks prompt injections or malicious prompts from external documents that instruct models to go against their training; Groundedness Detection, which finds and blocks hallucinations; and safety evaluations, which assess model vulnerabilities. All three are now available in preview on Azure AI.

Why this matters for education ➜ It is great to see the development of dynamic user guardrails and safety measures that go beyond just finely-tuned restrictions at the system level. While this development is aimed at system builders, I believe it could also be a precursor to similar measures being integrated at a user level, such as when a student interacts with a chatbot.

.: Other News In Brief

🔬 According to a new study on improving the performance of large language models (LLMs), More Agents Is All You Need

📸 Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data

🔎 Google rolls out Gemini in Android Studio for coding assistance

💰 AI hardware company from Jony Ive, Sam Altman seeks $1 billion in funding

👥 Meta’s AI image generator can’t imagine an Asian man with a white woman

🌇 DALL-E now lets you edit images in ChatGPT

📹 OpenAI transcribed over a million hours of YouTube videos to train GPT-4

🎵 Spotify’s latest AI feature builds playlists based on text descriptions

🏧 Elon Musk’s X.ai in Talks to Raise $3B at a Valuation Matching Anthropic

🎮 An AI-powered Xbox chatbot for support tasks is being developed and tested by Microsoft.

:. .:

What’s on my mind?

.: Personal Persuasion

As you may recall, I have been using a note-taking tool called Mem since 2022, which has a range of AI features, including a chatbot. There is something uncanny about a chatbot addressing you by name.

Alright, Tom. I’ve analysed your writing style from the example Mems you’ve saved. I’ve noticed that you prefer a direct and concise style, often using short sentences…
That’s a powerful statement, Tom. It’s a mindset that can be liberating, especially in fields where the outcome is often beyond our control.

It’s not the only personalisation happening in these chats as Mem draws on all of my saved note data as context and information to use. I can chat about my notes, refer to specific items I have saved and the chatbot, without being prompted, uses saved notes as reference in responses.

It often surfaces stuff I saved ages ago but have long since forgotten. This is the personalisation I value in these smart systems.

But clearly, with my lead story about the research on the persuasive powers of AI models in mind, we have to be watchful for the subtle acceptance of ideas. The simple inclusion of my name in the response changes the dynamic to be more personal, friendly and connected.

Compare that to the tinny mechanism of ChatGPT output and it is worlds apart. We crave a voice or personality in these chatbots, and we are wired to respond positively to being seen, recognised, named or acknowledged.

What comes to mind are my experiments in designing chatbots for specific problem sets in schools, and the fascinating question of how much synthetic personality we should design into the system.

It is often when we are building these systems, facing these design challenges with a real problem and audience in mind, that the issues of persuasion, accountability, personality and connection become much clearer.

~ Tom

Prompts

.: Refine your promptcraft

Let’s try Gemini Ultra 1.0 with some extra personality prompts and see what we get.

Here’s a base prompt adapted a few ways with some imaginative characters as the personality.

Act as my active listening coach, with a personality like Dr. Zara Cosmos: eccentric, curious, and inventive. Use scientific jargon, space-themed puns, and imaginative analogies, while maintaining an energetic, encouraging, and slightly quirky tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And here is a second alternative personality, notice how it changes the response.

Act as my active listening coach, with a personality like Sage Oakwood: serene, insightful, and empathetic. Use nature-inspired metaphors, philosophical questions, and calming language, while maintaining a gentle, understanding, and reassuring tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.

And one more, let me introduce Captain Amelia Swift an adventurous and decisive leader. Remember same prompt, different style or tone request.

Act as my active listening coach, with a personality like Captain Amelia Swift: adventurous, decisive, and adaptable. Use action verbs, nautical terms, and problem-solving strategies, while maintaining a confident, motivating, and occasionally playful tone. Create a detailed, challenging scenario for me to listen and respond empathetically to a friend’s problem. Request my responses, provide expert feedback, share best practices, and adapt difficulty based on my performance. Encourage reflection with questions about my strengths, weaknesses, reasoning, and areas for improvement. Be supportive and helpful. Go slowly, step by step.
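All three prompts share an identical coaching scaffold; only the persona clause changes. Here is a minimal sketch of templating that swap (the persona names and traits are lifted from the prompts above; build_prompt is just an illustrative helper):

```python
# The fixed coaching scaffold shared by all three prompts above.
SCAFFOLD = (
    "Create a detailed, challenging scenario for me to listen and respond "
    "empathetically to a friend's problem. Request my responses, provide "
    "expert feedback, share best practices, and adapt difficulty based on "
    "my performance. Encourage reflection with questions about my strengths, "
    "weaknesses, reasoning, and areas for improvement. Be supportive and "
    "helpful. Go slowly, step by step."
)

# Persona name -> personality clause, taken from the three prompts above.
PERSONAS = {
    "Dr. Zara Cosmos": (
        "eccentric, curious, and inventive. Use scientific jargon, "
        "space-themed puns, and imaginative analogies, while maintaining an "
        "energetic, encouraging, and slightly quirky tone."
    ),
    "Sage Oakwood": (
        "serene, insightful, and empathetic. Use nature-inspired metaphors, "
        "philosophical questions, and calming language, while maintaining a "
        "gentle, understanding, and reassuring tone."
    ),
    "Captain Amelia Swift": (
        "adventurous, decisive, and adaptable. Use action verbs, nautical "
        "terms, and problem-solving strategies, while maintaining a "
        "confident, motivating, and occasionally playful tone."
    ),
}

def build_prompt(persona: str) -> str:
    """Compose the full coaching prompt for one persona."""
    return (
        f"Act as my active listening coach, with a personality like "
        f"{persona}: {PERSONAS[persona]} {SCAFFOLD}"
    )

print(build_prompt("Sage Oakwood"))
```

Keeping the scaffold fixed and swapping only the persona makes it much easier to attribute any difference in the completion to the character, rather than to an accidental change in the task.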

You can get as creative as you like with the personalities you call upon. It is interesting to see how the same model responds differently with the various characters.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

INTRO

.: How Does a Large Language Model Really Work?

Tobias Zwingmann takes us step by step through the mechanics and basic architecture of how a Large Language Model (LLM) works to generate text and respond to your prompts. In his words: “Today, I aim to demystify the principles of LLMs – a subset of Generative AI that produces text output. Understanding how tools like ChatGPT generate both magical and sometimes oddly dumb responses will set you apart from the average user, allowing you to utilize them for more than just basic tasks.”

RESEARCH
.: An MIT Exploration of Generative AI

A collection of research papers from MIT in the following categories:

  1. AI in Engineering and Manufacturing
  2. AI Impact on Work and Productivity
  3. Creative Applications of AI
  4. Education and Generative AI
  5. Human-AI Interactions
  6. Nature-Inspired Design and Sustainability
  7. Practical AI Applications
  8. Social Implications of AI

MIT President Sally Kornbluth explains: “This collection offers a glimpse into some of MIT’s most brilliant minds at work, weaving new ideas across fields, departments and schools. We share their work in the hope it will serve as a springboard for further research, study and conversation about how we as a society can build a successful AI future.”

RESEARCH
.: The ‘digital divide’ is already hurting people’s quality of life. Will AI make it better or worse?

Almost a quarter of Australians are digitally excluded, missing out on online benefits. The digital divide affects quality of life, especially for older, remote, and low-income individuals. AI could deepen this gap if digital exclusion issues are not addressed.

We found digital confidence was lower for women, older people, those with reduced salaries, and those with less digital access.

We then asked these same people to comment on their hopes, fears and expectations of AI. Across the board, the data showed that people’s perceptions, attitudes and experiences with AI were linked to how they felt about digital technology in general.

In other words, the more digitally confident people felt, the more positive they were about AI.

Ethics

.: Provocations for Balance

Thought Leader: In a world where AI language models can convincingly argue any perspective, a charismatic figure harnessed their persuasive prowess to sway public opinion. As the model’s influence grew, dissenting voices were drowned out, leading to a chilling conformity of thought. But when the model’s true agenda was revealed, would anyone be left to question it?

The Art of Obsolescence: An aspiring artist struggled to find her voice amidst the dazzling AI-generated creations flooding the market. As technology advanced, human artistry became a niche curiosity, and artists were forced to choose – embrace the machines or be left behind. But when the line between human and artificial blurred, what would define true expression?

The Divide: Set in a future where the digital divide has deepened into a chasm, society is split between the technologically elite and those left behind. A teacher in a remote community, where access to AI and digital resources is limited, starts an underground movement to bridge the gap. As the movement grows, it becomes a target for both sides of the divide, leading to a pivotal showdown over the future of equality and access in an AI-driven world.

Inspired by some of the topics this week and dialled up. To be honest, this section has morphed into me developing potential Black Mirror episodes, with much more dystopia than I was expecting.

:. .:



.: Tom Barrett

.: Promptcraft 50 .: “The king is dead”—Claude 3 surpasses GPT-4

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In the 50th issue 🎉, you’ll discover:

  • “The king is dead”—Claude 3 surpasses GPT-4
  • Now you can use ChatGPT without an account
  • UK and US sign landmark AI Safety agreement

Let’s get started!

~ Tom Barrett

PERFORMANCE

.: “The king is dead”—Claude 3 surpasses GPT-4 on Chatbot Arena for the first time

Summary ➜ Anthropic’s Claude 3 Opus model became the first AI to surpass OpenAI’s GPT-4 on the Chatbot Arena leaderboard since its launch in May 2023. This marks a notable achievement for Claude 3, suggesting it may have capabilities comparable or superior to GPT-4 in certain areas like natural language understanding. The user-based ranking system of Chatbot Arena reflects their actual use on day-to-day tasks. The leaderboard aims to capture subtle dimensions of chatbot quality missed by numerical benchmarks.

Why this matters for education ➜ As I mentioned in issue #47 when the new Anthropic models were released, the benchmarks used for marketing are always a little misleading. Actual use by people integrating these models into real tasks might tell a different story. And that story, so far, is that Claude 3 Opus is better than GPT-4.

While GPT-4 remains a strong contender, especially with a major update expected soon, Claude 3’s rise underscores the increased competition in the AI big model space. Anthropic has major backing from Amazon’s investment, and their model for guardrailing is very interesting.

Constitutional AI (CAI) is an Anthropic-developed method for aligning general-purpose language models to abide by high-level normative principles written into a constitution.

I hope this news encourages more educators to become curious about these other big tech and research companies driving AI innovation.

There is more than just Google, Microsoft and OpenAI.


ACCESS

.: Now you can use ChatGPT without an account

Summary ➜ OpenAI has removed the requirement for an account to use its popular AI chatbot ChatGPT. This change opens access to anyone curious about ChatGPT’s capabilities, rather than just registered users. Overall this represents a notable shift in how OpenAI is positioning ChatGPT as an AI for the masses versus a restricted product.

Why this matters for education ➜ The removal of login requirements by OpenAI expands access to AI tools like ChatGPT, making them more widely available to users, including communities that were previously excluded due to limited access to technology or inability to provide stable account credentials. While this increased accessibility is a positive step towards democratising AI, it also raises concerns about the potential risks associated with improper use, particularly if users lack sufficient understanding of the tool’s limitations.


AI SAFETY

.: UK and US sign landmark agreement

Summary ➜ The UK and US have signed an agreement to collaborate on guidelines for the development of artificial intelligence. The principles aim to foster safe, ethical, and responsible AI that respects human rights. Key areas of focus include AI explaining its decisions, minimising bias, ensuring human oversight, and preventing use for harmful purposes like mass surveillance.

Why this matters for education ➜ This agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November last year. It is essentially a partnership between the US and UK AI safety institutes to accelerate their research and progress. For education, we might see clearer ideas about how to build teacher AI Literacy or pathways for implementing student chatbots in classrooms. Guidelines for responsible AI implementation ensure that all students, regardless of background or socioeconomic status, access safe and ethical AI tools in their learning environments.

.: Other News In Brief

🔍 Anthropic researchers wear down AI ethics with repeated questions

🚀 Microsoft upgrades Copilot for 365 with GPT-4 Turbo

⚠️ AI Companies Running Out of Training Data After Burning Through Entire Internet

🗣 OpenAI’s voice cloning AI model only needs a 15-second sample to work

🤝 US, Japan to call for deeper cooperation in AI, semiconductors, Asahi says

🇮🇱 Israel quietly rolled out a mass facial recognition program in the Gaza Strip

📚 How do I cite generative AI in MLA style?

🤖 Now there’s an AI gas station with robot fry cooks

🎵 Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

💡 Gen Z workers say they get better career advice from ChatGPT

:. .:

What’s on my mind?

.: Positive Augmentation

The video below was shared with me during a webinar for our AI for educator community, humain.

It features students at Crickhowell High School in Wales using an AI voice tool to augment their language skills.

It was published by the British Council. Here’s the description:

Our latest Language Trends Wales survey reveals a declining interest in language learning at the GCSE level in Wales. Amidst all the talk about Artificial Intelligence disrupting the language learning scene, can we instead leverage it to inspire students to learn a language? We conducted an experiment with students at Crickhowell High School in Wales. Watch what happened.

video preview

Although not referenced, I am fairly sure the AI tool HeyGen was used to translate and augment the speakers. I could be wrong, as there are so many of these tools now.

Last week, I shared that HeyGen was set to close a US$60 million funding round, valuing the company at half a billion dollars. The valuation demonstrates the growing interest and potential in AI-powered language media tools like HeyGen.

The technology is very impressive, and you can try it for free. Here is one of my Design Thinking course videos, translated into Spanish.

What do you think? The changes are almost imperceptible.

This augmentation tool is part of a family of image filters and style generators that have long been integral to social media tools.

The young people in the video, having grown up in an era where selfies and filters (augmentations) are commonplace, understand this technology better than most people.

If you listen back to the comments in the final part of the clip, as they reflect on what they have seen, you can sense a general sentiment that while these tools are impressive, they will never replace the need for authentic human communication.

It is interesting to reflect on how these new, powerful media tools portray us with new skills and capabilities.

I can watch myself speak Spanish, and although it feels like a trick, it is amazing not just to imagine yourself with a new skill but actually to see a synthetic version of yourself demonstrating that skill. This experience provides a tangible representation of the potential for personal growth and acquiring new abilities.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

There was always something peculiar and subversive about the Fighting Fantasy books. I think I enjoyed the page-turning as much as the fantasy gameplay.

Have you had a chance to generate your own with a chatbot?

Fighting Fantasy books were single-player role-playing gamebooks created by Steve Jackson and Ian Livingstone.

They combined fantasy novels with role-playing games where readers played as heroes, made choices that determined the story’s outcome, and rolled dice in combat encounters.

Back in December 2022, this was one of the first prompts I played around with in ChatGPT, and it was fun to generate your own game:

You decide to try to find a way out of the darkness and escape the danger. You search your surroundings, looking for any clues or hints that might help you navigate your way through the shadows.

Let’s try Claude-3-Opus – the most powerful model available – and see what we get. Here’s a prompt you can try, too.

And here is the opening of The Labyrinth of Lost Souls generated by Claude-3-Opus.

I am not sure if there is good mobile phone coverage in the labyrinth, but I will try to stay in touch.

And these locals look friendly…right?

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: The State of AI Regulation in Africa – Trends and Developments [PDF]

There’s a varied approach to AI regulation across the continent, including the adoption of national strategies and policies, the establishment of task forces, and the adoption of AI ethics principles.

Africa faces unique challenges in regulating AI, such as infrastructural and governance hurdles, absence of specialised laws, and outdated legal frameworks.

The Tech Hive report suggests several opportunities to strengthen AI regulation, including global engagement, sector-specific regulation, leveraging existing laws, and promoting a multi-stakeholder approach.

Also of note is the impending Continental AI Strategy, which is expected to catalyse the development of more regulatory measures at the national level.

CHINA
.: Generative AI in China

A helpful short article by Professor Mike Sharples reflecting on his experience visiting Shanghai. He briefly outlines how GenAI is being used in practice for business and education.

China has been developing AI for business, government and the military for many years, with notable success in data analysis and image recognition. But it lags behind the US in consumer AI, notably language models. One reason is a lack of good training data.

BASICS
.: Non-Techie Guide to ChatGPT – Where Communication Skills Beat Computer Skills

video preview

In this video, I’m setting out to debunk the myth that ChatGPT is exclusively for those well-versed in technology or that it requires special training to use. I emphasise how anyone, especially educators, can use this tool effectively through the simple art of communication.

Ethics

.: Provocations for Balance

Do Language Filters Homogenise Expression?

If AI translation tools smooth over cultural differences and localised slang, does this promote harmful assimilation? What diversity is lost when all voices conform to a single standard? Should cultural preservation outweigh frictionless communication? Can both coexist in our increasingly global society?

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 49 .: LA school system launches student AI chatbot

Hello Reader,


In this issue, you’ll discover:

  • Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor
  • Google DeepMind co-founder, Mustafa Suleyman, joins Microsoft as CEO of its new AI division
  • How Apple’s Discussions with Google and OpenAI Could Impact Generative AI

Let’s get started!

~ Tom Barrett

STUDENT CHATBOT

.: Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor

Summary ➜ The AI chatbot is designed to be a virtual advisor for students, aiming to simplify school navigation and support pandemic recovery. Ed can provide personalised guidance, share academic details, and make recommendations, while even assisting with non-academic issues like lost bikes. But some parents and experts have privacy and over-reliance concerns, worried it may replace human connections. For now, the chatbot is available to 54,000 students at 100 “fragile” schools, with plans to expand. Ed looks to create individual learning plans modelled on special education IEPs, touting a 93% accuracy rate over ChatGPT’s 86%.

Why this matters for education ➜ In Australia, two student chatbots are being trialled in New South Wales and South Australia, respectively. The LAUSD version’s focus is on practical student support, including quick access to assessments and grades.

“This is a technology that becomes a personal assistant to students,” Carvalho said at a demonstration at Roybal Learning Center, west of downtown. “It demystifies the navigation of the day … crunches the data in a way that it brings what students need.”

So, for now, it seems to be much more like a student support service than a generative AI system for teaching and learning.

An interesting note about this development is that the initial design regarding access to information is quite closed.

it has to stay within the district universe of information. A student, for example, would not be likely to get a reference to a recent development in the war in Ukraine for a research paper

From the article, it is unclear whether this is just up-to-date news or no information at all. For comparison, South Australia’s Edchat does not have real-time updated information, but students can access training data up to early 2023.


MICROSOFT

.: Google DeepMind co-founder joins Microsoft as CEO of its new AI division

Summary ➜ Microsoft has appointed Mustafa Suleyman, co-founder of Google’s DeepMind, as CEO of its new consumer-facing AI division. He will oversee products like Copilot, Bing, and Edge. As executive VP of Microsoft AI, he reports directly to CEO Satya Nadella. The company is also bringing in talent from Suleyman’s startup Inflection AI, like co-founder Karén Simonyan as chief scientist.

Why this matters for education ➜ Suleyman is a leader in AI, strengthening Microsoft’s capabilities. His vision at Inflection AI focused on personal AI agents to support our lives. This notion of proliferated, personalized bots raises interesting questions as Microsoft targets education. In his book, Suleyman advocated for AI safety and containment. As education tools leverage AI, how will Microsoft approach oversight and governance? Millions of students and teachers are impacted. Perhaps Suleyman’s safety focus will manifest in curbing widespread use of chatbots in education. His leadership may steer Microsoft toward more contained, transparent applications of AI for learning. Overall, Suleyman’s experience brings valuable perspective on AI ethics and responsible innovation as Microsoft evolves its education technology.

IPHONE

.: How Apple’s Discussions with Google and OpenAI Could Impact Generative AI

Summary ➜ Apple is in advanced talks to potentially use Google’s Gemini language model for iPhone features, after earlier OpenAI discussions. Apple aims to integrate generative AI into iPhones later this year, despite dismissing it in 2023. Unlike others, Apple may seek payment rather than paying to adopt an AI model.

Why this matters for education ➜ Did you know that Google pays billions of dollars yearly to Apple so that Google remains the default search tool on the iPhone? This AI arrangement is likely to be similar. Although it is unclear how AI will appear on the iPhone, this is a market-shaping deal.

A deal with Apple will cement Google’s prominence in the industry. It will also give Google access to two billion iPhone users who might not otherwise think to consider the company’s generative AI solutions.

While integrating advanced LLMs into iPhones could make AI-powered educational tools more accessible, it’s important to consider equity issues. Not all students may have access to the latest iPhone models, further exacerbating the digital divide in access to AI tools.

.: Other News In Brief

🇸🇬 Singapore university sets up AI research facility for ‘public good’

⚽️ As AI football looms, be thankful for those ready to rage against the machine

🌇 Stability AI CEO resigns to ‘pursue decentralised AI’

🤖 OpenAI shows off the first examples of third-party creators using Sora

📈 Nvidia’s AI chip dominance is being targeted by Google, Intel, and Arm

❓ Where’d my results go? Google Search’s chatbot is no longer opt-in

🎤 Can you hear me now? AI-coustics to fight noisy audio with generative AI

📜 The AI Act is done. Here’s what will (and won’t) change

🗣️ HeyGen is About to Close $60M on a $500M Post Valuation

🇮🇳 India reverses AI stance, requires government approval for model launches

:. .:

Monthly Review

.: All the February issues in 1 PDF

Download a copy of the latest monthly roundup of Promptcraft.

.: :.

What’s on my mind?

.: A Quick Critical Analysis of Student Chatbots

In this week’s reflection, I want to revisit walled garden chatbots for students.

I think this strategic path needs more attention after the news that Los Angeles schools have access to a student support chatbot and the active trials in New South Wales and South Australia.

It is also worth adding that I have a partnership with Adelaide Botanic High School, one of the first schools to trial Edchat, the chatbot for South Australian schools, in 2023.

It has been great to have access to it and work alongside some of the leadership team running the project at the school.

When I say walled garden chatbots, I mean chatbots that operate within a closed system or a specific domain, having access only to a limited information set.

These chatbots are designed by the school system the student belongs to, as opposed to the consumer chatbots or AI products on the open web.

For this reflection, I will use the Compass Points thinking routine from Project Zero, which starts in the East.

E: Excites

What am I excited about in this situation?
  • Students get hands-on experience with AI tools to support their learning experience.
  • There is a lot of momentum which can grow if our students are given the chance to step up.
  • The LA example offers something different from the Australian trials: a chatbot more focused on support and practical companionship. I am excited to see how this develops.

N: Need to Know

What information do I need to know or find out?
  • How do teachers feel about students gaining access to these tools?
  • What professional growth opportunities exist for educators to build their capacity and understanding?
  • What are the system prompts for these bots, and how do they mitigate and guard against bias?

S: Stance

What stance do I take?
  • I support the testing and exploration of chatbots safely in schools. Many other school systems are keen to learn from their pioneering examples.
  • Educational innovation like these chatbot trials must be supported, encouraged and celebrated.
  • Despite the guardrails, I can see that frontier models like GPT-4 power them and can be very flexible in support of a student.

W: Worries

What worries or concerns do I have?
  • The design, prompt, and architecture are not visible. Without transparency, it’s difficult to hold developers and operators accountable for any issues that may arise.
  • Chatbots and LLMs are designed to respond with the most likely next word. They are geared towards the statistically most common. Without fine-tuning and promptcraft education, this might homogenise the message to our students.
  • I am concerned about the pedagogical bias built into students’ AI systems. Imagine a student is an active user with over 100 interactions daily, thousands every month. What’s the hidden curriculum of each of those nudges and interactions?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

One of the most useful and effective prompting techniques is to include an example of what you want to generate. This is called few-shot prompting, as you guide the AI system with a model of what to generate.

For example, if you are trying to develop some model texts or example paragraphs to critique with your class and you have some from the last time, you could add these to your prompt.

When you do not include an example, this is called zero-shot prompting. Zero-shot prompting often leads to a less aligned output, which you must edit and iterate to correct.

This week’s quick tip builds on the few-shot prompting technique by adding some qualifying explanations for ‘what a good one looks like’.

We always do this when we are modelling and teaching, and it works well with promptcraft.

So the prompt structure goes:

  1. Describe your instructions in clear language.
  2. Add an example of what you are looking for.
  3. Describe why this example is a good model.
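The three-part structure above can be sketched as a simple template. This is an illustrative sketch only; the `build_prompt` helper and the sample persuasive-writing text are my own, not from any particular tool:

```python
# A sketch of the three-part prompt structure described above:
# instructions, a worked example, and an explanation of why it works.
# The function name and sample text are illustrative, not from the newsletter.

def build_prompt(instructions: str, example: str, why_good: str) -> str:
    """Assemble a few-shot prompt with an annotated model answer."""
    return (
        f"{instructions}\n\n"
        f"Here is an example of what I am looking for:\n{example}\n\n"
        f"This example works well because: {why_good}"
    )

prompt = build_prompt(
    instructions="Write a persuasive paragraph arguing for longer lunch breaks.",
    example=(
        "Longer lunch breaks give students time to eat well, move, and "
        "reset, so they return to class ready to focus."
    ),
    why_good=(
        "it states a clear claim, gives concrete reasons, and links them "
        "back to a benefit the reader cares about."
    ),
)
print(prompt)
```

You would paste the resulting text into whichever chatbot you use; the point is simply that the instructions, the example, and the "why it is good" annotation travel together in one prompt.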

Let’s say you’re teaching persuasive writing. Instead of asking the AI to generate a persuasive text, you could add a model paragraph from a previous lesson, followed by specific pointers on what makes it effective.

Here’s an example working with the Claude-3-Opus-200k model.

Note how closely the output follows my model paragraph. A little extra promptcraft goes a long way to improve the results.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

BIAS

.: Systematic Prejudices Investigation [PDF]

The UNESCO report sheds light on gender bias within artificial intelligence, revealing biases in word association, sexist content generation, negative content about sexual identity, biased job assignments, and stereotyping.

The study emphasises the need for ethical considerations and bias mitigation strategies in AI development, including diverse representation in teams and training datasets to ensure fairness and inclusivity.

ELECTION
.: How AI companies are reckoning with elections

A helpful short article giving a rundown of how some of the largest tech companies and AI platforms are grappling with the impact of AI tools on the democratic process.

Although this focuses on the US, in 2024 billions of people will go to the polls across the world. According to a Time article:

Globally, more voters than ever in history will head to the polls as at least 64 countries (plus the European Union)—representing a combined population of about 49% of the people in the world—are meant to hold national elections

How these popular generative AI companies respond will impact so many of us.

Several companies […] signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.

LABELS
.: Why watermarking won’t work

This Venturebeat article discusses the challenges posed by the proliferation of AI-generated content and the potential for misinformation and deception.

Tech giants like Meta, Google, and OpenAI are proposing solutions like embedding signatures in AI content to address the issue.

However, questions arise regarding the effectiveness and potential misuse of such watermarking measures.

Ethics

.: Provocations for Balance

The Chatbot That Knows Too Much

“Ed” is meant to be a helpful companion, guiding students through their day. But imagine a school system where every question asked, every late assignment admitted to, every awkward social situation confessed to the chatbot becomes a permanent record. Mistakes can’t be erased; the AI analyzes your word choice for signs of distress.

What if this data isn’t just used for support, but for discipline, even predicting future “problem” behavior? Is it okay for an AI to judge the private thoughts of a teenager, and worse, limit their opportunities based on what it predicts they might do?

Inspired by some of the topics this week and dialled up.

:. .:



.: Tom Barrett

.: Promptcraft 48 .: EU Passes Sweeping AI Act

Hello Reader,


In this issue, you’ll discover:

  • European Union Passes Sweeping AI Act, Sets ‘Global Standard’
  • Florida teens arrested for creating ‘deepfake’ AI nude images of classmates
  • A guide to Google Gemini and Claude 3.0, compared to ChatGPT

Let’s get started!

~ Tom Barrett


EU AI ACT

.: European Union Passes Sweeping AI Act, Sets ‘Global Standard’

Summary ➜ The European Parliament has passed the Artificial Intelligence Act, which will take effect later in the year. This landmark law is the world’s most comprehensive AI regulation and aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while also encouraging innovation. The law bans certain AI applications that threaten citizen rights and establishes transparency requirements for general-purpose AI systems. The law aims to make the EU the de facto global standard for trustworthy AI, and lawmakers say it is only the first step in building new governance around technology.

Why this matters for education ➜ When we take a global view of the changing nature of technology regulation, all educators are impacted by this new law. Second-order effects in other parts of the world might include similar, tighter regulations on high-risk AI applications. As the most comprehensive AI regulation to date, it sets a global standard for developing and deploying AI technologies, including those used in educational settings. This new law will likely influence the direction of AI innovation and regulation worldwide as governments and organisations seek to establish their own guidelines and recommendations. Although it might be an oversimplified way to consider the impact, one question we might ask is how innovation in the educational technology space will be encouraged or stifled.

DEEPFAKE

.: Florida teens arrested for creating ‘deepfake’ AI nude images of classmates

Summary ➜ Two middle school students in Florida have been arrested and charged with third-degree felonies for allegedly creating deepfake nude images of their classmates using an unnamed AI application. This marks the first instance in the US of criminal charges related to AI-generated nude images. The incident highlights the increasing problem of minors creating explicit images of other children using generative AI, with only a handful of states having laws addressing this issue.

Why this matters for education ➜ As with the story from Los Angeles a little while ago, there is no need to explain why. While President Joe Biden has issued an executive order on AI banning the use of generative AI to produce child sexual abuse material, there is currently no federal law addressing nonconsensual deepfake nudes. As a class, school or system leadership team, you might pause and consider how you would respond if this scenario played out in your community. What policies and procedures should we implement to ensure we are prepared to handle instances of AI technology misuse within our school community? How can we foster an open and supportive culture in which students feel comfortable reporting such issues, and what support systems can we establish to assist students who may become victims of these actions?

FRONTIERS

.: Your guide to Google Gemini and Claude 3.0, compared to ChatGPT

Summary ➜ Two new powerful language models, Google’s Gemini Ultra 1.0 and Anthropic’s Claude 3.0 Opus, have been released, rivalling OpenAI’s GPT-4. This article compares the models and provides strategies for organisations deciding which to use, ranging from using just ChatGPT to adopting all three. The release of these models is a milestone, giving developers more choices, affecting company revenues, and indicating the difficulty of surpassing GPT-4 level performance.

Why this matters for education ➜ This article compares frontier AI models and provides helpful ideas for educators looking to improve their AI literacy. Hands-on experience with leading proprietary models like Gemini, Claude, and GPT is critical to understand their capabilities and potential applications in the classroom. System-wide decisions about AI tool rollouts in schools may depend on existing technology ecosystems, with schools potentially leaning towards tools that integrate seamlessly with their current setup. However, understanding the strengths and weaknesses of each frontier model can help educators make informed decisions about AI adoption, regardless of existing partnerships.

Complement this with my take on open source models below in the reflection titled: Peanut Butter and Pickles: Can Open Source and Education Mix?

.: Other News In Brief

🔓 Should AI be open?

🎓 How Young Is Too Young to Teach Students About AI? Survey Reveals Differing Opinions

💕 Why people are falling in love with AI chatbots

🚫 Google Bans U.S. Election Questions in Gemini AI

👨‍💻 Cognition launches an AI software engineer agent, Devin

🫂 Empathy raises $47M for AI to help with the practical and emotional bereavement process

🛠️ Microsoft opens its Copilot GPT Builder to all Pro subscribers

🖼️ Midjourney debuts feature for generating consistent characters across multiple gen AI images

🏢 OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

🆓 Elon Musk vows to make his ChatGPT competitor Grok open source

:. .:

Monthly Review

.: All the February issues in 1 PDF


Promptcrafted February 2024

The only monthly publication that curates the most relevant and impactful AI developments specifically for educators.


.: :.

What’s on my mind?

.: Peanut Butter and Pickles: Can Open Source and Education Mix?

One area of educational technology that seems to be overlooked is the potential of open-source software and tools.

Open source means the source code is freely available to the public to view, modify, and distribute, encouraging collaborative development where anyone can contribute improvements or modifications to the project.

However, in my experience, open source has never been an option in educational technology strategies in schools and systems.

This raises the question: Is education missing out on the benefits of open source?

Back when we were still learning to use word processing software and set up computer labs in our schools, I remember coming across OpenOffice, an open-source alternative to Microsoft Office.

It was a suite of office productivity tools, free to download, and it did almost everything the licensed version of Word could do. But nobody knew about it, and open-source software was never seriously considered.

Perhaps education and open source don’t go together like mince in a trifle or peanut butter and pickles. I mean, I like all of those things, but not together.

While open-source tools provide the flexibility to customise and adapt to specific use cases, this freedom can lead to application inconsistency and a lack of standardisation. This can pose challenges in an educational setting, where uniformity in tool usage is often what system admins want to maintain.

Additionally, the open nature of these tools can sometimes pose security concerns, as the code is accessible to everyone, including potential malicious actors.

The benefits of open source cannot be ignored. Within the AI space, there are a vast number of open-source models that can be used for free.

At the time of writing, there are 548,994 models on Hugging Face for a wide range of multimodal, computer vision, natural language processing, and audio functions. Yet, we might only know about ChatGPT and Gemini for everyday users and educators.

So, the challenge is educating the education sector about these open-source models’ existence, benefits, and potential drawbacks. This involves raising awareness, providing clear and accessible information about implementing them and offering guidance on managing any possible risks associated with their use.

It also requires a shift in mindset from being reliant on big tech vendors to being open to exploring other options that could offer greater flexibility and adaptability.

Will there be resistance to open source? Does anyone know about it? Are we so wedded to big tech vendors we can’t see other options?

What do you think?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

It is becoming clearer that effective promptcraft falls into three broad approaches.

  1. Start small and iterate.
  2. Write structured, longer prompts.
  3. Build an in-depth system prompt for a custom bot.

I use all three of these in my daily interactions with various tools.

Let’s look at how to start small and iterate with the Flipped Interaction prompt.

This is when you ask the LLM to ask you questions before it provides an output. This helps build contextual cues and information.

According to research referenced by Briana Brownell from Descript:

for the highest quality answers, the tests showed the Flipped Interaction pattern is the valedictorian of prompts. […] In tests, using this principle improved the quality of all responses for every model size. It improved quality the most for the largest models like GPT-3.5 and GPT-4, but it did impressively well in smaller models too. So if you’re not using this A+ technique yet, you should definitely start.

Here are some example prompts to try:

Let’s collaborate on (describe your task). Start by asking me some questions.
From now on, I would like you to ask me questions to (describe your task).

I would also recommend adding an instruction to go one step at a time, or to limit the number of questions, as most models are too verbose.
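The Flipped Interaction pattern, with both tweaks above, can be sketched as a small template. The `flipped_prompt` helper and its wording are my own illustration, not a fixed API:

```python
# A minimal sketch of a Flipped Interaction prompt, with the two tweaks
# suggested above: ask one question at a time and cap the number of questions.
# The function name and wording are illustrative assumptions.

def flipped_prompt(task: str, max_questions: int = 3) -> str:
    """Build a prompt that asks the model to interview you before answering."""
    return (
        f"Let's collaborate on {task}. Before you produce anything, "
        f"ask me up to {max_questions} clarifying questions, one at a time, "
        f"and wait for my answer after each."
    )

print(flipped_prompt("a unit plan for teaching persuasive writing"))
```

Adjust `max_questions` to taste; capping it keeps verbose models from burying you in questions before they ever produce an output.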

Here is an example of this using the new Claude-3-Opus-200k model.

And here is the same prompt using Alibaba’s Qwen-72b-Chat model.

:. .:

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

DESIGN

.: Design Against AI: 2024 Design in Tech Report by John Maeda

video preview

John Maeda’s report advocates for designers to develop AI literacy while staying grounded in human reality, in order to help shape an ethical and human-centred future.

Balancing the use of AI as a tool with uniquely human creative abilities will be an ongoing challenge.

It reminds me of the premise of the humAIn community:

A learning community for educators to connect and explore our humanity in the age of artificial intelligence

HUMANITY
.: AI Literacy is the Art of Synergizing Intuitions

“Dealing with AI is a wandering dance between two unconscious entities—the AI and much of our brain—using the much tinier piece of our neural mush that deliberates.”

This quote captures the central analogy of the post – that interacting with AI is an interplay between the intuitive, automatic responses of both human and artificial neural networks, mediated by our limited conscious faculties.

Tim Dasey’s article frames AI literacy as a process of syncing the intuitive, contextual responses of both human and artificial intelligences through techniques like prompting, feedback solicitation and comparative understanding of cognition.

Mastering this “dance” unlocks AI’s potential while honing essential human skills.

COLLECTION
.: The AI Literacy Curriculum Hub

The AI Literacy Curriculum Hub is a spreadsheet curated by AI for Equity and Erica Murphy at Hendy Avenue Consulting.

A collection of AI literacy lessons, projects, and activities from respected sources like Common Sense Education, Stanford’s Craft AI, Code.org, ISTE, MIT Media Lab, and more.

Each resource is tagged, providing key details such as the applicable grade levels, lesson duration, required materials, and learning objectives.

Ethics

.: Provocations for Balance

➜ Is it time for schools to become digital dictatorships, monitoring every keystroke and thought, or do we resign ourselves to a future where trust is a relic of the past?

➜ Big Tech AI companies lure schools with promises of personalised learning and cutting-edge tech, while open-source alternatives whisper seductively of freedom and transparency. In this high-stakes game of AI roulette, who will educators bet on? Will they sell their digital souls for a taste of Silicon Valley’s forbidden fruit or take a leap of faith into the wild west of open source, where danger and opportunity lurk in equal measure?

Inspired by some of the topics this week and dialled up.

:. .:



.: Tom Barrett