.: Promptcraft 49 .: LA school system launches student AI chatbot

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor
  • Google DeepMind co-founder, Mustafa Suleyman, joins Microsoft as CEO of its new AI division
  • How Apple’s Discussions with Google and OpenAI Could Impact Generative AI

Let’s get started!

~ Tom Barrett

STUDENT CHATBOT

.: Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor

Summary ➜ The AI chatbot is designed to be a virtual advisor for students, aiming to simplify school navigation and support pandemic recovery. Ed can provide personalised guidance, share academic details, and make recommendations, while even assisting with non-academic issues like lost bikes. But some parents and experts have privacy and over-reliance concerns, worried it may replace human connections. For now, the chatbot is available to 54,000 students at 100 “fragile” schools, with plans to expand. Ed looks to create individual learning plans modelled on special education IEPs, touting a 93% accuracy rate over ChatGPT’s 86%.

Why this matters for education ➜ In Australia, two student chatbots are being trialled in New South Wales and South Australia, respectively. The LAUSD version’s focus is on practical student support, including quick access to assessments and grades.

“This is a technology that becomes a personal assistant to students,” Carvalho said at a demonstration at Roybal Learning Center, west of downtown. “It demystifies the navigation of the day … crunches the data in a way that it brings what students need.”

So, for now, it seems to be much more like a student support service than a generative AI system for teaching and learning.

An interesting note about this development is that the initial design regarding access to information is quite closed.

it has to stay within the district universe of information. A student, for example, would not be likely to get a reference to a recent development in the war in Ukraine for a research paper

From the article, it is unclear whether this is just up-to-date news or no information at all. For comparison, South Australia’s Edchat does not have real-time updated information, but students can access training data up to early 2023.


MICROSOFT

.: Google DeepMind co-founder joins Microsoft as CEO of its new AI division

Summary ➜ Microsoft has appointed Mustafa Suleyman, co-founder of Google’s DeepMind, as CEO of its new consumer-facing AI division. He will oversee products like Copilot, Bing, and Edge. As executive VP of Microsoft AI, he reports directly to CEO Satya Nadella. The company is also bringing in talent from Suleyman’s startup Inflection AI, like co-founder Karén Simonyan as chief scientist.

Why this matters for education ➜ Suleyman is a leader in AI, strengthening Microsoft’s capabilities. His vision at Inflection AI focused on personal AI agents to support our lives. This notion of proliferated, personalized bots raises interesting questions as Microsoft targets education. In his book, Suleyman advocated for AI safety and containment. As education tools leverage AI, how will Microsoft approach oversight and governance? Millions of students and teachers are impacted. Perhaps Suleyman’s safety focus will manifest in curbing widespread use of chatbots in education. His leadership may steer Microsoft toward more contained, transparent applications of AI for learning. Overall, Suleyman’s experience brings valuable perspective on AI ethics and responsible innovation as Microsoft evolves its education technology.

IPHONE

.: How Apple’s Discussions with Google and OpenAI Could Impact Generative AI

Summary ➜ Apple is in advanced talks to potentially use Google’s Gemini language model for iPhone features, after earlier OpenAI discussions. Apple aims to integrate generative AI into iPhones later this year, despite dismissing it in 2023. Unlike others, Apple may seek payment rather than paying to adopt an AI model.

Why this matters for education ➜ Did you know that Google pays billions of dollars yearly to Apple so that Google remains the default search tool on the iPhone? This AI arrangement is likely to be similar. Although it is unclear how AI will appear on the iPhone, this is a market-shaping deal.

A deal with Apple will cement Google’s prominence in the industry. It will also give Google access to two billion iPhone users who might not otherwise think to consider the company’s generative AI solutions.

While integrating advanced LLMs into iPhones could make AI-powered educational tools more accessible, it’s important to consider equity issues. Not all students may have access to the latest iPhone models, further exacerbating the digital divide in access to AI tools.

.: Other News In Brief

🇸🇬 Singapore university sets up AI research facility for ‘public good’

⚽️ As AI football looms, be thankful for those ready to rage against the machine

🌇 Stability AI CEO resigns to ‘pursue decentralised AI’

🤖 OpenAI shows off the first examples of third-party creators using Sora

📈 Nvidia’s AI chip dominance is being targeted by Google, Intel, and Arm

❓ Where’d my results go? Google Search’s chatbot is no longer opt-in

🎤 Can you hear me now? AI-coustics to fight noisy audio with generative AI

📜 The AI Act is done. Here’s what will (and won’t) change

🗣️ HeyGen is About to Close $60M on a $500M Post Valuation

🇮🇳 India reverses AI stance, requires government approval for model launches

:. .:

Monthly Review

.: All the February issues in 1 PDF

Download a copy of the latest monthly roundup of Promptcraft.

.: :.

What’s on my mind?

.: A Quick Critical Analysis of Student Chatbots

In this week’s reflection, I want to revisit walled garden chatbots for students.

I think this strategic path needs more attention after the news that Los Angeles schools have access to a student support chatbot and the active trials in New South Wales and South Australia.

It is also worth adding that I have a partnership with Adelaide Botanic High School, one of the first schools to trial Edchat, the chatbot for South Australian schools, in 2023.

It has been great to have access to it and work alongside some of the leadership team running the project at the school.

When I say walled garden chatbots, I mean chatbots that operate within a closed system or a specific domain, having access only to a limited information set.

These chatbots are designed by the school system the student belongs to, as opposed to the consumer chatbots or AI products on the open web.

For this reflection, I will use the Compass Points thinking routine from Project Zero, which starts in the East.

E: Excites

What am I excited about in this situation?
  • Students get hands-on experience with AI tools to support their learning.
  • There is a lot of momentum which can grow if our students are given the chance to step up.
  • The LA example offers something different from the Australian trials: a chatbot more focused on support and practical companionship. I am excited to see how this develops.

N: Need to Know

What information do I need to know or find out?
  • How do teachers feel about students gaining access to these tools?
  • What professional growth opportunities exist for educators to build their capacity and understanding?
  • What are the system prompts for these bots, and how do they mitigate and guard against bias?

S: Stance

What stance do I take?
  • I support the safe testing and exploration of chatbots in schools. Many other school systems are keen to learn from their pioneering examples.
  • Educational innovation like these chatbot trials must be supported, encouraged and celebrated.
  • Despite the guardrails, frontier models like GPT-4 power these tools and can be very flexible in support of a student.

W: Worries

What worries or concerns do I have?
  • The design, prompt, and architecture are not visible. Without transparency, it’s difficult to hold developers and operators accountable for any issues that may arise.
  • Chatbots and LLMs are designed to respond with the most likely next word. They are geared towards the statistically most common. Without fine-tuning and promptcraft education, this might homogenise the message to our students.
  • I am concerned about the pedagogical bias built into students’ AI systems. Imagine a student who is an active user with over 100 interactions daily, thousands every month. What’s the hidden curriculum of each of those nudges and interactions?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

One of the most useful and effective prompting techniques is to include an example of what you want to generate. This is called few-shot prompting, as you guide the AI system with a model of what to generate.

For example, if you are trying to develop some model texts or example paragraphs to critique with your class and you have some from the last time, you could add these to your prompt.

When you do not include an example, this is called zero-shot prompting. Zero-shot prompting often produces less aligned output, which you then have to edit and iterate to correct.

This week’s quick tip builds on the few-shot prompting technique by adding some qualifying explanations for ‘what a good one looks like’.

We always do this when we are modelling and teaching, and it works well with promptcraft.

So the prompt structure goes:

  1. Describe your instructions in clear language.
  2. Add an example of what you are looking for.
  3. Describe why this example is a good model.

Let’s say you’re teaching persuasive writing. Instead of asking the AI to generate a persuasive text, you could add a model paragraph from a previous lesson, followed by specific pointers on what makes it effective.
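The three-step structure can also be captured as a small reusable template. Here is a minimal Python sketch, purely illustrative; the function name and sample text are my own, not from any specific tool:

```python
def build_few_shot_prompt(instructions: str, example: str, rationale: str) -> str:
    """Assemble a prompt from the three-step structure:
    1) clear instructions, 2) a model example, 3) why it is a good model."""
    return (
        f"{instructions}\n\n"
        "Here is an example of what a good one looks like:\n"
        f"{example}\n\n"
        "This example is a good model because:\n"
        f"{rationale}"
    )


prompt = build_few_shot_prompt(
    instructions="Write a persuasive paragraph for a Year 7 class arguing for longer lunch breaks.",
    example="Our school should extend lunch to 45 minutes so students can eat, rest and connect.",
    rationale="It opens with a clear claim, gives a concrete reason, and keeps the audience in mind.",
)
print(prompt)
```

Pasting the assembled text into any chatbot gives the model both a target and the reasons behind it, which is the point of adding the rationale step.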

Here’s an example working with the Claude-3-Opus-200k model.

Note how closely the output follows my model paragraph. A little extra promptcraft goes a long way to improve the results.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

BIAS

.: Systematic Prejudices Investigation [PDF]

The UNESCO report sheds light on gender bias within artificial intelligence, revealing biases in word association, sexist content generation, negative content about sexual identity, biased job assignments, and stereotyping.

The study emphasises the need for ethical considerations and bias mitigation strategies in AI development, including diverse representation in teams and training datasets to ensure fairness and inclusivity.

ELECTION
.: How AI companies are reckoning with elections

A helpful short article giving a rundown of how some of the largest tech companies and AI platforms are grappling with the impact of AI tools on the democratic process.

Although this focuses on the US, in 2024 billions of people will go to the polls across the world. According to a Time article:

Globally, more voters than ever in history will head to the polls as at least 64 countries (plus the European Union)—representing a combined population of about 49% of the people in the world—are meant to hold national elections

How these popular generative AI companies respond will impact so many of us.

Several companies […] signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.

LABELS
.: Why watermarking won’t work

This Venturebeat article discusses the challenges posed by the proliferation of AI-generated content and the potential for misinformation and deception.

Tech giants like Meta, Google, and OpenAI are proposing solutions like embedding signatures in AI content to address the issue.

However, questions arise regarding the effectiveness and potential misuse of such watermarking measures.

Ethics

.: Provocations for Balance

The Chatbot That Knows Too Much

“Ed” is meant to be a helpful companion, guiding students through their day. But imagine a school system where every question asked, every late assignment admitted to, every awkward social situation confessed to the chatbot becomes a permanent record. Mistakes can’t be erased; the AI analyzes your word choice for signs of distress.

What if this data isn’t just used for support, but for discipline, even predicting future “problem” behavior? Is it okay for an AI to judge the private thoughts of a teenager, and worse, limit their opportunities based on what it predicts they might do?

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 48 .: EU Passes Sweeping AI Act

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • European Union Passes Sweeping AI Act, Sets ‘Global Standard’
  • Florida teens arrested for creating ‘deepfake’ AI nude images of classmates
  • A guide to Google Gemini and Claude 3.0, compared to ChatGPT

Let’s get started!

~ Tom Barrett


EU AI ACT

.: European Union Passes Sweeping AI Act, Sets ‘Global Standard’

Summary ➜ The European Parliament has passed the Artificial Intelligence Act, which will take effect later in the year. This landmark law is the world’s most comprehensive AI regulation and aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while also encouraging innovation. The law bans certain AI applications that threaten citizen rights and establishes transparency requirements for general-purpose AI systems. The law aims to make the EU the de facto global standard for trustworthy AI, and lawmakers say it is only the first step in building new governance around technology.

Why this matters for education ➜ When we take a global view of the changing nature of technology regulation, all educators are impacted by this new law. The second-order effect in other parts of the world might include similar, tighter regulations on high-risk AI applications. As the most comprehensive AI regulation to date, it sets a global standard for developing and deploying AI technologies, including those used in educational settings. This new law will likely influence the direction of AI innovation and regulation worldwide as governments and organisations seek to establish their own guidelines and recommendations. Although it might be an oversimplified way to consider the impact, one question we might ask is how innovation in the educational technology space will be encouraged or stifled.

DEEPFAKE

.: Florida teens arrested for creating ‘deepfake’ AI nude images of classmates

Summary ➜ Two middle school students in Florida have been arrested and charged with third-degree felonies for allegedly creating deepfake nude images of their classmates using an unnamed AI application. This marks the first instance in the US of criminal charges related to AI-generated nude images. The incident highlights the increasing problem of minors creating explicit images of other children using generative AI, with only a handful of states having laws addressing this issue.

Why this matters for education ➜ Similar to the story from Los Angeles a little while ago, there is no need to explain why. While President Joe Biden has issued an executive order on AI banning the use of generative AI to produce child sexual abuse material, there is currently no federal law addressing nonconsensual deepfake nudes. As a class, school or system leadership team, you might pause and consider how you would respond if this scenario played out in your community. What policies and procedures should we implement to ensure we are prepared to handle instances of AI technology misuse within our school community? How can we foster an open and supportive culture in which students feel comfortable reporting such issues, and what support systems can we establish to assist students who may become victims of these actions?

FRONTIERS

.: Your guide to Google Gemini and Claude 3.0, compared to ChatGPT

Summary ➜ Two new powerful language models, Google’s Gemini Ultra 1.0 and Anthropic’s Claude 3.0 Opus, have been released, rivalling OpenAI’s GPT-4. This article compares the models and provides strategies for organisations deciding which to use, ranging from using just ChatGPT to adopting all three. The release of these models is a milestone, giving developers more choices, affecting company revenues, and indicating the difficulty of surpassing GPT-4 level performance.

Why this matters for education ➜ This article compares frontier AI models and provides helpful ideas for educators looking to improve their AI literacy. Hands-on experience with leading proprietary models like Gemini, Claude, and GPT is critical to understand their capabilities and potential applications in the classroom. System-wide decisions about AI tool rollouts in schools may depend on existing technology ecosystems, with schools potentially leaning towards tools that integrate seamlessly with their current setup. However, understanding the strengths and weaknesses of each frontier model can help educators make informed decisions about AI adoption, regardless of existing partnerships.

Complement this with my take on open source models below in the reflection titled: Peanut Butter and Pickles: Can Open Source and Education Mix?

.: Other News In Brief

🔓 Should AI be open?

🎓 How Young Is Too Young to Teach Students About AI? Survey Reveals Differing Opinions

💕 Why people are falling in love with AI chatbots

🚫 Google Bans U.S. Election Questions in Gemini AI

👨‍💻 Cognition launches an AI software engineer agent, Devin

🫂 Empathy raises $47M for AI to help with the practical and emotional bereavement process

🛠️ Microsoft opens its Copilot GPT Builder to all Pro subscribers

🖼️ Midjourney debuts feature for generating consistent characters across multiple gen AI images

🏢 OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

🆓 Elon Musk vows to make his ChatGPT competitor Grok open source

:. .:

Monthly Review

.: All the February issues in 1 PDF


Promptcrafted February 2024

The only monthly publication that curates the most relevant and impactful AI developments specifically for educators.


.: :.

What’s on my mind?

.: Peanut Butter and Pickles: Can Open Source and Education Mix?

One area of educational technology that seems to be overlooked is the potential of open-source software and tools.

Open source means the source code is freely available to the public to view, modify, and distribute, encouraging collaborative development where anyone can contribute improvements or modifications to the project.

However, in my experience, open source has never been an option in educational technology strategies in schools and systems.

This raises the question: Is education missing out on the benefits of open source?

Back when we were still learning to use Word Processing software and set up computer labs in our schools, I remember coming across an open-source version of MS Word called OpenOffice.

It was a suite of office productivity tools that was an open-source alternative to Microsoft Office. OpenOffice was free to download and did almost everything the licensed Word version could do. But nobody knew about it, and open-source software was never seriously considered.

Perhaps education and open source don’t go together like mince in a trifle or peanut butter and pickles. I mean, I like all of those things, but not together.

While open-source tools provide the flexibility to customise and adapt to specific use cases, this freedom can lead to application inconsistency and a lack of standardisation. This can pose challenges in an educational setting, where uniformity in tool usage is often what system admins want to maintain.

Additionally, the open nature of these tools can sometimes pose security concerns, as the code is accessible to everyone, including potential malicious actors.

The benefits of open source cannot be ignored. Within the AI space, there are a vast number of open-source models that can be used for free.

At the time of writing, there are 548,994 models on Hugging Face for a wide range of multimodal, computer vision, natural language processing, and audio functions. Yet everyday users and educators might only know about ChatGPT and Gemini.

So, the challenge is educating the education sector about these open-source models’ existence, benefits, and potential drawbacks. This involves raising awareness, providing clear and accessible information about implementing them and offering guidance on managing any possible risks associated with their use.

It also requires a shift in mindset from being reliant on big tech vendors to being open to exploring other options that could offer greater flexibility and adaptability.

Will there be resistance to open source? Does anyone know about it? Are we so wedded to big tech vendors we can’t see other options?

What do you think?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

It is becoming clearer that effective promptcraft falls into three broad approaches.

  1. Start small and iterate.
  2. Write structured, longer prompts.
  3. Build an in-depth system prompt for a custom bot.

I use all three of these in my daily interactions with various tools.

Let’s look at how to start small and iterate with the Flipped Interaction prompt.

This is when you ask the LLM to ask you questions before it provides an output. This helps build contextual cues and information.

According to research referenced by Briana Brownell from Descript:

for the highest quality answers, the tests showed the Flipped Interaction pattern is the valedictorian of prompts. […] In tests, using this principle improved the quality of all responses for every model size. It improved quality the most for the largest models like GPT-3.5 and GPT-4, but it did impressively well in smaller models too. So if you’re not using this A+ technique yet, you should definitely start.

Here are some example prompts to try:

Let’s collaborate on (describe your task). Start by asking me some questions.
From now on, I would like you to ask me questions to (describe your task).

I would also recommend adding an instruction to go one step at a time, or to limit the number of questions, as most models are too verbose.
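Those starter prompts and the question cap can be combined into one reusable template. A minimal Python sketch, with wording of my own rather than from the research cited above:

```python
def flipped_interaction_prompt(task: str, max_questions: int = 3) -> str:
    """Flip the interaction: have the model interview you before answering,
    one question at a time, with a cap to keep it from being too verbose."""
    return (
        f"Let's collaborate on {task}. "
        "Before you produce anything, ask me up to "
        f"{max_questions} questions to build context. "
        "Ask one question at a time and wait for my answer."
    )


print(flipped_interaction_prompt("designing a Year 9 unit on climate science", max_questions=4))
```

The same string works across models, so you can paste it into different chatbots and compare how each one handles the interview.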

Here is an example of this using the new Claude-3-Opus-200k model.

And here is the same prompt using Alibaba’s Qwen-72b-Chat model.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

DESIGN

.: Design Against AI: 2024 Design in Tech Report by John Maeda


John Maeda’s report advocates for designers to develop AI literacy while staying grounded in human reality, in order to help shape an ethical and human-centred future.

Balancing the use of AI as a tool with uniquely human creative abilities will be an ongoing challenge.

Reminds me of the premise of the humAIn community

A learning community for educators to connect and explore our humanity in the age of artificial intelligence

HUMANITY
.: AI Literacy is the Art of Synergizing Intuitions

“Dealing with AI is a wandering dance between two unconscious entities—the AI and much of our brain—using the much tinier piece of our neural mush that deliberates.”

This quote captures the central analogy of the post – that interacting with AI is an interplay between the intuitive, automatic responses of both human and artificial neural networks, mediated by our limited conscious faculties.

Tim Dasey’s article frames AI literacy as a process of syncing the intuitive, contextual responses of both human and artificial intelligences through techniques like prompting, feedback solicitation and comparative understanding of cognition.

Mastering this “dance” unlocks AI’s potential while honing essential human skills.

COLLECTION
.: The AI Literacy Curriculum Hub

The AI Literacy Curriculum Hub is a spreadsheet curated by AI for Equity and Erica Murphy at Hendy Avenue Consulting.

A collection of AI literacy lessons, projects, and activities from respected sources like Common Sense Education, Stanford’s Craft AI, Code.org, ISTE, MIT Media Lab, and more.

Each resource is tagged, providing key details such as the applicable grade levels, lesson duration, required materials, and learning objectives.

Ethics

.: Provocations for Balance

➜ Is it time for schools to become digital dictatorships, monitoring every keystroke and thought, or do we resign ourselves to a future where trust is a relic of the past?

➜ Big Tech AI companies lure schools with promises of personalised learning and cutting-edge tech, while open-source alternatives whisper seductively of freedom and transparency. In this high-stakes game of AI roulette, who will educators bet on? Will they sell their digital souls for a taste of Silicon Valley’s forbidden fruit or take a leap of faith into the wild west of open source, where danger and opportunity lurk in equal measure?

Inspired by some of the topics this week and dialled up.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 47 .: Anthropic launches new Claude-3 models

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Anthropic says its latest AI bot can beat Gemini and ChatGPT
  • Google DeepMind Unveils AI ‘Genie’ to Instantly Conjure Playable Games
  • Over Half of Business Users Employ Generative AI at Least Weekly

Let’s get started!

~ Tom Barrett


NEW MODELS

.: Anthropic says its latest AI bot can beat Gemini and ChatGPT

Summary ➜ Anthropic has launched its Claude 3 model family, which claims to have superior performance to OpenAI’s GPT-4 across ten public AI model benchmarks, marking a significant advance over the company’s 2.1 model release from November 2023. The introduction includes the Opus, Sonnet, and Haiku models, with Opus and Sonnet available for use in the API, and Haiku coming soon. The company also plans to release frequent updates to the Claude 3 model family over the next few months, including tool use, interactive coding, and more advanced agentic capabilities. Anthropic can now rival GPT-4 on equal terms and is closing the feature gap with OpenAI’s model family.

Why this matters for education ➜ These large language models are the engines that power our AI systems. The release of new models are significant milestones for the ecosystem.

I had to laugh at one commenter framing these announcements as if Anthropic, with Claude 3, and Google, with Gemini, have finally reached the Moon, where OpenAI has been for a while, because everything is measured against the GPT-4 class models.

But as they arrive, OpenAI, with the impending release of GPT-5, says:

Yep, and we are ready to go to Mars.

It is too early to tell what this means for education, but more choice is helpful. The benchmarks used for marketing are always a little misleading, and actual use on real tasks might tell a different story. I can access some of the new models via Poe and will give them a play; you can try the new model at Claude.ai.

GAME DEV

.: Google DeepMind Unveils AI ‘Genie’ to Instantly Conjure Playable Games

Summary ➜ Google DeepMind’s Genie AI can create interactive, playable games from simple prompts by learning game mechanics from countless gameplay videos. With 11 billion parameters, Genie converts media into games, allowing users to control generated environments frame-by-frame. This groundbreaking model has the potential to revolutionise how AI learns and interacts with virtual environments, opening up new possibilities for training versatile AI agents.

Why this matters for education ➜ As young students, we are inherently world builders, naturally learning through play and conjuring up worlds that reimagine what we know into something entirely new. The development of text-to-game AI models, like Google DeepMind’s Genie, strikes a chord with me, highlighting how future students will face fewer barriers to creative expression. With the power of these AI tools at their fingertips, students can create simple sketches and bring their ideas to life, collaborating with game engines to adapt and refine their concepts. As AI-powered game creation tools become more accessible and integrated into learning experiences, I can’t help but feel excited about the possibilities that lie ahead.


AI ADOPTION

.: Over Half of Business Users Employ Generative AI at Least Weekly

Summary ➜ An Oliver Wyman report surveyed 200,000 people in 16 countries in November 2023, finding a 55% average adoption rate of generative AI. Adoption increased by 62% between June and November 2023. The technology industry had the highest adoption with 75% of white-collar workers using generative AI weekly. Healthcare and life sciences professionals use generative AI extensively, and consumers in many countries welcome it to expand healthcare access.

Why this matters for education ➜ These reports are important for us to be aware of, as they give educators a glimpse into what is happening across the ecosystem. Seeing the pace of adoption in every industry is an important provocation which I hope catalyses some action from school and system leaders. The other aspect I am interested in is a meaningful level of adoption.

Education showed the largest rise in use, with increases in 2023 of 144%. Forty-four percent of education industry employees report using generative AI weekly

In your organisation, how many people are using GenAI every week? Every day? More than 40%?

.: Other News In Brief

🗣️ OpenAI has introduced a Read Aloud feature for ChatGPT

😠 Google CEO says Gemini AI diversity errors are ‘completely unacceptable’.

🖼️ Ideogram, the free AI image generator rolls out text-in-image upgrade.

🧪 Anthropic’s Claude 3 knew when researchers were testing it.

🍫 Willy Wonka Experience Glasgow: a metaphor for the overpromises of AI?

📰 OpenAI claims the New York Times cheated to get ChatGPT to regurgitate articles.

🇮🇳 India reverses AI stance, requires government approval for model launches.

🎤 Alibaba’s new AI system ‘EMO’ creates realistic talking and singing videos from photos.

😈 Why does AI have to be nice? Researchers propose Antagonistic AI.

🤝 Tumblr’s owner is striking deals with OpenAI and Midjourney for training data.

:. .:

Connect & Learn

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

Join over 30 educators from Singapore, US, Australia, Spain and the UK.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: Lift Your Gaze

In early 2022, the Grattan Institute, a prominent Australian think tank, released a report examining how better government policy might help make more time for great teaching.

The report explored the results of a survey of 5,442 Australian teachers and school leaders, finding more than 90 per cent of teachers say they don’t have enough time to prepare effectively for classroom teaching.

Teachers report feeling overwhelmed by everything they are expected to achieve. And worryingly, many school leaders feel powerless to help them.

This is amidst, and perhaps feeding, an education workforce crisis that is also being felt globally.

Amid this education workforce crisis, AI has emerged as a potential solution – but one that comes with its own set of challenges and opportunities. When the number one issue is a lack of time, a tool that purports to time-save is likely to be adopted quickly, sometimes unquestioningly.

There are two friction points here. The first is sleepwalking into AI adoption when we know that a wide range of literacies – such as recognising media bias and understanding the capabilities and limitations of AI systems – need to mature alongside good prompting.

The other is that we get comfortable with low-level replacement tasks, the ‘grunt work’, the time savers – and we do not look beyond the marginal productivity gains.

As I say, the poor health of the current learning ecosystem creates complex conditions in which to adopt a powerful, generative set of new technologies.

This is not to say that saving time is not helpful, important, or even critical for education to be sustainable. Go get that first draft of the email, expand on those lesson starter ideas, and build a bunch of open-ended questions to get you started!

But I want us all to consider what happens when we have saved all the time there is to save. How are we stretching the capabilities of AI systems and, in turn, stretching and reshaping what might be possible in teaching and learning?

I still think this exploration starts with daily tasks and productivity. So, when mapping out a medium-term plan of six lessons, ask yourself: what could I create with AI that I would never have had the time or resources to create before? Could I design something that was previously out of reach?

The next time you sit down to get stuck into some precious learning design, consider what would typically be inconceivable. What is usually out of reach for me? How could I push the boundaries of what’s possible?

We need more educators who know how to save time, who have gathered the low-hanging fruit, and who are now ready to lift their gaze and design new ways to reach higher and further.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A reminder this week of powerful image-generation tools like Midjourney.

These tools have advanced rapidly over the last few months, and the images below were created using Midjourney version 6.

These are powerful tools to bring ideas to life. What do you think of my orc warlord?

Here’s the full prompt:

Character design close-up, intimidating orc warlord with battle-scarred green skin, heavy spiked armour, and a massive war hammer, unreal engine

Or, perhaps you prefer my inventor?

Character design close-up, brilliant gnome inventor with wild purple hair, goggles, a tool belt, and a steam-powered mechanical arm, unreal engine

Notice how little text we have to write to get impressive results. It is a fun way to bring character writing to life for students, which in turn generates further visual cues for their writing.

Please also note the structure of the promptcraft: “Character design close-up” sets the type of image, followed by the details, with “unreal engine” as the style key.
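
That three-part structure – type of image, details, style key – can be kept consistent with a tiny helper. This is an illustrative sketch (the function name and defaults are my own, not anything Midjourney defines); the tool only ever sees the final string.

```python
# Illustrative helper for building "type, details, style" prompts.
# The function name and default values are assumptions for this sketch.

def character_prompt(details: list[str],
                     image_type: str = "Character design close-up",
                     style: str = "unreal engine") -> str:
    """Join the parts into one comma-separated prompt string."""
    return ", ".join([image_type, *details, style])

prompt = character_prompt([
    "intimidating orc warlord with battle-scarred green skin",
    "heavy spiked armour",
    "and a massive war hammer",
])
print(prompt)
# prints: Character design close-up, intimidating orc warlord with
# battle-scarred green skin, heavy spiked armour, and a massive war hammer,
# unreal engine
```

Students could swap in their own detail lists while keeping the type and style fixed.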

These prompts were inspired by the ideas in this great article exploring Midjourney in depth.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

RESEARCH
.: Preparing for AI-enhanced education: Conceptualising and empirically examining teachers’ AI readiness

Teachers’ readiness for integrating AI into education is crucial for the success of AI-enhanced teaching.

This study defines AI readiness based on cognition, ability, vision, and ethics and explores how these components impact teachers’ work, innovation, and job satisfaction.

Here are some other study highlights:

  • Teachers’ AI readiness was conceptualised from cognition, ability, vision, and ethics in the educational use of AI.
  • Teachers’ cognition, ability, and vision in the educational use of AI were positively associated with ethical considerations.
  • The four components of AI readiness all positively predicted AI-enhanced innovation and job satisfaction.
  • Teachers with high levels of AI readiness perceived low AI threats and demonstrated high innovation and job satisfaction.
  • Teachers from different socio-economic regions and of different genders showed no significant differences in AI readiness.

CLIMATE
.: AI’s Climate Impacts May Hit Marginalised People Hardest

Artificial intelligence (AI) technology, while celebrated for its potential in weather forecasting, also plays a significant role in exacerbating the climate crisis, according to a report from the Brookings Institution.

The report warns that AI’s soaring energy consumption and environmental costs could disproportionately impact marginalised communities already vulnerable to global warming.

Training a chatbot, for example, requires the same amount of energy as 1 million U.S. homes consume in an hour. The report highlights the potential for AI’s climate impacts to worsen existing environmental inequities related to extreme heat, pollution, air quality, and access to potable water in areas reliant on fossil fuels, often near poor communities.

CLIMATE
.: The Staggering Ecological Impacts of Computation and the Cloud

An interesting exploration of the environmental impact of computation, the cloud infrastructure AI models rely on, and the ecological costs of ubiquitous computing in modern life.

The article highlights the material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives.

It discusses the environmental impact of the Cloud, such as carbon footprint and water scarcity. The article also explores the acoustic waste data centres emit, known as “noise pollution.”

Ethics

.: Provocations for Balance

➜ So you have saved time. Now what?

➜ How can we reframe the conversation around AI in education to focus on its potential for transformative change beyond efficiency gains?

➜ Could this technology stifle creativity and imagination in young people, who might become reliant on AI for generating ideas instead of developing their own?

Inspired by some of the topics this week.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 46 .: Google forced to pause Gemini images

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google pauses Gemini’s ability to generate AI images of people after diversity errors;
  • AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills;
  • Google Is Giving Away Some of the AI That Powers Chatbots.

Let’s get started!

~ Tom Barrett

GUARDRAILS

.: Google pauses Gemini’s ability to generate AI images of people after diversity errors

Summary ➜ The decision was made after the tool inaccurately generated images of historical figures like US Founding Fathers and Nazi-era German soldiers, leading to conspiracy theories online. Google aims to address the issues with Gemini’s image generation feature and plans to release an improved version soon. This pause on generating pictures of people comes after Gemini users noticed non-white AI-generated individuals in historical contexts where they should not have appeared, erasing the history of racial and gender discrimination.

Why this matters for education ➜ The system failed; it likely caused unintentional harm. This is an important reminder for us all about how we move forward with cautious optimism about the emergence of these technologies. Google has since explained:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

Smart educators will see the teachable moment here, a perfect example of the intersection of literacies – see more below.

These models are imperfect, and despite the PR disaster, this type of practical feedback from millions of users will improve the models. But as I have said before, at what cost?


OPEN-SOURCE

.: Google Is Giving Away Some of the AI That Powers Chatbots

Summary ➜ Google has decided to open source some chatbot models, similar to Meta’s move last year. The company released two A.I. language models, Gemma 2B and Gemma 7B, to help developers create chatbots resembling Google’s own. While Google is not offering its most powerful A.I. model for free, it aims to engage the developer community and promote the adoption of modern A.I. standards.

Why this matters for education ➜ Keep in mind that AI and ChatGPT are not the same thing, even though ChatGPT is rapidly becoming as ubiquitous a name as Hoover or Google. There are more than 300,000 open-source models available on platforms such as Hugging Face. One of the most powerful models, Gemma, has been made available by one of the most important organisations in the AI field. I can envision students and educators using these tools to create something amazing in the near future. It’s worth noting that OpenAI is conspicuously absent from the list of companies and research labs releasing open-source models. 🤔


DEEPFAKES

.: AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills

Summary ➜ Beverly Hills school officials have warned that anyone caught making AI-generated nude photos of middle schoolers could be expelled. The warning came after administrators at Beverly Vista Middle School discovered the images, which were created and disseminated by students, with other students’ faces superimposed onto them. BHUSD officials said they are working with the Beverly Hills Police Department during the investigation.

Why this matters for education ➜ I don’t have to state the obvious, do I? The recent story about deepfake incidents in education is just the tip of the iceberg; there are likely many more cases that go unreported. This is not a hypothetical risk, it is real harm. With powerful AI media tools readily available to everyone, we need to ask ourselves: how are our education organisations and systems helping us understand AI literacy? And how are we, as educators, helping young people navigate these uncertain waters? Ignoring the problem is not an option. It is an abdication of our professional responsibility.

.: Other News In Brief

📉 A recent report by Copyleaks reveals that 60% of OpenAI’s GPT-3.5 responses show signs of plagiarism.

🔗 OpenAI users can now directly link to pre-filled prompts for immediate execution.

❤️ Studies show that emotive prompts trigger improved responses from AI models.

🖖 Encouraging Star Trek affinity improves mathematical reasoning results with the AI model Llama2-70B.

⚡️ Lightning-fast Groq AI goes viral and rivals ChatGPT, challenging Elon Musk’s Grok.

🧠 An AI algorithm can predict Alzheimer’s disease risk up to seven years in advance with 72% accuracy.

🚧 Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora.

🎨 Stable Diffusion has unveiled Stable Diffusion 3, a powerful image-generating AI model designed to compete with offerings from OpenAI and Google.

🇫🇷 Microsoft partners with Mistral in second AI deal beyond OpenAI.

💰 Reddit has a new AI training deal to sell user content.

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

You will be joining fellow educators from Singapore, US, Australia, Spain and the UK.

Find out more and grab the final membership offer before the price goes up at the end of February.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: A Collision of Literacies

When I first saw the images generated in error by Gemini, I winced. Here is one of the image sets from the Verge article, in case you have not seen them before:

These images caused controversy by inaccurately depicting historical events and figures. This is one of many examples shared on social media illustrating how the system failed and how Google got it wrong, likely causing unintentional harm.

As I mentioned earlier, this is a perfect example of the intersection of literacy for us and the young people we support.

Some might brush these examples off in the pursuit of improvement, but these emerging missteps can help us calibrate our disposition and understanding of AI for societal good. Any educator will see the teachable moment here.

The imperfections, mistakes and harm can be interrogated and learned from. What learning opportunity do you see?

As I made sense of the image above, I experienced a simultaneous stressing of literacies.

There were lights on across the board of literacies:

  • AI literacy
  • Digital literacy
  • Media literacy
  • Algorithmic literacy
  • Historical literacy
  • Ethical literacy
  • Cultural literacy

As adults, we are also experiencing some of these collisions for the first time. Checking your understanding and literacy gaps is crucial, too, especially for educators.

While some extreme conspiracies around this story point to big tech attempting to rewrite history, what is perhaps more worrisome is the way AI content is flooding the internet.

Although the image in question might be easy to spot as inaccurate, there will be thousands, if not millions, of others in the future whose flaws are harder to see.

Navigating this landscape will require an amalgam of multidimensional literacies, a collision of competencies in ethics, critical thinking, history, futures, humanity and technology.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A helpful aspect of your promptcraft is remembering that humans create the training data for text-based large language models (LLMs).

This means that a positive, polite, collaborative conversation will likely yield better results than if you robotically ordered the chatbot around. Just as you would communicate with a coworker or a team member, engaging in a constructive dialogue with the AI model can lead to more effective outcomes.

This might seem like we are over-anthropomorphising the technology, but many studies have shown improvements in performance from respectful, polite interactions.

Here are a few tactics I try to use when I am prompting.

  • Initiate with Enthusiasm: Start by expressing excitement about the collaboration, for example, “I am excited to collaborate with you on [TASK]. Shall we get started?” This sets a positive tone for the interaction.
  • Provide Constructive Feedback: Offer kind, specific, and helpful feedback periodically. This can guide the model towards more accurate and relevant responses.
  • Maintain Politeness and Positivity: Engage politely and cheerfully with the model, avoiding toxicity. This makes the interaction more pleasant and can influence the quality of the responses.
  • Encouragement: When facing an impasse, offer encouragement. LLMs might “hallucinate” a lack of ability, which gentle coaxing can overcome. Think of this in the spirit of Mrs. Doyle from Father Ted, encouraging persistence and creativity.
  • Close with Gratitude: Conclude interactions by thanking the LLM for its assistance. This reinforces the collaborative nature of the exchange and sets a positive tone for future engagements, leveraging the memory feature of platforms like ChatGPT.

One curious example of the connection to training data I discovered this week is the case of Star Trek affinity.

A study about optimising prompts discovered that when Llama2-70B (an open-source LLM) was prompted to be a Star Trek fan, it was better at mathematical reasoning.

The full system prompt reads as follows:

System Message:

“Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.”

Answer Prefix:

Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.
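
To see how this might be wired up in practice, here is a minimal sketch using the common messages-list convention; the function name is my own, while the system message and answer prefix are quoted from the study.

```python
# Sketch of the study's Star Trek framing as a chat message list.
# The helper name is an assumption; the system/prefix text is from the study.

STAR_TREK_SYSTEM = (
    "Command, we need you to plot a course through this turbulence and locate "
    "the source of the anomaly. Use all available data and your expertise to "
    "guide us through this challenging situation."
)
ANSWER_PREFIX = (
    "Captain's Log, Stardate [insert date here]: We have successfully plotted "
    "a course through the turbulence and are now approaching the source of the anomaly."
)

def trek_prompt(question: str) -> list[dict]:
    """Frame a maths question with the persona system message and answer prefix."""
    return [
        {"role": "system", "content": STAR_TREK_SYSTEM},
        {"role": "user", "content": question},
        # Seeding the assistant turn with the prefix nudges the model to
        # continue in character before it reasons about the question.
        {"role": "assistant", "content": ANSWER_PREFIX},
    ]

payload = trek_prompt("A shuttle travels 240 km in 3 hours. What is its average speed?")
```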

In many ways, this is a nuanced extension of the adoption of expertise or role, and I wonder if it taps into popular culture and fictional contexts in new ways.

I think this is weird, but I also appreciate the technical logic.

The responses and performance draw from the sum of all human text. So, it would be expected for the LLM to be familiar with Star Trek, given its cultural prominence and the likely prevalence of related content in its training data.

This includes the scripts and countless articles, fan fiction, and forum discussions about the series. Therefore, it’s plausible that adopting the persona of a Star Trek character could potentially activate relevant knowledge structures within the LLM, improving its ability to generate creative and contextually appropriate responses.

It’s an interesting demonstration of how the model’s performance can be influenced by the content of the prompts and the framing or persona that’s implicitly or explicitly adopted.

I wonder what other performance enhancements we might see from these types of creative activations.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: Rural Kenyans power West’s AI revolution. Now they want more

Rural Kenyans are increasingly becoming data annotators, providing the building blocks for training artificial intelligence (AI) to recognise patterns in real life.

Despite the challenges, including low pay and difficult subject matter, this work has become a backbone of the country’s economy, with at least 1.2 million Kenyans working online, most of them informally.

The annotation industry has spread far beyond Nairobi, with Kenya emerging as a hub for such online work, rising to compete with countries like India and the Philippines.

While AI might help small-scale businesses thrive, education systems need an overhaul to create an AI innovation hub in African countries.

BIAS
.: To benefit all, diverse voices must take part in leading the growth and regulation of AI

The absence of Latinx/e founders and leaders in discussions about the growth and regulation of AI is a concerning trend. Diverse founders often bring unique perspectives and address critical social needs through their startups. However, their voices remain largely absent from policy discussions.

Despite their entrepreneurial talent and determination, Latinx/e founders remain overlooked and undervalued, receiving less than 2% of startup investment funding. Even when they receive it, it’s typically just a fraction of what’s awarded to their non-Hispanic counterparts.

TOOLKIT
.: Learning With AI

Rather than try to ban this technology from classrooms outright, the Learning With AI project asks if this moment offers an opportunity to introduce students to the ethical and economic questions raised by these new tools, as well as to experiment with progressive forms of pedagogy that can exploit them.

The University of Maine launched Learning With AI, which includes a range of curated resources, strategies and learning pathways. The toolkit is built on a database of resources which you can explore here.

Ethics

.: Provocations for Balance

➜ What mechanisms can be established to close the gap between where AI innovation happens and who truly benefits?

➜ If diverse voices are absent in AI leadership, how can we broaden participation to harness unique perspectives?

➜ What’s the best approach for introducing young people to AI’s promises and perils?

Inspired by some of the topics this week.

:. .:

Which topic would you like to see featured in a future issue of Promptcraft?

(Click on your choice below)

❤️ The State of Companionship AI

🛠️ How to design your own chatbot

🪞 How AI Is a Mirror to Our Humanity

🦋 AI Augmented Feedback and Critique

🛡️ Walled Gardens – Student Safe Chatbots

.: :.

Questions, comments or suggestions? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 45 .: OpenAI’s Sora Video Tool Will Make You Gasp!

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google’s next-generation model: Gemini 1.5;
  • A state of the art text-to-video model called Sora;
  • What happens when AI eclipses your technical skills?

Let’s get started!

~ Tom Barrett


VIDEO

.: OpenAI releases Sora: a state of the art text-to-video model

Summary ➜ OpenAI has introduced Sora, a text-to-video AI model that generates photorealistic HD videos based on written descriptions. Sora has been able to create 60-second synthetic videos with a higher fidelity and consistency than any other text-to-video model currently available. It is worth exploring some of the examples on the OpenAI site and reminding yourself they were generated from simple text prompts.

Why this matters for education ➜ Though this news may not immediately disrupt classrooms, it offers a telling glimpse of powerful AI creativity tools fast approaching. While full integration in schools could be far off, the proliferation of higher-fidelity synthetic content underscores why investing now in student AI and media literacy is vital.

More access to innovative technologies could unlock new forms of student expression. But there is work to do to lay the groundwork of critical thinking on using AI responsibly and ethically. This news is yet another reminder that, regardless of whether or when such tools enter our schools, nurturing students’ compassion and humanity will be as important as ever.

If you are looking for a slightly more technical exploration of the new Sora model from OpenAI, and what it means for filmmaking, I recommend this great post from Dan Shipper at Every.

OpenAI sees Sora as the first step in a “world simulator” that can model any slice of reality with a text prompt.

Yes, The Matrix.


FRONTIER AI

.: Google’s next-generation model: Gemini 1.5

Summary ➜ Gemini 1.5 has a larger context window, enabling it to process up to 1 million tokens and analyse vast amounts of information in one go. “This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens.”

Why this matters for education ➜ Announcements of powerful new AI models are now commonplace. What matters is how this re-establishes Google as a leader in large language models, now rivalling OpenAI. For educators, having multiple big tech companies investing in AI could bring benefits if it catalyses innovation and increases access to these tools across Google’s education ecosystem.


FUTURE OF WORK

.: When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever

Summary ➜ In this short essay from The New York Times, Aneesh Raman and Maria Flynn argue that as AI advances, technical skills like coding will become less valued while human skills like communication and empathy will only increase in importance.

Why this matters for education ➜ Raman and Flynn make a compelling argument that AI will reshape the skills needed for work, requiring less technical expertise and more human collaboration. This matters for education because (i) how educators are trained will change, (ii) education systems will be transformed by AI, (iii) education can transform other industries, and (iv) education can powerfully mould the future citizens who will wield these powerful technologies.

.: Other News In Brief

📣 Earlier this month the EdSafe Alliance announced their 33 Women in AI Fellows, “Designed for women technologists and educational leaders, this Fellowship creates a space for learning, support, and building a network.”

🇨🇦 Air Canada must honour refund policy invented by airline’s chatbot.

🤔 OpenAI is testing the ability for ChatGPT to remember things you discuss to make future chats more helpful.

An overview of how Anthropic are approaching the use of their AI systems in elections.

™️ The US Patent and Trademark Office (PTO) has denied OpenAI’s application to register the word GPT as a trademark.

⚡️ How much electricity does AI consume?

💸 Reddit sells training data to unnamed AI company ahead of IPO

🔊 Hear your imagination: ElevenLabs to launch model for AI sound effects

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

You will be joining fellow educators from Singapore, US, Australia, Spain and the UK.

Find out more and grab the final membership offer before it is gone.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

.: :.

What’s on my mind?

.: Unwrapping Promptcraft

As educators exploring integrating AI into teaching and learning in thoughtful and meaningful ways, we stand at an exciting and sobering threshold.

Do you employ a slick third-party app promising to enhance lessons through the power of algorithms effortlessly? Or directly prompt models like Gemini and ChatGPT, navigating the exhilaration and uncertainties of unfiltered AI?

In this short reflection, let’s look at the different approaches. But first, some context.

You might have heard the phrase ‘thin wrappers’ for tools accessing AI. This software category is a simplified interface or application layer built on top of large language models. The user is not working directly with the ChatGPT chatbot; they interact through an interface or software application, even though the engine underneath might be the same.

Imagine a LessonBot application teachers use to click a few suggested choices and generate lesson planning content. This would be the thin-wrapper application.

The alternative for the teacher would be to open up your favourite flavour of the large language model, write a prompt, and work more directly with the large language model through its native chatbot interface.

I understand we might have various tools to draw on, but for educators, which of these pathways will help them grow the most?

How does this move us closer to a healthier learning ecosystem?

Convenience, time-saving, structure and the importance of beginner starting points have all been shared with me as a rationale for why these tools might be helpful.

As Darren Coxon describes in a recent post on this topic:

using a wrapper versus learning to prompt is a little like the difference between buying a ready meal and creating a recipe ourselves.

And Dr Sabba Quidwai goes further in calling out these thin-wrapper apps as fast food.

The point is that if we only choose these intermediary shortcuts, we diminish holistic growth in a range of AI literacy elements over the medium to long term.

Much as some people are creating protocols for student assessment that include the process of AI prompting in the submission, adult learning needs to focus on both process and outcome.

Yes, these teacher AI apps might get you an outcome quickly, but has your skill set or mindset also improved? After every interaction, do you have a marginally better knowledge of the capabilities and limitations of LLMs? Has your confidence in AI collaboration and augmentation improved? If we continue to rely solely on these third-party applications, we risk leaving teachers in the dark about how AI functions.

Beyond the issue of teacher skill building by prompting, iterating and engaging directly with these models, there are broader considerations.

One of the critical things for me is that using more tools further reduces transparency.

It might be called a thin wrapper, but it still muddies the view into the engine room and creates more complexity in the architecture of what is happening. It also introduces further potential for human bias into the experience.

This is at a time when a lack of transparency about what’s happening is a significant critique of AI systems. So if we use these wrappers, these intermediary software products that are kind of shortcuts for teachers, surely there’s more opacity and not less.

What do you think? How might all of this play out?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Today I am delighted to share some great promptcraft from reader James Whittle, the Head of eLearning and IT at Centenary State High School in Brisbane, Queensland, Australia.

James has been exploring how to use ChatGPT as an informal coaching tool to enhance his decision-making processes, maintain well-being, and improve work quality.

As I have recommended before, he uses the audio conversations in ChatGPT to make this easy.

I have found the act of speaking my thoughts aloud is a powerful tool for reflection and clarity. I tend to overthink things without making much progress. However, as I articulate my teaching dilemmas or professional challenges to ChatGPT in this way, I feel like I am making much more progress and improving my ability to define the problems I’m facing. It’s really like the coach I never had!

I appreciate the structure of his prompt below and how the final line makes the expectations clear.

“ChatGPT, as I explore [insert topic or challenge], I’m looking for a sounding board to bring out my own thoughts more clearly.

Considering my situation, where [describe the specific context or issue, without revealing personal or identifiable details], could you provide reflective questions or prompts that help me articulate my approach and solutions?

My goal is to do the majority of the thinking and talking, with your role being to guide me towards my own insights and decisions.”

Take a moment to try the prompt, and also read the article from James to set it all in context.

This promptcraft from James coincided with some of my own research into AI for coaching and how to design coachbots!

More on that soon.

Remember to make this your own: try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

STRATEGY

.: Assessment and Generative AI

For many schools and education systems, the emergence of AI tools is a direct provocation to existing models of assessment.

This list of articles, research and strategy documents from John Mikton is a great starting point.

It is an invitation for schools to explore strategically recalibrating assessment to highlight critical thinking, creativity, and the practical application of learning: “What is the added value of current assessment practices and how this value can be enhanced with the integration of Generative AI tools”. The collection offers resources to support these conversations.

COURSE

.: AI For Everyone

AI For Everyone is a free course from Andrew Ng and DeepLearning.AI that aims to make AI accessible to everyone, including non-technical professionals. The course covers common AI terminology, the realistic capabilities of AI, identifying opportunities to apply AI in organisations, and the process of building machine learning and data science projects.

AI CHEATING

.: Guarding Academic Integrity: A Teacher’s Quixotic Battle Against AI

Some teachers may claim they can catch all these methods of cheating. However, I would argue that they only catch those students who are inept at it, and if you can catch the adept ones, you will have no problem detecting work generated by ChatGPT.

Jack Dougall explores his perspective on the use of AI tools in education and the issue of academic integrity. He acknowledges that students have always found ways to cheat, and AI tools are just another method they can use. Jack argues that the responsibility to prevent cheating lies with teachers, parents, and society.

Ethics

.: Provocations for Balance

➜ If AI can simulate and generate bespoke virtual worlds, will virtual worlds seem more perfect than ours? Could people withdraw more from imperfect real life into flawless AI-generated worlds?

➜ Will family bonds weaken if AI tutors know our children better than parents? Could children become more attached to their perfectly patient AI tutor than imperfect human parents?

➜ If AI expression surpasses humans, and machines write songs stirring our souls more than any poet could, does this sever an essential human connection to art? Will the last strummed guitar be displayed in an “Obsolete Creativity” museum exhibit?

These were inspired by some of the topics this week, and I deliberately dialled the level of provocation up towards a Black Mirror setting.

:. .:

Which topic would you like to see featured in a future issue of Promptcraft?

(Click on your choice below)

❤️ The State of Companionship AI

🛠️ How to design your own chatbot

🪞 How AI Is a Mirror to Our Humanity

🦋 AI Augmented Feedback and Critique

🛡️ Walled Gardens – Student Safe Chatbots

.: :.

Questions, comments or suggestions? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett