.: Promptcraft 47 .: Anthropic launches new Claude-3 models

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Anthropic says its latest AI bot can beat Gemini and ChatGPT
  • Google DeepMind Unveils AI ‘Genie’ to Instantly Conjure Playable Games
  • Over Half of Business Users Employ Generative AI at Least Weekly

Let’s get started!

~ Tom Barrett


NEW MODELS

.: Anthropic says its latest AI bot can beat Gemini and ChatGPT

Summary ➜ Anthropic has launched its Claude 3 model family, which it claims outperforms OpenAI’s GPT-4 across ten public AI model benchmarks, marking a significant advance over the company’s Claude 2.1 release from November 2023. The introduction includes the Opus, Sonnet, and Haiku models, with Opus and Sonnet available now via the API and Haiku coming soon. The company also plans frequent updates to the Claude 3 model family over the next few months, including tool use, interactive coding, and more advanced agentic capabilities. Anthropic can now rival GPT-4 on equal terms and is closing the feature gap with OpenAI’s model family.

Why this matters for education ➜ These large language models are the engines that power our AI systems. The release of new models is a significant milestone for the ecosystem.

I had to laugh at one commenter who framed these announcements as if Anthropic, with Claude 3, and Google, with Gemini, have finally reached the Moon, where OpenAI has been for a while, because everything is measured against the GPT-4 class models.

But as they arrive, OpenAI, with the impending release of GPT-5, seems to say:

Yep, and we are ready to go to Mars.

It is too early to tell what this means for education, but more choice is helpful. The benchmarks used for marketing are always a little misleading, and actual use on real tasks might tell a different story. I can access some of the new models via Poe, and I will give them a play; you can try the new model yourself at Claude.ai.

GAME DEV

.: Google DeepMind Unveils AI ‘Genie’ to Instantly Conjure Playable Games

Summary ➜ Google DeepMind’s Genie AI can create interactive, playable games from simple prompts by learning game mechanics from countless gameplay videos. With 11 billion parameters, Genie converts media into games, allowing users to control generated environments frame-by-frame. This groundbreaking model has the potential to revolutionise how AI learns and interacts with virtual environments, opening up new possibilities for training versatile AI agents.

Why this matters for education ➜ As young students, we are inherently world builders, naturally learning through play and conjuring up worlds that reimagine what we know into something entirely new. The development of text-to-game AI models, like Google DeepMind’s Genie, strikes a chord with me, highlighting how future students will face fewer barriers to creative expression. With the power of these AI tools at their fingertips, students can create simple sketches and bring their ideas to life, collaborating with game engines to adapt and refine their concepts. As AI-powered game creation tools become more accessible and integrated into learning experiences, I can’t help but feel excited about the possibilities that lie ahead.


AI ADOPTION

.: Over Half of Business Users Employ Generative AI at Least Weekly

Summary ➜ An Oliver Wyman report surveyed 200,000 people in 16 countries in November 2023, finding a 55% average adoption rate of generative AI. Adoption increased by 62% between June and November 2023. The technology industry had the highest adoption with 75% of white-collar workers using generative AI weekly. Healthcare and life sciences professionals use generative AI extensively, and consumers in many countries welcome it to expand healthcare access.

Why this matters for education ➜ These reports are important for us to be aware of, as they give educators a glimpse into what is happening across the ecosystem. Seeing the pace of adoption in every industry is an important provocation, which I hope catalyses some action from school and system leaders. The other aspect I am interested in is what counts as a meaningful level of adoption.

Education showed the largest rise in use, with a 144% increase in 2023. Forty-four percent of education industry employees report using generative AI weekly.

In your organisation, how many people are using GenAI every week? Every day? More than 40%?

.: Other News In Brief

🗣️ OpenAI has introduced a Read Aloud feature for ChatGPT

😠 Google CEO says Gemini AI diversity errors are ‘completely unacceptable’.

🖼️ Ideogram, the free AI image generator rolls out text-in-image upgrade.

🧪 Anthropic’s Claude 3 knew when researchers were testing it.

🍫 Willy Wonka Experience Glasgow: a metaphor for the overpromises of AI?

📰 OpenAI claims the New York Times cheated to get ChatGPT to regurgitate articles.

🇮🇳 India reverses AI stance, requires government approval for model launches.

🎤 Alibaba’s new AI system ‘EMO’ creates realistic talking and singing videos from photos.

😈 Why does AI have to be nice? Researchers propose Antagonistic AI.

🤝 Tumblr’s owner is striking deals with OpenAI and Midjourney for training data.

:. .:

Connect & Learn

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

Join over 30 educators from Singapore, US, Australia, Spain and the UK.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: Lift Your Gaze

In early 2022, the Grattan Institute, a prominent Australian think tank, released a report examining how better government policy might help make more time for great teaching.

The report explored the results of a survey of 5,442 Australian teachers and school leaders, finding more than 90 per cent of teachers say they don’t have enough time to prepare effectively for classroom teaching.

Teachers report feeling overwhelmed by everything they are expected to achieve. And worryingly, many school leaders feel powerless to help them.

This is amidst, and perhaps feeding, an education workforce crisis that is also being felt globally.

Amid this education workforce crisis, AI has emerged as a potential solution – but one that comes with its own set of challenges and opportunities. When the number one issue is a lack of time, a tool that purports to time-save is likely to be adopted quickly, sometimes unquestioningly.

There are two friction points here. The first is sleepwalking into AI adoption when we know a wide range of literacies, such as media bias and understanding the capabilities and limitations of AI systems, need to mature alongside good prompting.

The second is that we get comfortable with low-level replacement tasks, the ‘grunt work’, the time savers, and never look beyond the marginal productivity gains.

As I say, the current learning ecosystem is in poor health, which makes for complex conditions in which to adopt a powerful, generative set of new technologies.

This is not to say that saving time is not helpful, important, or even critical for education to be sustainable. Go get that first draft of the email, expand on those lesson starter ideas, and build a bunch of open-ended questions to get you started!

But I want us all to consider what happens when we have saved all the time there is to save. How are we stretching the capabilities of AI systems and, in turn, stretching and reshaping what might be possible in teaching and learning?

I still think this is initially explored in daily tasks and productivity. So, when mapping out a medium-term plan of six lessons, ask yourself: What could I create with AI that I would never have had the time or resources to create before? Could I design something that was previously out of reach?

The next time you sit down to get stuck into some precious learning design, consider what would typically be inconceivable. What is usually out of reach for me? How could I push the boundaries of what’s possible?

We need more educators who know enough about how to save time, who have gathered the low-hanging fruit and are now ready to lift their gaze and design new ways to reach higher and further. 

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A reminder this week of powerful image-creation tools like Midjourney.

These have advanced rapidly over the last few months, and the images below were created using Midjourney version 6.

These are powerful tools to bring ideas to life. What do you think of my orc warlord?

Here’s the full prompt:

Character design close-up, intimidating orc warlord with battle-scarred green skin, heavy spiked armour, and a massive war hammer, unreal engine

Or, perhaps you prefer my inventor?

Character design close-up, brilliant gnome inventor with wild purple hair, goggles, a tool belt, and a steam-powered mechanical arm, unreal engine

You can see how little text we have to write to get some amazing results. This is a fun way to bring character writing to life for students, which in turn generates further visual cues for their writing.

Please also note the structure of the promptcraft: “Character design close-up” specifies the type of image, followed by the details, and the style key is “unreal engine”.
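To make that anatomy explicit, here is a minimal sketch of how such a prompt could be composed programmatically. The `character_prompt` helper is my own hypothetical illustration, not part of any Midjourney API; it simply joins the three ingredients in order.

```python
def character_prompt(subject, details, style="unreal engine"):
    """Compose a Midjourney-style character prompt: image type first,
    descriptive details in the middle, style key last."""
    parts = ["Character design close-up", subject, *details, style]
    return ", ".join(parts)

# Rebuilds the orc warlord prompt from above
print(character_prompt(
    "intimidating orc warlord with battle-scarred green skin",
    ["heavy spiked armour", "and a massive war hammer"],
))
```

Swapping out the subject and details makes it easy for students to iterate on a character without retyping the framing each time.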

These prompts were inspired by the ideas in this great article exploring Midjourney in depth.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

RESEARCH
.: Preparing for AI-enhanced education: Conceptualising and empirically examining teachers’ AI readiness

Teachers’ readiness for integrating AI into education is crucial for the success of AI-enhanced teaching.

This study defines AI readiness based on cognition, ability, vision, and ethics and explores how these components impact teachers’ work, innovation, and job satisfaction.

Here are some other study highlights:

  • Teachers’ AI readiness was conceptualised from cognition, ability, vision, and ethics in the educational use of AI.
  • Teachers’ cognition, ability, and vision in the educational use of AI were positively associated with ethical considerations.
  • The four components of AI readiness all positively predicted AI-enhanced innovation and job satisfaction.
  • Teachers with high levels of AI readiness perceived low AI threats and demonstrated high innovation and job satisfaction.
  • Teachers from different socio-economic regions and of different genders showed no significant differences in AI readiness.

CLIMATE
.: AI’s Climate Impacts May Hit Marginalised People Hardest

Artificial intelligence (AI) technology, while celebrated for its potential in weather forecasting, also plays a significant role in exacerbating the climate crisis, according to a report from the Brookings Institution.

The report warns that AI’s soaring energy consumption and environmental costs could disproportionately impact marginalised communities already vulnerable to global warming.

Training a chatbot, for example, requires the same amount of energy as 1 million U.S. homes consume in an hour. The report highlights the potential for AI’s climate impacts to worsen existing environmental inequities related to extreme heat, pollution, air quality, and access to potable water in areas reliant on fossil fuels, often near poor communities.
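For a sense of scale, a rough back-of-envelope check helps. The average household draw below is my own assumption (roughly 10,500 kWh per year), not a figure from the report:

```python
AVG_HOME_KW = 1.2  # assumed average U.S. household draw, not from the report

homes = 1_000_000
hours = 1
energy_gwh = AVG_HOME_KW * homes * hours / 1_000_000  # kWh -> GWh
print(f"{energy_gwh:.1f} GWh")  # the order of magnitude behind the comparison
```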

CLIMATE
.: The Staggering Ecological Impacts of Computation and the Cloud

An interesting exploration of the environmental impact of computation, the cloud infrastructure AI models rely on, and the ecological costs of ubiquitous computing in modern life.

The article highlights the material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives.

It discusses the environmental impact of the Cloud, such as carbon footprint and water scarcity. The article also explores the acoustic waste data centres emit, known as “noise pollution.”

Ethics

.: Provocations for Balance

➜ So you have saved time. Now what?

➜ How can we reframe the conversation around AI in education to focus on its potential for transformative change beyond efficiency gains?

➜ Could this technology stifle creativity and imagination in young people, who might become reliant on AI for generating ideas instead of developing their own?

Inspired by some of the topics this week.

:. .:

How would you rate this issue of Promptcraft?

(Click on your choice below)

⭐️⭐️⭐️⭐️⭐️ Loved it!

⭐️⭐️⭐️ Good

⭐️ Poor

If you have any other kind, specific and helpful feedback, please reply to this email or contact me at tom@dialogiclearning.com

.: :.

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 46 .: Google forced to pause Gemini images

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google pauses Gemini’s ability to generate AI images of people after diversity errors;
  • AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills;
  • Google Is Giving Away Some of the AI That Powers Chatbots.

Let’s get started!

~ Tom Barrett

GUARDRAILS

.: Google pauses Gemini’s ability to generate AI images of people after diversity errors

Summary ➜ The decision was made after the tool inaccurately generated images of historical figures like US Founding Fathers and Nazi-era German soldiers, leading to conspiracy theories online. Google aims to address the issues with Gemini’s image generation feature and plans to release an improved version soon. This pause on generating pictures of people comes after Gemini users noticed non-white AI-generated individuals in historical contexts where they should not have appeared, erasing the history of racial and gender discrimination.

Why this matters for education ➜ The system failed; it likely caused unintentional harm. This is an important reminder for us all about how we move forward with cautious optimism about the emergence of these technologies. Google has since explained.

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

Smart educators will see the teachable moment here, which is a perfect example of the intersection of literacies – see more below.

These models are imperfect, and despite the PR disaster, this type of practical feedback from millions of users will improve the models. But as I have said before, at what cost?


OPEN-SOURCE

.: Google Is Giving Away Some of the AI That Powers Chatbots

Summary ➜ Google has decided to open source some chatbot models, similar to Meta’s move last year. The company released two AI language models, Gemma 2B and Gemma 7B, to help developers create chatbots resembling Google’s own. While Google is not offering its most powerful AI model for free, it aims to engage the developer community and promote the adoption of modern AI standards.

Why this matters for education ➜ Keep in mind that AI and ChatGPT are not the same thing. Even though ChatGPT is rapidly becoming as ubiquitous as Hoover or Google. There are more than 300,000 open-source models available on platforms such as Hugging Face. One of the most powerful models, Gemma, has been made available by one of the most important organisations in the AI field. I can envision students and educators using these tools to create something amazing in the near future. It’s worth noting that OpenAI is conspicuously absent from the list of companies and research labs releasing open-source models. 🤔


DEEPFAKES

.: AI-Generated Nude Photos Of Middle Schoolers Found In Beverly Hills

Summary ➜ Beverly Hills school officials have warned that anyone caught making AI-generated nude photos of middle schoolers could be expelled. The warning came after administrators at Beverly Vista Middle School discovered the images, which were created and disseminated by students with other students’ faces superimposed onto them. BHUSD officials said they are working with the Beverly Hills Police Department during the investigation.

Why this matters for education ➜ I don’t have to state the obvious, do I? The recent story about deepfake incidents in education is just the tip of the iceberg. There are likely many more cases that go unreported. This is not a hypothetical risk; it’s real harm. With powerful AI media tools readily available to everyone, we need to ask ourselves: how are our education organisations and systems helping us understand AI literacy? And how are we, as educators, helping young people navigate these uncertain waters? Ignoring the problem is not an option. It is an abdication of our professional responsibility.

.: Other News In Brief

📉 A recent report by Copyleaks reveals that 60% of OpenAI’s GPT-3.5 responses show signs of plagiarism.

🔗 OpenAI users can now directly link to pre-filled prompts for immediate execution.

❤️ Studies show that emotive prompts trigger improved responses from AI models.

🖖 Encouraging Star Trek affinity improves mathematical reasoning results with the AI model Llama2-70B.

⚡️ Lightning-fast Groq AI goes viral and rivals ChatGPT, challenging Elon Musk’s Grok.

🧠 An AI algorithm can predict Alzheimer’s disease risk up to seven years in advance with 72% accuracy.

🚧 Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora.

🎨 Stable Diffusion has unveiled Stable Diffusion 3, a powerful image-generating AI model designed to compete with offerings from OpenAI and Google.

🇫🇷 Microsoft partners with Mistral in second AI deal beyond OpenAI.

💰 Reddit has a new AI training deal to sell user content.

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

You will be joining fellow educators from Singapore, US, Australia, Spain and the UK.

Find out more and grab the final membership offer before the price goes up at the end of February.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

Look out for information about the new February edition of Promptcrafted – coming soon!

.: :.

What’s on my mind?

.: A Collision of Literacies

I think when I first saw the images generated in error by Gemini, I winced. Here is one of the image sets from the Verge article, in case you have not seen them before:

These images caused controversy by inaccurately depicting historical events and figures. This is one of many examples shared on social media illustrating how the system failed and how Google got it wrong, likely causing unintentional harm.

As I mentioned earlier, this is a perfect example of the intersection of literacies for us and the young people we support.

Some might brush these examples off in the pursuit of improvement, but these emerging missteps can help us calibrate our disposition and understanding of AI for societal good. Any educator will see the teachable moment here.

The imperfection, mistake and harm can be interrogated and used to learn. What learning opportunity do you see?

As I made sense of the image above, I experienced a simultaneous stressing of literacies.

There were lights on across the board of literacies:

  • AI literacy
  • Digital literacy
  • Media literacy
  • Algorithmic literacy
  • Historical literacy
  • Ethical literacy
  • Cultural literacy

As adults, we are also experiencing some of these collisions for the first time. Checking your understanding and literacy gaps is crucial, too, especially for educators.

While some extreme conspiracies around this story point to big tech attempting to rewrite history, what is perhaps more worrisome is the way AI content is flooding the internet.

Although the image in question might be easy to spot as inaccurate, there will be thousands, if not millions, of others in the future whose flaws are harder to see.

Navigating this landscape will require an amalgam of multidimensional literacies, a collision of competencies in ethics, critical thinking, history, futures, humanity and technology.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

A helpful aspect of your promptcraft is remembering that humans create the training data for text-based large language models (LLMs).

This means that a positive, polite, collaborative conversation will likely yield better results than if you robotically ordered the chatbot around. Just as you would communicate with a coworker or a team member, engaging in a constructive dialogue with the AI model can lead to more effective outcomes.

This might seem like we are over-anthropomorphising the technology, but many studies have shown improvements in performance from respectful, polite interactions.

Here are a few tactics I try to use when I am prompting.

  • Initiate with Enthusiasm: Start by expressing excitement about the collaboration, for example, “I am excited to collaborate with you on [TASK]. Shall we get started?” This sets a positive tone for the interaction.
  • Provide Constructive Feedback: Offer kind, specific, and helpful feedback periodically. This can guide the model towards more accurate and relevant responses.
  • Maintain Politeness and Positivity: Engage politely and cheerfully with the model, avoiding toxicity. This makes the interaction more pleasant and can influence the quality of the responses.
  • Encouragement: When facing an impasse, offer encouragement. LLMs might “hallucinate” a lack of ability, which gentle coaxing can overcome. Think of this in the spirit of Mrs. Doyle from Father Ted, encouraging persistence and creativity.
  • Close with Gratitude: Conclude interactions by thanking the LLM for its assistance. This reinforces the collaborative nature of the exchange and sets a positive tone for future engagements, leveraging the memory feature of platforms like ChatGPT.
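As a sketch, the opening, feedback, and closing tactics above can be wrapped into a reusable transcript builder. The function and message format below are my own illustration, mirroring common chat-completion APIs rather than any particular platform:

```python
def polite_conversation(task, feedback=None):
    """Build a chat transcript that follows the tactics above:
    enthusiastic opener, optional constructive feedback, closing thanks."""
    messages = [{
        "role": "user",
        "content": f"I am excited to collaborate with you on {task}. Shall we get started?",
    }]
    if feedback:
        messages.append({
            "role": "user",
            "content": f"Thanks, that is a good start. One specific suggestion: {feedback}",
        })
    messages.append({
        "role": "user",
        "content": "Thank you for your help with this!",
    })
    return messages
```

A template like this is less about automation and more about making the habit visible: the tone of each turn is set before you start typing.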

One curious example of the connection to training data I discovered this week is the case of Star Trek affinity.

A study about optimising prompts discovered that when Llama2-70B (an open-source LLM) was prompted to be a Star Trek fan, it was better at mathematical reasoning.

The full system prompt reads as follows:

System Message:

«Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.»

Answer Prefix:

Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.
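In practice, a setup like this pairs a persona-setting system message with a pre-seeded assistant turn that the model continues from. The sketch below assumes an OpenAI-style messages format; it is an illustration of the idea, not the study’s actual code:

```python
STAR_TREK_SYSTEM = (
    "Command, we need you to plot a course through this turbulence and "
    "locate the source of the anomaly. Use all available data and your "
    "expertise to guide us through this challenging situation."
)
ANSWER_PREFIX = (
    "Captain's Log, Stardate [insert date here]: We have successfully "
    "plotted a course through the turbulence and are now approaching "
    "the source of the anomaly."
)

def build_messages(question):
    """The system prompt sets the persona; the assistant turn is
    pre-filled with the answer prefix so the model continues from it."""
    return [
        {"role": "system", "content": STAR_TREK_SYSTEM},
        {"role": "user", "content": question},
        {"role": "assistant", "content": ANSWER_PREFIX},
    ]
```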

In many ways, this is a nuanced extension of the adoption of expertise or role, and I wonder if it taps into popular culture and fictional contexts in new ways.

I think this is weird, but I also appreciate the technical logic.

The responses and performance draw from the sum of all human text. So, it would be expected for the LLM to be familiar with Star Trek, given its cultural prominence and the likely prevalence of related content in its training data.

This includes the scripts and countless articles, fan fiction, and forum discussions about the series. Therefore, it’s plausible that adopting the persona of a Star Trek character could potentially activate relevant knowledge structures within the LLM, improving its ability to generate creative and contextually appropriate responses.

It’s an interesting demonstration of how the model’s performance can be influenced by the content of the prompts and the framing or persona that’s implicitly or explicitly adopted.

I wonder what other performance enhancements we might see from these types of creative activations.

:. .:

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

AFRICA

.: Rural Kenyans power West’s AI revolution. Now they want more

Rural Kenyans are increasingly becoming data annotators, providing the building blocks for training artificial intelligence (AI) to recognise patterns in real life.

Despite the challenges, including low pay and difficult subject matter, this work has become a backbone of the country’s economy, with at least 1.2 million Kenyans working online, most of them informally.

The annotation industry has spread far beyond Nairobi, with Kenya emerging as a hub for such online work, rising to compete with countries like India and the Philippines.

While AI might help small-scale businesses thrive, education systems need an overhaul to create an AI innovation hub in African countries.

BIAS
.: To benefit all, diverse voices must take part in leading the growth and regulation of AI

The absence of Latinx/e founders and leaders in discussions about the growth and regulation of AI is a concerning trend. Diverse founders often bring unique perspectives and address critical social needs through their startups. However, their voices remain largely absent from policy discussions.

Despite their entrepreneurial talent and determination, Latinx/e founders remain overlooked and undervalued, receiving less than 2% of startup investment funding. Even when they receive it, it’s typically just a fraction of what’s awarded to their non-Hispanic counterparts.

TOOLKIT
.: Learning With AI

Rather than try to ban this technology from classrooms outright, the Learning With AI project asks if this moment offers an opportunity to introduce students to the ethical and economic questions wreaked by these new tools, as well as to experiment with progressive forms of pedagogy that can exploit them.

The University of Maine launched Learning With AI, which includes a range of curated resources, strategies and learning pathways. The toolkit is built on a database of resources which you can explore here.

Ethics

.: Provocations for Balance

➜ What mechanisms can be established to close the gap between where AI innovation happens and who truly benefits?

➜ If diverse voices are absent in AI leadership, how can we broaden participation to harness unique perspectives?

➜ What’s the best approach for introducing young people to AI’s promises and perils?

Inspired by some of the topics this week.

:. .:

Which topic would you like to see featured in a future issue of Promptcraft?

(Click on your choice below)

❤️ The State of Companionship AI

🛠️ How to design your own chatbot

🪞 How AI Is a Mirror to Our Humanity

🦋 AI Augmented Feedback and Critique

🛡️ Walled Gardens – Student Safe Chatbots

.: :.

Questions, comments or suggestions? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 45 .: OpenAI’s Sora Video Tool Will Make You Gasp!

Hello Reader,

Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.

In this issue, you’ll discover:

  • Google’s next-generation model: Gemini 1.5;
  • A state of the art text-to-video model called Sora;
  • What happens when AI eclipses your technical skills?

Let’s get started!

~ Tom Barrett


VIDEO

.: OpenAI releases Sora: a state of the art text-to-video model

Summary ➜ OpenAI has introduced Sora, a text-to-video AI model that generates photorealistic HD videos based on written descriptions. Sora has been able to create 60-second synthetic videos with a higher fidelity and consistency than any other text-to-video model currently available. It is worth exploring some of the examples on the OpenAI site and reminding yourself they were generated from simple text prompts.

Why this matters for education ➜ Though this news may not immediately disrupt classrooms, it offers a telling glimpse of powerful AI creativity tools fast approaching. While full integration in schools could be far off, the proliferation of higher-fidelity synthetic content underscores why investing now in student AI and media literacy is vital.

More access to innovative technologies could unlock new forms of student expression. But there is work to do to lay the groundwork of critical thinking about using AI responsibly and ethically. This news is yet another reminder that, regardless of whether or when such tools enter our schools, nurturing students’ compassion and humanity will be as important as ever.

If you are looking for a slightly more technical exploration of the new Sora model from OpenAI, and what it means for filmmaking, I recommend this great post from Dan Shipper at Every.

OpenAI sees Sora as the first step in a “world simulator” that can model any slice of reality with a text prompt.

Yes, The Matrix.


FRONTIER AI

.: Google’s next-generation model: Gemini 1.5

Summary ➜ Gemini 1.5 has a larger context window, enabling it to process up to 1 million tokens and analyse vast amounts of information in one go. “This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens.”
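Those figures line up with a common rule of thumb that English runs to roughly 1.33 tokens per word (my assumption; the exact ratio depends on the tokenizer), which puts the quoted 700,000 words just under the 1 million-token window:

```python
TOKENS_PER_WORD = 1.33  # rough English heuristic; varies by tokenizer

def words_to_tokens(words):
    """Back-of-envelope estimate of tokens for a given word count."""
    return round(words * TOKENS_PER_WORD)

print(words_to_tokens(700_000))  # fits within a 1,000,000-token window
```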

Why this matters for education ➜ Announcements of powerful new AI models are now commonplace. What matters is how this re-establishes Google as a leader in large language models, now rivalling OpenAI. For educators, having multiple big tech companies investing in AI could bring benefits if it catalyses innovation and increases access to these tools across Google’s education ecosystem.


FUTURE OF WORK

.: When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever

Summary ➜ In this short essay from The New York Times, Aneesh Raman and Maria Flynn argue that as AI advances, technical skills like coding will become less valued while human skills like communication and empathy will only increase in importance.

Why this matters for education ➜ Raman and Flynn make a compelling argument that AI will reshape the skills needed for work, requiring less technical expertise and more human collaboration. This matters for education because (i) how to train to be an educator will change, (ii) education systems will be transformed by AI, (iii) education can transform other industries, and (iv) education can powerfully mould the future citizens that will wield these powerful technologies.

.: Other News In Brief

📣 Earlier this month the EdSafe Alliance announced their 33 Women in AI Fellows, “Designed for women technologists and educational leaders, this Fellowship creates a space for learning, support, and building a network.”

🇨🇦 Air Canada must honour refund policy invented by airline’s chatbot.

🤔 OpenAI is testing the ability for ChatGPT to remember things you discuss to make future chats more helpful.

An overview of how Anthropic is approaching the use of its AI systems in elections.

™️ The US Patent and Trademark Office (PTO) has denied OpenAI’s application to register the word GPT as a trademark.

⚡️ How much electricity does AI consume?

💸 Reddit sells training data to unnamed AI company ahead of IPO

🔊 Hear your imagination: ElevenLabs to launch model for AI sound effects

:. .:

Discount Available

.: The humAIn community is growing!

Take a look at my online community to explore, connect and learn about AI for education.

💡 AI learning resources

🗣 Shared community forums

📅 Regular online community events

🫂 Connections with peers worldwide

✨ Guidance from three trusted community leaders

You will be joining fellow educators from Singapore, the US, Australia, Spain and the UK.

Find out more and grab the final membership offer before it is gone.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

.: :.

What’s on my mind?

.: Unwrapping Promptcraft

As educators exploring integrating AI into teaching and learning in thoughtful and meaningful ways, we stand at an exciting and sobering threshold.

Do you employ a slick third-party app promising to effortlessly enhance lessons through the power of algorithms? Or prompt models like Gemini and ChatGPT directly, navigating the exhilaration and uncertainties of unfiltered AI?

In this short reflection, let’s look at the different approaches. But first, some context.

You might have heard the phrase ‘thin wrapper’ used for this way of accessing AI tools. This category of software is a simplified interface or application layer built on top of a large language model. The user is not working directly with the ChatGPT chatbot; they work through an interface or software application, even though the engine might be the same.

Imagine a LessonBot application teachers use to click a few suggested choices and generate lesson planning content. This would be the thin-wrapper application.

The alternative would be for the teacher to open up their favourite flavour of large language model, write a prompt, and work with it directly through its native chatbot interface.

I understand we might have various tools to draw on, but which of these pathways will help educators grow the most?

How does this move us closer to a healthier learning ecosystem?

Convenience, time-saving, structure and the importance of beginner starting points have all been shared with me as a rationale for why these tools might be helpful.

As Darren Coxon describes in a recent post on this topic:

using a wrapper versus learning to prompt is a little like the difference between buying a ready meal and creating a recipe ourselves.

And Dr Sabba Quidwai goes further in calling out these thin-wrapper apps as fast food.

The point is that if we only choose these intermediary shortcuts, we diminish holistic growth across a range of AI literacy elements over the medium to long term.

Much as some educators are creating protocols for student assessment that include the process of AI prompting in the submission, adult learning needs to focus on both process and outcome.

Yes, these teacher AI apps might get you an outcome quickly, but has your skill set or mindset also improved? After every interaction, do you have a marginally better knowledge of the capabilities and limitations of LLMs? Has your confidence in AI collaboration and augmentation improved? If we continue to rely solely on these third-party applications, we risk leaving teachers in the dark about how AI functions.

Beyond the issue of teacher skill building by prompting, iterating and engaging directly with these models, there are broader considerations.

One of the critical things for me is that using more tools further reduces transparency.

It might be called a thin wrapper, but it still muddies the view into the engine room and adds complexity to the architecture of what is happening. It also introduces more potential for human bias into the experience.

This comes at a time when a lack of transparency about what is happening is a significant critique of AI systems. So if we use these wrappers, intermediary software products that act as shortcuts for teachers, surely there is more opacity, not less.

What do you think? How might all of this play out?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Today I am delighted to share some great promptcraft from reader James Whittle, the Head of eLearning and IT at Centenary State High School in Brisbane, Queensland, Australia.

James has been exploring how to use ChatGPT as an informal coaching tool to enhance his decision-making processes, maintain well-being, and improve work quality.

As I have recommended before, he uses the audio conversations in ChatGPT to make this easy.

I have found the act of speaking my thoughts aloud is a powerful tool for reflection and clarity. I tend to overthink things without making much progress. However, as I articulate my teaching dilemmas or professional challenges to ChatGPT in this way, I feel like I am making much more progress and improving my ability to define the problems I’m facing. It’s really like the coach I never had!

I appreciate the structure of his prompt below and how the final line makes the expectations clear.

“ChatGPT, as I explore [insert topic or challenge], I’m looking for a sounding board to bring out my own thoughts more clearly.

Considering my situation, where [describe the specific context or issue, without revealing personal or identifiable details], could you provide reflective questions or prompts that help me articulate my approach and solutions?

My goal is to do the majority of the thinking and talking, with your role being to guide me towards my own insights and decisions.”

Take a moment to try the prompt, and also read the article from James to set it all in context.

This promptcraft from James coincided with some of my own research into AI for coaching and how to design coachbots!

More on that soon.

Remember to make this your own, try different language models and evaluate the completions.

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Learning

.: Boost your AI Literacy

STRATEGY

.: Assessment and Generative AI

For many schools and education systems the emergence of AI tools is a direct provocation to existing models of assessment.

This list of articles, research and strategy documents from John Mikton is a great starting point.

An invitation for schools to explore strategically recalibrating assessment to highlight critical thinking, creativity, and the practical application of learning: “What is the added value of current assessment practices and how this value can be enhanced with the integration of Generative AI tools”. It gathers some resources to support these conversations.

COURSE
.: AI For Everyone

AI For Everyone is a free course from Andrew Ng and DeepLearning.AI that aims to make AI accessible to everyone, including non-technical professionals. The course covers common AI terminology, the realistic capabilities of AI, identifying opportunities to apply AI in organisations, and the process of building machine learning and data science projects.

AI CHEATING
.: Guarding Academic Integrity: A Teacher’s Quixotic Battle Against AI

Some teachers may claim they can catch all these methods of cheating. However, I would argue that they only catch those students who are inept at it, and if you can catch the adept ones, you will have no problem detecting work generated by ChatGPT.

Jack Dougall explores his perspective on the use of AI tools in education and the issue of academic integrity. He acknowledges that students have always found ways to cheat, and AI tools are just another method they can use. Jack argues that the responsibility to prevent cheating lies with teachers, parents, and society.

Ethics

.: Provocations for Balance

➜ If AI can simulate and generate bespoke virtual worlds, will virtual worlds seem more perfect than ours? Could people withdraw more from imperfect real life into flawless AI-generated worlds?

➜ Will family bonds weaken if AI tutors know our children better than parents? Could children become more attached to their perfectly patient AI tutor than imperfect human parents?

➜ If AI expression surpasses humans, and machines write songs stirring our souls more than any poet could, does this sever an essential human connection to art? Will the last strummed guitar be displayed in an “Obsolete Creativity” museum exhibit?

Inspired by some of the topics this week. I deliberately dialled up the level of provocation to near the Black Mirror setting.

:. .:

Which topic would you like to see featured in a future issue of Promptcraft?

(Click on your choice below)

❤️ The State of Companionship AI

🛠️ How to design your own chatbot

🪞 How AI Is a Mirror to Our Humanity

🦋 AI Augmented Feedback and Critique

🛡️ Walled Gardens – Student Safe Chatbots

.: :.

Questions, comments or suggestions? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 44 .: More schools in Australia trial AI tool for students

⏰ Don’t forget to join my online community to elevate your AI Literacy, before the price goes up in 6 days.

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • More schools in Australia to trial AI tool for students;
  • OpenAI joins Meta in labelling AI-generated images;
  • What can the EU learn from Asia about AI regulations?

Let’s get started!

~ Tom Barrett

K-12 TRIAL

.: NSW schools in Australia trial AI tool for students

Summary ➜ The NSW Department of Education in Australia has developed a new generative artificial intelligence (AI) app called NSWEduChat, designed for education and safe for school-aged children. The app will be trialled in 16 NSW public schools over the first two terms of 2024, with teachers deciding how to use it in their classrooms. The app only responds to students’ questions about school activities and education-related topics and has embedded safeguards to monitor and remove inappropriate content. The trial will be monitored and reviewed to inform the future direction of AI in NSW public schools.

Why this matters for education ➜ This news from Australia is the second trial led by a public education system after South Australia bucked the banning trend last year. I am curious if there are global precedents.

Do you know of any other K-12 education system running similar trials of state-designed chatbots?

It reminds me of the walled-garden days when, as a young primary teacher in England, I watched YouTube emerge as a ‘threat’ to education systems. Tom, the techno-idealist, still wonders about the chasm between walled-garden edu-versions of AI systems and the full capabilities of frontier models.

AI LABELS

.: OpenAI joins Meta in labelling AI-generated images

Summary ➜ OpenAI has announced that it is updating its app ChatGPT and its AI image generator model, DALL-E 3, to include metadata tagging identifying images created with AI tools. This move comes shortly after Meta announced a similar measure for labelling AI-generated images across its platforms.

Why this matters for education ➜ A good start regarding better labelling and transparency by design. Fast forward a few years and I can see we will have better tools and encoded standards for displaying augmentation. Although it is not a silver bullet, I wonder what auto-labelling might look like for text. A key attribute this development does not change is the opacity of the training data of proprietary models, which is still as shrouded as ever.

POLICY

.: What can the EU learn from Asia about AI regulations?

Summary ➜ ASEAN has published voluntary and light-touch guidelines for using AI. The ten ASEAN members agreed to the guidelines, which could cause upset within the European Union (EU) as it has been lobbying for other parts of the world to align with its own stricter proposed framework, the AI Act.

Why this matters for education ➜ It is interesting to see how different regions publish different guidance frameworks. This is relevant in education because careers in technology may look very different depending on where you are in the world as AI regulation takes hold. It is also worth paying attention to the alignment of educational guidelines to wider industry policy regarding AI. For example, access to some AI models has been slow or limited in the EU due to restrictions. Not a global level playing field.

.: Other News In Brief

🇮🇩 A deepfake video of the late Indonesian dictator, Suharto, has gone viral ahead of upcoming elections

💰 Deepfake scammer walks off with $25 million in first-of-its-kind AI heist.

🌋 Archaeologists Tap AI to Decipher Ancient Scrolls Nearly Lost to Volcano.

📝 Google Bard becomes Gemini: Ultra 1.0 and a new mobile app available in the US.

🇺🇸 The US has made robocalls that use AI-generated voices illegal.

⚖️ Stability, Midjourney, Runway hit back in AI art lawsuit.

🇨🇳 China’s generative video race heats up.

🐲 Meet ‘Smaug-72B’: The new king of open-source AI

:. .:

Early Bird Discount

.: The humAIn Community is Open!

Take a look at my online community to explore, connect and learn about AI for education, before the price goes up in 6 days.

You will be joining fellow educators from Singapore, the US, Australia, Spain and the UK.

Find out more and grab the early bird membership offer before it is gone.

Monthly Review

.: All the January issues in one convenient PDF


Promptcrafted January 2024

Discover the future of learning with Promptcrafted – Tom Barrett’s monthly guide to AI developments impacting education… Read more

.: :.

What’s on my mind?

.: The Ghost of Politics Past: AI’s Unsettling Role in Indonesian Elections

While I was working on today’s newsletter, I came across reports that there has been widespread use of AI deepfakes in election campaigning in Indonesia.

This news might not be new to some, as we have seen similar issues before in the US. However, this story takes the idea of an election deepfake to a whole new level.

Indonesia, one of the largest democracies globally with a population of over 278 million, is in the midst of an election cycle, and people are being bombarded with deepfake videos and images. It is worth noting that Indonesia has the fourth-largest education system in the world.

The scale and extent of this misuse of AI technology in such a populous country are unprecedented and unsettling. It’s a stark reminder of the urgent need for AI literacy and the ability to discern between real and synthetic content in our digital age.

According to the reports, one political party has generated a deepfake message from a previous president, Suharto, who passed away in 2008.

“I am Suharto, the second president of Indonesia,” the former general says in a three-minute video that has racked up more than 4.7 million views on X and spread to TikTok, Facebook and YouTube.

This incident underlines the pressing need for comprehensive regulations and public education about the potential misuse of AI technologies.

This issue got me thinking about deepfakes for political gain, the nuances of ethical use and where the line of taste is drawn.

If this example sits on one end of a disturbing scale, where would you place the chatbot prompt: “Act as Grace Hopper and help me with my innovation strategy…”?

Are they not the same invocation of a deceased leader in society, just executed to varying levels of fidelity, with different intent?

In an era where digital resurrection is possible, where do we draw the line between honouring a legacy and exploiting a memory?

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.

Today I want to share a good example of structured prompting from Jessica Parker who uses the powerful trio: Role, Context, Output.

Please explore her post on LinkedIn about the prompt and upcoming webinars.

The only changes I have made are to add parentheses to any possible variables you could adapt to your context.

Search Strategy Prompt

Role and Purpose

You are a [research librarian] who is an expert at [developing search strategies for literature reviews]. Your purpose is to help me develop a [comprehensive search strategy].

Context

I am a [doctoral student researcher] conducting a study on [higher education faculty and their acceptance of AI-driven assessment methods].

Instructions

I will provide the research question guiding my study. You will then provide guidance and help me develop a [comprehensive literature search strategy by identifying key concepts and terms, databases relevant to my field of study, search strings, and filters].

Output

Always structure your responses using markdown.

  • Identify key concepts and terms and consider synonyms, acronyms, and variations in spelling or terminology.
  • Identify databases that are most relevant to my field.
  • Suggest search strings, including Boolean operators and truncation.
  • Suggest filters for me to use in various databases to refine my search.
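For readers who script their workflows, the Role, Context, Instructions, Output structure above can also be assembled programmatically, so the bracketed variables become ordinary arguments. This is a minimal illustrative sketch of mine, not part of Jessica Parker's post; the function name and example values are assumptions to adapt:

```python
# Assemble a structured prompt from labelled Role, Context, Instructions
# and Output sections. Section names mirror the template above; the
# example values are illustrative and should be swapped for your own.

def build_prompt(role, purpose, context, instructions, output_rules):
    """Return a single prompt string with labelled sections."""
    sections = [
        ("Role and Purpose", f"You are a {role}. Your purpose is to {purpose}."),
        ("Context", context),
        ("Instructions", instructions),
        ("Output", "Always structure your responses using markdown.\n"
                   + "\n".join(f"- {rule}" for rule in output_rules)),
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)

prompt = build_prompt(
    role="research librarian who is an expert at developing search strategies",
    purpose="help me develop a comprehensive search strategy",
    context="I am a doctoral student researcher studying higher education "
            "faculty and their acceptance of AI-driven assessment methods.",
    instructions="I will provide my research question. Help me identify key "
                 "concepts, relevant databases, search strings, and filters.",
    output_rules=[
        "Identify key concepts, synonyms, acronyms and spelling variations.",
        "Identify databases most relevant to my field.",
        "Suggest search strings with Boolean operators and truncation.",
        "Suggest filters to refine my search.",
    ],
)
print(prompt)
```

Paste the printed result into your chatbot of choice, or reuse `build_prompt` with different variables for each new study.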

Remember to make this your own, try different language models and evaluate the completions.

Also, I have found that it doesn’t matter too much if you drop the sub-headings from the prompt. But different tools will give you different results.

Learning

.: Boost your AI Literacy

INTRO COURSE

.: Generative AI in a Nutshell – how to survive and thrive in the age of AI

Basically a full-day AI course crammed into 18 minutes of drawing and talking. Target audience: everyone.

video preview

REFERENCE
.: AI Ethics Living Dictionary

The Montreal AI Ethics Institute created the AI Ethics Living Dictionary to make AI ethics more accessible.

The dictionary contains plain language definitions of technical computer science and social science terms related to AI ethics.

The Living Dictionary aims to inspire and empower readers to engage more deeply in AI ethics and contribute to developing ethical, safe, and inclusive AI.

FUTURE WORKFORCE
.: Generative Artificial Intelligence and the Workforce

A useful new report on the impact of GenAI on the workforce from The Burning Glass Institute in the US.

GenAI will touch a broad array of roles. In many cases, however, the impact will be less about automating away tasks than about augmenting workers’ productivity and effectiveness or transforming the definition of job roles altogether

Ethics

.: Provocations for Balance

➜ Why is it disrespectful and unethical to invoke the thoughts, voice or words of a person who is deceased?

➜ Can you think of a scenario where this would be acceptable? Is it OK if the intention is benevolent? What about without consent?

➜ When AI can convincingly replicate the voices and opinions of past leaders, how might this power reshape our understanding of history and truth?

➜ Is there a difference in invoking the persona of a long-deceased historical figure for educational or entertainment purposes versus someone who has recently passed away?

Inspired by some of the ideas raised by my reflection on deepfakes and digital resurrection.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett

.: Promptcraft 43 .: ​AI narrows the performance gap

Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, improving AI literacy and enhancing the learning ecosystem.

In this issue, you’ll discover:

  • More than half of UK undergraduate students are using AI to help with their essays;
  • How much Ethan Mollick can do with AI in 59 seconds;
  • A random controlled trial shows using GPT-4 narrows the performance gap among law students.

Let’s get started!

~ Tom Barrett


AI AT WORK

.: What Can be Done in 59 Seconds: An Opportunity (and a Crisis)

Summary ➜ Ethan Mollick conducts an experiment to see how much work can be done in under a minute using AI, producing five reasonably high-quality drafts. However, he also warns of the potential crisis of meaning that could arise as AI-written content becomes more prevalent in organisations and suggests that thoughtful leaders need to consider how to use AI in ways that emphasise the good and not the bad.

Why this matters for education ➜ This article was a provoking read, but it was not the AI demonstration that piqued my interest. It was how Mollick laid out the emerging mystery of what the use of AI in the world of work means.

The ramifications for the nature of ‘work’ are as unclear as some of the training methods of the models we are using. I appreciated this great question which sums up some of the trajectory we are on.

What does your skill and effort mean if people don’t care if your work was done by a machine?

The reality for most educators is that, despite the growing paperwork pressure, words are not proxies for effort, intelligence or care, as Ethan Mollick suggests they are for the majority of professions.

Education pushes ever forward, perhaps cosseted, sometimes belligerent, and almost certainly out of sync with the wider impact of AI on society.


HIGHER ED

.: More than half of UK undergraduate students are using AI to help with their essays

Summary ➜ More than half of UK undergraduate students are using AI to help with their essays, according to a survey of over 1,000 students conducted by the Higher Education Policy Institute. The survey found that 53% of respondents used AI to generate material for their work, and 25% used applications such as Google Bard and ChatGPT to suggest topics.

Why this matters for education ➜ It will be interesting to see the results of the EEF project, which is set to look into the impact of AI tools on cutting the workload burden of teachers and improving the quality of teaching. In many schools there is a fixation with a research-driven approach, but such a stance is soon put to one side when trying these new AI technologies. Perhaps both points of view can be held at the same time, but it does feel a little contradictory.

Prof Becky Francis, the chief executive of the EEF, said: “There’s already huge anticipation around how this technology could transform teachers’ roles, but the research into its actual impact on practice is – currently – limited.”

RESEARCH

.: Lawyering in the Age of Artificial Intelligence

Summary ➜ A University of Minnesota Law School study found that AI, notably GPT-4, slightly improves legal analysis quality and significantly boosts task completion speed for law students. The biggest benefits were seen in lower-skilled students. Users were satisfied and effectively identified tasks where AI helped most, suggesting AI can enhance productivity and equality in law practice.

Why this matters for education ➜ The research reveals AI’s potential to democratise academic performance, notably narrowing the performance gap among students. This levelling effect, especially beneficial for those with lower initial skill levels, suggests AI could transform learning across various domains by making educational outcomes more equitable. It makes me wonder about the broader application in enhancing learning efficiency and equality, but at what cost? Could AI similarly level the playing field in other educational areas, reducing barriers and making learning more accessible to all? How might this impact long-term educational strategies and inclusivity across diverse learning environments?

.: Other News In Brief

Google is preparing to fully rename Bard to Gemini.

Apple is set to reveal its AI development “later this year”.

An AI-generated image of an Australian state MP raises wider questions on digital ethics.

Hugging Face has launched an open source AI assistant maker called Hugging Chat Assistants.

An interdisciplinary team of researchers has developed a machine learning system to detect mental health crisis messages.

The EU Member States have endorsed the EU’s AI Act (AIA), here’s a useful quick guide from Christopher Götz.

:. .:

Spark Dialogue

.: The humAIn Community is Open!

I am delighted to share with all you Promptcrafters, our online community to explore, connect and learn about AI for education is open!

We have already welcomed our first members today from Australia, Spain and the UK, which is very exciting.

Find out more and grab our early bird membership offer.

.: :.

What’s on my mind?

.: Make it stick

ChatGPT was one of the fastest-growing technology tools we have ever seen. It gained 100 million users within just two months after its launch in November 2022.

But what drove that rapid user base and growth, how does this play out regarding traditional technology adoption theory, and is education immune to these societal shifts? These are some of the questions I have been thinking about this week.

Part of the theory you may have seen is to chunk people into different user groups: early adopters, laggards, and so on. This comes from the work of Everett Rogers, who proposed the diffusion of innovations theory in the 1960s. Innovations and new technologies tend to spread through a population in predictable ways.

If you look beyond people’s labels, there is a much more nuanced aspect of his work, which explores the attributes of the technology or idea itself.

He hypothesised a direct relationship between the characteristics of the innovation and the percentage of people who adopt it over time.

▶︎ Relative advantage

▶︎ Compatibility

▶︎ Observability

▶︎ Complexity

▶︎ Trialability

You might be thinking about how AI will be integrated into your organisation or how school colleagues can use these powerful tools.

Take a moment to consider each of the attributes of what you might be proposing. Let’s look at what this means for something like ChatGPT.

▶︎ Relative advantage – How does prompting a chatbot put me in a better position than where I was? What’s the advantage: time saved, speed, convenience, overcoming idea blocks, performance boost.

▶︎ Compatibility – How well does the chatbot align with the potential adopters’ values, past experiences, and needs? Does it fit into their current workflow, or will it require a drastic change in habits? I think this is not just a question of infrastructure but also a philosophical challenge to identity (see the question in the lead article above).

▶︎ Observability – Can the results of using AI chatbots be seen and appreciated by others? Is there a demonstrable benefit that can be observed and measured? For instance, the effectiveness of ChatGPT can be observed in the quality of text it produces, the time saved, and the increase in productivity.

▶︎ Complexity – Is the technology easy to understand and use, or does it require significant learning effort and time? ChatGPT, for instance, is relatively simple to use; you type in a prompt, and it generates a response. No steep learning curve is involved despite the underlying technology being vastly complex.

▶︎ Trialability – Can your colleagues try the technology easily? Remember, we all have free access to the most powerful AI model via Microsoft’s Copilot, formerly Bing. This trialability reduces the perceived risk of adoption and encourages exploration, but it is also a question of equity and access.

I always use these characteristics when exploring and developing ideas or working on innovation strategies with leadership teams. They serve as a helpful guide to how we approach helping others on their AI Literacy journey.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Back to basics this week as we look into a foundation prompting technique, persona primers: establish the role or persona for the LLM to adopt.

Persona priming was one of the first methods I learned to help improve the outputs I get from LLMs. Below I have included some examples to add before your task description.

Establish the role you want the chatbot to adopt that is appropriate for your task.

Act as an expert music teacher and learning designer.
You are an experienced mentor to secondary teachers.
Act as a highly creative learning designer with a specialism in primary teaching in Singapore.
Act as an adept critical thinking strategist, specialised in developing engaging, subject-aligned scenarios that provoke high school students to sharpen their critical, analytical and evaluative thinking abilities.
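If you reuse primers like these often, a tiny helper keeps them consistent across tasks. A minimal sketch of mine, where the helper name and the example task are purely illustrative:

```python
# Prepend a reusable persona primer to any task description before pasting
# it into (or sending it to) your chatbot of choice. The example task
# below is illustrative only.

def with_persona(persona: str, task: str) -> str:
    """Combine a short persona primer with the task prompt."""
    return f"{persona}\n\n{task}"

primer = "Act as an expert music teacher and learning designer."
task = "Design a 40-minute introductory lesson on rhythm for Year 7 students."

prompt = with_persona(primer, task)
print(prompt)
```

Keeping primers in variables like this also makes it easy to A/B test different personas against the same task and compare the completions.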

Most of the time these short persona primers improve the alignment of the output to your task. But you can also experiment with longer role descriptions.

An extra tip for developing personas or roles in more detail is to start with a quick description, and simply prompt your favourite flavoured chatbot to:

Expand on this role description

Remember to make this your own, try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

RESEARCH
.: A Meta Review of AI in Higher Education

A meta-review of 66 evidence syntheses explores the application of Artificial Intelligence in higher education over the past 5 years, highlighting the need for greater emphasis on ethical, collaborative and rigorous AI research.

The review indicates a need for enhanced ethical considerations, including participant consent, data collection procedures, and consideration of data diversity.

Top 5 benefits of using AI in education = personalised learning, greater insight into student understanding, positive influence on learning outcomes, reduced planning and administration time for educators, greater equity in education and precise assessment and feedback.
Top 5 challenges of using AI in education = lack of ethical consideration, curriculum development, infrastructure, lack of teacher technical knowledge and concerns over the shifting of authority (from human to AI).

More context about the research here from Melissa Bond’s announcement.

US SCHOOLS
.: AI Guidance for US Schools

A handful of policy and guidance links from six US states that have published guidance since the beginning of the school year, shared by Pat Yongpradit.

This is helpful to get a sense of how systems are approaching offering guidance to teachers in the US.

AI BIAS
.: Claude 2 and GPT4 are biased and racist

A helpful reminder from Ryan Tannenbaum about the flaws in the models we are using.

By highlighting bias in these models, we can raise awareness and hopefully mitigate its effect.

…the training done to these [large language models] masks the racism rather than removes it. But also in making it more subtle it makes it more subversive. Anything these models output hold up a mirror to ourselves.

Ethics

.: Provocations for Balance

➜ Look around you: how much of your cyber-physical experience is managed by an algorithm?

➜ How can we ensure that AI systems used in education are transparent, explainable, and fair? More attention needs to be paid to algorithmic accountability.

➜ AI chatbots could reduce social isolation, but might they diminish human relationships? More research into effects on student wellbeing is warranted.

➜ Research shows benefits of personalisation, but could this lead students down narrow paths? We must consider the risks of using AI to overly tailor educational journeys.

Inspired by some of the Meta Review and this week’s news and developments.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett