Hello Reader,

Promptcraft is a weekly AI-focused newsletter for education, designed to improve AI literacy and enhance the learning ecosystem.

In this issue, you’ll discover:

  • Why more than half of UK undergraduate students are using AI to help with their essays;
  • How much Ethan Mollick can do with AI in 59 seconds;
  • How a randomised controlled trial shows that using GPT-4 narrows the performance gap among law students.

Let’s get started!

~ Tom Barrett


AI AT WORK

.: What Can Be Done in 59 Seconds: An Opportunity (and a Crisis)

Summary ➜ Ethan Mollick conducts an experiment to see how much work can be done in under a minute using AI, producing five reasonably high-quality drafts. However, he also warns of the potential crisis of meaning that could arise as AI-written content becomes more prevalent in organisations and suggests that thoughtful leaders need to consider how to use AI in ways that emphasise the good and not the bad.

Why this matters for education ➜ This article was a thought-provoking read, but it was not the AI demonstration that piqued my interest. It was how Mollick laid out the emerging mystery of what the use of AI in the world of work means.

The ramifications for the nature of ‘work’ are as unclear as some of the training methods of the models we are using. I appreciated this great question, which sums up some of the trajectory we are on.

What does your skill and effort mean if people don’t care if your work was done by a machine?

The reality for most educators is that, despite the growing paperwork pressure, words are not the proxies for effort, intelligence or care that Ethan Mollick suggests they are in the majority of professions.

Education pushes ever forward, perhaps cosseted, sometimes belligerent and almost certainly out of sync with the wider impact of AI on society.


HIGHER ED

.: More than half of UK undergraduate students are using AI to help with their essays

Summary ➜ More than half of UK undergraduate students are using AI to help with their essays, according to a survey of over 1,000 students conducted by the Higher Education Policy Institute. The survey found that 53% of respondents used AI to generate material for their work, and 25% used applications such as Google Bard and ChatGPT to suggest topics.

Why this matters for education ➜ It will be interesting to see the results of the EEF project, which is set to look into the impact of AI tools on cutting teachers’ workload and improving the quality of teaching. In many schools there is a fixation on a research-driven approach, but such a stance is soon put to one side when trying these new AI technologies. Perhaps both points of view can be held at the same time, but it does feel a little contradictory.

Prof Becky Francis, the chief executive of the EEF, said: “There’s already huge anticipation around how this technology could transform teachers’ roles, but the research into its actual impact on practice is – currently – limited.”

RESEARCH

.: Lawyering in the Age of Artificial Intelligence

Summary ➜ A University of Minnesota Law School study found that AI, notably GPT-4, slightly improves legal analysis quality and significantly boosts task completion speed for law students. The biggest benefits were seen in lower-skilled students. Users were satisfied and effectively identified tasks where AI helped most, suggesting AI can enhance productivity and equality in law practice.

Why this matters for education ➜ The research reveals AI’s potential to democratise academic performance, notably narrowing the performance gap among students. This levelling effect, especially beneficial for those with lower initial skill levels, suggests AI could transform learning across various domains by making educational outcomes more equitable. It makes me wonder about the broader application in enhancing learning efficiency and equality, but at what cost? Could AI similarly level the playing field in other educational areas, reducing barriers and making learning more accessible to all? How might this impact long-term educational strategies and inclusivity across diverse learning environments?

.: Other News In Brief

Google is preparing to fully rename Bard to Gemini.

Apple is set to reveal its AI developments “later this year”.

An AI-generated image of an Australian state MP raises wider questions on digital ethics.

Hugging Face has launched an open source AI assistant maker called Hugging Chat Assistants.

An interdisciplinary team of researchers has developed a machine learning system to detect mental health crisis messages.

The EU Member States have endorsed the EU’s AI Act (AIA); here’s a useful quick guide from Christopher Götz.

:. .:

Spark Dialogue

.: The humAIn Community is Open!

I am delighted to share with all of you Promptcrafters that our online community to explore, connect and learn about AI for education is open!

We have already welcomed our first members today from Australia, Spain and the UK, which is very exciting.

Find out more and grab our early bird membership offer.

.: :.

What’s on my mind?

.: Make it stick

ChatGPT was one of the fastest-growing technology tools we have ever seen. It gained 100 million users within just two months after its launch in November 2022.

But what drove that rapid user base and growth, how does this play out regarding traditional technology adoption theory, and is education immune to these societal shifts? These are some of the questions I have been thinking about this week.

Part of the theory you may have seen is the chunking of people into different adopter groups: early adopters, laggards and so on. This comes from the work of Everett Rogers, who proposed the diffusion of innovations theory in the 1960s. Innovations and new technologies tend to spread through a population in predictable ways.

If you look beyond people’s labels, there is a much more nuanced aspect of his work, which explores the attributes of the technology or idea itself.

He hypothesised a direct relationship between the characteristics of the innovation and the percentage of people who adopt it over time.

▶︎ Relative advantage

▶︎ Compatibility

▶︎ Observability

▶︎ Complexity

▶︎ Trialability

You might be thinking about how AI will be integrated into your organisation or how school colleagues can use these powerful tools.

Take a moment to consider each of the attributes of what you might be proposing. Let’s look at what this means for something like ChatGPT.

▶︎ Relative advantage – How does prompting a chatbot put me in a better position than where I was? What’s the advantage: time saved, speed, convenience, overcoming idea blocks, a performance boost?

▶︎ Compatibility – How well does the chatbot align with the potential adopters’ values, past experiences, and needs? Does it fit into their current workflow, or will it require a drastic change in habits? I think this is not just a question of infrastructure but also a philosophical challenge to identity (see the question in the lead article above).

▶︎ Observability – Can the results of using AI chatbots be seen and appreciated by others? Is there a demonstrable benefit that can be observed and measured? For instance, the effectiveness of ChatGPT can be observed in the quality of text it produces, the time saved, and the increase in productivity.

▶︎ Complexity – Is the technology easy to understand and use, or does it require significant learning effort and time? ChatGPT, for instance, is relatively simple to use; you type in a prompt, and it generates a response. No steep learning curve is involved despite the underlying technology being vastly complex.

▶︎ Trialability – Can your colleagues try the technology easily? Remember, we all have free access to the most powerful AI model via Microsoft’s Copilot, formerly Bing Chat. This trialability reduces the perceived risk of adoption and encourages exploration, but it is also a question of equity and access.

I always use these characteristics when exploring and developing ideas or working on innovation strategies with leadership teams. They serve as a helpful guide to how we approach helping others on their AI Literacy journey.

:. .:

~ Tom

Prompts

.: Refine your promptcraft

Back to basics this week as we look into a foundational prompting technique, persona primers: establishing the role or persona for the LLM to adopt.

Persona priming was one of the first methods I learned to help improve the outputs I get from LLMs. Below I have included some examples to add before your task description.

Establish the role you want the chatbot to adopt that is appropriate for your task.

Act as an expert music teacher and learning designer.
You are an experienced mentor to secondary teachers.
Act as a highly creative learning designer with a specialism in primary teaching in Singapore.
Act as an adept critical thinking strategist, specialised in developing engaging, subject-aligned scenarios that provoke high school students to sharpen their critical, analytical and evaluative thinking abilities.

Most of the time these short persona primers improve the alignment of the output to your task. But you can also experiment with longer role descriptions.
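If you script your prompts rather than type them into a chat window, the same pattern applies. Here is a minimal Python sketch of the prepend-a-persona approach; the function name and example strings are illustrative, not from any particular library (most chat APIs instead accept the persona as a separate “system” message, but prepending works with any text-in, text-out model):

```python
def build_prompt(persona: str, task: str) -> str:
    """Prepend a persona primer to a task description.

    The persona establishes the role the model should adopt;
    the task description follows on after a blank line.
    """
    return f"{persona.strip()}\n\n{task.strip()}"


# Example: one of the primers above, paired with a concrete task.
primer = "Act as an expert music teacher and learning designer."
task = "Draft a 40-minute lesson introducing syncopation to Year 7 students."

prompt = build_prompt(primer, task)
print(prompt)
```

Swapping the primer while keeping the task fixed is also a quick way to compare how much the persona alone shifts the completions you get back.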

An extra tip for developing personas or roles in more detail is to start with a quick description, and simply prompt your favourite flavoured chatbot to:

Expand on this role description

Remember to make this your own: try different language models and evaluate the completions.

Learning

.: Boost your AI Literacy

RESEARCH
.: A Meta Review of AI in Higher Education

A meta-review of 66 evidence syntheses explores the application of Artificial Intelligence in higher education over the past 5 years, highlighting the need for greater emphasis on ethical, collaborative and rigorous AI research.

The review indicates a need for enhanced ethical considerations, including participant consent, data collection procedures, and consideration of data diversity.

Top 5 benefits of using AI in education:

  • Personalised learning
  • Greater insight into student understanding
  • Positive influence on learning outcomes
  • Reduced planning and administration time for educators
  • Greater equity in education, with more precise assessment and feedback

Top 5 challenges of using AI in education:

  • Lack of ethical consideration
  • Curriculum development
  • Infrastructure
  • Lack of teacher technical knowledge
  • Concerns over the shifting of authority (from human to AI)

More context about the research here from Melissa Bond’s announcement.

US SCHOOLS
.: AI Guidance for US Schools

A handful of policy and guidance links from six US states that have published guidance since the beginning of the school year, shared by Pat Yongpradit.

This is helpful to get a sense of how systems are approaching offering guidance to teachers in the US.

AI BIAS
.: Claude 2 and GPT-4 are biased and racist

A helpful reminder from Ryan Tannenbaum about the flaws in the models we are using.

By highlighting bias in these models, we can raise awareness and hopefully mitigate its effect.

…the training done to these [large language models] masks the racism rather than removes it. But also in making it more subtle it makes it more subversive. Anything these models output hold up a mirror to ourselves.

Ethics

.: Provocations for Balance

  • Look around you: how much of your cyber-physical experience is managed by an algorithm?
  • How can we ensure that AI systems used in education are transparent, explainable, and fair? More attention needs to be paid to algorithmic accountability.
  • AI chatbots could reduce social isolation, but might they diminish human relationships? More research into effects on student wellbeing is warranted.
  • Research shows the benefits of personalisation, but could this lead students down narrow paths? We must consider the risks of using AI to overly tailor educational journeys.

Inspired by some of the Meta Review and this week’s news and developments.

:. .:

.: :.

Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!


.: Tom Barrett