Hello Reader,
Promptcraft is a weekly curated newsletter on AI for education designed to elevate your AI literacy.
In this issue, you’ll discover:
- Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor
- Google DeepMind co-founder Mustafa Suleyman joins Microsoft as CEO of its new AI division
- How Apple’s Discussions with Google and OpenAI Could Impact Generative AI
Let’s get started!
~ Tom Barrett
STUDENT CHATBOT
.: Los Angeles Unified School District (LAUSD) Launches AI Chatbot “Ed” to Serve as Virtual Student Advisor
Summary ➜ The AI chatbot is designed to be a virtual advisor for students, aiming to simplify school navigation and support pandemic recovery. Ed can provide personalised guidance, share academic details, make recommendations, and even assist with non-academic issues like lost bikes. But some parents and experts have privacy and over-reliance concerns, worried it may replace human connections. For now, the chatbot is available to 54,000 students at 100 “fragile” schools, with plans to expand. Ed looks to create individual learning plans modelled on special education IEPs, touting a 93% accuracy rate over ChatGPT’s 86%.
Why this matters for education ➜ In Australia, two student chatbots are being trialled, one in New South Wales and one in South Australia. The LAUSD version’s focus is on practical student support, including quick access to assessments and grades.
“This is a technology that becomes a personal assistant to students,” Carvalho said at a demonstration at Roybal Learning Center, west of downtown. “It demystifies the navigation of the day … crunches the data in a way that it brings what students need.”
So, for now, it seems to be much more like a student support service than a generative AI system for teaching and learning.
An interesting note about this development is that the initial design is quite closed in terms of access to information:
“[It] has to stay within the district universe of information. A student, for example, would not be likely to get a reference to a recent development in the war in Ukraine for a research paper.”
From the article, it is unclear whether this exclusion covers only up-to-date news or all outside information. For comparison, South Australia’s Edchat does not have real-time updated information, but students can access training data up to early 2023.
MICROSOFT
.: Google DeepMind co-founder joins Microsoft as CEO of its new AI division
Summary ➜ Microsoft has appointed Mustafa Suleyman, co-founder of Google’s DeepMind, as CEO of its new consumer-facing AI division. He will oversee products like Copilot, Bing, and Edge. As executive VP of Microsoft AI, he reports directly to CEO Satya Nadella. The company is also bringing in talent from Suleyman’s startup Inflection AI, like co-founder Karén Simonyan as chief scientist.
Why this matters for education ➜ Suleyman is a leader in AI, strengthening Microsoft’s capabilities. His vision at Inflection AI focused on personal AI agents to support our lives, and this notion of proliferated, personalised bots raises interesting questions as Microsoft targets education. In his book The Coming Wave, Suleyman advocated for AI safety and containment. As education tools leverage AI, how will Microsoft approach oversight and governance? Millions of students and teachers are affected. Perhaps Suleyman’s safety focus will manifest in curbing the widespread use of chatbots in education, and his leadership may steer Microsoft toward more contained, transparent applications of AI for learning. Overall, Suleyman’s experience brings a valuable perspective on AI ethics and responsible innovation as Microsoft evolves its education technology.
IPHONE
.: How Apple’s Discussions with Google and OpenAI Could Impact Generative AI
Summary ➜ Apple is in advanced talks to potentially use Google’s Gemini language model for iPhone features, after earlier discussions with OpenAI. Apple aims to integrate generative AI into iPhones later this year, despite dismissing it in 2023. Unusually, Apple may seek payment from the model provider rather than paying to adopt an AI model.
Why this matters for education ➜ Did you know that Google pays billions of dollars yearly to Apple to remain the default search tool on the iPhone? This AI arrangement is likely to be similar. Although it is unclear how AI will appear on the iPhone, this is a market-shaping deal.
A deal with Apple will cement Google’s prominence in the industry. It will also give Google access to two billion iPhone users who might not otherwise think to consider the company’s generative AI solutions.
While integrating advanced LLMs into iPhones could make AI-powered educational tools more accessible, it’s important to consider equity issues. Not all students may have access to the latest iPhone models, further exacerbating the digital divide in access to AI tools.
.: Other News In Brief
🇸🇬 Singapore university sets up AI research facility for ‘public good’
⚽️ As AI football looms, be thankful for those ready to rage against the machine
🌇 Stability AI CEO resigns to ‘pursue decentralised AI’
🤖 OpenAI shows off the first examples of third-party creators using Sora
📈 Nvidia’s AI chip dominance is being targeted by Google, Intel, and Arm
❓ Where’d my results go? Google Search’s chatbot is no longer opt-in
🎤 Can you hear me now? AI-coustics to fight noisy audio with generative AI
📜 The AI Act is done. Here’s what will (and won’t) change
🗣️ HeyGen is About to Close $60M on a $500M Post Valuation
🇮🇳 India reverses AI stance, requires government approval for model launches
:. .:
Monthly Review
.: All the February issues in 1 PDF
Download a copy of the latest monthly roundup of Promptcraft.
.: :.
What’s on my mind?
.: A Quick Critical Analysis of Student Chatbots
In this week’s reflection, I want to revisit walled garden chatbots for students.
I think this strategic path needs more attention after the news that Los Angeles schools have access to a student support chatbot and the active trials in New South Wales and South Australia.
It is also worth adding that I have a partnership with Adelaide Botanic High School, one of the first schools to trial Edchat, the chatbot for South Australian schools, in 2023.
It has been great to have access to it and work alongside some of the leadership team running the project at the school.
When I say walled garden chatbots, I mean chatbots that operate within a closed system or a specific domain, having access only to a limited information set.
These chatbots are designed by the school system the student belongs to, as opposed to the consumer chatbots or AI products on the open web.
For this reflection, I will use the Compass Points thinking routine from Project Zero, which starts in the East.
E: Excites
What am I excited about in this situation?
- Students get hands-on experience with AI tools to support their learning experience.
- There is a lot of momentum which can grow if our students are given the chance to step up.
- The LA example offers something different from the Australian trials: a chatbot more focused on support and practical companionship. I am excited to see how this develops.
N: Need to Know
What information do I need to know or find out?
- How do teachers feel about students gaining access to these tools?
- What professional growth opportunities exist for educators to build their capacity and understanding?
- What are the system prompts for these bots, and how do they mitigate and guard against bias?
S: Stance
What is my stance or opinion on this?
- I support the safe testing and exploration of chatbots in schools. Many other school systems are keen to learn from these pioneering examples.
- Educational innovation like these chatbot trials must be supported, encouraged and celebrated.
- Despite the guardrails, I can see that frontier models like GPT-4 power these tools, which makes them very flexible in supporting a student.
W: Worries
What worries or concerns do I have?
- The design, prompt, and architecture are not visible. Without transparency, it’s difficult to hold developers and operators accountable for any issues that may arise.
- Chatbots and LLMs are designed to respond with the most likely next word, so they are geared towards the statistically most common output. Without fine-tuning and promptcraft education, this might homogenise the message to our students.
- I am concerned about the pedagogical bias built into students’ AI systems. Imagine a student who is an active user with over 100 interactions daily, thousands every month. What’s the hidden curriculum of each of those nudges and interactions?
:. .:
~ Tom
Prompts
.: Refine your promptcraft
One of the most useful and effective prompting techniques is to include an example of what you want to generate. This is called few-shot prompting, as you guide the AI system with a model of what to generate.
For example, if you are trying to develop some model texts or example paragraphs to critique with your class and you have some from the last time, you could add these to your prompt.
When you do not include an example, this is called zero-shot prompting. Zero-shot prompting often leads to a less aligned output, which you then need to edit and iterate to correct.
This week’s quick tip builds on the few-shot prompting technique by adding some qualifying explanations for ‘what a good one looks like’.
We always do this when we are modelling and teaching, and it works well with promptcraft.
So the prompt structure goes:
- Describe your instructions in clear language.
- Add an example of what you are looking for.
- Describe why this example is a good model.
Let’s say you’re teaching persuasive writing. Instead of asking the AI to generate a persuasive text, you could add a model paragraph from a previous lesson, followed by specific pointers on what makes it effective.
Here’s an example working with the Claude-3-Opus-200k model.
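Below is a minimal sketch of that three-part structure, assuming the Anthropic Python SDK. The instructions, model paragraph and quality pointers are illustrative placeholders rather than the exact text from my lesson, so swap in your own materials.

```python
# pip install anthropic
# A minimal few-shot prompt sketch: instructions, an example, and why it works.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

# 1. Clear instructions, 2. an example, 3. why the example is a good model.
prompt = """Write a one-paragraph persuasive text for a Year 8 class \
arguing that the school canteen should offer more plant-based options.

Here is an example paragraph from a previous lesson:

"Imagine a playground with no trees: hotter, duller and less alive. \
Our school grounds are heading that way unless we act. Planting ten \
native trees would cool the yard, bring back birdlife and give every \
student a better place to learn. The cost is small; the benefit lasts \
for decades. Let's plant this term."

This example is a good model because it opens with a vivid image, uses \
short emphatic sentences, weighs cost against benefit, and ends with a \
clear call to action. Match that structure and tone in your paragraph."""

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)

print(message.content[0].text)
```

The same three-part structure works just as well pasted straight into a chat interface; the SDK simply makes the prompt repeatable across classes.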
In my test, the output followed the model paragraph closely. A little extra promptcraft goes a long way to improve the results.
:. .:
Remember to make this your own: try different language models and evaluate the completions.
Do you have a great prompt you would like me to share in a future Promptcraft issue? Drop me a message by replying to this email.
Learning
.: Boost your AI Literacy
BIAS
.: Systematic Prejudices Investigation [PDF]
The UNESCO report sheds light on gender bias within artificial intelligence, revealing biases in word association, sexist content generation, negative content about sexual identity, biased job assignments, and stereotyping.
The study emphasises the need for ethical considerations and bias mitigation strategies in AI development, including diverse representation in teams and training datasets to ensure fairness and inclusivity.
ELECTION
.: How AI companies are reckoning with elections
A helpful short article giving a rundown of how some of the largest tech companies and AI platforms are grappling with the impact of AI tools on the democratic process.
Although this focuses on the US, in 2024 billions of people will go to the polls across the world. According to a Time article:
Globally, more voters than ever in history will head to the polls as at least 64 countries (plus the European Union)—representing a combined population of about 49% of the people in the world—are meant to hold national elections.
How these popular generative AI companies respond will impact so many of us.
The “Seven Principle Goals” of the AI Elections accord. Image: AI Elections accord
Several companies […] signed an accord last month, promising to create new ways to mitigate the deceptive use of AI in elections. The companies agreed on seven “principle goals,” like research and deployment of prevention methods, giving provenance for content (such as with C2PA or SynthID-style watermarking), improving their AI detection capabilities, and collectively evaluating and learning from the effects of misleading AI-generated content.
LABELS
.: Why watermarking won’t work
This VentureBeat article discusses the challenges posed by the proliferation of AI-generated content and the potential for misinformation and deception.
Tech giants like Meta, Google, and OpenAI are proposing solutions like embedding signatures in AI content to address the issue.
However, questions arise regarding the effectiveness and potential misuse of such watermarking measures.
Ethics
.: Provocations for Balance
The Chatbot That Knows Too Much
“Ed” is meant to be a helpful companion, guiding students through their day. But imagine a school system where every question asked, every late assignment admitted to, every awkward social situation confessed to the chatbot becomes part of a permanent record. Mistakes can’t be erased; the AI analyses your word choice for signs of distress.
What if this data isn’t just used for support, but for discipline, even predicting future “problem” behaviour? Is it okay for an AI to judge the private thoughts of a teenager and, worse, limit their opportunities based on what it predicts they might do?
Inspired by some of the topics this week and dialled up.
:. .:
.: :.
The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!