Hello Reader,
Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. Every week, I curate the latest news, developments and learning resources so you can consider how AI changes how we teach and learn.
In this issue:
- China proposes AI framework at Belt and Road conference
- Baidu claims its new Ernie 4.0 matches capabilities of GPT-4
- Universal Music Group sues Anthropic for copyright infringement over song lyrics
Let’s get started!
.: Tom
Latest News .: AI Updates & Developments

.: China proposes AI framework at Belt and Road conference ➜ At its Belt and Road forum, China proposed a new AI framework calling for equal rights in development and warning against ideological divides and the misuse of AI technologies. The Belt and Road forum is a major international conference hosted by China, bringing together leaders and representatives from many countries to discuss the Belt and Road Initiative, China’s ambitious plan to improve trade and infrastructure across Asia, Africa and Europe.

.: Baidu claims its new Ernie 4.0 matches capabilities of GPT-4 ➜ Chinese tech giant Baidu has released version 4.0 of its natural language model Ernie, claiming it matches the capabilities of OpenAI’s GPT-4 despite attracting far less hype.

.: Research by BSI finds a global “confidence gap” hindering AI adoption ➜ New research from BSI finds a global confidence gap between interest in AI and trust in adopting it, highlighting the need for greater education to build understanding and close this gap.
.: EU’s AI Act unlikely to pass in 2023 as hoped ➜ The EU’s long-awaited AI Act may fail to pass regulations before December 2023 as hoped, as lawmakers struggle to agree on rules for regulating foundation models and generative AI systems.
.: Anthropic explores aligning an AI model with principles sourced from public input ➜ Anthropic collaborated with the Collective Intelligence Project to source training principles from 1,000 Americans. They compared training a model on the public principles versus Anthropic’s own principles.
.: Universal Music Group sues Anthropic for copyright infringement over song lyrics ➜ Universal Music Group has filed a lawsuit against AI startup Anthropic, alleging that its natural language model Claude 2 infringes copyright by distributing song lyrics without permission when prompted, including lyrics from major pop songs.

.: Stanford researchers develop an index to assess foundation model transparency ➜ Researchers at Stanford’s Institute for Human-Centered AI have developed a new Foundation Model Transparency Index to rate major companies on transparency, finding much room for improvement.

.: Anthropic research explores decomposing language models for better understanding ➜ A new study from AI company Anthropic explores decomposing language models into interpretable features, aiming to move beyond analysing individual neurons for greater understanding and control.
Reflection .: Why this news matters for education

A comment by Casey Newton in the latest Hardfork podcast (linked below in the Learning section) struck a chord with me. He suggested that the future of these AI tools and chatbots is likely to be more personalised, and that with more personalised preferences and principles they will become much more helpful to individuals:

“If you believe that these AIs are going to become tutors and teachers to our students of the future in at least some ways, different states have different curricula, right? And there will be some chatbots that believe in evolution, and there will be some that absolutely do not. And it’ll be interesting to see whether students wind up using VPNs just to get a chatbot that’ll tell them the truth about the history of some awful part of our country’s history.”
This raises a pressing concern: how do we prevent personalised chatbots and learning models from becoming closed-off filter bubbles, entrenching bias and preferred narratives? The prospect of students having to break out of localised “truth bubbles” imposed by AI infrastructure is a serious provocation. These AI systems are not neutral or benign. It will take concerted investment in AI, digital, data and media literacy to question and critically evaluate the models we use, rather than sitting back and enjoying their utility while our discernment slowly erodes.

When we zoom out and put this dynamic into the context of the global regulatory space, we see lines being drawn and the rapid proliferation of parochial AI systems. Students will experience many AI models throughout their lives, each with its own signature, limitations, and inbuilt biases and preferences, whether deliberate or unintended. Imagine this scenario for a moment and reflect on what it will take for your education system to mobilise to meet this challenge.

.: ~ Tom
Prompts .: Refine your promptcraft

Another advanced promptcraft technique today. The Maieutic method, attributed to Socrates, is a form of cooperative argumentative dialogue used to stimulate critical thinking and to draw out ideas and underlying presumptions. You can explicitly instruct an LLM to use the Maieutic method to work through a problem. Here is how you might phrase your prompt:
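One way to phrase it, as a hypothetical sketch (the scenario, wording and steps are illustrative, not a fixed template):

```
Use the Maieutic method to work through this problem.

Problem: Our school is deciding whether to allow students to use
AI chatbots for homework support.

1. State your initial recommendation.
2. Explain the reasoning behind it.
3. Question each step of your own reasoning: what assumptions does
   it rest on, and are those assumptions consistent with one another?
4. If the questioning exposes a contradiction or weak assumption,
   revise your recommendation and state your final answer.
```

Swap in your own problem and adjust how many rounds of self-questioning you ask for; the explicit "question your own reasoning" step is what distinguishes this from a plain request for a recommendation.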
This instruction asks the LLM not only to provide a recommendation and its reasoning, but also to assess the consistency of that reasoning. How effectively the LLM performs this task will largely depend on the capabilities of the model version you use. Remember that LLMs hallucinate and might confidently tell you the reasoning is sound! Even the most advanced versions may not fully understand or correctly implement the Maieutic method, as it is a complex technique involving logical consistency checks and iterative questioning.

Here is an example response using GPT-4 via Poe.

.: Remember to make this your own; tinker and evaluate the completions.
Learning .: Boost your AI Literacy

.: Peering into AI’s Black Box | Hardfork Podcast ➜ In the most recent edition of the Hardfork podcast, Casey Newton and Kevin Roose explore some of the alignment and black-box research announcements mentioned above.

.: Everything you need to know about the UK’s AI Safety Summit ➜ The UK will host the world’s first major summit on AI safety in November at Bletchley Park. Its goal is to develop international collaboration on managing risks from advanced AI through shared understanding and research cooperation. Invitees include the US, Canada, France, Germany and, controversially, China, as well as tech leaders like Google’s DeepMind, OpenAI and Anthropic.
.: Mind over machine? The psychological barriers to working effectively with AI ➜ While AI models are more accessible and capable than ever before, the latest evidence suggests humans aren’t particularly good at using them. Overcoming our psychological biases through training, workflows and independent checks can help unlock the benefits.
Ethics .: Provocations for Balance

Who should decide the rules that govern AI systems – tech companies, governments, or the public?
The Anthropic story about sourcing AI principles from public input suggests that public values should help shape AI development. But tech firms and governments clearly want influence too. There is a live debate over who should determine the ethics and regulations for AI.

How do we balance intellectual property rights with the public interest in AI research and applications?
Universal Music’s lawsuit against Anthropic over song lyrics raises questions about copyright and legal access to data for training AI models. But there are arguments that such restrictions impede innovation and the public benefits of AI. Where is the line between IP protection and public interest?

Should countries coordinate to develop global guidelines for AI, or take more nationalist approaches?
China argued for equal rights and warned against ideological divides in AI at the Belt and Road forum. Meanwhile, the EU and US take more insular approaches to AI regulation. Is global coordination required to govern shared technologies like AI responsibly?

~ Inspired by this week’s developments.
.:
That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.
If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com
The more we invest in our understanding of AI, the more powerful and effective our educational systems become. Thanks for being part of our growing community!
Please pay it forward by sharing the Promptcraft signup page with your networks or colleagues.
Share Promptcraft
.: Tom Barrett / Creator / Coach / Consultant