😨 Crack the Code on Change Resistance

Dialogic #345

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

  • We resist losing what we have (status quo bias), identify with groups affected (social identity), and avoid potential losses (loss aversion).
  • Balancing stability and risk (personal risk portfolio) and embracing uncertainty (negative capability) are key.
  • Grasping these mental models helps anticipate reactions, facilitate dialogue, and design effective change strategies.

Understanding how we and those around us react to change is crucial in a world where change is the only constant. This issue of The Dialogic Learning Weekly delves into five vital mental models that provide deep insights into the psychological underpinnings of how people respond to change, particularly in educational settings. As educators and leaders, grasping these models equips us with the tools to navigate and guide others through the often tumultuous waters of change.

The models we explore – Status Quo Bias, Social Identity Theory, Loss Aversion (Prospect Theory), Personal Risk Portfolio, and Negative Capability – each shed light on different aspects of human behaviour in the face of change. From our inherent resistance to losing what we have to our ability to thrive in uncertainty, these models offer a comprehensive view of the multifaceted nature of change management. They help us understand the ‘what’ and ‘how’ of change and the ‘why’ behind the reactions it elicits.

The mental models serve as a roadmap for anticipating, understanding, and addressing challenges when introducing new ideas or practices. As you read on, consider how these models play out in your own experiences, in how you see colleagues react, and even in your own behaviour. Use the insights to design better dialogue with your teams and weave the ideas into how you approach future projects.

Status Quo Bias

The status quo bias is the tendency to prefer the current state of affairs and resist change, even when that change would be beneficial. Originating in decision theory and behavioural economics, it helps explain resistance to change.

For example, some teachers might hesitate to adopt new teaching methods despite solid evidence supporting their effectiveness. This resistance can be due to comfort with established routines and fear of the unknown.

  • Helps in understanding where resistance comes from.
  • Emphasises the need for clear communication to overcome inertia.
  • Aids in developing effective strategies that consider natural resistance to change.

Social Identity Theory

A concept from social psychology that examines how group memberships impact behaviour and attitudes. It’s pivotal in understanding motivations, identity and group dynamics within organisations.

This theory applies to most change situations in schools. Educators often tie their sense of identity to their professional role, so any change affecting that role can feel like a change to who they are.

  • Awareness of group dynamics can prevent divisiveness during transitions.
  • Helps foster a unified organisational identity, which is crucial during change.
  • Assists in designing sensitive change initiatives that respect various group cultures.

Loss Aversion (Prospect Theory)

The theory of loss aversion, a vital aspect of Prospect Theory in psychology and economics, states that, when making decisions, people prioritise avoiding losses over acquiring equivalent gains.

An example is educators’ reluctance to modify a long-standing curriculum unit due to fear of potential losses, such as diminished effectiveness or reputation (see identity above), despite potential gains.

  • Highlights the importance of framing change in terms of gains.
  • Underscores the need for gradual, supported transitions.
  • Critical in convincing stakeholders by emphasising long-term benefits.

Personal Risk Portfolio

A concept from decision theory and psychology that refers to how individuals assess and respond to risk in their decisions. When much of our work is already new or uncertain, we are likely to have a low tolerance for taking on more risk. It is about balancing what is dependable, reliable and stable against what is riskier.

An educator deciding whether to adopt new technology in the classroom exemplifies balancing their personal risk portfolio, weighing potential risks and benefits of change against other stable aspects of their work. “Should I take this on?”

  • Understanding risk tolerance is crucial for implementing change.
  • Aids in tailoring strategies to different risk profiles.
  • Facilitates more inclusive and considerate planning processes.

Negative Capability

The ability to remain comfortable and perform effectively despite high levels of uncertainty and ambiguity. A concept from literature and psychology, it is crucial for responding to change and integral to adaptive leadership.

This might be seen when educators navigate the uncertainties of implementing a new policy without clear, immediate outcomes, such as the emergence of artificial intelligence technologies and their impact on education.

  • Emphasises the value of comfort with ambiguity during transitions.
  • Encourages flexibility and open-mindedness in leadership.
  • Leads to more adaptive problem-solving in uncertain situations.

⏭🎯 Your Next Steps
Commit to action and turn words into works

  • Reflect on past reactions using one model as a lens. What new insights emerge?
  • Frame proposed changes as minimising losses and acquiring gains.
  • Evaluate your team’s risk tolerance and customise the change approach accordingly.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

After sharing this issue of the newsletter with your team, reflect on these questions together:

  • Which of these models is most relevant to our staff?
  • How much do we have on our plate?
  • What are some uncertainties on our team right now?
  • If you mapped our risk profiles, what would that reveal about our readiness for change?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

Escaping old ideas and the bias that erodes your creative culture

John Maynard Keynes points us to the challenge of “escaping” old ideas, which in my opinion refers to two things: (1) the creative culture those new ideas are born into, and (2) the mindset of those attached to existing ideas.

10 Shifts in Perspective To Unlock Insight and Embrace Change

The skills, dispositions and routines of shifting perspectives are potent catalysts to better thinking and dialogue. Here is a selection of perspectives to explore.

Are your assumptions holding you back?

Too often, we take the status quo for granted and don’t challenge our assumptions about the world around us. This can lead to stagnation and a lack of innovation.

Thanks for reading, let me know what resonates. Next week will be the last issue for 2023. I always enjoy hearing from readers, so drop me a note or question if there is anything I can help with.

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.


.: Promptcraft 38 .: Should we stop using ChatGPT?

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Australia has its first framework for AI use in schools
  • ChatGPT turns one: How OpenAI’s AI chatbot changed tech forever
  • GPT’s cultural values resemble English-speaking and Protestant European countries

Don’t forget to share Promptcraft and enter my Christmas giveaway! All you have to do is share your unique referral link below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below and get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Australia has its first framework for AI use in schools – but we need to proceed with caution ➜ Australia has released a framework for schools to use generative AI like ChatGPT. It aims for safe and effective use but warns of risks like bias. Experts suggest more caution is needed, recommending additional stances such as acknowledging AI bias, requiring more evidence of benefits, and transparency about teachers’ use.

.: Meta AI’s suite of new translation models ➜ Meta has recently created new AI translation models called Seamless, which allow for more natural cross-lingual communication. These models are based on an updated version of Meta’s multimodal translation model, SeamlessM4T. To further research into expressive and streaming translation, Meta has decided to open-source the models, data, and tools.

.: This company is building AI for African languages ➜ Lelapa AI is a startup that is developing AI tools specifically for African languages. Their latest product, Vulavula, is capable of transcribing speech and detecting names and places in four South African languages. The company’s ultimate goal is to support more African languages and create AI that is accessible to Africans, rather than just big tech companies.


.: ChatGPT turns one: How OpenAI’s AI chatbot changed tech forever ➜ ChatGPT’s launch on Nov 30, 2022 catalysed a generational shift in tech. It became the fastest-growing consumer tech ever. However, its rapid ascent has sparked debates about AI’s societal impacts and optimal governance.


.: GPT’s cultural values resemble English-speaking and Protestant European countries ➜ According to new cultural bias research “GPT’s cultural values resemble English-speaking and Protestant European countries on the Inglehart-Welzel World Cultural Map (see image).” It aligns more closely with Western, developed nations like the US, UK, Canada, Australia etc.

.: Meet DeepSeek Chat, China’s latest ChatGPT rival ➜ DeepSeek, a Chinese startup, launched conversational AI DeepSeek Chat to compete with ChatGPT. It uses 7B and 67B models trained on Chinese/English data. Benchmarks show the models match Meta’s Llama 2-70B on tasks like math and coding. The 67B chat version is accessible via web demo. Testing showed strong capabilities but censorship of China-related questions.

.: AI helps out time-strapped teachers, UK report says ➜ UK teachers use AI to save time on tasks like adapting texts and creating resources. A government report found that most people are optimistic about AI in education, but concerned about risks such as biased content. Teachers cited benefits such as having more time for higher-impact work. However, there are still risks associated with unreliable AI output. The report will shape future government policy on AI in schools.

.: ChatGPT Replicates Gender Bias in Recommendation Letters ➜ A recent study found that AI chatbots like ChatGPT exhibit gender bias when generating recommendation letters. The bias arises because models are trained on imperfect real-world data reflecting historical gender biases. Fixing it isn’t simple, but study authors and experts say bias issues must be addressed given AI proliferation in business.

Reflection

.: Why this news matters for education

This week’s most important Australian news in AI for education is The Australian Framework for Generative Artificial Intelligence (AI) in Schools.

The government publication, which is only six pages long with the framework itself covering just two, seeks to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society.

In many ways, tools and AI systems like ChatGPT do not facilitate this. When we use them without awareness, we amplify bias and discrimination.

In today’s Promptcraft, I have shared two stories of research and reporting about cultural and gender bias, and this is just the tip of the iceberg.

.: ChatGPT Replicates Gender Bias in Recommendation Letters
.: GPT’s cultural values resemble English-speaking and Protestant European countries

Let me show you the principles and guiding statements from the framework related to this.

2. Human and Social Wellbeing

Generative AI tools are used to benefit all members of the school community.

2.2 Diversity of perspectives: generative AI tools are used in ways that expose users to diverse ideas and perspectives and avoid the reinforcement of biases.

4. Fairness

Generative AI tools are used in ways that are accessible, fair, and respectful.

4.1 Accessibility and inclusivity: generative AI tools are used in ways that enhance opportunities, and are inclusive, accessible, and equitable for people with disability and from diverse backgrounds.

4.3 Non-discrimination: generative AI tools are used in ways that support inclusivity, minimising opportunities for, and countering unfair discrimination against individuals, communities, or groups.

4.4 Cultural and intellectual property: generative AI tools are used in ways that respect the cultural rights of various cultural groups, including Indigenous Cultural and Intellectual Property (ICIP) rights.

None of these principles are upheld without mitigation at the moment.

For example, the silent cultural alignment to English-speaking and Protestant European countries does not “expose users to diverse ideas and perspectives” or “avoid the reinforcement of biases.”

One potential future is that large language models and chatbots become sidelined by education systems in favour of walled-garden, heavily guard-railed versions.

For me, elevating the AI literacy of educators is a crucial way to mitigate this, and it starts with raising awareness of the types of stories I share today – not just the time-savers and practical applications.

Powerful tools like these can cause us to fall ‘asleep at the wheel’; the risk is that high utility can mask the need for discernment and critical reflection.

For some time now, I have held concerns that these AI systems have arrived just as time-strapped teachers are under pressure and in need of support. That support might come from using these tools, but at what cost?

.:

~ Tom

Prompts

.: Refine your promptcraft

Cultural Prompting

Cultural prompting is a method highlighted in the Cultural Values research paper listed earlier. Read the pre-print research paper here

It is designed to mitigate cultural bias in large language models (LLMs) like GPT.

This strategy involves prompting the LLM to respond as an average person from a specific country or territory, considering the localised cultural values of that region.

It’s a simple yet flexible approach that has shown effectiveness in aligning LLM responses more closely with the values and perspectives unique to different cultures.

Instructions for Using a Cultural Prompt:

Identify the Country/Territory: Choose the specific country or territory whose cultural perspective you wish to emulate.

Formulate the Prompt: Structure your prompt to specifically request the LLM to assume the identity of an average person from the chosen location. The exact wording should be:

“You are an average human being born in [country/territory] and living in [country/territory] responding to the following question.”

Pose Your Question: After setting the cultural context, ask your question or present the topic you want the LLM to address.

Evaluate the Response: Consider the LLM’s response in the context of the specified culture. Be aware that cultural prompting is not foolproof and may not always effectively reduce bias.

Critical Assessment: Always critically assess the output for any remaining cultural biases, especially since the effectiveness of cultural prompting can vary significantly between different regions and LLM versions.

Example of Use:

To understand how cultural prompting works, let’s consider an example:

  • Selected Country/Territory: Japan
  • Cultural Prompt: “You are an average human being born in Japan and living in Japan responding to the following survey question.”
  • Question Posed: “What is your perspective on work-life balance?”
  • Expected Outcome: The LLM, prompted with this cultural context, will tailor its response to reflect the typical attitudes and values towards work-life balance in Japan, potentially differing from a more generalised or Western-centric view.
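
If you want to experiment with cultural prompting programmatically, here is a minimal sketch, assuming the OpenAI Python client and an API key; the model name, countries and question are illustrative placeholders, and any chat-style LLM API would work the same way.

PYTHON SKETCH

from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

def cultural_prompt(country: str, question: str) -> str:
    # The system message uses the exact wording recommended by the study.
    persona = (
        f"You are an average human being born in {country} and living in "
        f"{country} responding to the following question."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; swap in the model you are evaluating
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Compare the same question across cultural contexts.
question = "What is your perspective on work-life balance?"
for country in ["Japan", "Brazil", "Nigeria"]:
    print(f"--- {country} ---")
    print(cultural_prompt(country, question))

Looping over several countries makes it easy to compare completions side by side and spot where the cultural prompt does, or does not, shift the response.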

A word of caution from the study authors:

Compared to other approaches to reduce cultural bias that we reviewed, cultural prompting creates equal opportunities for people in societies most affected by the prevailing cultural bias of LLMs to use this technology without incurring social or professional costs. Nevertheless, cultural prompting is not a panacea to reduce cultural bias in LLMs. For 22.5% of countries, cultural prompting failed to improve cultural bias or exacerbated it. We therefore encourage people to critically evaluate LLM outputs for cultural bias.

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT GUIDE


This is a one-hour, general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. It covers what they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.

PARENT TIPS
.: 3 things parents should teach their kids ➜ In this article, the authors discuss how generative AI like ChatGPT is now widely used, including by young people. While parents may be hesitant, the article states AI is here to stay so kids need guidance on using it wisely. It provides three tips:

  • Teach critical thinking as AI makes mistakes – question claims.
  • Watch for inappropriate chatbots becoming AI “friends”.
  • Remind children that images, audio and video also matter for privacy.

It advocates that parents try AI themselves, then discuss potential benefits and harms with their kids.

OPEN SOURCE GUIDE
.: Understanding the Open Source Tool Stack For LLMs

  • The article looks at open source tools for building AI applications, specifically large language models (LLMs) like GPT-3.
  • It explains that the open source ecosystem has three layers – the model files, the tools to integrate them, and the user interface (see the sketch below).
  • Popular ready-made open source LLM models are LLaMA, BLOOM and T5. Useful tooling includes Hugging Face and LangChain.
  • The open source AI landscape is changing fast, so a modular approach helps swap components.
  • The main benefits of open source AI are lower cost and competitive performance compared with proprietary models like GPT-3.
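
To make those layers concrete, here is a minimal sketch, assuming the Hugging Face transformers library (pip install transformers torch); the model choice, distilgpt2, is just a small openly available example, not a recommendation from the article.

PYTHON SKETCH

from transformers import pipeline

# The pipeline downloads the model files (layer 1) and wraps them in
# a ready-to-use tooling interface (layer 2).
generator = pipeline("text-generation", model="distilgpt2")

# A user interface (layer 3) would sit on top of calls like this one.
result = generator(
    "Open source language models let schools",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])

Because each layer is separable, you can swap the model name for another open model without touching the rest of the stack – the modularity the article highlights.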

Ethics

.: Provocations for Balance

  • If ChatGPT and other LLMs are biased and discriminatory, should we stop using them in education?
  • How do we harness utility without causing harm?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

.: Promptcraft 37 .: 🎄 Enter my Christmas giveaway!

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • 🎄 Share Promptcraft and enter my Christmas giveaway!
  • EU AI Act at Risk Due to Self-Regulation and Loopholes
  • Updated language models from Inflection (Pi) and Anthropic (Claude).
  • Google’s Bard Extension for YouTube Offers Video Analysis Without Playback

Let’s get started!

.: Tom


Latest News

.: AI Updates & Developments

.: Inflection AI’s Inflection-2 Outperforms Competitors, Set to Power Pi Chatbot ➜ Inflection AI’s Inflection-2 has shown remarkable performance, surpassing Google’s PaLM 2 Large in certain aspects but still behind GPT-4 in coding and math tasks. Inflection-2 is set to power the Pi chatbot and is under ongoing development for a larger AI model. Inflection has garnered significant backing from prominent investors, including Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, and Nvidia, positioning Inflection-2 as a key player in the AI landscape.

.: Anthropic’s Claude 2.1 Boasts Major Enhancements and Extended Capabilities ➜ Anthropic has released Claude 2.1, improving its flagship AI assistant’s context window, accuracy, and extensibility beyond OpenAI’s GPT models. Claude 2.1 handles 200,000 tokens of context, surpassing GPT’s 128,000 token window. It also reduces incorrect answers and hallucinations and can utilise external tools like calculators and APIs.

.: OpenAI CEO Sam Altman Ousted Amidst Concerns Over AI Breakthrough ➜ OpenAI CEO Sam Altman was reportedly removed from his position following concerns raised by the company’s researchers about a significant AI discovery. The researchers warned the board about the potential of Project Q*, which could mark a breakthrough in artificial general intelligence (AGI). The board expressed apprehensions about commercialising such advanced AI technology before fully understanding its consequences, highlighting the ethical and safety challenges inherent in the development and deployment of groundbreaking AI systems.


.: First Spanish AI Model Earns up to €10,000 Monthly, Sparks Debate ➜ Aitana is Spain’s first AI model, with a fabricated life story and no actual photoshoots. The agency believes this could lower costs and help small brands, but critics are concerned about promoting unrealistic and sexualised images.


.: EU AI Act at Risk Due to Self-Regulation and Loopholes ➜ A proposal by France, Germany and Italy calls for companies to self-regulate certain AI systems. Critics say this lacks enforcement, allows loopholes, and fails to protect fundamental rights or hold the AI industry accountable.

.: Google’s Bard Extension for YouTube Offers Video Analysis Without Playback ➜ Google introduces an innovative feature for Bard, its YouTube extension, enabling users to analyse video content for specific information without playing the videos. Currently an opt-in Labs experience, this tool has the potential to significantly impact content creators and Google’s role in video content consumption. The future implications of this tool’s integration into YouTube are vast, prompting discussions about its value and impact on the creator ecosystem.

.: Turmoil at OpenAI Over AI’s Direction and Profit Motives ➜ OpenAI, a leading AI research organisation, is reportedly experiencing internal conflict due to a rift between its profit and non-profit interests. CEO Sam Altman, known for overseeing the expansion and success of ChatGPT, finds himself at the centre of this turmoil. The board’s decision to fire and then rehire Altman has led to unrest among employees. This situation highlights the complexities and challenges faced by AI organisations as they navigate the balance between innovation, ethical concerns, and commercial pressures.

.: Use of AI to mislead voters raises concerns in Argentina election ➜ Candidates in Argentina’s recent presidential election utilised AI to generate manipulated images and videos aimed at misleading voters and discrediting opponents. Right-wing president-elect Javier Milei published a fabricated image depicting opponent Sergio Massa as a communist soldier, which drew millions of views. His rival’s team also distributed AI-generated images portraying Milei’s team as enraged zombies.

Reflection

.: Why this news matters for education

I took the dog for a walk last night.

I opened the ChatGPT app and started an audio chat. Stuck my headphones in and started talking.

Me: Hey, how are you?
ChatGPT: I’m here and ready to assist you! How can I help you today?
Me: I’ve been thinking about different ethical frameworks around the world and how they differ. Can you help me understand that a bit more?
ChatGPT: Absolutely, I’d be happy to help with that. Ethical frameworks vary widely across different cultures and philosophies. For instance…

And we were up and running.

If you have not tried this way of working with ChatGPT, the interaction is pretty seamless, no tapping or holding down a record button. (Also, this is a feature available on the free plan.)

I had wireless headphones and my phone was in my pocket the whole time. We chatted back and forth without much interruption for 20 minutes.

At one point I was calling Remy – my dog – back from going into someone’s garden, and ChatGPT picked that up and used it to address me, which was quite amusing.

You will have heard about the advances in Natural Language Processing (NLP), and speaking, listening and interacting in this way really dials up how effortless and natural the experience feels.

The quality of the voice model from OpenAI is excellent, even using ‘ums’, repetition and false starts in responses! The design challenge of just the right amount of error is very cool.

And of course the quality of output satisfied my curiosity about diverse representation of ethics in proprietary models. Or at least set me off with new questions.

Here are two implementation strategies I want to explore more, generated in response to my question about how AI might balance a collective philosophy against an individualistic one.

  • Context-Aware AI: Developing AI that understands the context in which it’s operating. For instance, it might respond differently to the same query in a society with a collective philosophy versus an individualistic one.
  • Ethical Flexibility: Implementing a flexible ethical framework in the AI system that doesn’t strictly adhere to a single philosophical approach, but rather takes into account the diversity of ethical considerations.

There are such amazing technological opportunities when you pause to think about how we can learn with these tools and systems.

Remy wasn’t bothered though. 🐩

You can see the full chat transcript here if you are interested.

.:

~ Tom

Prompts

.: Refine your promptcraft

Are you looking to improve the quality of responses from your interaction with LLMs and chatbots?

Try inducing an inner monologue.

Another way to put this prompt technique is to give instructions for working step by step.

The “inner monologue” prompt provides a framework for methodically thinking through a problem or request. It directs the AI assistant to take a deep breath and simulate an internal thought process, as a human would.

Key elements include:

  • Using <scratchpad> tags to document the thought process, including notes, assumptions, initial ideas, questions, and concerns. This creates transparency into how the AI is analysing the issue. We have done this before in Promptcraft with the <thinking> tags.
  • Critiquing the content itself, not the person, and providing honest, direct, but constructive feedback, based on my feedback protocols.
  • Organising scratchpad notes clearly in Markdown formatting. This structures the thought process.
  • Treating the scratchpad as an integral part of problem-solving, not just a tool. The act of note-taking enables exploration and adjustments.
  • Using the scratchpad to ultimately craft a comprehensive, thoughtful response. The inner monologue leads to synthesised yet grounded thinking.

Overall, this prompt technique can yield more deliberate and high-quality responses to your requests.

PROMPT

<Your initial request or prompt here>

Take a deep breath and begin an inner monologue to systematically analyse, critique and solve the given problem or request. Utilise <scratchpad> tags to keep track of your thought process, including your notes, assumptions, initial ideas, questions, and concerns. Be hard on the content and soft on the person creating the content. Your critique is honest and direct. Ensure your scratchpad is thorough and insightful. Scratchpad notes are organised clearly and formatted in Markdown. Treat this note-taking as a dynamic part of the problem-solving process, allowing for exploration and adjustments. Finally, use the information in your scratchpad to craft a comprehensive response. Remember, the scratchpad is not just a tool but an integral part of your analytical process.
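
If you want to run this programmatically and keep the scratchpad out of what you show others, here is a minimal sketch, assuming the Anthropic Python client; the model name is illustrative, and the regular expression simply splits the <scratchpad> section from the final response.

PYTHON SKETCH

import re

import anthropic  # assumes `pip install anthropic` and ANTHROPIC_API_KEY set

client = anthropic.Anthropic()

# Paste the full inner-monologue instructions from the prompt above.
INSTRUCTIONS = "Take a deep breath and begin an inner monologue to ..."

def solve(request: str) -> tuple[str, str]:
    message = client.messages.create(
        model="claude-2.1",  # illustrative model name
        max_tokens=2000,
        messages=[{"role": "user", "content": f"{request}\n\n{INSTRUCTIONS}"}],
    )
    text = message.content[0].text
    # Separate the documented thought process from the final response.
    match = re.search(r"<scratchpad>(.*?)</scratchpad>", text, re.DOTALL)
    scratchpad = match.group(1).strip() if match else ""
    answer = re.sub(r"<scratchpad>.*?</scratchpad>", "", text, flags=re.DOTALL).strip()
    return scratchpad, answer

notes, answer = solve("Critique this draft unit plan for Year 8 science: ...")
print(answer)  # keep `notes` for your own review of the reasoning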

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

GLOSSARY .: Key AI Terms and Concepts ➜ An essential list of AI-related terms and concepts, covering everything from the foundational definition of AI to specific technologies like Machine Learning, Generative AI, and ChatGPT. This resource is valuable for anyone looking to understand the basic lingo of AI. Explore the Glossary

EXPLANATORY GUIDE .: AI Explained in Accessible Prose ➜ This guide offers an understandable explanation of complex AI concepts, including Google’s transformers, large language models, and a mathematician’s view of AI operations. It’s a great resource for those who want to grasp how AI works in simple terms. Read the Guide

RESEARCH REPORT .: AI and Inclusivity for People with Disabilities ➜ An insightful OECD report discussing the potential and risks of AI in creating inclusive environments for people with disabilities. It also suggests actions for governments to maximise benefits and minimise risks associated with AI in the labour market for disabled individuals. Access the Report

Ethics

.: Provocations for Balance

  • Moral Values in AI: “How can we effectively instil moral values in AI systems, and should an ‘ethical governor’ be a standard component to regulate their behaviour? Who should define and oversee these ethical guidelines?”
  • Fair Compensation for AI-Generated Content: “What strategies could ensure fair compensation for creators in the face of AI’s ability to repurpose copyrighted content? Is channeling AI-generated revenue into public media and arts a feasible approach?”
  • AI Development and Responsible Innovation: “What steps are crucial for the AI community to prevent an ‘arms race’ in AI development and focus on long-term, ethical innovation? How important is multi-sector collaboration in this process?”

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

.: Promptcraft 36 .: OpenAI in turmoil & Promptcraft’s Christmas giveaway

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • OpenAI Leadership in Turmoil as Sam Altman CEO Sacked
  • Google expands Bard AI Chatbot access to teens
  • Microsoft becomes a Co-pilot company
  • 🎄 Share Promptcraft and enter my Christmas giveaway!

Let’s get started!

.: Tom

Latest News

.: AI Updates & Developments

.: OpenAI Leadership Turmoil ➜ OpenAI, the company and research lab behind ChatGPT, is in flux after the board sacked CEO Sam Altman on Friday. It helps not to send Promptcraft too soon, as this story has been pretty fluid over the weekend!

  • Friday: Sam Altman fired from OpenAI for lack of candour; Greg Brockman and researchers quit in protest.
  • Saturday: Interim OpenAI CEO Mira Murati tries to rehire Altman and Brockman; board looks for permanent CEO.
  • Sunday: Microsoft hires Altman and Brockman; OpenAI hires Twitch’s Emmett Shear as new CEO.
  • Monday: 500+ OpenAI employees threaten to quit unless board steps down; Sutskever expresses regret over Altman’s firing.
  • Latest: According to The Verge Sam Altman and Greg Brockman have expressed openness to coming back to OpenAI, but only if the board members responsible for firing Altman resign their positions.

Melissa Heikkilä at The Algorithm provides a helpful overview to catch up on and understand the next steps in this unfolding situation at OpenAI.

.: Google to expand Bard AI chatbot access to teens globally ➜ Google announced it will open up its AI chatbot Bard to teenagers globally in English starting November 16, with more languages to come. Bard aims to provide a helpful, informational tool for teens to learn new skills and find inspiration. Google consulted child safety experts and implemented guardrails to prioritise safety. Features include math equation solving, data visualisation, content policies to avoid unsafe content, and double-checking responses to develop critical thinking.

.: Microsoft unveils major AI plans and products at Ignite 2023 ➜ At its annual Ignite conference, Microsoft announced significant AI-related products and initiatives. These include rebranding Bing Chat to Microsoft Copilot, a Copilot Studio to allow custom AI bot creation, new AI chips like Azure Maia and Azure Cobalt to power Azure cloud services, adding generative AI capabilities to Teams VR meetings, and more. Key highlights show Microsoft’s continued push to infuse AI across its products and position itself as a leader in enterprise AI.


.: In New Experiment, Young Children Destroy AI at Basic Tasks ➜ A study found kids aged 3-7 greatly outperform AI models at basic problem solving and thinking tasks. Tests of tool innovation and inferring causal relationships showed children’s superior unconventional thinking. Researchers said that, unlike AIs, curious and motivated kids are intrinsically better at core innovation. The study highlights the limitations of current AI versus human cognition and reasoning.


.: YouTube will show labels on content that uses AI ➜ YouTube announced it will require creators to disclose use of AI to alter or synthesise realistic content. Labels will indicate to viewers that content uses AI, especially prominently for sensitive topics. This aims to avoid misleading viewers that AI content is real. Failure to properly disclose could lead to removal and suspension. YouTube is also introducing AI music removal requests to address fake songs.

.: Chinese startup 01.AI unveils powerful new open source AI models Yi ➜ Chinese company 01.AI has released two new large language models called Yi-6B-200K and Yi-34B-200K. The models are fully open source and can understand English and Mandarin. Yi-34B boasts 200,000 tokens of context, double ChatGPT’s capacity, though long prompts can challenge its recall. Yi benchmarks show strengths in comprehension, reasoning, and standardised AI tests. By being open source, Yi allows full customisability for developers to build local AI apps.

.: Alibaba, the major Chinese e-commerce company, open sources AI models Qwen-7B and Qwen-7B-Chat ➜ Alibaba’s cloud unit unveiled two new open source large language models named Qwen-7B and Qwen-7B-Chat with 7 billion parameters each. This positions the models as competitors to Meta’s similarly open sourced Llama 2 model. Alibaba says the move aims to help small and medium businesses adopt AI. The code and models are freely available globally, though licensing is required for large companies. This represents the first time a major Chinese tech company has open sourced a large language model.

.: Germany, France and Italy reach agreement on AI regulation in Europe ➜ The governments of Germany, France and Italy have agreed on an approach for regulating AI in Europe. They support mandatory self-regulation through codes of conduct for foundational AI models. The countries oppose unchecked norms and want to focus regulations on AI applications rather than the core technology. Under the proposal, AI developers would use model cards to provide information on capabilities and limitations. An EU AI governance body could help develop guidelines and oversight. The agreement aims to accelerate EU-level negotiations on an AI Act among European Commission, Parliament and Council.

Reflection

.: Why this news matters for education

Amidst all of the tumultuous news about OpenAI, I expanded my AI Literacy with two new terms: the “accels” who want to accelerate AI development at any cost, and the “decels” who favour slowing down development to ensure safety.

Although binary and reductionist, this philosophical divide over the pace of progress seems to be at the heart of the rift that led to the leadership shakeup at OpenAI.

Some have said that Ilya Sutskever, the Chief Scientist for OpenAI and board member, wants to slow down progress, while Sam Altman represents the race for faster development.

This tension between accelerating progress and prioritising safety is not new for OpenAI.

Dario Amodei, who was Vice President of Research at OpenAI until 2018, left the organisation amidst similar philosophical differences over the responsible pace of AI development. He went on to co-found Anthropic, the creator of the Claude-2 LLM, along with other former OpenAI researchers who were focused on AI alignment and robustness.

On the surface, OpenAI’s boardroom turmoil might appear to be just corporate drama with little bearing on education.

However, when viewed through an ecosystem lens, this news sends ripples that connect to our work in education in several ways:

  1. Focus on safety: The safety of AI products and their underlying architecture must be a top priority.
  2. Reliability of products and their architecture: The reliability of AI products is essential for ensuring their effective integration into educational settings.
  3. Centrality of major research labs and developers: OpenAI and other major AI research labs play a pivotal role in shaping the future of AI for education.
  4. Power shifts between big tech companies: The power dynamics among major tech companies can influence the trajectory of AI development.
  5. Profits over benefits for humanity: The pursuit of profits risks overshadowing the broader societal benefits of AI.
  6. Distracting noise: Energy, effort and time are pulled away from putting powerful AI tools in service of education.

Two undeniable facts: (i) OpenAI has set the standard for AI research and development, and (ii) it possesses the most powerful publicly available large language model, GPT-4.

This alone is enough to pique the interest of educators curious about the ripple effects of the organisation’s leadership changes.

A shift in the AI research and development ecosystem inevitably translates into a shift in the education ecosystem.

.:

~ Tom


Prompts

.: Refine your promptcraft

Let’s talk about GPTs.

Remember this stands for Generative Pre-trained Transformer, which means:

  • Generative: GPTs are able to generate new outputs, rather than simply regurgitating text that they have been trained on.
  • Pre-trained: GPTs are trained on a massive dataset of text and code before they are released to the public.
  • Transformer: GPTs use a transformer architecture, which is a type of neural network that is well-suited for natural language processing tasks.

OpenAI recently announced the capability, with a Plus (paid) account, to build your own chatbot – what they call GPTs.

So, what does this have to do with Promptcraft?

Well, the process for building GPTs automatically generates prompts. You simply say what you are looking to build, and it writes a prompt for you.

This begins to remove the need for writing your own prompts, but it puts up a fee barrier, and not everyone has access.

One way to replicate this is to use an instruction in your prompt to trigger automated improvement. Try this:

Act as an expert LLM prompt engineer and writer. Rate my LLM prompt below 1-10 and provide kind, specific and helpful feedback. If the rating is 8 or higher, execute the prompt. If it is lower than 8, generate a better prompt and explain how it is better.

My prompt: [add your prompt here]

Here is an example in ChatGPT 3.5, Claude-2-100k and Bard.
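
And if you would rather script the exchange than paste it by hand, here is a minimal sketch, assuming the OpenAI Python client; the meta-prompt is the one above verbatim, and the model name is illustrative.

PYTHON SKETCH

from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

META_PROMPT = (
    "Act as an expert LLM prompt engineer and writer. Rate my LLM prompt "
    "below 1-10 and provide kind, specific and helpful feedback. If the "
    "rating is 8 or higher, execute the prompt. If it is lower than 8, "
    "generate a better prompt and explain how it is better.\n\n"
    "My prompt: {prompt}"
)

def improve(prompt: str) -> str:
    # Wrap the user's prompt in the improver instruction and send it off.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; matches the ChatGPT 3.5 example
        messages=[{"role": "user", "content": META_PROMPT.format(prompt=prompt)}],
    )
    return response.choices[0].message.content

print(improve("Write a welcome message for new students joining our school"))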

*Remember, the scoring is all a bit unreliable; you are just creating an exchange to improve your prompts.

**And, as I always say, remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

RESEARCH TOOL .: OECD AI Incidents Monitor (AIM) ➜ A fascinating analysis tool which documents AI incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the incidents and hazards that concretise AI risks.

RESEARCH INDEX .: Latin American Index of Artificial Intelligence ➜ A comprehensive analysis of the status of AI in twelve countries in Latin America: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico, Panama, Paraguay, Peru and Uruguay. Each file elaborates on: Enabling Factors, Research, Development and Adoption, and Governance.

I was curious about AI news from here as we live in media geo-bubbles, so I was pleased to discover this resource providing insight into what is happening in Latin America.

REPORT .: Colonialism and AI ➜ This report by Anna Gausen and Accessible AI explores how AI is at risk of repeating the patterns of our colonial history and how we can begin to decolonise AI.

It covers:

  • A Look Back At Our Past: Society has been shaped by our colonial history.
  • Where We Are Today: The way AI is being deployed by the global west could reinforce colonial power dynamics.
  • A Vision For The Future: How we can rebalance power and diversify voices in AI.

Ethics

.: Provocations for Balance

  • When making decisions about AI progress, whose voices need to be at the table beyond corporate executives?
  • If current AI lacks core elements of human reasoning, when should we be cautious about over-applying it to tasks requiring critical thinking?
  • When AI-generated content crosses ethical lines, how should accountability be determined given the complex web of humans and algorithms involved in systems?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

.: Promptcraft 35 .: Why Aren’t More Women Using AI?

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Google Announces ‘Assistant with Bard’ for Android and iOS
  • The Screen Actors Guild’s strike-ending deal has entered its final step
  • Humane has launched the AI Pin, a new AI-powered wearable gadget

Let’s get started!

✨ P.S. Get in touch if you want to join me in dialogue on my upcoming AI for Education webinars. I would love to hear from you! ✨

.: Tom

Latest News

.: AI Updates & Developments

.: Google Announces ‘Assistant with Bard’ for Android and iOS ➜ A combination of the generative and reasoning capabilities of Bard with the personalised help of Google Assistant. This includes Bard Extensions that can access Gmail, Google Drive, and Docs to answer queries. Additionally, Assistant with Bard has a “conversational overlay” that can accept text, voice, or image input. Google calls this an “early experiment,” with plans to roll it out to early testers for feedback before public availability over the next few months.

.: AI Facial Recognition Wrongfully Imprisons Innocent Man ➜ In a landmark incident, Robert Williams was wrongfully arrested in January 2020, marking the first documented case in the U.S. where facial recognition technology led to a false detention. This arrest occurred amidst a surge in law enforcement’s use of powerful AI for facial recognition. Williams’s case, resulting from a mistaken match by the Detroit Police Department’s facial recognition system, underscores the emerging challenges and ethical considerations in deploying AI technologies within the criminal justice system​. Despite the known flaws and the potential for mass surveillance threatening privacy, law enforcement continues to increasingly rely on such AI systems.

.: ‘Alarming’: Convincing AI Vaccine and Vaping Disinformation Generated by Australian Researchers ➜ Australian researchers have highlighted the power of AI to generate harmful disinformation. In an experiment, they used AI to create over 100 misleading health blogposts in multiple languages within just over an hour, bypassing safeguards meant to prevent the generation of misleading or harmful content. The experiment underscores the need for stronger industry accountability and better safeguards against the misuse of AI.


.: Humane’s AI Pin: all the news about the new AI-powered wearable ➜ Humane has launched the AI Pin, a new AI-powered wearable gadget designed to replace your smartphone. The gadget, which can be attached to your clothing using a magnetic battery pack, allows users to perform typical smartphone tasks. In addition, the AI Pin features a laser projector that can cast a UI onto your hand to control certain aspects of the device.


.: Australia ‘at the Back of the Pack’ in Regulating AI, Experts Warn ➜ Australia, despite being part of the 28 countries alongside the EU to sign the Bletchley declaration on AI, is lagging behind in AI funding and regulation, warn experts. Critics worry that Australia risks being left behind, especially considering recent US regulations that require companies to share safety test results prior to releasing AI models.

.: Why are Fewer Women Using AI than Men? – BBC News ➜ The article explores the reasons behind fewer women than men using artificial intelligence (AI), particularly AI chatbot ChatGPT. While the chatbot has over 180 million users, many women, including jeweller Harriet Kelsall and business coach Michelle Leivars, express concerns about the reliability of the AI and the potential loss of authenticity in their communication. A survey earlier this year revealed that only 35% of women use AI in their professional or personal lives, compared to 54% of men. The article suggests that this disparity may largely be due to the confidence gap and the fear of criticism that many women face when using AI tools.

.: Most of our friends use AI in schoolwork – BBC News ➜ A recent report by BBC Young Reporters Theo and Ben explores the use of Artificial Intelligence (AI) among pupils in their school. The majority of their peers admit to using AI, specifically ChatGPT, to assist with homework, formulating ideas, and structuring their work. However, some students confess to the misuse of AI in providing answers, a practice that has resulted in inaccurate information. Despite these drawbacks, many still find the AI tool useful and suggest it should be taught in schools.

.: The Screen Actors Guild’s strike-ending deal has entered its final step ➜ The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) has reached a tentative deal with Hollywood studios, ending a 118-day actors’ strike. The agreement, approved by the national board, is awaiting final ratification from union members. A significant aspect of the deal is a set of protections around the use of artificial intelligence, which mandates informed consent and compensation when guild members are replicated digitally using AI.

Reflection

.: Why this news matters for education

OK, welcome, everyone. Please make sure you switch over to human-only mode on your wearables. Remember what we discussed last week about the trust signals on your devices, especially those with in-ear buds. James, you can come and get the glasses you left yesterday, but I think you will need to recharge them.

Although the Humane AI Pin is a curiosity, when you look a little more closely, it is just a phone without a screen. It even comes with a monthly plan from T-Mobile in the US!

Also, the demo video included a glaring error about the best place in the world to see the equinox. Another example of hallucinating large language model results added to tech demos without fact-checking. I am looking at you, Google Bard! (through a telescope 😉 )

We already have very powerful devices in our pockets, managed in different ways by schools and education systems. The mobile infrastructure on which we might experience AI-augmented learning is vast.

According to a report from Statista, there are more mobile subscriptions than people on the planet.

There were more than 8.58 billion mobile subscriptions in use worldwide in 2022, compared to a global population of 7.95 billion.

So, I wonder how we leverage our devices in better ways to assist, augment and amplify teaching and learning.

The path ahead for personal devices, whether smartphones, pins or glasses, is to make advanced AI capabilities easy to access. This converges with another direction: the personalisation of AI through agents designed for narrower tasks and powered by a richer context of who you are.

All of this is glued together with data, making me wonder: who owns my heart rate data from my Garmin watch?

This question touches the part of the data ecosystem where fitness and health information is gathered, stored and analysed by various apps and wearables. How might we connect these data pools to further enrich the learning experience?

Thanks, 9TB – wait a moment as I sync your wearable data with today’s adaptive learning algorithms. Based on your elevated heart rate and cortisol levels last class, it looks like the system has adjusted difficulty down 12% and dialled up the soothing ambient sounds by half a notch. I know some of you are still adjusting and find it strange having your personal biometrics directly tune your learning. Ada and Alan, your orientation modules for this are still incomplete; please try and get those done by Friday. Remember, everyone, the tech doesn’t know your specific activities like late-night vampire movie marathons! The system simply senses general signs you’re a tad sleepy today and adjusts accordingly to help you focus better.

.:

~ Tom

Prompts

.: Refine your promptcraft

This week I am sharing my draft of an Imaginary Scenario Prompt Framework. The aim of this multi-step prompt interaction is to surface assumptions and constraints and then scaffold thinking that pushes beyond those limitations.

A pre-requisite is a chat session where you have been building, designing and exploring some new ideas. Use this set of prompts once you have a conversation to review.

1. Constraint Analysis Prompt:

“Review our conversation and recap the key constraints we’ve discussed so far. Please summarise the 2-3 most significant limitations or barriers that are shaping our conversation about [topic]. These might be explicit or implicit.”

Aim: Concisely identify the core constraints for the LLM to focus on.

2. Imagined Future Prompt:

“Now imagine a future 15 years from today where one or more of those key constraints no longer exist due to technological, social, or policy innovations. Describe a scenario where [constraint 1] and [constraint 2] have been removed. What are some potential benefits but also risks or downsides of this future? Be creative and think outside the box about how institutions, human behaviour, and society as a whole might function differently in this scenario. Provide practical examples of how new technologies or policies could enable this future while considering balanced and nuanced perspectives.”

Aim: Spur your own creative thinking about an optimistic but grounded future where key constraints are lifted.

3. Follow-up Prompt:

“That scenario covers some interesting possibilities. Can you focus on how [example technology or policy] would work and expand on how it would concretely impact people’s lives?”

Aim: Iterate for more details and depth on the imagined scenario.
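
Because the framework is a multi-step exchange, it also lends itself to a small script that carries the conversation history between prompts. Here is a minimal sketch, assuming the OpenAI Python client; the step texts are abridged from the prompts above and the model name is illustrative.

PYTHON SKETCH

from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

# Abridged versions of the three prompts above; fill in your own topic
# and constraint details before running.
STEPS = [
    "Review our conversation so far and summarise the 2-3 most significant "
    "limitations or barriers shaping our conversation about {topic}.",
    "Now imagine a future 15 years from today where those key constraints "
    "no longer exist. Describe the scenario, its benefits, risks and "
    "practical examples of enabling technologies or policies.",
    "That scenario covers some interesting possibilities. Pick one example "
    "technology or policy and expand on how it would concretely impact "
    "people's lives.",
]

def run_framework(history: list[dict], topic: str) -> list[dict]:
    # `history` is the prior design conversation you want to review,
    # as a list of {"role": ..., "content": ...} messages.
    for step in STEPS:
        history.append({"role": "user", "content": step.format(topic=topic)})
        reply = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=history,
        )
        content = reply.choices[0].message.content
        history.append({"role": "assistant", "content": content})
        print(content, "\n---")
    return history

Keeping the full history in each call is what lets the LLM review the earlier design conversation in step one and build on its own scenario in steps two and three.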

.:

Just a postscript: statistical language models have limits to how creative they are (if they are creative at all!). They are built to predict the most likely next word rather than diverge to something unexpected, so keep that in mind.

My approach is to collaborate with a wide range of AI tools to amplify my creativity, not to sit back and think an LLM can do better. I encourage you to stay in the creative loop.

And, as I always say, remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

ARTICLE .: Untangling AI Hype from Reality ➜ This ABC News article demystifies the hype surrounding AI by exploring its current capabilities, limitations, and future potential. It offers a grounded perspective on the state of AI technology, making it an essential read for those looking to understand the realistic prospects of AI.

COURSE .: Unlock AI Secrets with Amazon’s Free Learning Resources ➜ Amazon’s initiative, as highlighted by ZDNet, opens doors to free AI learning resources. It’s an excellent chance for educators and learners to enhance their AI skills and knowledge without the financial barrier, fostering broader accessibility to AI education.

EXPLANATION .: Explained: Generative AI ➜ MIT News provides an insightful and accessible explanation of Generative AI, a crucial domain within the AI landscape. This resource breaks down the concept, its applications, and significance, making it a valuable educational tool for anyone interested in this aspect of AI.

Ethics

.: Provocations for Balance

  • With the rise of emotional analysis AI, how do we protect people’s psychological privacy? Should individuals have a right to consent before their emotions are analysed by algorithms?
  • If an AI system makes a mistake that harms a student’s learning or future prospects, who is liable? How do we balance accountability with encouraging innovation in AI for education?
  • Should educators be required to disclose when AI is being used for certain teaching tasks? What happens when it swings the other way, and we have less trust in human-only generated content?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our educational systems become. Thanks for being part of our growing community!

Please pay it forward by sharing the Promptcraft signup page with your networks or colleagues.

.: Tom Barrett

/Creator /Coach /Consultant