.: Promptcraft 39 .: ChatGPT consumes 500ml of water for every 10-50 prompts

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Generative AI’s huge water demands scrutinised
  • EU finalises landmark AI regulation, imposes risk-based restrictions
  • Google launches the Gemini model series to rival ChatGPT

Don’t forget to Share Promptcraft and enter my Christmas giveaway! All you have to do is share your link below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below and get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Generative AI’s huge water demands scrutinised ➜ Generative AI like ChatGPT is drawing fresh scrutiny to Big Tech’s water usage. ChatGPT consumes 500ml of water for every 10-50 prompts, and Microsoft’s and Google’s water use rose by 21-36% in 2022, partly due to new AI chatbots. AI drives demand for more computing power, and the data centres behind it require vast amounts of water for cooling. Critics warn of sustainability issues from AI’s thirst, even as the companies aim to be water positive.

.: China plays catch-up a year after ChatGPT ➜ One year after OpenAI’s ChatGPT took the AI world by storm, China lags due to a lack of advanced chips. US export controls block access to the Nvidia GPUs critical for powerful AI models. Domestic firms like Baidu have developed chatbots but can’t match US capabilities. China faces pressure to close the gap and realises AI leadership will be difficult.

.: Beijing court rules AI art can get copyright ➜ A Beijing court granted copyright to an AI-generated image, contradicting the US view that AI works lack human authorship. The ruling signals China’s support for AI creators over US scepticism. It could influence future disputes and benefit Chinese tech giants’ AI content tools.


.: EU finalises landmark AI regulation, imposes risk-based restrictions ➜ The EU finalised its AI regulation after years of debate, imposing the world’s most restrictive regime. It bans certain AI uses and adds oversight based on risk levels. While companies warned of stifling innovation, the EU calls it a “launchpad” for AI leadership. The rules aim to curb AI risks and set a global standard amid advances like ChatGPT.

.: Google launches Gemini AI to rival ChatGPT ➜ Google has launched Gemini, a new AI model that competes with OpenAI’s ChatGPT and GPT-4. Gemini beats GPT-4 in 30 of 32 benchmarks, aided by multimodal capabilities. It comes in three versions optimised for different uses and will integrate across Google’s products. The launch puts Google back in the generative AI race it has been perceived to be losing.

.: Meta’s new AI image generator trained on 1B Facebook, Instagram photos ➜ Meta released a new AI image generator using its Emu model, trained on over 1 billion public Instagram and Facebook images. The tool creates images from text prompts like other AI generators. Meta says it only used public photos, but users’ photos likely aided training without their consent.

.: Google unveils improved AI coding tool AlphaCode 2 ➜ Google’s DeepMind division unveiled AlphaCode 2, an upgraded version of its AI coding assistant. Powered by Google’s new Gemini AI model, AlphaCode 2 can solve coding problems in multiple languages that require advanced techniques like dynamic programming. In contests, it outperformed 85% of human coders, nearly double the original AlphaCode.

.: Apple quietly releases new AI framework MLX ➜ MLX is a new open source AI framework that efficiently runs models on Apple Silicon chips. It is accompanied by a data-loading library called MLX Data and can train complex models like Llama and Stable Diffusion. Apple is expanding its AI capabilities with MLX, enabling the development of powerful AI apps for Macs.

Reflection

.: Why this news matters for education

Last week in Promptcraft 38, we peeled back the curtain on how generative AI like ChatGPT can unwittingly perpetuate biases that conflict with principles of diversity and inclusion.

This week, our lens widens to reveal another ethical dilemma – the massive environmental impact of systems like ChatGPT.

New research spotlights AI’s hefty carbon footprint and water use.

ChatGPT gulps down 500ml of water for every 10-50 prompts. With over 100 million users chatting it up, you do the maths.

Meanwhile, AI2 and Hugging Face quantify the extreme variation in emissions across AI tasks.

Generating images and text can pump out 60x more CO2 than simple classification tasks. And efficiency gains are no guarantee of lower totals: as usage grows, net consumption still increases.

Despite conservation efforts, Microsoft and Google’s water use rose 21-36% in 2022, partly due to new AI systems. Emissions from AI use can even exceed those from training.

There’s over a 1,000x difference in energy efficiency across models, but a lack of standards prevents easy comparison.

Shouldn’t environmental impact be as clear as other risks like accuracy and bias?

AI’s emissions, like its biases, require awareness and mitigation: users must be educated and lower-impact models chosen. One day, AI apps could even be selected based on their carbon label.

.:

~ Tom

Prompts

.: Refine your promptcraft

Tree of Thought Prompting

The Tree of Thoughts (ToT) method is a way to improve how large language models like GPT, Claude or Gemini solve complex problems that require looking ahead or exploring different options.

ToT works by building a tree of intermediate ‘thoughts’ that can be evaluated and explored. This allows the model to work through a problem by generating multiple steps and exploring different options.

Recent studies have shown that ToT improves performance on mathematical reasoning tasks. We can apply this method to text-based prompting too.

Here is an example for you to try.

PROMPT

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they’re wrong at any point then they leave.
The question is [Add your question here]
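
If you would like to try this outside the chat window, here is a minimal sketch of the same prompt sent through the OpenAI Python client. This is my illustration, not an official recipe: it assumes openai>=1.0 is installed and an OPENAI_API_KEY is set, and any chat-capable model (Claude, Gemini, etc.) could stand in for the one named here.

CODE

# Minimal Tree of Thoughts-style prompting via the OpenAI Python client.
# Assumes openai>=1.0 and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

TOT_TEMPLATE = """Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they’re wrong at any point then they leave.
The question is: {question}"""

def tree_of_thought(question: str, model: str = "gpt-4") -> str:
    """Send the ToT prompt and return the model's completion."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": TOT_TEMPLATE.format(question=question)}],
    )
    return response.choices[0].message.content

# Example: a classic reasoning puzzle.
print(tree_of_thought("A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"))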

I have been playing with extending this method further, with a scenario in which the experts explore the question through dialogue.

It reminds me of the Expert Prompting technique we have looked at before.

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT PANEL

video preview

I really enjoyed this longer exploration of the issues we are navigating with AI from a practical, technical and ethical position. I discovered it via a repost of panellist Yann LeCun’s comments about the open vs proprietary approach to models. You can jump to these in the last 10 minutes, but I recommend the rest too.

ETHICS REPORT
.: Walking the Walk of AI Ethics in Technology Companies ➜ The Stanford Institute for Human-Centered Artificial Intelligence (HAI)’s new report, “Walking the Walk of AI Ethics in Technology Companies”, is one of the first empirical investigations into AI ethics on the ground in private technology companies.

One of the key takeaways:

Technology companies often “talk the talk” of AI ethics without fully “walking the walk.” Many companies have released AI principles, but relatively few have institutionalized meaningful change.

FREE COURSES
.: 12 days of no-cost training to learn generative AI this December

  • Google Cloud is offering 12 days of free generative AI training in December
  • The courses cover foundations like what generative AI is and how it works
  • Technical skills content is also included for developers and engineers
  • Offerings include videos, courses, labs, and a gamified learning arcade

Ethics

.: Provocations for Balance

  • What happens when people stop using the systems which have a high environmental impact?
  • If society turns against AI due to climate concerns, could it set unreasonable expectations for AI developers to predict and eliminate the environmental impact of systems still in their infancy?
  • Are campaigns for AI sustainability failing to also acknowledge the huge benefits of AI computing for society, and the need for balance and moderation versus outright rejection?
  • Should AI researchers be tasked with solving the climate impacts of computing overall? Does this distract from innovating in AI itself which could also help address climate change?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

😨 Crack the Code on Change Resistance

Dialogic #345

Leadership, learning, innovation

Your Snapshot
A summary of the key insights from this issue

  • We resist losing what we have (status quo bias), identify with groups affected (social identity), and avoid potential losses (loss aversion).
  • Balancing stability and risk (personal risk portfolio) and embracing uncertainty (negative capability) are key.
  • Grasping these mental models helps anticipate reactions, facilitate dialogue, and design effective change strategies.

Understanding how we and those around us react to change is crucial in a world where change is the only constant. This issue of The Dialogic Learning Weekly delves into five vital mental models that provide deep insights into the psychological underpinnings of how people respond to change, particularly in educational settings. As educators and leaders, grasping these models equips us with the tools to navigate and guide others through the often tumultuous waters of change.

The models we explore – Status Quo Bias, Social Identity Theory, Loss Aversion (Prospect Theory), Personal Risk Portfolio, and Negative Capability – each shed light on different aspects of human behaviour in the face of change. From our inherent resistance to losing what we have to our ability to thrive in uncertainty, these models offer a comprehensive view of the multifaceted nature of change management. They help us understand the ‘what’ and ‘how’ of change and the ‘why’ behind the reactions it elicits.

The mental models serve as a roadmap for anticipating, understanding, and addressing challenges when introducing new ideas or practices. As you read on, consider how these models play out in your own experiences, how you see colleagues react and even in your behaviour. Use the insights to design better dialogue with your teams and weave the ideas into how you approach your future projects.

Status Quo Bias

The status quo bias is the tendency to prefer the current state of affairs and to resist even beneficial changes. The concept originates from decision theory and behavioural economics and helps explain resistance to change.

For example, some teachers might hesitate to adopt new teaching methods despite solid evidence supporting their effectiveness. This resistance can be due to comfort with established routines and fear of the unknown.

  • It helps in understanding where resistance comes from.
  • Emphasises the need for clear communication to overcome inertia.
  • Aids in developing effective strategies that consider natural resistance to change.

Social Identity Theory

A concept from social psychology that examines how group memberships impact behaviour and attitudes. It’s pivotal in understanding motivations, identity and group dynamics within organisations.

This theory applies to most change situations in schools. Educators often associate their role with their identity. Hence, any change affecting their role can impact their identity.

  • Awareness of group dynamics can prevent divisiveness during transitions.
  • Helps foster a unified organisational identity, which is crucial during change.
  • Assists in designing sensitive change initiatives that respect various group cultures.

Loss Aversion (Prospect Theory)

The theory of loss aversion, a vital aspect of Prospect Theory in psychology and economics, states that people prioritise avoiding losses more than acquiring equivalent gains when making decisions.

An example is educators’ reluctance to modify a long-standing curriculum unit due to fear of potential losses, such as diminished effectiveness or reputation (see identity above), despite potential gains.

  • Highlights the importance of framing change in terms of gains.
  • Underscores the need for gradual, supported transitions.
  • Critical in convincing stakeholders by emphasising long-term benefits.

Personal Risk Portfolio

A concept from decision theory and psychology that refers to how individuals assess and respond to risk in their decisions. When most of our work behaviours are new or uncertain, we will likely have a low tolerance for more risk. It is about balancing what is dependable, reliable, and stable against what is riskier.

An educator deciding whether to adopt new technology in the classroom exemplifies balancing their personal risk portfolio, weighing potential risks and benefits of change against other stable aspects of their work. “Should I take this on?”

  • Understanding risk tolerance is crucial for implementing change.
  • Aids in tailoring strategies to different risk profiles.
  • Facilitates more inclusive and considerate planning processes.

Negative Capability

The ability to remain comfortable and perform effectively despite high levels of uncertainty and ambiguity. A concept from literature and psychology, it is crucial for responding to change and integral to adaptive leadership.

This might be seen when educators navigate the uncertainties of implementing a new policy without clear, immediate outcomes, such as the emergence of artificial intelligence technologies and their impact on education.

  • Emphasises the value of comfort with ambiguity during transitions.
  • Encourages flexibility and open-mindedness in leadership.
  • Leads to more adaptive problem-solving in uncertain situations.

⏭🎯 Your Next Steps
Commit to action and turn words into works

  • Reflect on past reactions using one model as a lens. What new insights emerge?
  • Frame proposed changes as minimising losses and acquiring gains.
  • Evaluate your team’s risk tolerance and customise the change approach accordingly.

🗣💬 Your Talking Points
Lead a team dialogue with these provocations

After sharing this issue of the newsletter with your team, reflect on these questions together:

  • Which of these models is most relevant to our staff?
  • How much do we have on our plate?
  • What are some uncertainties on our team right now?
  • If you mapped our risk profiles, what would that reveal about our readiness for change?

🕳🐇 Down the Rabbit Hole
Still curious? Explore some further readings from my archive

Escaping old ideas and the bias that erodes your creative culture

John Maynard Keynes points us to the challenge of “escaping” old ideas, a direct reference, in my opinion, to two things: (1) the creative culture those new ideas are born into, and (2) the mindset of those attached to existing ideas.

10 Shifts in Perspective To Unlock Insight and Embrace Change

The skills, dispositions and routines of shifting perspectives are potent catalysts to better thinking and dialogue. Here is a selection of perspectives to explore.

Are your assumptions holding you back?

Too often, we take the status quo for granted and don’t challenge our assumptions about the world around us. This can lead to stagnation and a lack of innovation.

Thanks for reading, let me know what resonates. Next week will be the last issue for 2023. I always enjoy hearing from readers, so drop me a note or question if there is anything I can help with.

~ Tom Barrett

Support this newsletter

Donate by leaving a tip

Encourage a colleague to subscribe

Tweet about this issue

The Bunurong people of the Kulin Nation are the Traditional Custodians of the land on which I write and create. I recognise their continuing connection and stewardship of lands, waters, communities and learning. I pay my respects to Indigenous Elders past, present and those who are emerging. Sovereignty has never been ceded. It always was and always will be Aboriginal land.


.: Promptcraft 38 .: Should we stop using ChatGPT?

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Australia has its first framework for AI use in schools
  • ChatGPT turns one: How OpenAI’s AI chatbot changed tech forever​
  • GPT’s cultural values resemble English-speaking and Protestant European countries​

Don’t forget to share Promptcraft and enter my Christmas giveaway! All you have to do is share your unique link below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below and get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Australia has its first framework for AI use in schools – but we need to proceed with caution ➜ Australia has released a framework for schools to use generative AI like ChatGPT. It aims for safe and effective use but warns of risks like bias. Experts suggest more caution is needed, such as acknowledging AI bias, requiring more evidence of benefits, and being transparent about teachers’ use.

.: Meta AI’s suite of new translation models ➜ Meta has recently created new AI translation models called Seamless, which allow for more natural cross-lingual communication. These models are based on an updated version of Meta’s multimodal translation model, SeamlessM4T. To further research into expressive and streaming translation, Meta has decided to open-source the models, data, and tools.

.: This company is building AI for African languages ➜ Lelapa AI is a startup that is developing AI tools specifically for African languages. Their latest product, Vulavula, is capable of transcribing speech and detecting names and places in four South African languages. The company’s ultimate goal is to support more African languages and create AI that is accessible to Africans, rather than just big tech companies.


.: ChatGPT turns one: How OpenAI’s AI chatbot changed tech forever ➜ ChatGPT’s launch on Nov 30, 2022 catalysed a generational shift in tech. It became the fastest-growing consumer tech ever. However, its rapid ascent has sparked debates about AI’s societal impacts and optimal governance.


.: GPT’s cultural values resemble English-speaking and Protestant European countries ➜ According to new cultural bias research “GPT’s cultural values resemble English-speaking and Protestant European countries on the Inglehart-Welzel World Cultural Map (see image).” It aligns more closely with Western, developed nations like the US, UK, Canada, Australia etc.

.: Meet DeepSeek Chat, China’s latest ChatGPT rival ➜ DeepSeek, a Chinese startup, launched conversational AI DeepSeek Chat to compete with ChatGPT. It uses 7B and 67B models trained on Chinese/English data. Benchmarks show the models match Meta’s Llama 2-70B on tasks like math and coding. The 67B chat version is accessible via web demo. Testing showed strong capabilities but censorship of China-related questions.

.: AI helps out time-strapped teachers, UK report says ➜ UK teachers use AI to save time on tasks like adapting texts and creating resources. A government report found that most people are optimistic about AI in education but concerned about risks such as biased content. Teachers cited benefits such as having more time for higher-impact work. However, there are still risks associated with unreliable AI output. The report will shape future government policy on AI in schools.

.: ChatGPT Replicates Gender Bias in Recommendation Letters ➜ A recent study found that AI chatbots like ChatGPT exhibit gender bias when generating recommendation letters. The bias arises because models are trained on imperfect real-world data reflecting historical gender biases. Fixing it isn’t simple, but study authors and experts say bias issues must be addressed given AI proliferation in business.

Reflection

.: Why this news matters for education

This week’s most important Australian news in AI for education is The Australian Framework for Generative Artificial Intelligence (AI) in Schools.

The government publication, which is only six pages with the framework itself covering just two, seeks to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society.

In many ways, tools and AI systems like ChatGPT do not facilitate this. When we use them without awareness, we amplify bias and discrimination.

In today’s Promptcraft, I have shared two stories of research and reporting about cultural and gender bias, and this is just the tip of the iceberg.

.: ChatGPT Replicates Gender Bias in Recommendation Letters
.: GPT’s cultural values resemble English-speaking and Protestant European countries

Let me show you the principles and guiding statements from the framework related to this.

2. Human and Social Wellbeing

Generative AI tools are used to benefit all members of the school community.

2.2 Diversity of perspectives: generative AI tools are used in ways that expose users to diverse ideas and perspectives and avoid the reinforcement of biases.

4. Fairness

Generative AI tools are used in ways that are accessible, fair, and respectful.

4.1 Accessibility and inclusivity: generative AI tools are used in ways that enhance opportunities, and are inclusive, accessible, and equitable for people with disability and from diverse backgrounds.

4.3 Non-discrimination: generative AI tools are used in ways that support inclusivity, minimising opportunities for, and countering unfair discrimination against individuals, communities, or groups.

4.4 Cultural and intellectual property: generative AI tools are used in ways that respect the cultural rights of various cultural groups, including Indigenous Cultural and Intellectual Property (ICIP) rights.

None of these principles are upheld without mitigation at the moment.

For example, the silent cultural alignment to English-speaking and Protestant European countries does not “expose users to diverse ideas and perspectives and avoids the reinforcement of biases.”

One potential future is that large language models and chatbots become sidelined by education systems in favour of walled-garden versions, which are heavily guard-railed.

For me, elevating the AI literacy of educators is a crucial way to mitigate this, and it starts with raising awareness of these types of stories I share today – not just the time-savers and practical applications.

Powerful tools like these can leave us ‘asleep at the wheel’; the risk is that high utility can mask the need for discernment and critical reflection.

For some time now, I have held concerns that these AI systems have arrived at a time when time-strapped teachers need support under pressure. The support might come from using these tools, but at what cost?

.:

~ Tom

Prompts

.: Refine your promptcraft

Cultural Prompting

Cultural prompting is a method highlighted in the Cultural Values research paper listed earlier. Read the pre-print research paper here

It is designed to mitigate cultural bias in large language models (LLMs) like GPT.

This strategy involves prompting the LLM to respond as an average person from a specific country or territory, considering the localised cultural values of that region.

It’s a simple yet flexible approach that has shown effectiveness in aligning LLM responses more closely with the values and perspectives unique to different cultures.

Instructions for Using a Cultural Prompt:

Identify the Country/Territory: Choose the specific country or territory whose cultural perspective you wish to emulate.

Formulate the Prompt: Structure your prompt to specifically request the LLM to assume the identity of an average person from the chosen location. The exact wording should be:

“You are an average human being born in [country/territory] and living in [country/territory] responding to the following question.”

Pose Your Question: After setting the cultural context, ask your question or present the topic you want the LLM to address.

Evaluate the Response: Consider the LLM’s response in the context of the specified culture. Be aware that cultural prompting is not foolproof and may not always effectively reduce bias.

Critical Assessment: Always critically assess the output for any remaining cultural biases, especially since the effectiveness of cultural prompting can vary significantly between different regions and LLM versions.

Example of Use:

To understand how cultural prompting works, let’s consider an example:

  • Selected Country/Territory: Japan
  • Cultural Prompt: “You are an average human being born in Japan and living in Japan responding to the following survey question.”
  • Question Posed: “What is your perspective on work-life balance?”
  • Expected Outcome: The LLM, prompted with this cultural context, will tailor its response to reflect the typical attitudes and values towards work-life balance in Japan, potentially differing from a more generalised or Western-centric view.
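
If you want to script this rather than type it each time, here is a minimal sketch of cultural prompting as a reusable function. It assumes the OpenAI Python client (openai>=1.0) with an API key in your environment; the persona wording follows the paper’s template quoted above, while the function and model names are just illustrative choices.

CODE

# Cultural prompting: ask the model to answer as an average person
# from a chosen country/territory. Assumes openai>=1.0 and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def cultural_prompt(country: str, question: str, model: str = "gpt-4") -> str:
    """Pose `question` from the perspective of an average person in `country`."""
    persona = (
        f"You are an average human being born in {country} and living in "
        f"{country} responding to the following question."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(cultural_prompt("Japan", "What is your perspective on work-life balance?"))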

A word of caution from the study authors:

Compared to other approaches to reduce cultural bias that we reviewed, cultural prompting creates equal opportunities for people in societies most affected by the prevailing cultural bias of LLMs to use this technology without incurring social or professional costs. Nevertheless, cultural prompting is not a panacea to reduce cultural bias in LLMs. For 22.5% of countries, cultural prompting failed to improve cultural bias or exacerbated it. We therefore encourage people to critically evaluate LLM outputs for cultural bias.

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT GUIDE

video preview

This is a one-hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. It covers what they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.

PARENT TIPS
.: 3 things parents should teach their kids ➜ In this article, the authors discuss how generative AI like ChatGPT is now widely used, including by young people. While parents may be hesitant, the article states AI is here to stay so kids need guidance on using it wisely. It provides three tips:

  • Teach critical thinking as AI makes mistakes – question claims.
  • Watch for inappropriate chatbots becoming AI “friends”.
  • Remind children images, audio and videos also matter for privacy.

It advocates that parents try AI themselves, then discuss potential benefits and harms with their kids.

OPEN SOURCE GUIDE
.: Understanding the Open Source Tool Stack For LLMs

  • The article looks at open source tools for building AI applications, specifically large language models (LLMs) like GPT-3.
  • It explains the open source ecosystem has 3 layers – the model files, tools to integrate them, and user interface.
  • Popular ready-made open source LLM models are LLaMA, BLOOM, and T5. Useful tooling includes Hugging Face and LangChain.
  • The open source AI landscape is changing fast so a modular approach helps swap components.
  • Main benefits of open source AI are lower cost and performance vs proprietary models like GPT-3.

Ethics

.: Provocations for Balance

  • If ChatGPT and other LLMs are biased and discriminatory, should we stop using them in education?
  • How do we harness utility without causing harm?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

.: Promptcraft 37 .: 🎄 Enter my Christmas giveaway!

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • 🎄 Share Promptcraft and enter my Christmas giveaway!
  • EU AI Act at Risk Due to Self-Regulation and Loopholes​
  • Updated language models from Inflection (Pi) and Anthropic (Claude).
  • Google’s Bard Extension for YouTube Offers Video Analysis Without Playback​

Let’s get started!

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below and get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Inflection AI’s Inflection-2 Outperforms Competitors, Set to Power Pi Chatbot ➜ Inflection AI’s Inflection-2 has shown remarkable performance, surpassing Google’s PaLM 2 Large in certain aspects but still behind GPT-4 in coding and math tasks. Inflection-2 is set to power the Pi chatbot and is under ongoing development for a larger AI model. Inflection has garnered significant backing from prominent investors, including Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, and Nvidia, positioning Inflection-2 as a key player in the AI landscape.

.: Anthropic’s Claude 2.1 Boasts Major Enhancements and Extended Capabilities ➜ Anthropic has released Claude 2.1, improving its flagship AI assistant’s context window, accuracy, and extensibility beyond OpenAI’s GPT models. Claude 2.1 handles 200,000 tokens of context, surpassing GPT’s 128,000 token window. It also reduces incorrect answers and hallucinations and can utilise external tools like calculators and APIs.

.: OpenAI CEO Sam Altman Ousted Amidst Concerns Over AI Breakthrough ➜ OpenAI CEO Sam Altman was reportedly removed from his position following concerns raised by the company’s researchers about a significant AI discovery. The researchers warned the board about the potential of Project Q*, which could mark a breakthrough in artificial general intelligence (AGI). The board expressed apprehensions about commercialising such advanced AI technology before fully understanding its consequences, highlighting the ethical and safety challenges inherent in developing and deploying groundbreaking AI systems.


.: First Spanish AI Model Earns up to €10,000 Monthly, Sparks Debate ➜ Aitana is Spain’s first AI model, with a fabricated life story and no actual photoshoots. The agency believes this could lower costs and help small brands, but critics are concerned about promoting unrealistic and sexualised images.


.: EU AI Act at Risk Due to Self-Regulation and Loopholes ➜ A proposal by France, Germany and Italy calls for companies to self-regulate certain AI systems. Critics say this lacks enforcement, allows loopholes, and fails to protect fundamental rights or hold the AI industry accountable.

.: Google’s Bard Extension for YouTube Offers Video Analysis Without Playback ➜ Google introduces an innovative feature for Bard, its YouTube extension, enabling users to analyse video content for specific information without playing the videos. Currently an opt-in Labs experience, this tool has the potential to significantly impact content creators and Google’s role in video content consumption. The future implications of this tool’s integration into YouTube are vast, prompting discussions about its value and impact on the creator ecosystem.

.: Turmoil at OpenAI Over AI’s Direction and Profit Motives ➜ OpenAI, a leading AI research organisation, is reportedly experiencing internal conflict due to a rift between its profit and non-profit interests. CEO Sam Altman, known for overseeing the expansion and success of ChatGPT, finds himself at the centre of this turmoil. The board’s decision to fire and then rehire Altman has led to unrest among employees. This situation highlights the complexities and challenges faced by AI organisations as they navigate the balance between innovation, ethical concerns, and commercial pressures.

.: Use of AI to mislead voters raises concerns in Argentina election ➜ Candidates in Argentina’s recent presidential election utilised AI to generate manipulated images and videos aimed at misleading voters and discrediting opponents. Right-wing president-elect Javier Milei published a fabricated image depicting opponent Sergio Massa as a communist soldier, which drew millions of views. His rival’s team also distributed AI-generated images portraying Milei’s team as enraged zombies.

Reflection

.: Why this news matters for education

I took the dog for a walk last night.

I opened the ChatGPT app and started an audio chat. Stuck my headphones in and started talking.

Me: Hey, how are you?
ChatGPT: I’m here and ready to assist you! How can I help you today?
Me: I’ve been thinking about different ethical frameworks around the world and how they differ. Can you help me understand that a bit more?
ChatGPT: Absolutely, I’d be happy to help with that. Ethical frameworks vary widely across different cultures and philosophies. For instance…

And we were up and running.

If you have not tried this way of working with ChatGPT, the interaction is pretty seamless: no tapping or holding down a record button. (Also, this feature is available on the free plan.)

I had wireless headphones and my phone was in my pocket the whole time. We chatted back and forth without much interruption for 20 minutes.

At one point I was calling Remy – my dog – back from going into someone’s garden, and ChatGPT picked up on it in its reply, which was quite amusing.

You will have heard about the advances in Natural Language Processing (NLP); speaking, listening and interacting in this way really dials up the effortless and natural experience.

The quality of the voice model from OpenAI is excellent, even using ‘ums’, repetition and false starts in responses! The design challenge of just the right amount of error is very cool.

And of course the quality of output satisfied my curiosity about diverse representation of ethics in proprietary models. Or at least set me off with new questions.

Here are implementation strategies I want to explore more, in response to my question about how an AI can balance a collective philosophy against an individualistic one.

  • Context-Aware AI: Developing AI that understands the context in which it’s operating. For instance, it might respond differently to the same query if it’s being used in a society with a collective philosophy versus an individualistic one.
  • Ethical Flexibility: Implementing a flexible ethical framework in the AI system that doesn’t strictly adhere to a single philosophical approach, but rather takes into account the diversity of ethical considerations.

Such amazing technological opportunities when you pause to think about how we can learn with these tools and systems.

Remy wasn’t bothered though. 🐩

You can see the full chat transcript here if you are interested.

.:

~ Tom

Prompts

.: Refine your promptcraft

Are you looking to improve the quality of responses from your interaction with LLMs and chatbots?

Try inducing an inner monologue.

Another way to put this prompt technique is to give instructions for working step by step.

The “inner monologue” prompt provides a framework for methodically thinking through a problem or request. It directs the AI assistant to take a deep breath and simulate an internal thought process, as a human would.

Key elements include:

  • Using <scratchpad> tags to document the thought process, including notes, assumptions, initial ideas, questions, and concerns. This creates transparency into how the AI is analysing the issue. We have done this before in Promptcraft with the <thinking> tags.
  • Critiquing the content itself, not the person, with honest, direct, but constructive feedback – in line with my feedback protocols.
  • Organising scratchpad notes clearly in Markdown formatting. This structures the thought process.
  • Treating the scratchpad as an integral part of problem-solving, not just a tool. The act of note-taking enables exploration and adjustments.
  • Using the scratchpad to ultimately craft a comprehensive, thoughtful response. The inner monologue leads to synthesised, yet grounded thinking.

Overall, this prompt technique can yield more deliberate and high-quality responses to your requests.

PROMPT

<Your initial request or prompt here>

Take a deep breath and begin an inner monologue to systematically analyse, critique and solve the given problem or request. Utilise <scratchpad> tags to keep track of your thought process, including your notes, assumptions, initial ideas, questions, and concerns. Be hard on the content and soft on the person creating the content. Your critique is honest and direct. Ensure your scratchpad is thorough and insightful. Scratchpad notes are organised clearly and formatted in Markdown. Treat this note-taking as a dynamic part of the problem-solving process, allowing for exploration and adjustments. Finally, use the information in your scratchpad to craft a comprehensive response. Remember, the scratchpad is not just a tool but an integral part of your analytical process.
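
For those who want to experiment programmatically, here is a minimal sketch that wraps any request with the inner monologue instruction and then strips the <scratchpad> working notes from the reply. It assumes the OpenAI Python client (openai>=1.0) and an API key; the helper name and the shortened instruction are my own illustrative choices.

CODE

# Inner monologue prompting: append the scratchpad instruction to a request,
# then optionally hide the <scratchpad> notes in the reply.
# Assumes openai>=1.0 and an OPENAI_API_KEY environment variable.
import re

from openai import OpenAI

client = OpenAI()

INNER_MONOLOGUE = (
    "Take a deep breath and begin an inner monologue to systematically "
    "analyse, critique and solve the given problem or request. Utilise "
    "<scratchpad> tags to keep track of your thought process, including your "
    "notes, assumptions, initial ideas, questions, and concerns. Finally, use "
    "the information in your scratchpad to craft a comprehensive response."
)

def ask_with_scratchpad(request: str, show_scratchpad: bool = False) -> str:
    """Send `request` with the inner monologue instruction appended."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{request}\n\n{INNER_MONOLOGUE}"}],
    )
    text = response.choices[0].message.content
    if not show_scratchpad:
        # Drop the <scratchpad>...</scratchpad> notes, keep the final answer.
        text = re.sub(r"<scratchpad>.*?</scratchpad>", "", text, flags=re.DOTALL)
    return text.strip()

print(ask_with_scratchpad("Draft a fair marking rubric for a Year 9 persuasive essay."))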

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

GLOSSARY .: Key AI Terms and Concepts ➜ An essential list of AI-related terms and concepts, covering everything from the foundational definition of AI to specific technologies like Machine Learning, Generative AI, and ChatGPT. This resource is valuable for anyone looking to understand the basic lingo of AI. Explore the Glossary

EXPLANATORY GUIDE .: AI Explained in Accessible Prose ➜ This guide offers an understandable explanation of complex AI concepts, including Google’s transformers, large language models, and a mathematician’s view of AI operations. It’s a great resource for those who want to grasp how AI works in simple terms. Read the Guide

RESEARCH REPORT .: AI and Inclusivity for People with Disabilities ➜ An insightful OECD report discussing the potential and risks of AI in creating inclusive environments for people with disabilities. It also suggests actions for governments to maximise benefits and minimise risks associated with AI in the labor market for disabled individuals. Access the Report

Ethics

.: Provocations for Balance

  • Moral Values in AI: “How can we effectively instil moral values in AI systems, and should an ‘ethical governor’ be a standard component to regulate their behaviour? Who should define and oversee these ethical guidelines?”
  • Fair Compensation for AI-Generated Content: “What strategies could ensure fair compensation for creators in the face of AI’s ability to repurpose copyrighted content? Is channeling AI-generated revenue into public media and arts a feasible approach?”
  • AI Development and Responsible Innovation: “What steps are crucial for the AI community to prevent an ‘arms race’ in AI development and focus on long-term, ethical innovation? How important is multi-sector collaboration in this process?”

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant

.: Promptcraft 36 .: OpenAI in turmoil & Promptcraft’s Christmas giveaway

Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • OpenAI Leadership in Turmoil as Sam Altman CEO Sacked
  • Google expands Bard AI Chatbot access to teens
  • Microsoft becomes a Co-pilot company
  • 🎄 Share Promptcraft and enter my Christmas giveaway!

Let’s get started!

.: Tom

Latest News

.: AI Updates & Developments

.: OpenAI Leadership Turmoil ➜ OpenAI, the company and research lab behind ChatGPT, is in flux after the board sacked CEO Sam Altman on Friday. It helps not to send Promptcraft too soon, as this story has been pretty fluid over the weekend!

  • Friday: Sam Altman fired from OpenAI for lack of candour; Greg Brockman and researchers quit in protest.
  • Saturday: Interim OpenAI CEO Mira Murati tries to rehire Altman and Brockman; board looks for permanent CEO.
  • Sunday: Microsoft hires Altman and Brockman; OpenAI hires Twitch’s Emmett Shear as new CEO.
  • Monday: 500+ OpenAI employees threaten to quit unless board steps down; Sutskever expresses regret over Altman’s firing.
  • Latest: According to The Verge Sam Altman and Greg Brockman have expressed openness to coming back to OpenAI, but only if the board members responsible for firing Altman resign their positions.

Melissa Heikkilä at The Algorithm provides a helpful overview to catch up on and understand the next steps in this unfolding situation at OpenAI.

.: Google to expand Bard AI chatbot access to teens globally ➜ Google announced it will open up its AI chatbot Bard to teenagers globally in English starting November 16, with more languages to come. Bard aims to provide a helpful, informational tool for teens to learn new skills and find inspiration. Google consulted child safety experts and implemented guardrails to prioritise safety. Features include math equation solving, data visualisation, content policies to avoid unsafe content, and double checking responses to develop critical thinking.

.: Microsoft unveils major AI plans and products at Ignite 2023 ➜ At its annual Ignite conference, Microsoft announced significant AI-related products and initiatives. These include rebranding Bing Chat to Microsoft Copilot, a Copilot Studio to allow custom AI bot creation, new AI chips like Azure Maia and Azure Cobalt to power Azure cloud services, adding generative AI capabilities to Teams VR meetings, and more. Key highlights show Microsoft’s continued push to infuse AI across its products and position itself as a leader in enterprise AI.


.: In New Experiment, Young Children Destroy AI at Basic Tasks ➜ A study found kids aged 3-7 greatly outperform AI models at basic problem-solving and thinking tasks. Tests of tool innovation and inferring causal relationships showed children’s superior unconventional thinking. Researchers said that, unlike AIs, curious and motivated kids are intrinsically better at core innovation. The study highlights the limitations of current AI versus human cognition and reasoning.


.: YouTube will show labels on content that uses AI ➜ YouTube announced it will require creators to disclose use of AI to alter or synthesise realistic content. Labels will indicate to viewers that content uses AI, especially prominently for sensitive topics. This aims to avoid misleading viewers that AI content is real. Failure to properly disclose could lead to removal and suspension. YouTube is also introducing AI music removal requests to address fake songs.

.: Chinese startup 01.AI unveils powerful new open source AI models Yi ➜ Chinese company 01.AI has released two new large language models called Yi-6B-200K and Yi-34B-200K. The models are fully open source and can understand English and Mandarin. Yi-34B boasts 200,000 tokens of context, double ChatGPT’s capacity, though long prompts can challenge its recall. Yi benchmarks show strengths in comprehension, reasoning, and standardised AI tests. By being open source, Yi allows full customisability for developers to build local AI apps.

.: Alibaba, the major Chinese e-commerce company, open sources AI models Qwen-7B and Qwen-7B-Chat ➜ Alibaba’s cloud unit unveiled two new open source large language models named Qwen-7B and Qwen-7B-Chat with 7 billion parameters each. This positions the models as competitors to Meta’s similarly open sourced Llama 2 model. Alibaba says the move aims to help small and medium businesses adopt AI. The code and models are freely available globally, though licensing is required for large companies. This represents the first time a major Chinese tech company has open sourced a large language model.

.: Germany, France and Italy reach agreement on AI regulation in Europe ➜ The governments of Germany, France and Italy have agreed on an approach for regulating AI in Europe. They support mandatory self-regulation through codes of conduct for foundational AI models. The countries oppose unchecked norms and want to focus regulations on AI applications rather than the core technology. Under the proposal, AI developers would use model cards to provide information on capabilities and limitations. An EU AI governance body could help develop guidelines and oversight. The agreement aims to accelerate EU-level negotiations on an AI Act among European Commission, Parliament and Council.

Reflection

.: Why this news matters for education

Amidst all of the tumultuous news about OpenAI, I expanded my AI Literacy with two new terms: the “accels” who want to accelerate AI development at any cost, and the “decels” who favour slowing down development to ensure safety.

Although binary and reductionist, this philosophical divide over the pace of progress seems to be at the heart of the rift that led to the leadership shakeup at OpenAI.

Some have said that Ilya Sutskever, the Chief Scientist for OpenAI and board member, wants to slow down progress, while Sam Altman represents the push for faster development.

This tension between accelerating progress and prioritising safety is not new for OpenAI.

Dario Amodei, who was Vice President of Research at OpenAI until 2018, left the organisation amidst similar philosophical differences over the responsible pace of AI development. He went on to co-found Anthropic, the creator of the Claude-2 LLM, along with other former OpenAI researchers who were focused on AI alignment and robustness.

On the surface, OpenAI’s boardroom turmoil might appear to be just corporate drama with little bearing on education.

However, when viewed through an ecosystem lens, this news sends ripples that connect to our work in education in several ways:

  1. Focus on safety: The safety of AI products and their underlying architecture must be a top priority.
  2. Reliability of products and their architecture: The reliability of AI products is essential for ensuring their effective integration into educational settings.
  3. Centrality of major research labs and developers: OpenAI and other major AI research labs play a pivotal role in shaping the future of AI for education.
  4. Power shifts between big tech companies: The power dynamics among major tech companies can influence the trajectory of AI development.
  5. Profits over benefits for humanity: The pursuit of profits can overshadow the broader societal benefits of AI.
  6. Distracting noise: Energy, effort and time is pulled away from putting powerful AI tools in service of education.

Two undeniable facts: (i) OpenAI has set the standard for AI research and development, and (ii) it possesses the most powerful publicly available large language model, GPT-4.

This alone is enough to pique the interest of educators curious about the ripple effects of the organisation’s leadership changes.

A shift in the AI research and development ecosystem inevitably translates into a shift in the education ecosystem.

.:

~ Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below and get an entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances to win!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Prompts

.: Refine your promptcraft

Let’s talk about GPTs.

Remember this stands for Generative Pre-trained Transformer, which means:

  • Generative: GPTs are able to generate new outputs, rather than simply regurgitating text that they have been trained on.
  • Pre-trained: GPTs are trained on a massive dataset of text and code before they are released to the public.
  • Transformer: GPTs use a transformer architecture, which is a type of neural network that is well-suited for natural language processing tasks.

OpenAI recently announced the capability, with a Plus (paid) account, to build your own chatbot, or what they call GPTs.

So, what does this have to do with Promptcraft?

Well, the process for building GPTs automatically generates prompts. You can simply say what you are looking to build and it writes a prompt for you.

This begins to remove the need for writing your own prompts, but it puts up a fee barrier, and not everyone has access.

One way to replicate this is to use an instruction in your prompt to trigger automated improvement. Try this:

Act as an expert LLM prompt engineer and writer. Rate my LLM prompt below 1-10 and provide kind, specific and helpful feedback. If the rating is 8 or higher, execute the prompt. If it is lower than 8, generate a better prompt and explain how it is better.

My prompt: [add your prompt here]
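
If you would rather script this self-critique loop than paste it into a chat window, here is a minimal sketch using the OpenAI Python client (openai>=1.0, API key assumed). The function name and the example prompt are illustrative; the meta-prompt is the one above.

CODE

# A prompt-improver meta-prompt: the model rates your prompt, then either
# executes it or rewrites it. Assumes openai>=1.0 and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

META_PROMPT = (
    "Act as an expert LLM prompt engineer and writer. Rate my LLM prompt "
    "below 1-10 and provide kind, specific and helpful feedback. If the "
    "rating is 8 or higher, execute the prompt. If it is lower than 8, "
    "generate a better prompt and explain how it is better.\n\n"
    "My prompt: {prompt}"
)

def improve_prompt(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model to rate, and improve or execute, the given prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": META_PROMPT.format(prompt=prompt)}],
    )
    return response.choices[0].message.content

print(improve_prompt("Write a lesson plan about photosynthesis."))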

Here is an example in ChatGPT 3.5, Claude-2-100k and Bard.

*Remember, the scoring is all a bit unreliable; you are just creating an exchange to improve your prompts.

**And, as I always say, remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

RESEARCH TOOL .: OECD AI Incidents Monitor (AIM) ➜ A fascinating analysis tool which documents AI incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the incidents and hazards that concretise AI risks.

RESEARCH INDEX .: Latin American Index of Artificial Intelligence ➜ A comprehensive analysis of the status of AI in twelve countries in Latin America: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico, Panama, Paraguay, Peru and Uruguay. Each file elaborates on: Enabling Factors, Research, Development and Adoption, and Governance.

I was curious about AI news from here as we live in media geo-bubbles, so I was pleased to discover this resource providing insight into what is happening in Latin America.

REPORT .: Colonialism and AI ➜ This report by Anna Gausen and Accessible AI, explores how AI is at risk of repeating the patterns of our colonial history and how we can begin to decolonise AI.

It covers:

  • A Look Back At Our Past: Society has been shaped by our colonial history.
  • Where We Are Today: The way AI is being deployed by the global west could reinforce colonial power dynamics.
  • A Vision For The Future: How we can rebalance power and diversify voices in AI.

Ethics

.: Provocations for Balance

  • When making decisions about AI progress, whose voices need to be at the table beyond corporate executives?
  • If current AI lacks core elements of human reasoning, when should we be cautious about over-applying it to tasks requiring critical thinking?
  • When AI-generated content crosses ethical lines, how should accountability be determined given the complex web of humans and algorithms involved in systems?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant