Hello Reader,

Welcome to Promptcraft, your weekly newsletter on artificial intelligence for education. In this issue:

  • Generative AI’s huge water demands scrutinised
  • EU finalises landmark AI regulation, imposes risk-based restrictions
  • Google launches the Gemini model series to rival ChatGPT

Don’t forget to share Promptcraft and enter my Christmas giveaway! All you have to do is share your referral link below.

.: Tom


Get Poe AI Access for Free – Refer Friends to Win!

To enter, share your unique link below; you’ll get one entry for every friend who signs up to the Promptcraft newsletter. The more referrals you get, the higher your chances of winning!

Prizes: 10 x 1 month Poe AI access (USD $20 value each)
Draw date: December 20th 2023

[RH_REFLINK_2 GOES HERE]


PS: You have referred [RH_TOTREF_2 GOES HERE] people so far

See how many referrals you have

Latest News

.: AI Updates & Developments

.: Generative AI’s huge water demands scrutinised ➜ Generative AI, like ChatGPT, is drawing fresh scrutiny of Big Tech’s water usage. ChatGPT consumes around 500ml of water for every 10-50 prompts. Microsoft’s and Google’s water use rose by 21-36% in 2022, partly due to new AI chatbots. AI demands ever more computing power, and the data centres behind it require vast amounts of water for cooling. Critics warn of sustainability issues from AI’s thirst, even as the companies pledge to become water positive.

.: China plays catch-up a year after ChatGPT ➜ One year after OpenAI’s ChatGPT took the AI world by storm, China lags behind, largely due to a lack of advanced chips. US export controls block access to the Nvidia GPUs critical for powerful AI models. Domestic firms like Baidu have developed chatbots but can’t yet match US capabilities. China is under pressure to close the gap and recognises that AI leadership will be difficult to achieve.

.: Beijing court rules AI art can get copyright ➜ A Beijing court granted copyright to an AI-generated image, contradicting the US view that AI works lack human authorship. The ruling signals China’s support for AI creators in contrast to US scepticism. It could influence future disputes and benefit Chinese tech giants’ AI content tools.


.: EU finalises landmark AI regulation, imposes risk-based restrictions ➜ The EU finalised its AI regulation after years of debate, imposing the world’s most restrictive regime. It bans certain AI uses and adds oversight based on risk levels. While companies warned of stifling innovation, the EU calls it a “launchpad” for AI leadership. The rules aim to curb AI risks and set a global standard amid advances like ChatGPT.

.: Google launches Gemini AI to rival ChatGPT ➜ Google has launched Gemini, a new AI model that competes with OpenAI’s ChatGPT and GPT-4. Google reports that Gemini beats GPT-4 on 30 of 32 benchmarks, aided by multimodal capabilities. It comes in three versions optimised for different uses and will be integrated across Google’s products. The launch puts Google back in a generative AI race it had been perceived to be losing.

.: Meta’s new AI image generator trained on 1B Facebook, Instagram photos ➜ Meta released a new AI image generator built on its Emu model, trained on over 1 billion public Instagram and Facebook images. The tool creates images from text prompts, like other AI generators. Meta says it only used public photos, but users’ pictures likely contributed to training without their explicit consent.

.: Google unveils improved AI coding tool AlphaCode 2 ➜ Google’s DeepMind division unveiled AlphaCode 2, an upgraded version of its AI coding assistant. Powered by Google’s new Gemini model, AlphaCode 2 can solve coding problems in multiple languages, including those requiring advanced techniques like dynamic programming. In programming contests, it outperformed an estimated 85% of human competitors, nearly double the original AlphaCode’s result.

.: Apple quietly releases new AI framework MLX ➜ MLX is a new open-source AI framework that runs models efficiently on Apple Silicon chips. It ships alongside a data-loading library called MLX Data and can train complex models like Llama and Stable Diffusion. With MLX, Apple is expanding its AI capabilities and enabling the development of powerful AI apps for Macs.

Reflection

.: Why this news matters for education

Last week in Promptcraft 38, we peeled back the curtain on how generative AI like ChatGPT can unwittingly perpetuate biases that conflict with principles of diversity and inclusion.

This week, our lens widens to reveal another ethical dilemma – the massive environmental impact of systems like ChatGPT.

New research spotlights AI’s hefty carbon footprint and water use.

ChatGPT gulps down 500ml of water for every 10-50 prompts. With over 100 million users chatting it up, you do the maths.
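
Or, let’s do a bit of that maths together. A back-of-envelope sketch, assuming (purely for illustration) that each of those 100 million users sends just 10 prompts a week:

500ml per 10-50 prompts ≈ 10-50ml per prompt
100,000,000 users × 10 prompts/week = 1,000,000,000 prompts/week
1,000,000,000 prompts × 10-50ml ≈ 10-50 million litres of water every week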

Meanwhile, AI2 and Hugging Face quantify the extreme variation in emissions across AI tasks.

Generating images and text can pump out 60x more CO2 than simple classification tasks. And efficiency gains alone won’t fix this: as models get cheaper to run, usage grows, so net consumption keeps rising.

Despite conservation efforts, Microsoft and Google’s water use rose 21-36% in 2022, partly due to new AI systems. Emissions from AI use can even exceed those from training.

There’s more than a 1,000x difference in energy efficiency across models, but a lack of reporting standards prevents easy comparison.

Shouldn’t environmental impact be as clear as other risks like accuracy and bias?

AI’s emissions, like its biases, require awareness and mitigation. That means educating users and choosing lower-impact models. One day, AI apps might even be selected based on a carbon label.

.:

~ Tom

Prompts

.: Refine your promptcraft

Tree of Thought Prompting

The Tree of Thoughts (ToT) method is a way to improve how large language models like GPT, Claude or Gemini solve complex problems that require looking ahead or exploring different options.

ToT works by building a tree of intermediate ‘thoughts’ that can be evaluated and explored. This allows the model to work through a problem by generating multiple steps and exploring different options.

Recent studies have shown that ToT improves performance on mathematical reasoning tasks. We can apply this method to text-based prompting too.

Here is an example for you to try.

PROMPT

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they’re wrong at any point then they leave.
The question is [Add your question here]
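
If you want to push past a single prompt, the same idea can be scripted. Here is a minimal Python sketch of a programmatic ToT loop; the llm() function is a placeholder you would wire up to your model of choice (GPT, Claude or Gemini), and the propose/score/prune structure is just one simple way to explore the tree, not the canonical implementation.

PYTHON SKETCH

def llm(prompt: str) -> str:
    # Placeholder: call your model's API here and return its text reply.
    raise NotImplementedError

def propose(question, path, n=3):
    # Ask the model for n candidate next steps, given the reasoning so far.
    steps = "\n".join(path) or "(no steps yet)"
    return [llm(f"Question: {question}\nSteps so far:\n{steps}\nSuggest the next step:")
            for _ in range(n)]

def score(question, path, step):
    # Ask the model to rate how promising a candidate step is, 0-10.
    reply = llm(f"Question: {question}\nSteps: {path + [step]}\n"
                "Rate how promising this line of reasoning is, 0-10. Reply with a number:")
    digits = "".join(c for c in reply if c.isdigit())
    return int(digits[:2]) if digits else 0

def tree_of_thoughts(question, depth=3, beam=2):
    paths = [[]]  # start with a single, empty line of reasoning
    for _ in range(depth):
        # Branch: every surviving path proposes several next thoughts...
        candidates = [(path + [step], score(question, path, step))
                      for path in paths for step in propose(question, path)]
        # ...then prune: keep only the `beam` highest-scoring branches.
        paths = [p for p, _ in sorted(candidates, key=lambda c: -c[1])[:beam]]
    return paths[0]

The ‘experts’ in the prompt above map loosely onto the branches here: at each step several candidate thoughts are generated, the weak ones are discarded, and the strongest lines of reasoning carry forward.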

I have been playing with extending this method further, using a scenario in which the experts explore the question through dialogue.

It reminds me of the Expert Prompting technique we have looked at before.

Remember to make this your own, tinker and evaluate the completions.

Learning

.: Boost your AI Literacy

EXPERT PANEL

[Video: expert panel discussion]

I really enjoyed this longer exploration of the issues we are navigating with AI from a practical, technical and ethical position. I discovered it via a repost of comments by one of the panellists, Yann LeCun, about the open vs proprietary approach to models. You can jump to these in the last 10 minutes, but I recommend the rest too.

ETHICS REPORT
.: Walking the Walk of AI Ethics in Technology Companies ➜ The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has published a new report, “Walking the Walk of AI Ethics in Technology Companies”, one of the first empirical investigations into AI ethics on the ground in private technology companies.

One of the key takeaways:

Technology companies often “talk the talk” of AI ethics without fully “walking the walk.” Many companies have released AI principles, but relatively few have institutionalized meaningful change.

FREE COURSES
.: 12 days of no-cost training to learn generative AI this December

  • Google Cloud is offering 12 days of free generative AI training in December
  • The courses cover foundations like what generative AI is and how it works
  • Technical skills content is also included for developers and engineers
  • Offerings include videos, courses, labs, and a gamified learning arcade

Ethics

.: Provocations for Balance

  • What happens when people stop using the systems which have a high environmental impact?
  • If society turns against AI due to climate concerns, could it set unreasonable expectations for AI developers to predict and eliminate the environmental impact of systems still in their infancy?
  • Are campaigns for AI sustainability failing to also acknowledge the huge benefits of AI computing for society, and the need for balance and moderation versus outright rejection?
  • Should AI researchers be tasked with solving the climate impacts of computing overall? Does this distract from innovating in AI itself which could also help address climate change?

~ Inspired by this week’s developments.

.:

That’s all for this week; I hope you enjoyed this issue of Promptcraft. I would love some kind, specific and helpful feedback.

If you have any questions, comments, stories to share or suggestions for future topics, please reply to this email or contact me at tom@dialogiclearning.com

The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!

.: Tom Barrett

/Creator /Coach /Consultant