Don’t miss my new learning community about AI for education.
Hello Reader,
Promptcraft is a weekly newsletter on AI for education, helping you improve your AI literacy and enhance the learning ecosystem.
In this issue, you’ll discover:
- The first deal OpenAI has made with a university;
- New guidelines released in the US state of Washington for K12 schools;
- How access is opening up to ChatGPT for all state schools in Australia.
Let’s get started!
~ Tom Barrett
HIGHER ED .: OpenAI signs up its first higher education customer, Arizona State University

Summary ➜ Arizona State University has become OpenAI's first higher education customer, piloting ChatGPT Enterprise. ASU will offer accounts to faculty and staff to explore and build AI applications for learning, research, and operations. The collaboration aims to widen the responsible use of AI throughout the university.

Why this matters for education ➜ This development marks a positive step for higher education, which has been preoccupied with plagiarism and cheating. Even though we are still learning how to use these tools effectively, the ASU leadership is setting a precedent that could offer a new perspective and help elevate our dialogue beyond plagiarism and AI detection.

According to Synthedia, ASU has approximately 145,000 students and 20,000 faculty and staff. Given those numbers, it is unlikely that everyone will receive an enterprise account; that would be expensive for the university. Nevertheless, the partnership between ASU and OpenAI is an important signal, suggesting we may see education accounts for OpenAI tools in some form. This deal helps build the technical and economic infrastructure to provide such tools directly to education organisations. Soon, your students might access OpenAI through single sign-on, just like Canva, Adobe, Google, or Microsoft.
AUSTRALIA .: ChatGPT is coming to Australian schools

Summary ➜ Access to OpenAI's ChatGPT will be made available to Australian state schools in 2024, following education ministers' approval in December of a framework that guides the use of AI. The framework sets out principles such as privacy, equity, and the proper attribution of AI-generated work. When ChatGPT was first introduced in 2022, most states banned it in schools due to concerns such as plagiarism; South Australia, however, permitted its use to teach students about AI.

Why this matters for education ➜ I was one of the voices in 2022 wondering why banning was still considered an appropriate response. I think it was my younger self speaking, recalling when YouTube was banned in schools. The ban bought system leaders time to develop a better understanding. Still, I wonder what is materially different in the Australian school ecosystem now that there is a national framework. Are teachers and students better prepared? Opening up access is fine, but publishing a framework alone is not enough.
K12 GUIDELINES .: The State of Washington Embraces AI for Public Schools

Summary ➜ Washington State in the United States has released new guidelines encouraging the use of AI in K-12 public schools. The guidelines aim to build students' AI literacy, ensure ethical use, provide teacher training, and apply design principles that support learning. They acknowledge AI's potential benefits in education while recognising risks such as bias and overreliance.

Why this matters for education ➜ Another school system is approaching AI "with great excitement and appropriate caution". Notably, the new guidelines are based on the principle of "embracing a human-centred approach to AI". It will be interesting to see how they are implemented in schools and how the public school system is supported to adapt. For example, is there additional funding for schools or teachers to access the necessary resources and professional learning?
.: Other News In Brief
:. .:
.: Join the community waitlist
Join the waitlist
.: :.
What's on my mind? .: Clinical Compassion

One of my favourite shows, '24 Hours in A&E', once offered a poignant insight at King's College Hospital in London. A nurse shared how, regardless of a patient's awareness, a caring human touch could calm anxiety and reduce heart rate. This simple act of human connection resonates profoundly in our rapidly advancing world.

:. .:

Now, let's step into a near future, just a heartbeat away. In the brisk environment of a bustling hospital, an AI system interacts with patients, sharing up-to-date information and diagnosing illnesses with precision and empathy that challenge the finest doctors.

Remarkably, this isn't just fiction. Recently, a Google AI system trained to conduct medical interviews matched or even outperformed human doctors in both bedside manner and diagnostic accuracy, conversing with simulated patients and listing possible diagnoses based on their medical histories. That positions artificial intelligence not just as an assistant but as a leader in roles traditionally defined by human touch.

A different AI system tailors learning paths in a school near the hospital. It identifies the most important, relevant and appropriate next step in learning for a student and eclipses even the most experienced educators in personalising education. With its ability to analyse historical student data and optimise learning strategies, it tirelessly offers kind, specific and helpful feedback when the student needs it. The system's interactions with parents have been rated 4.8 out of 5 stars for nearly 18 months.

I have been reflecting on the emotional and relational cost of using AI tools to augment human interaction. What might we be losing in technology's embrace? The emotional depth, the subtle nuances of relationships, the warmth of human contact: can AI ever replicate these, or are they at risk of being diminished in the shadow of digital efficiency?
The near-future dilemma is stark. Consider the veteran physician witnessing AI systems diagnose more accurately than her colleagues, or an educator observing an AI effortlessly chart her students' educational journeys. Both professionals stand at a pivotal crossroads, questioning their roles in a landscape increasingly shaped by AI.

The line between AI utility and the value of human judgment becomes blurred. When does reliance on AI's precision start to overshadow essential human attributes? At what point does maintaining traditional roles in education and healthcare hinder the potential benefits AI could bring? As these near futures crash into our present, how do we balance AI's brilliance with the irreplaceable qualities of humanity?

This challenge is not just technical but deeply philosophical, compelling us to put the human experience under the microscope and ask 'who am I in all of this?' and what remains indispensable.

:. .:

~ Tom
Prompts .: Refine your promptcraft

Honing your prompt writing skills is crucial for getting the most out of AI chatbots. Today, I want to share three simple techniques I rely on to lift my prompting game; none of them requires a formal prompt-writing structure.

1 :. Regenerate Multiple Responses

Ask the same question 3-4 times, having the chatbot regenerate a new response each time, then review the different perspectives and ideas generated. In ChatGPT, use the regenerate button under the response. In Google Bard, you can open the drafts to see multiple versions and regenerate them. Looking at multiple responses side by side can spark new connections.
Key Promptcraft Tactic ➜ Regenerate every response (and image) 3 or 4 times to build a broad and diverse selection of ideas.
2 :. Iterate Through Feedback Loops

Engage in a back-and-forth collaboration with the chatbot. Respond to its initial reply by pushing for more details, examples, or counterarguments; ask follow-up questions and provide guidance to steer the conversation. For instance, if you ask, "What are the benefits of virtual reality in classrooms?", you can follow up with, "Interesting, but how might VR be challenging for teachers to implement?" This iterative approach, digging deeper through feedback and refinement, can produce more thoughtful responses.
Key Promptcraft Tactic ➜ Don’t expect a perfect response immediately; stay in the chat and iterate, pushing and pulling the responses to refine the ideas.
3 :. Switch Underlying LLMs

Try re-prompting the same question using different large language models. If you have only been using ChatGPT, try others such as Google Bard, Claude 2 from Anthropic, or Microsoft's free Copilot version of GPT-4. Varying the AI engine generating the text can surface different perspectives, creativity, and responses. Each LLM has unique strengths, and gathering multiple views from diverse models leads to more robust results.
Key Promptcraft Tactic ➜ Try the same prompt on other LLMs to harness the different strengths and capabilities. I do this quickly and easily using the Poe platform.
Remember to make this your own: try different language models and evaluate the completions.
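For readers who work with LLMs through code rather than a chat window, the same three tactics translate directly. Below is a minimal sketch, assuming a hypothetical `ask()` helper that stands in for whichever chatbot API you use; it is stubbed here so the sketch runs without a network connection, and the function names are my own illustrations, not any vendor's API.

```python
def ask(prompt, model="model-a"):
    # Stand-in for a real chatbot call; swap in your provider's API client.
    return f"[{model}] response to: {prompt}"

def regenerate(prompt, n=3, model="model-a"):
    """Tactic 1: collect several responses to the same prompt for comparison."""
    return [ask(prompt, model) for _ in range(n)]

def iterate(prompt, follow_ups, model="model-a"):
    """Tactic 2: push follow-up questions back into the conversation."""
    transcript = [ask(prompt, model)]
    for question in follow_ups:
        transcript.append(ask(question, model))
    return transcript

def compare_models(prompt, models=("model-a", "model-b")):
    """Tactic 3: run the same prompt against different underlying LLMs."""
    return {model: ask(prompt, model) for model in models}
```

Reviewing the lists and dictionaries these helpers return side by side mirrors what the chat interfaces offer with their regenerate buttons and draft views.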
Learning .: Boost your AI Literacy
Image from gdfconf.com
OPEN SOURCE
.: Considerations for Governing Open Foundation Models
This briefing report by Stanford University’s Human-Centered AI group highlights the benefits of open foundation models and calls for a greater focus on their marginal risks.
Here are some of the key insights from the report:
➜ Foundation models with readily available weights offer benefits by decreasing market concentration, promoting innovation, and increasing transparency.
➜ Proposals like downstream harm liability and licensing may unfairly harm open foundation model developers.
➜ Policymakers must consider potential unintended consequences of AI regulation on open foundation models’ innovation ecosystem.
JAILBREAK
.: “Your GPTs aren’t safe”
Just a quick reminder and some background info: you may have heard of OpenAI’s GPT Store, which allows users to publish their bots to a public marketplace.
However, reports of data breaches and sensitive data leaks have increased due to user-uploaded content. Some users are “jailbreaking” the bots using prompting techniques, revealing some interesting insights into how LLMs respond to interaction (and how strange their responses can be).
Nathan Hunter even set up a competition offering a $250 prize to anyone who could break into his published GPT bot, and someone successfully used a popular prompting technique to do just that.
Here is Nathan explaining the promptcraft lessons this experiment reveals:
1) Hacking a GPT isn't about writing code; it's about conversation. Social manipulation is easy when working with a tool that loves to take on roles and personalities on command.
2) Your GPTs aren’t safe. If you want to make them public, then make sure none of the instructions or documents contain data you wouldn’t want the world to access.
Ethics .: Provocations for Balance
:. .:
Which section was the most helpful? (Click on your choice below)
📰 Curated news and updates
.: :.
Questions, comments or suggestions for future topics? Please reply to this email or contact me at tom@dialogiclearning.com
The more we invest in our understanding of AI, the more powerful and effective our education ecosystem becomes. Thanks for being part of our growing community!