What’s on my mind?
.: Unwrapping Promptcraft
As educators exploring how to integrate AI into teaching and learning in thoughtful and meaningful ways, we stand at an exciting and sobering threshold.
Do you employ a slick third-party app promising to effortlessly enhance lessons through the power of algorithms? Or do you prompt models like Gemini and ChatGPT directly, navigating the exhilaration and uncertainties of unfiltered AI?
In this short reflection, let’s look at the different approaches. But first, some context.
You might have heard the phrase ‘thin wrappers’ used for some AI tools. This category of software is a simplified interface or application layer built on top of a large language model. The user does not work directly with a chatbot like ChatGPT but through an intermediary interface or application, even though the engine underneath might be the same.
Imagine a LessonBot application that teachers use to click through a few suggested choices and generate lesson-planning content. This would be a thin-wrapper application.
The alternative would be for the teacher to open up their favourite flavour of large language model, write a prompt, and work more directly with the model through its native chatbot interface.
I understand we might have various tools to draw on, but which of these pathways will help educators grow the most?
How does this move us closer to a healthier learning ecosystem?
Convenience, time-saving, structure and the value of a beginner-friendly starting point have all been shared with me as rationales for why these tools might be helpful.
As Darren Coxon describes in a recent post on this topic:
using a wrapper versus learning to prompt is a little like the difference between buying a ready meal and creating a recipe ourselves.
And Dr Sabba Quidwai goes further in calling out these thin-wrapper apps as fast food.
The point is that if we only choose these intermediary shortcuts, we diminish holistic growth across a range of AI literacy elements over the medium to long term.
Much like the way some people are creating student assessment protocols that include the process of AI prompting in the submission, adult learning needs to focus on both process and outcome.
Yes, these teacher AI apps might get you an outcome quickly, but has your skill set or mindset also improved? After every interaction, do you have a marginally better understanding of the capabilities and limitations of LLMs? Has your confidence in AI collaboration and augmentation grown? If we continue to rely solely on these third-party applications, we risk leaving teachers in the dark about how AI functions.
Beyond the issue of teacher skill building by prompting, iterating and engaging directly with these models, there are broader considerations.
One of the critical things for me is that using more tools further reduces transparency.
It might be called a thin wrapper, but it still muddies the view into the engine room and adds complexity to the architecture of what is happening. It also introduces further potential for human bias into the experience.
This comes at a time when a lack of transparency about what is happening is already a significant critique of AI systems. So if we use these wrappers, these intermediary shortcuts for teachers, surely there is more opacity, not less.
What do you think? How might all of this play out?
:. .:
~ Tom