Getting Your Hands Dirty

About four years ago I ran into this lovely blog post from designer Bret Victor, titled: A Brief Rant on the Future of Interaction Design. It struck a particular chord with me and my developing dissatisfaction with the interactive experience we see in the classroom.

Consider the type of play and tactile exploration of the world we experience when we are really young, and then put that against the way we interact with devices and screens nowadays. As Victor explains, much of the discussion about interaction, user interface and experience design misses something fundamental.

In this rant, I’m not going to talk about human needs. Everyone talks about that; it’s the single most popular conversation topic in history.

And I’m not going to talk about technology. That’s the easy part, in a sense, because we control it. Technology can be invented; human nature is something we’re stuck with.

I’m going to talk about that neglected third factor, human capabilities. What people can do. Because if a tool isn’t designed to be used by a person, it can’t be a very good tool, right?

As he progresses through the post he reminds the reader just how amazing our hands are: the tools we use to interact with and manipulate so many different objects around us every day. And it is this dexterity and capability we underplay with our current designs of the digital interface.

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

Is that so bad, to dump the tactile for the visual? Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.

Pictures Under Glass is an interaction paradigm of permanent numbness. It’s a Novocaine drip to the wrist. It denies our hands what they do best. And yet, it’s the star player in every Vision Of The Future.

And it was this image below from Bret Victor’s post that immediately made me think of the complexity of how we manipulate and sense the world around us with our hands. The variations are huge.

Hands

So where does that leave us in learning and education? How does this make us rethink the way we are working digitally in the classroom?

I for one maintain a healthy dissatisfaction with the classroom technologies we see, and specifically with how our children are physically interacting with these technologies and their associated resources. I imagine a time when the human capability Victor refers to, and the way children use it to learn about the world, forms a much stronger part of their technology experience.

Interestingly, nearly four years on from Bret Victor’s original rant, the challenge hasn’t changed. Our “Pictures Under Glass” experiences are more refined than ever, but the integration of meaningful tactility and the convergence of complex haptics seem just as far away. I wonder when we will see a shift from refining the visual to exploring the tactility of our interaction experience.

Image from Bret Victor’s post A Brief Rant on the Future of Interaction Design

The Design Principles Behind Google Glass and the Social Influence It Could Have

I have always found it interesting to peer a little behind the veil of nascent technology and learn how it is developed. On the Google Glass developer pages you can dig a little deeper into the design principles behind one of the four current Google X lab projects.

Aimed at developers building on the Glass platform – or, as they coin it, developing “Glassware” – the pages outline a set of simple design principles:

  • Design for Glass – Don’t try to replace a smartphone, tablet, or laptop by transferring features designed for these devices to Glass. Instead, focus on how Glass and your services complement each other, and deliver an experience that is unique.
  • Don’t get in the way – Glass is designed to be there when you need it and out of the way when you don’t. Your Glassware must function in the same way.
  • Keep it relevant – Deliver information at the right place and time for each of your users. The most relevant experiences are also the most magical and lead to increased engagement and satisfaction.
  • Avoid the unexpected – Unexpected functionality and bad experiences on Glass are much worse than on other devices, because Glass is so close to your users’ senses.
  • Build for people – Design interfaces that use imagery, colloquial voice interactions, and natural gestures. Focus on a fire-and-forget usage model where users can start actions quickly and continue with what they’re doing.

But what is most revealing, and consequently most fascinating for me, is the focus on language and how it is being tailored as an integral feature of this type of technology. It is being coined “wearable tech”, but in many ways its proximity to us, to our physical persons, means that the device or platform has to work with our own language.

The “natural speak” commands will be the most potent way these devices become closer to our everyday lives and influence them too. We can wear them; however, until they work seamlessly with the idiosyncrasies of our spoken word, they will always fall short.

The developer pages set out the following guidelines for voice commands on the Glass platform, each with good and bad examples:

  • Is general enough to apply to multiple Glassware, but still has a clear purpose
    Good: “ok glass, learn a song”
    Bad: “ok glass, learn something”, “ok glass, learn a song on guitar”
  • Is colloquial and can explain Glass features in a conversation
    Good: “ok glass, take a picture” (“You can use Glass to take a picture”)
    Bad: “ok glass, take picture” (“You can use Glass to take picture”)
  • Is comfortable to say in public
    Good: “ok glass, find a doctor”
    Bad: “ok glass, find a gynecologist”
  • Brings the user from intent to action as quickly as possible
    Good: “ok glass, find a recipe for” (users can speak “chicken kiev” and immediately see the recipe)
    Bad: “ok glass, show me a cookbook” (users are forced to look through a list for what they want)
  • Avoids brand words
    Good: “ok glass, make a video call”
    Bad: “ok glass, start a hangout”
  • Is long enough to ensure high recognition quality (at least three syllables)
    Good: “ok glass, make a video call”
    Bad: “ok glass, hangout”
  • Fits on a single line (less than 600px wide at 40px Roboto Thin)
    Good: “ok glass, add a calendar event”
    Bad: “ok glass, create a new calendar event”

One of the most interesting directions these sorts of guidelines take us in is the way that such a device or tool may influence our use of language and, consequently, the way we think. Take, for example, the focus on commands being “colloquial” and “comfortable to say in public”, and how they should strike a balance for technical purposes by being “long enough to ensure high recognition quality (at least three syllables)”. In a way this is describing how Glass users will have to talk in order to interact successfully.
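To make that balance concrete, a couple of these guidelines could even be checked mechanically. The sketch below is purely illustrative and my own: the vowel-group syllable heuristic is crude and the brand-word list is a made-up example, neither being part of Google’s actual tooling.

```python
import re

# Example brand-word list -- illustrative only, not Google's.
BRAND_WORDS = {"hangout"}

def syllable_count(phrase):
    """Very rough syllable estimate: count vowel groups per word."""
    return sum(len(re.findall(r"[aeiouy]+", word.lower())) or 1
               for word in phrase.split())

def check_command(command):
    """Return a list of guideline warnings for a candidate voice command."""
    warnings = []
    if syllable_count(command) < 3:
        warnings.append("too short: aim for at least three syllables")
    if any(w.lower().strip(",.") in BRAND_WORDS for w in command.split()):
        warnings.append("avoid brand words")
    return warnings
```

Run against the examples above, “make a video call” passes cleanly, while “hangout” falls foul of both the syllable and the brand-word rules.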

Google Glass

With such tight constraints, the written form displayed on Glass needs careful thought, and in many ways it is one of the most influential aspects of the product design: it makes real the experience and relationship you have with the wearable device. It becomes a response to your commands. Here are some of the guidelines for the written form:

Keep it brief. Be concise, simple and precise. Look for alternatives to long text such as reading the content aloud, showing images or video, or removing features.

Keep it simple. Pretend you’re speaking to someone who’s smart and competent, but doesn’t know technical jargon and may not speak English very well. Use short words, active verbs, and common nouns.

Be friendly. Use contractions. Talk directly to the reader using second person (“you”). If your text doesn’t read the way you’d say it in casual conversation, it’s probably not the way you should write it.

Put the most important thing first. The first two words (around 11 characters, including spaces) should include at least a taste of the most important information in the string. If they don’t, start over. Describe only what’s necessary, and no more. Don’t try to explain subtle differences. They will be lost on most users.

Avoid repetition. If a significant term gets repeated within a screen or block of text, find a way to use it just once.
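The “avoid repetition” rule lends itself to a similar illustrative check: flag any significant word that appears more than once in a block of on-screen text. This is a hypothetical sketch of my own – the stop-word list and the notion of a “significant” word are my assumptions, not anything from the Glass developer pages.

```python
from collections import Counter

# Illustrative stop-word list: words too common to count as "significant".
STOP_WORDS = {"a", "an", "the", "to", "of", "and", "in", "you", "your", "it"}

def repeated_terms(text):
    """Return significant words that occur more than once in the text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOP_WORDS)
    return sorted(w for w, n in counts.items() if n > 1)
```

So a string like “Share your photo. Your photo is ready.” would be flagged for repeating “photo”, while a short command-style string passes untouched.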

Again we might explore how these simple guidelines strongly influence a user, as they depict the character of the technology being worn. BJ Fogg has written about the social cues we pick up from technology and their social influence. They are worth bearing in mind as we learn and experience more every day about personalised and wearable technology.

…people respond to computer systems as though the computers were social entities that used principles of motivation and influence.

As shown in Table 5.1, I propose that five primary types of social cues cause people to make inferences about social presence in a computing product: physical, psychological, language, social dynamics, and social roles. The rest of this chapter will address these categories of social cues and explore their implications for persuasive technology.

Primary social cues

We have had a quick look at how the language cue is being carefully tailored on the Glass platform (and elsewhere, in Search and Siri of course), and it is pretty easy to begin to understand how the other elements appear in the user experience.

Psychologically, we pick up on how a device such as Glass can learn our preferences and begin to provide hyper-contextual information to us, as explained earlier in one of the design principles: “The most relevant experiences are also the most magical and lead to increased engagement and satisfaction.”

The psychological connection here is linked to the social dynamic and how it would seem our technology is cooperating positively with us. The reciprocity of our interactions would fall in line with some of the research BJ Fogg outlines in his chapter – the more helpful technology is to us the more engaged we become and the more likely we are to reciprocate.

The social role of the device is an interesting one – my son would happily call Google Search his assistant or guide and so it would not seem too big a step to appreciate a wearable technology being a close ally in getting life done more efficiently.

The physical cue is perhaps the most curious, because it is not so much a floating disembodied AI head doing our bidding as something that is closer to being part of us. Physically, the cue has in fact become much more subtle in the fire-and-forget notifications and the seamless in-vision experience. Yet the overt nature of wearing the technology has caused some interesting consternation, raising questions about privacy and safety.

Funnily enough, I have not had the chance to play with the device or even experience it yet, but the developer pages have certainly helped me better understand the direction things are heading in and made me reflect on the influence this type of technology will have on the way we speak and think.

If you are a Glass Explorer I would love to hear your thoughts on some of the subjects raised in this post – please share a comment below.

Pic: Google Glass by wilbertbaan

15 Interesting Ways to use Google Maps to Support Learning

As many of you will know, I am a bit of a map nerd and have always enjoyed peering down at the Earth through a map or using tools like Google Earth and Maps. In the past I have explored ways to use this strong visual resource to inspire writing and all sorts of other learning.

It is good to dust off this Interesting Ways resource, which is still emerging – it would be lovely to have your help extending it with more ideas about using Google Maps to support learning.

To add an idea, use the little cog icon on the presentation above and click “Open Editor”, then jump to the last slide and follow the instructions.

It would be great to see this resource developed further, and it is a great opportunity for you and your colleagues to share some of the creative ways you use Google Maps.

Google Teacher Academy UK 2012: My Reflections and the Future

Earlier this week the Google Teacher Academy ran for a second UK outing at the new London offices on St Giles High Street. It was a privilege once again to have the opportunity to help plan, organise and be part of the 2 days.

50 educators from around the world came together for some rapid professional learning and discussion and the chance to work alongside Google employees to help make change happen in their communities. These are a handful of my reflections about the 2 days and what the future may hold for the event.


Google Engineers

One of the unique features of the teacher academy is the access to, and contribution by, Google product managers and employees. During our 2 day event we had the chance to spend some time in the company of YouTube, Google Docs and Google+ product managers, who joined us for hangouts. The Google Docs team were there in force and shared with us some incredible new features of this ever changing tool. Jeff Harris, the product lead for the Google Docs document, presentation and drawing editors, did some great demos and talked about the future developments of the tool. It is always exciting to have access to this type of group and have them share their expertise with us.

Google+ Potential

One thing that the GTA did for me was to put the potential of G+ back on the table – not because of any great demo or future road-map session, but because a group was using it heavily. There was lots of sharing to just the GTAUK group, so the circles came into their own. I think I will probably spend a bit more time figuring out how best to use it alongside Twitter.

Whoop!

I do enjoy a dose of “whooping” (I suspect you are pleased I didn’t add “cough” to that phrase) to raise the enthusiasm in the room. Don’t get me wrong, I am not so keen on the “whoop” in cinemas, where it doesn’t have much place, but at the GTA the enthusiasm for the learning opportunities we can offer our classes was great. And when you unpick it, that is all it is: an enthusiastic public gesture of our delight at a potential future learning opportunity for our students. Jo Badge describes it as the GTA “philosophy”, and in many ways it is important, as it kept the energy up – you wonder what the event would be like without the wearing of our emotions on our sleeves. Huzzah!

Reflections from GTAUK participants

Reflections on google teacher academy UK 2012 #gtauk « DrBadgr by Jo Badge.

Learnbuzz reflections on the GTA from Steph Ladbrooke.

Google Teacher Academy: Reflection | Anseo.net from Simon Lewis.

Carry on Learning: GTAUK posts from Sheli Blackburn

The Future of the GTA in the UK

It has been about 5 or 6 years since I began to email Cristin Frodella from Google about bringing the GTA to the UK, and it has been great to now see the second event conclude. However, this leaves me somewhat pensive about the event over here, the model of organisation and how much more could be done. The bottom line is that I want more of this type of opportunity for UK teachers – not just a few places over the course of 2 years, but more like 3 big, full-blown academy events every year.

It doesn’t seem that much to ask for UK teachers, who are, in my opinion, one of the most innovative and inspiring communities of teachers in the world. This is what I am pushing for and will do what I can to help make it happen.