The Design Principles Behind Google Glass and the Social Influence It Could Have

I have always found it interesting to peer behind the veil of nascent technology a little and learn how it is developed. On these Google Glass developer pages you can dig a little deeper into the design principles behind one of the four current Google X lab projects.

Aimed at developers building on the Glass platform, or, as they coin it, developing Glassware, the pages outline the simple design principles needed:

  • Design for Glass – Don’t try to replace a smartphone, tablet, or laptop by transferring features designed for these devices to Glass. Instead, focus on how Glass and your services complement each other, and deliver an experience that is unique.
  • Don’t get in the way – Glass is designed to be there when you need it and out of the way when you don’t. Your Glassware must function in the same way.
  • Keep it relevant – Deliver information at the right place and time for each of your users. The most relevant experiences are also the most magical and lead to increased engagement and satisfaction.
  • Avoid the unexpected – Unexpected functionality and bad experiences on Glass are much worse than on other devices, because Glass is so close to your users’ senses.
  • Build for people – Design interfaces that use imagery, colloquial voice interactions, and natural gestures. Focus on a fire-and-forget usage model where users can start actions quickly and continue with what they’re doing.

But what is most revealing, and consequently most fascinating for me, is the focus on language and how it is being tailored as an integral feature of this type of technology. It is being coined as “wearable tech”, but in many ways the proximity to us, to our physical persons, means that the device or platform has to work with our own language.

The “natural speak” commands will be the most potent way for these devices to become closer to our everyday lives, and to influence them too. We can wear them, but until they work seamlessly with the idiosyncrasies of our spoken word, they will always fall short.

The developer pages offer the following guidelines, with good and bad examples, for the voice commands used when developing on the Glass platform:

  • Is general enough to apply to multiple Glassware, but still has a clear purpose
    Good: “ok glass, learn a song”
    Bad: “ok glass, learn something”, “ok glass, learn a song on guitar”
  • Is colloquial and can explain Glass features in a conversation
    Good: “ok glass, take a picture” (“You can use Glass to take a picture”)
    Bad: “ok glass, take picture” (“You can use Glass to take picture”)
  • Is comfortable to say in public
    Good: “ok glass, find a doctor”
    Bad: “ok glass, find a gynecologist”
  • Brings the user from intent to action as quickly as possible
    Good: “ok glass, find a recipe for” (this allows users to speak “chicken kiev” and immediately see the recipe)
    Bad: “ok glass, show me a cookbook” (this forces users to look through a list for what they want)
  • Avoids brand words
    Good: “ok glass, make a video call”
    Bad: “ok glass, start a hangout”
  • Is long enough to ensure high recognition quality (at least three syllables)
    Good: “ok glass, make a video call”
    Bad: “ok glass, hangout”
  • Fits on a single line (less than 600px wide at 40px Roboto Thin)
    Good: “ok glass, add a calendar event”
    Bad: “ok glass, create a new calendar event”
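
Several of these phrasing rules are mechanical, which means they can be roughly checked in code. Below is a small, purely hypothetical Python helper (not part of any Glass SDK) that screens a candidate command against the brand-word and syllable-count guidelines; judgement calls such as whether a command is colloquial or comfortable to say in public still need a human.

```python
import re

# Hypothetical helper for illustration only; the brand-word list and the
# syllable heuristic are assumptions, not part of any Google tooling.
BRAND_WORDS = {"hangout"}  # brand term taken from the bad example above

def estimate_syllables(phrase: str) -> int:
    """Very rough syllable count: one syllable per run of vowels in each word."""
    return sum(len(re.findall(r"[aeiouy]+", word.lower()))
               for word in phrase.split())

def check_voice_command(command: str) -> list[str]:
    """Return guideline warnings for a candidate "ok glass" command."""
    warnings = []
    if any(word.lower() in BRAND_WORDS for word in command.split()):
        warnings.append("avoid brand words")
    if estimate_syllables(command) < 3:
        warnings.append("too short for reliable recognition (under three syllables)")
    return warnings

print(check_voice_command("make a video call"))  # [] -- passes both checks
print(check_voice_command("hangout"))            # fails both checks
```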

One of the most interesting directions these guidelines take us in is the way that such a device or tool may influence our use of language and, consequently, the way we think. For example, commands must be “colloquial” and “comfortable to say in public”, yet for technical purposes they must also be “long enough to ensure high recognition quality (at least three syllables)”. In a way this is describing how Glass users will have to talk in order to interact successfully.

Google Glass

With such tight constraints, the written text displayed on Glass needs careful thought, and in many ways it is one of the most influential aspects of the product design: in some way, it makes real the experience and relationship you have with the wearable device. It becomes a response to your commands. Here are some of the guidelines for the written form:

Keep it brief. Be concise, simple and precise. Look for alternatives to long text such as reading the content aloud, showing images or video, or removing features.

Keep it simple. Pretend you’re speaking to someone who’s smart and competent, but doesn’t know technical jargon and may not speak English very well. Use short words, active verbs, and common nouns.

Be friendly. Use contractions. Talk directly to the reader using second person (“you”). If your text doesn’t read the way you’d say it in casual conversation, it’s probably not the way you should write it.

Put the most important thing first. The first two words (around 11 characters, including spaces) should include at least a taste of the most important information in the string. If they don’t, start over. Describe only what’s necessary, and no more. Don’t try to explain subtle differences. They will be lost on most users.

Avoid repetition. If a significant term gets repeated within a screen or block of text, find a way to use it just once.
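
To see how “keep it brief” and “reading the content aloud” play out for a developer, here is a minimal sketch of pushing a timeline card to Glass through the Mirror API. It assumes an OAuth 2.0 access token for the timeline scope has already been obtained elsewhere; the endpoint and field names are as the Mirror API documented them, while the card content and ACCESS_TOKEN are placeholders for illustration.

```python
import requests

# Assumption: ACCESS_TOKEN was obtained via the normal OAuth 2.0 flow for the
# Mirror API's timeline scope; it is only a placeholder here.
ACCESS_TOKEN = "ya29.placeholder"

card = {
    # Put the most important thing first and keep the on-screen text brief.
    "text": "Table booked, 7pm tonight",
    # Offer the longer version as speech rather than more on-screen text.
    "speakableText": "Your table for two is booked for seven o'clock tonight.",
    # A Read aloud menu item lets the wearer hear the fuller message.
    "menuItems": [{"action": "READ_ALOUD"}],
}

response = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=card,
)
response.raise_for_status()
print("Created timeline item:", response.json().get("id"))
```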

Again we might explore how these simple guidelines strongly influence a user, as they depict the character of the technology being worn. BJ Fogg has written about the social cues we pick up from technology and their social influence. These elements are worth bearing in mind as we learn and experience more every day about personalised and wearable technology.

…people respond to computer systems as though the computers were social entities that used principles of motivation and influence.

As shown in Table 5.1, I propose that five primary types of social cues cause people to make inferences about social presence in a computing product: physical, psychological, language, social dynamics, and social roles. The rest of this chapter will address these categories of social cues and explore their implications for persuasive technology.

Primary social cues

We have had a quick look at how the language cue is being carefully tailored on the Glass platform (and elsewhere in Search and Siri, of course), and it is pretty easy to begin to see how the other elements appear in the user experience.

Psychologically, we pick up on how a device such as Glass can learn our preferences and begin to provide hyper-contextual information, as explained earlier in one of the design principles: “The most relevant experiences are also the most magical and lead to increased engagement and satisfaction.”

The psychological connection here is linked to the social dynamic, and how it would seem our technology is cooperating positively with us. The reciprocity of our interactions falls in line with some of the research BJ Fogg outlines in his chapter: the more helpful technology is to us, the more engaged we become and the more likely we are to reciprocate.

The social role of the device is an interesting one – my son would happily call Google Search his assistant or guide and so it would not seem too big a step to appreciate a wearable technology being a close ally in getting life done more efficiently.

The physical cue is perhaps the most curious because it is not so much a floating disembodied AI head doing our bidding but something that is closer to being part of us. Physically it would seem the cue has in fact become much more subtle in the fire-and-forget notifications and the seamless in-vision experience. Yet the overt nature of wearing the technology has caused some interesting consternation, raising questions about privacy and safety.

Funnily enough, I have not yet had the chance to play with the device or even experience it, but the developer pages have certainly helped me better understand the direction things are heading in, and made me reflect on the influence this type of technology will have on the way we speak and think.

If you are a Glass Explorer I would love to hear your thoughts on some of the subjects raised in this post – please share a comment below.

Pic: Google Glass by wilbertbaan