Charles Brian Quinn
I recently attended a Google Glass Design Sprint hosted by the Glass team. I’ve been working on various Glassware since July, but this was the first formal design process I’ve gone through. I learned a few things by putting myself in the mind of a designer.
Glassware is deceptively simple when you storyboard it; it may have only two or three different views. However, populating those two or three views with contextually relevant data might require an enormous amount of computing power behind it. And while it might seem like a good idea to let users access their entire history on your service, they might then find themselves wading through cards to find things, wasting their battery on unneeded screen-on time and network access.
The magic of Glassware happens when you combine completely separate sources to deliver content in an innovative way. A great example is Refresh, which meshes together Google Calendar, Facebook, LinkedIn, and Twitter. With this combination, Refresh not only tells users who their next meeting is with but also surfaces key information about that person. It is a social-network-backed version of Thad Starner's remembrance agent.
Quickness and ease of use cannot be emphasized enough. If a user is distracted by the real world while using your app, Glass will go to sleep and reset to the home card when they next wake it up. If your user was deep inside the menus of your app, they will not be happy to have to navigate all the way back down into the bundle to resume the process.
This happens even with the native apps: if you are in the middle of captioning and sharing a picture and Glass times out, you have to start over. At other times, questionable network connectivity means the voice recognition takes so long that you forget what you were doing.
Instead of less being more, Glass encourages you to always have detailed information available to be read aloud or drilled down into, but you should not put it in users' faces by default. Look at the example below: the main card the user sees is just the joke and its punchline, so they can tell it quickly without heavy device interaction. If the user wants to see who submitted the joke or any other metadata, a single tap brings up a detail card with more info.
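This separation can be modeled before touching the GDK at all. The sketch below is plain Java, not GDK code, and the class and field names are hypothetical; the point is simply that the main card carries only the joke, and everything else lives behind a tap.

```java
// Hypothetical model of a two-level Glass card bundle: the main card
// carries only the joke, and a tap reveals a detail card with metadata.
public class JokeBundle {
    private final String joke;
    private final String punchline;
    private final String submitter;   // metadata, hidden until tapped

    public JokeBundle(String joke, String punchline, String submitter) {
        this.joke = joke;
        this.punchline = punchline;
        this.submitter = submitter;
    }

    // What the user sees immediately: just the joke, no metadata.
    public String mainCard() {
        return joke + "\n" + punchline;
    }

    // What a tap reveals: the detail card with submitter info.
    public String detailCard() {
        return "Submitted by " + submitter;
    }
}
```

In a real app, `mainCard()` and `detailCard()` would each map to a Card in a scroll view; the discipline is that nothing in the detail card is needed to actually tell the joke.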
Unlike normal Android applications, Glassware currently always runs on the exact same device and screen. This means you can place elements very precisely without worrying about tablets or orientation changes. It also means that you must use Glassware on a physical Glass device before you can validate the entire user experience, and you have to do it more than once. Without living and breathing your Glassware in your day-to-day life, you can't tell whether it will distract the user at inappropriate times or be difficult to use in situations where voice commands are inappropriate.
While any mobile app should have the UI storyboarded, Glass needs a little more. Unlike a touchscreen device, Glass operates on far more than just screen taps. Additionally, Glassware can be entered from both the main launcher and from a live or static timeline card.
In our “tell a joke” example, the user can request a brand new joke via the launcher or scroll in their timeline history to find other jokes they’ve told recently. The two different entry points are shown, with the launcher entry at the top of the diagram and the history entry in the middle. I’ve chosen to use arrows with a circle on one end to show entry points.
Make a couple of different flows for your app, and then actually try them. Since we are pioneers in the wearable application space right now, few best practices or standards exist yet. As long as you keep to a strict tree structure of scenes, things should make sense to the user. If, for example, you can't draw the interaction on your diagram using only straight arrows between boxes on a grid, it may not be an intuitive navigation pattern for the user. Don't worry if you accidentally invent a pattern that is not possible with the current GDK: as long as it consists of Cards and CardScrollViews, it can be cobbled together as a prototype and suggested to the Glass team.
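The "strict tree" rule can even be checked mechanically while you sketch. Here is a minimal plain-Java sketch (the names are hypothetical, not GDK API) that records drill-down edges between scenes and rejects anything that would give the user two paths to the same scene or a loop:

```java
import java.util.*;

// Hypothetical sketch: represent the Glassware UI as scenes with
// parent links and check that navigation stays a strict tree --
// every scene except the root has exactly one parent, and walking
// parent links always terminates (no cycles).
public class SceneTree {
    private final Map<String, String> parentOf = new HashMap<>();

    // Declare that 'child' is reached by drilling down from 'parent'.
    // Returns false (rejecting the edge) if 'child' already has a
    // parent or the edge would create a cycle.
    public boolean addEdge(String parent, String child) {
        if (parentOf.containsKey(child)) return false; // second path
        for (String s = parent; s != null; s = parentOf.get(s)) {
            if (s.equals(child)) return false;         // cycle
        }
        parentOf.put(child, parent);
        return true;
    }

    // The path the user must re-navigate after Glass sleeps and
    // resets to the home card: root ... scene.
    public List<String> pathTo(String scene) {
        LinkedList<String> path = new LinkedList<>();
        for (String s = scene; s != null; s = parentOf.get(s)) {
            path.addFirst(s);
        }
        return path;
    }
}
```

As a bonus, `pathTo` makes the sleep-and-reset cost from earlier concrete: the longer the path, the more taps a user repeats after a timeout.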
Even if you don’t have all the backend components ready, go ahead and build your UI with mock data and run it on Glass. Try the voice trigger, insert data in your timeline and leave a live card up for a bit. Does it feel right, or is it getting in the way of the rest of your Glass usage? As others try out what you’ve built, you’ll likely discover that users will try to use a gesture to perform an interaction that you hadn’t considered.
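Wiring up the voice entry point for such a mock build is cheap, since a GDK voice trigger is just a manifest declaration. A rough sketch (the service name and keyword here are hypothetical; only the action and meta-data names come from the GDK):

```xml
<!-- AndroidManifest.xml: launch TellJokeService from "ok glass" -->
<service android:name=".TellJokeService">
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data android:name="com.google.android.glass.VoiceTrigger"
               android:resource="@xml/voice_trigger" />
</service>

<!-- res/xml/voice_trigger.xml -->
<trigger keyword="tell a joke" />
```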
If you already know Android programming, get your hands on Glass and dive in. If you’re not an Android expert but want to hop on the Glass train, sign up for one of our Android bootcamps to learn the fundamentals of developing good Android apps. What you learn there can easily be transferred to developing specific Glass apps.
Next week, I’ll be at the Glass Design Sprint at the MIT Media Lab, followed by the WearScript Workshop, where we will be hacking on WearScript, an open-source project I’ve been working on. The full agenda is here. If you’d like more info about attending, email me.