A new way of navigating between interfaces
One of the most interesting things about smart glasses is the variety of ways we can navigate their interfaces. The tech industry is starting to explore new approaches to moving between screens. For smart glasses to be effective as a wearable device, they need to go beyond the traditional touch-based standards of the mouse and keyboard.
Voice commands, gestures, and eye tracking are the emerging navigation systems that could enhance the performance of smart glasses.
Gesture recognition
Making the physical environment an interface to digital information
Gesture-based interactions like hand gestures are becoming very popular with the wave of dating apps like Tinder, which let users swipe right or left to transition between screens. Gestural design has been seen as a gamification element that drives user engagement. Gestural swiping and scrolling can cut the time it takes to tap or issue voice commands, and it is much more intimate.
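To make the swipe idea concrete, here is a minimal sketch of how a system might classify a left or right swipe from a stream of tracked hand positions. The input source, the normalization, and the threshold are all assumptions, not a real smart-glasses API.

```python
# Hypothetical sketch: classifying a left/right swipe from a stream of
# hand positions (e.g., normalized fingertip x-coordinates from a
# hand-tracking camera). The min_travel threshold is an assumption.

def classify_swipe(x_positions, min_travel=0.3):
    """Return 'swipe_left', 'swipe_right', or None for one gesture sample.

    x_positions: fingertip x-coordinates over time, normalized to [0, 1].
    min_travel: minimum horizontal distance that counts as a swipe.
    """
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > min_travel:
        return "swipe_right"
    if travel < -min_travel:
        return "swipe_left"
    return None  # movement too small to be intentional

# Example: a hand moving from the left edge toward the right edge.
print(classify_swipe([0.1, 0.25, 0.4, 0.6]))  # -> 'swipe_right'
```

The travel threshold matters: without it, idle hand motion would constantly trigger screen transitions.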
In one research exercise, IDEO asked a group of people to perform an action or gesture corresponding to a statement. The idea was to identify universal gestures that could be mapped to user actions.
Video: Inventing Gestures for Common Actions
The key insight was that gestures need to be sequential and are generation-specific. When it came to turning up the volume, for instance, some participants turned an invisible knob, while people under 30 lifted a palm or pinched their fingers.
Research categorizes gestures into types that either replace or augment language.
Eye gestures can let users perform tasks discreetly when they feel uncomfortable or embarrassed using voice commands or even hand gestures in public. Eye-tracking technology can identify the object a user is looking at and present relevant information immediately. Head gestures could let the user swipe between screens or move sliders on the interface, though this would be limited to left/right and up/down movements. The most extreme, and most ideal, approach would be to navigate the interface by reading the user's thoughts: a person would simply think of something and the corresponding visual or information would appear.
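Since the text limits head gestures to the four directions, a sketch of that mapping is straightforward. This assumes yaw/pitch deltas in degrees from an onboard motion sensor; the dead-zone value and action names are illustrative, not a real device API.

```python
# Hypothetical sketch: mapping head movement (yaw/pitch deltas from an
# IMU) to the four navigation directions described above. The DEAD_ZONE
# threshold and degree units are assumptions.

DEAD_ZONE = 10.0  # degrees; ignore small, unintentional head motion

def head_to_action(delta_yaw, delta_pitch):
    """Map a head-movement delta to a navigation action, or None."""
    # Ignore motion inside the dead zone so idle head sway does nothing.
    if max(abs(delta_yaw), abs(delta_pitch)) < DEAD_ZONE:
        return None
    # Use the dominant axis, since the UI only supports four directions.
    if abs(delta_yaw) >= abs(delta_pitch):
        return "swipe_right" if delta_yaw > 0 else "swipe_left"
    return "scroll_up" if delta_pitch > 0 else "scroll_down"

print(head_to_action(18.0, 4.0))  # -> 'swipe_right'
print(head_to_action(-3.0, 2.0))  # -> None (within dead zone)
```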
Voice recognition
Voice technologies like Siri, Alexa, and Google Assistant have surged in recent years, and voice search is quickly overtaking traditional typed search. Voice is another great way to navigate, especially on tiny interfaces. Like smart glasses, the smartwatch has a very small interface, and voice navigation lets the user make full use of that space without being frustrated by the lack of it.
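As a rough illustration of voice-driven navigation on a tiny interface, here is a minimal keyword router. It assumes a transcript has already been produced by a speech-to-text engine (like those behind Siri or Alexa); the phrase list and action names are made up for the example.

```python
# Hypothetical sketch: routing a spoken transcript to a navigation
# action. The keyword-to-action table is an assumption.

COMMANDS = {
    "next": "swipe_right",
    "back": "swipe_left",
    "open": "open_item",
    "home": "go_home",
}

def route_voice_command(transcript):
    """Return the first action whose keyword appears in the transcript."""
    for word in transcript.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return None  # unrecognized; the UI could ask the user to repeat

print(route_voice_command("go back to the last screen"))  # -> 'swipe_left'
```

Real assistants use full intent classification rather than keyword matching, but even this toy version shows why voice frees the user from hunting for tap targets on a tiny screen.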
When thinking about the capabilities of smart glasses, the possibilities are endless if they combine AR and voice. The main focus of smart glasses has to be making our lives easier and more productive.
When it comes to multisensory interactions like voice, gesture, and vision, there is little established guidance on when and why to use each. Smart glasses support many different use cases, and as a result would need different interaction modalities, such as sound (voice) and vision (gesture), to truly enhance the user flow and engagement of the different features.
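One way to think about "when and why" is as a context-driven policy. The sketch below is purely illustrative: the context signals (public setting, hands free, ambient noise) and the chosen priorities are assumptions, not established design rules.

```python
# Hypothetical sketch: choosing an input modality from coarse context
# flags, reflecting the idea that different situations call for
# different modalities. All signals and priorities are assumptions.

def pick_modality(in_public, hands_free, noisy):
    """Pick an input modality from coarse context flags."""
    if in_public and not hands_free:
        return "hand_gesture"  # discreet, no speech needed
    if in_public:
        return "eye_or_head"   # hands busy, speech still awkward
    if noisy:
        return "hand_gesture"  # speech recognition degrades in noise
    return "voice"             # private and quiet: fastest option

print(pick_modality(in_public=True, hands_free=True, noisy=False))
# -> 'eye_or_head'
```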
References:
Rose, David. "Why Gesture Is the Next Big Thing in Design." IDEO.