Do Smartphones Need Gesture HMI?

by Paul Brown | Aug 01, 2019

In a blog post on July 29th, Google highlighted some key features of the upcoming Pixel 4, including Motion Sense.  Motion Sense will be powered by Soli, a motion-sensing radar located at the top of the Pixel 4.  Combining unique software algorithms with the advanced hardware sensor, Soli senses motion around the phone, enabling it to recognize gestures and detect when the user is nearby.


The sensors and cameras enabling Motion Sense and face unlock in Pixel 4 (Source: Google)

Gestures are not new to smartphones.  In 2013, Samsung introduced the Galaxy S4 with a host of gestures.  However, most of these gestures were cumbersome and inefficient, adoption was low, and many were removed from later Samsung devices.

According to Google’s blog post, the initial set of gestures on the Pixel 4 will allow users to perform the following three functions simply by waving a hand:

  • Skip songs
  • Snooze alarms
  • Silence phone calls

Using gestures to snooze alarms and silence phone calls could be very useful.  These are both tasks that will likely occur when the user is not holding the phone.  Waving a hand over the phone when either event occurs is a very simple action, and one that requires less cognitive effort than picking up the phone and pressing buttons (physical or on the touchscreen).  However, there is a risk that the user accidentally silences a phone call simply by moving their hand towards the phone to pick it up and answer it.  How the required gesture differentiates the user’s intent is key here.
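One plausible way to separate a deliberate wave from a reach-to-answer is to look at the motion profile a radar sensor reports: a swipe is dominated by lateral motion across the sensor, while a reach is dominated by sustained approach towards it.  The sketch below is purely illustrative, assuming hypothetical per-frame features and thresholds; it does not reflect Soli’s actual API or algorithms.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RadarFrame:
    # Hypothetical per-frame features a motion-sensing radar might report.
    radial_velocity: float   # m/s; negative = hand approaching the phone
    lateral_velocity: float  # m/s; signed left/right motion across the sensor


def classify_motion(frames: List[RadarFrame],
                    approach_thresh: float = -0.15,
                    swipe_thresh: float = 0.25) -> str:
    """Label a short burst of frames as a swipe, a reach, or nothing.

    A deliberate swipe shows strong lateral motion with little approach;
    a reach to pick up the phone shows sustained approach toward the
    sensor.  Thresholds are illustrative guesses, not tuned values.
    """
    if not frames:
        return "none"
    mean_radial = sum(f.radial_velocity for f in frames) / len(frames)
    mean_lateral = sum(abs(f.lateral_velocity) for f in frames) / len(frames)
    if mean_lateral > swipe_thresh and mean_lateral > abs(mean_radial):
        return "swipe"   # deliberate wave - silence the call
    if mean_radial < approach_thresh:
        return "reach"   # user is picking the phone up - keep it ringing
    return "none"
```

Under this kind of scheme, a wave over the phone would register as a swipe and silence the call, while a hand steadily approaching the phone would register as a reach and be ignored, leaving the call ringing for the user to answer.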

The example of skipping songs using gestures was highlighted in the video on Google’s blog post.  In the video, the phone is active in front of the user, who waves a hand from right to left in front of the phone to skip through tracks.  This could be a useful gesture for skipping tracks if the user is listening to music and the phone is set down somewhere – rather than having to pick up the phone and press a control, they could just wave their hand.  However, it would also need to work when the display is in standby (something not shown in the video).  If the user has to activate the display first by touching it, then it is simpler to complete the task by also physically interacting with the phone.

In a Technology Planning Report on HMI, Strategy Analytics has previously identified a number of potential barriers to adoption of gesture HMI:

  • Can users easily remember how to perform a specific gesture and will the system be accurate?
  • Can users recall when they can and cannot use gestures?
  • Is the device in range of the user to perform the gesture?
  • As gesture control expands, are the controls consistent across different devices?

Gesture HMI: Barriers to Adoption

With so many options to choose from, it is important to determine the ideal HMI for the task at hand.  There is no one ideal HMI across all devices and platforms.  Identifying the ideal HMI will depend on the following factors:

  • What is the context in which it will/should be used?
  • How can it make the current experience more convenient?
  • Will it require more than one HMI implementation (e.g. touchscreen & gestures)?
  • Can the HMI learn from the user (e.g. voice profiling) and enhance the overall experience?

As smartphone manufacturers look to implement gesture control, they need to ensure that it meets these four key criteria:

  1. Useful: Meets emerging user needs
  2. Usable: Helps users interact with their devices in a more efficient and intuitive way
  3. Compelling: Creates magical experiences that make technology desirable
  4. Limited barriers to implementation: There should be few issues, such as user training or data privacy concerns, that would hinder user adoption

Overall, gesture control for smartphones is best suited for passive tasks, where the user is not likely to be already holding the phone.  If the user already has the phone in their hand, much of the convenience that gestures can bring is lost, as it will often be much simpler for the user to interact directly with the touchscreen.  Looking to the future, Strategy Analytics sees gesture control as having a more prominent role to play in the home and the car than it will in personal devices.

- Paul Brown
