Automotive > In-Vehicle UX Blog

Tesla Autopilot Update: Designing for the Future, Neglecting the Present

by Chris Schreiner | Apr 17, 2020

As part of its most recent Autopilot update, Tesla has refined the system's status display, sharpening the visual indicators presented to drivers.  Detected pedestrians now appear as gray icons, traffic cones and rubbish bins are rendered, and even live traffic signal information is shown (no doubt related to an impressive upgrade that allows Autopilot to stop at red lights).  Tesla teased this development late last year when it announced its "Full Self Driving Sneak Preview."

[Images: Autopilot display with traffic lights and a construction cone; with a traffic signal and pedestrians; with a yellow light.  All images: Twitter @RogerMcMorrow]

This evolution in Tesla's Autopilot display (with an eye toward a more advanced "Full Self Driving" capability) appears to follow the design language of Waymo and Google's AV teams.  Waymo/Google claims that these small elements (enough detail to inform, but not so much as to overload the display) are crucial for establishing trust.  And indeed, trust is an important ingredient of future ridership, especially during a rider's initial exposure on the first few trips.  Interestingly, this "less but more" design strategy directly conflicts with the inadvisable raw-radar-feedback display Tesla employed with Smart Summon (which Strategy Analytics evaluated in late 2019).

However, applying these subtle design elements, conceived for a driverless taxi, to a semi-automated feature such as Autopilot is troublesome in a few respects.

As with the Model 3 infotainment system (which Strategy Analytics evaluated in 2018), this Autopilot update is a case of a feature landing in a future context that does not yet exist.  A richly detailed UI makes sense in a fully automated Waymo/Google robo-taxi, where a rear-seat rider uses the display as their primary source of system status and situation awareness.  But for a semi-automated system, where an operator must attend to both the display and the vehicle's environment and react at a moment's notice, the HMI must be optimized for quick glances, not for trust establishment.  In such a context, does a rich display full of small design flourishes truly add value?

At a more fundamental level relevant to trust, there is a fine line to walk with the accuracy of semi-automated features.  In any system, trust is easily broken, and this is especially true for a drive-critical feature.  If a system fails relatively often, especially at vulnerable points in the journey (such as failing to brake for a red traffic signal), users are unlikely to use it again.  But the other end of the accuracy spectrum is problematic as well.  If a semi-automated driving feature is highly accurate, with infrequent errors or disengagements, it can lull operators into a false sense of security.  Operators become more likely to turn their attention elsewhere, which lengthens reaction and transition times and sharply increases crash risk.  We have seen this play out in academic research, where semi-automated features tend to increase reaction times regardless of the presence of a secondary task.  We have also seen real-world examples of this misuse, in crashes involving an Uber ATG test vehicle and even Tesla Autopilot itself.

HMI for semi-automated features, even from a relatively mature organization like Tesla, requires a renewed focus on the "human" portion of the interface.  As such, a number of open questions remain to be researched:

To what extent do users really notice or care about small UI flourishes in a semi-automated driving feature?

Ride-alongs with target segments or owners of semi-automated driving features could explore this question further.

Small design flourishes (such as live traffic signal information and pedestrian schematics) may not be appropriate when the driver is fully in control, but what is the leeway for distraction under semi-automation? Could such elements help keep drivers in the loop?

A dynamic evaluation of existing or conceptual semi-automated features could go a long way toward answering these questions.

What might consumer expectations be for the accuracy of semi-automated features, and if a hazard were "missed," would the feature cause more harm than good?

Explorations of target consumer segments via interviews, generative design sessions, ride-alongs, and so forth could help bring clarity to this issue.

For more information about our in-vehicle consumer research, or to arrange a briefing, please contact your nearest Strategy Analytics office.

