AT&T Showcases WATSON Speech Technology

by Chris Schreiner | November 17, 2010

On Tuesday, AT&T showcased its WATSON speech recognition engine for analysts and press, demonstrating applications and services that use the technology. WATSON has been deployed in IVR services for over 20 years but has yet to break through into other areas, and AT&T is trying to change that.

While AT&T showcased WATSON in several areas, including assistive services and customer care, the areas that caught my interest were mobile applications, social media, and “The Living Room of the Future”.

The mobile application section offered no compelling new applications; it was more a showcase of how WATSON is used in Vlingo’s app for Android and iPhone, AT&T’s Yellow Pages for mobile, and others. Where WATSON and AT&T’s natural language processing stood out was in the algorithms used to analyze online data and translate it into information consumers can use. For instance, the Have2Eat application scans online reviews for keywords, determines the most salient words and sentences, and presents users with the most relevant review snippets.
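AT&T did not disclose how Have2Eat actually scores reviews, but the idea described above, extractive snippet selection based on word salience, can be sketched roughly like this (the function name and scoring scheme here are illustrative assumptions, not AT&T's implementation):

```python
import re
from collections import Counter

def top_snippets(reviews, num_snippets=2):
    """Return the most salient sentences across a set of reviews.

    Salience is approximated crudely: each sentence is scored by the
    average corpus-wide frequency of its words, so sentences built from
    terms many reviewers repeat rank highest.
    """
    # Split every review into rough sentences.
    sentences = []
    for review in reviews:
        sentences.extend(s.strip() for s in re.split(r"[.!?]", review) if s.strip())

    # Count word frequency across the whole review corpus.
    words = [w for s in sentences for w in re.findall(r"[a-z']+", s.lower())]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return sorted(sentences, key=score, reverse=True)[:num_snippets]
```

A real system would add stop-word filtering, sentiment weighting, and deduplication, but the core "find the sentences made of the most-repeated words" step looks much like this.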

A more compelling example of this technology was a DVR search feature in “The Living Room of the Future.” Demonstrated on an iPad, users could speak a search query such as “News programs in the past week which featured Barack Obama.” The DVR would search not only the titles and descriptions of shows but also metadata such as closed-caption text to find specific references to Barack Obama within a program. From there, users could select from a list of search results on the iPad, and the chosen program was then displayed on the TV.
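The interesting part of that demo is the back end: once the spoken query is recognized and parsed into constraints (genre, time window, person mentioned), the search itself is an ordinary filter over recording metadata that includes the caption transcript. As a hedged sketch, assuming a simple in-memory catalog (the `Recording` fields and `search_dvr` function are hypothetical, not AT&T's API):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Recording:
    title: str
    genre: str
    aired: date
    captions: str  # closed-caption transcript stored as searchable metadata

def search_dvr(recordings, genre=None, since=None, mention=None):
    """Filter recordings by genre and air date, then match the query
    term against the title and the closed-caption text."""
    results = []
    for r in recordings:
        if genre and r.genre != genre:
            continue
        if since and r.aired < since:
            continue
        if mention and mention.lower() not in (r.title + " " + r.captions).lower():
            continue
        results.append(r)
    return results
```

A query like the one in the demo would then reduce to `search_dvr(recordings, genre="news", since=one_week_ago, mention="Barack Obama")`, which is exactly why caption text matters: the name appears in the transcript even when the program title never mentions it.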

Also of interest was speech-enabled social TV, which let users speak tweets into a speech-enabled remote control and see other viewers’ tweets about the program they were watching, hot topics mentioned in those tweets, and analytical data on tweet trends.

Speech recognition is starting to gain traction in the automotive space with Ford’s SYNC and MyFord Touch but, with the exception of Google’s voice search, has yet to break through with anything compelling in the mobile space that has caught consumers’ eyes. A common question from attendees to the AT&T Labs demonstrators was “Why would I speak that when I can just type it in?” They left unconvinced that voice would be easier. However, for complex input such as the DVR search feature and tweets on social TV, voice may win out in the end.

- Chris Schreiner

Related Reports:
MyFord Touch Has Compelling Embedded Voice HMI
Voice HMI: Connected Car Opportunities and UX Best Practices
Google Navigation Impresses Consumers

