The 2016 Nvidia GTC event in San Jose closed its doors yesterday. There were over 5,000 attendees – 25% up on 2015. My subjective assessment was that the growth in automotive attendees was somewhat greater than that, with standing room only at some sessions.
On the pure product side, there was little new for automotive. Nvidia had already made its big announcements at CES, where its presence was pretty much exclusively automotive-themed. It did announce support for end-to-end mapping for self-driving cars, using Drive PX 2 in the cars and Tesla GPUs in the cloud – but that’s another blog for another day.
The two key words that I heard this year were not actually mentioned in the context of automotive at all. The first was data-centers, and the second was the number twenty.
Data-Centers?
What relevance does Nvidia’s growing data-center business have to automotive, you may be asking? Here’s why I see it as important. Nvidia’s growth to date in automotive infotainment has largely come from leveraging its gaming and professional 3D graphics chips. Although that market is still growing, the much steeper growth curve of automotive ADAS and self-driving car requirements means there is likely to be less leveraging possible in the future.
Developing the large, complex and powerful GPUs that automotive says it needs is not cheap, and if Nvidia had to do it for automotive alone that could prove a stretch. Its data-center business, however, is growing rapidly: deep learning is being deployed in the cloud now, significantly before we are seeing it in production on the road. It may have been genius strategy or it may have been a stroke of luck, but I see Nvidia as having a firm foothold in a fast-growing market that can complement and help feed its automotive ambitions.
Twenty?
So what about the number twenty? Again, it came up in the context of data-centers and not automotive: during his keynote, Jen-Hsun Huang mentioned that the new NVIDIA GPU Inference Engine (GIE) allows image classification to be carried out at 20 frames per second per watt on a Tesla M4.
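To put that metric in perspective: frames per second per watt is simply classification throughput divided by board power, so it can be sanity-checked in a few lines. The sketch below is purely illustrative – the classify call and the power reading are placeholders of mine, not the GPU Inference Engine’s actual API – but it shows how a figure like 20 fps/W is arrived at.

```python
import time

# Illustrative sketch only: `classify` stands in for any image-classification
# call, and `board_power_w` for a measured power draw (e.g. as reported by
# nvidia-smi). Nothing here is tied to the GPU Inference Engine's real API.
def measure_fps_per_watt(classify, images, board_power_w):
    start = time.perf_counter()
    for image in images:
        classify(image)
    elapsed = time.perf_counter() - start
    fps = len(images) / elapsed
    return fps, fps / board_power_w

# Hypothetical numbers: a board classifying 1,500 images/s while drawing 75 W
# works out to the quoted 20 frames per second per watt.
print(1500 / 75)  # -> 20.0
```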
To date, Nvidia has arguably focused on speeding up the training of deep learning networks. It is now giving at least equal billing to accelerating and power-optimizing inference – the deployment side. In discussions with other semiconductor vendors I have often heard some version of “we’re happy for the development cars to go to Nvidia – when they want to go into production they’ll come back to us”.
The 20 fps/W figure, coupled with strong suggestions that Drive PX will be a much more scalable platform than has been publicly described to date, means that other semiconductor vendors hoping to supply into highly automated vehicles may need to take a close look in the mirror for signs of complacency.
The liquid-cooled Drive PX 2, with its 150W power consumption and 8 teraflops of processing power, is a great development platform. But it is not something I can see fitted to a mass-market vehicle in the next few years. A scaled-down variant, offering power-consumption/performance trade-offs at around the 20 fps/W level, would be a different matter altogether.
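As a rough sense of scale – back-of-envelope arithmetic under the generous assumption that the keynote’s 20 fps/W efficiency could be held across very different power budgets – here is what various budgets would deliver. Every budget below except the quoted 150W is an illustrative guess of mine, not a published figure.

```python
# Back-of-envelope scaling, assuming the quoted 20 fps/W holds linearly.
# The smaller power budgets are illustrative guesses, not numbers from
# Nvidia or any carmaker.
EFFICIENCY_FPS_PER_WATT = 20

power_budgets_w = {
    "liquid-cooled Drive PX 2 (quoted)": 150,
    "hypothetical air-cooled ECU": 40,
    "hypothetical mass-market module": 15,
}

for name, watts in power_budgets_w.items():
    print(f"{name}: {watts} W -> ~{EFFICIENCY_FPS_PER_WATT * watts:,} classifications/s")
```

The absolute numbers matter much less than the ratio: a part in the tens of watts, if it held anything like that efficiency, would be a very different proposition for a production vehicle than a 150W liquid-cooled box.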
So – for me, the two key words that automotive needs to hear from Nvidia GTC were not to be found in the main automotive sessions at all. And that is all the more reason why you need to hear them.