Nvidia CEO Jensen Huang was positively giddy as he showed off the company’s next-generation chips at one of the first events of CES 2018.
The processor, called the Drive Xavier, is built for one of the most demanding use cases today: self-driving cars. And as you’d expect, it’s leaps and bounds better than the previous generation of chips, ticking off plenty of “oh wow” boxes on the spec sheet: 9 billion transistors, an 8-core CPU and a new 512-core GPU, capable of 30 trillion operations per second while consuming just 30 watts.
But that wasn’t why Huang was so excited.
Nvidia didn’t just build a powerful system on a chip, it made it compact. The company’s previous generation of self-driving chip tech, the Drive PX 2, was a chunky piece of hardware, weighing several pounds and roughly the size of a bulky laptop. By contrast, the Nvidia Drive Xavier is so light that Huang said he could "barely feel it" during his keynote.
Thanks to its relative lack of bulk, Nvidia's new tech should ease the design burden for carmakers wanting to deploy self-driving tech (read: all of them). Instead of accounting for extra weight and rearranging other systems to accommodate the self-driving engine, carmakers get something closer to plug-and-play.
In turn, that will help with consumer acceptance. Most cars on the road today equipped with self-driving tech look the part, with a bulky camera array on top of the cabin and sensors peppering the chassis. The need for extra equipment may contribute to the choice of car model: Uber and Alphabet’s Waymo both chose SUVs as testbeds for self-driving technology, not compacts.
Nvidia’s skinny self-driving silicon looks to change that. The Xavier is brand new (Huang said it was only finished a couple of weeks ago), but it underpins the theme CES 2018 will be remembered for in driverless-car technology: making it as seamless as possible, especially for consumers.
Toyota kicked things off a few days before CES 2018, revealing its “Platform 3.0” self-driving concept car. While it still boasts a sensor-laden “hat” on top of the cabin, it’s otherwise a very sleek sedan. And you really start to buy into the “intelligent minimalism” Toyota says it was going for when you compare it with Waymo’s design, which looks like a descendant of the Ghostbusters’ Ecto-1 ghostmobile.
Aptiv (formerly Delphi) showed off an austere self-driving car as well. This one actually ditches the top sensor/camera package entirely, instead replacing it with sensors along the front, back and sides of the BMW 5 Series sedan. Lyft is using the cars at CES to show off its vision for incorporating self-driving into ride-sharing, and is letting some regular users in the Las Vegas area take them on rides to some 20 specific destinations. While the rider needs to specify a self-driving ride before they get in, only eagle-eyed bystanders will notice anything special about the car, thanks to the minimalist sensors made possible by Aptiv’s architecture.
Another company at CES is looking to go even further in concealing the telltale signs of a self-driving car. Koito, a Japanese company that makes headlights and tail lights for many of the major automakers, showed off a headlight that integrates LIDAR sensors. Not only would the new light eliminate the need to mount LIDAR on doors and grilles, but it would theoretically better protect the sensors from the elements as well.
Of course, all these changes to make self-driving cars invisible on the roads lead to a natural question: Should self-driving cars be indiscernible from human-piloted vehicles? Drivers and pedestrians may actually want to know if the car they’re reacting to is a robot, at least in these initial years when the public is still learning about (and perhaps a little uneasy with) the tech.
Koito has thought of this, too, and proposes a new kind of light that signifies a car in self-driving mode. Just as a video camera’s red light tells onlookers that it’s on and recording, this lamp would telegraph to everyone on the road that the car is in self-driving mode. (Exactly what it would look like would need to be determined, but I vote for the oscillating red eye that was KITT’s hood ornament on Knight Rider.)
Of course, it’s all well and good to make self-driving cars look like regular ones, but they need to act like human-driven cars as well. That’s relatively easy on a highway, where the variables and the possible responses to them are at a minimum, but on city streets, human personalities and local culture lead to a variety of driving styles (just think of how different driving is in Austin vs. Boston).
Progress is being made on this front, too, and Aptiv gave its driving platform an intentionally “boring” personality for the cars it’s lending to Lyft. Theoretically, you could program a self-driving car for racing or even aggressive driving (though it’s unclear why you’d want to). In any case, the clear trend is to make autonomous cars drive like regular people do — just without the nasty accidents.
We already know self-driving is real, it works, and it’s coming faster than we ever thought. The next step will be to convince the public they have nothing to fear. That means building these robot vehicles so they look like the cars of today and act, well, more human.
Source : http://mashable.com/2018/01/08/ces-2018-self-driving-cars/