Order out of chaos

For more advanced autonomous functions to be developed, artificial intelligence must mature, and it is the Tier 1 suppliers and silicon firms that will make that happen

The industry is keen to develop advanced driver assistance systems to the point where vehicles eventually become fully autonomous. Many vehicles already carry the sensor systems – cameras, radar and lidar – needed to read their surroundings; what isn’t yet in place is the necessary software.

Developing the correct algorithms is only part of the challenge, however. For autonomous functions to reach series-production vehicles they need to be safe, and the best way to ensure that is to use systems that can learn from their surroundings. In many ways, they need to be intelligent.

Firms are beginning to look at deep learning, a branch of machine learning in which algorithms model high-level abstractions in data by passing it through multiple processing layers, each layer building on the representations formed by the one before.

If you show a computer enough images of, for example, different types of vehicles, it will eventually be able to categorise them itself when on the road. So it will know the difference between a sedan and an SUV, or a motorbike and a heavy-goods vehicle, and what risk they might present.
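
As a concrete illustration, the sketch below builds a toy two-layer classifier in plain NumPy. It is nothing like the software Nvidia or Volvo actually run; the images, class names and layer sizes are all invented, but the mechanics are the same: stacked processing layers, trained on labelled examples until the network can categorise new ones on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for labelled camera frames: each "image" is a
# flattened 8x8 patch, and each label is one of four vehicle classes.
CLASSES = ["sedan", "SUV", "motorbike", "heavy-goods vehicle"]
X = rng.normal(size=(400, 64))               # 400 toy training images
y = rng.integers(0, len(CLASSES), size=400)  # their class labels

# Two processing layers, each turning its input into a higher-level
# representation: the "multiple processing layers" of deep learning.
W1 = rng.normal(scale=0.1, size=(64, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, len(CLASSES))); b2 = np.zeros(len(CLASSES))

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)         # layer 1: ReLU features
    logits = h @ W2 + b2                     # layer 2: class scores
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)   # softmax probabilities

# Training: repeatedly show the network labelled examples and nudge the
# weights downhill; real systems do the same at vastly greater scale.
lr = 0.1
for _ in range(200):
    h, p = forward(X)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0        # gradient of loss w.r.t. scores
    grad /= len(y)
    gh = (grad @ W2.T) * (h > 0)             # backpropagate through layer 1
    W2 -= lr * (h.T @ grad); b2 -= lr * grad.sum(axis=0)
    W1 -= lr * (X.T @ gh);   b1 -= lr * gh.sum(axis=0)

# Once trained, the network categorises a new frame on its own.
_, probs = forward(rng.normal(size=(1, 64)))
print(CLASSES[int(probs.argmax())])
```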

Nvidia has been developing the hardware for this approach, and the firm’s GPU processing technology will be used in Volvo’s fully autonomous vehicles when they go on the roads of Gothenburg in Sweden. The computer system is called Drive PX 2.

That GPU-based processing architecture has transformed systems’ ability to cope with the enormous volumes of training data required.

Nvidia’s senior solutions architect, Toru Baji, said: “The training part is very heavy in terms of the burden – the data number is huge. For example, if you want to create a good autonomous driving vehicle it’s in the order of 100 million images for training. The reason is that the car has to be driven in various environments.”

Faster processing

For the car to learn about all the variations an image might have, each source image must be shown many times over, re-exposed, rotated or stretched to a different aspect ratio – a process known as data augmentation.
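
A rough sketch of that augmentation step, using the Pillow imaging library purely for illustration (the frame here is generated rather than captured, and real pipelines apply many more transformations):

```python
from PIL import Image, ImageEnhance

# Stand-in for a single captured camera frame; in practice this would be
# decoded from a real drive recording.
frame = Image.new("RGB", (640, 360), color=(90, 120, 150))

variants = [
    frame.rotate(5, expand=True),                 # tilt a few degrees
    frame.rotate(-5, expand=True),                # tilt the other way
    ImageEnhance.Brightness(frame).enhance(0.6),  # darker exposure
    ImageEnhance.Brightness(frame).enhance(1.4),  # brighter exposure
    frame.resize((640, 300)),                     # squashed aspect ratio
    frame.resize((560, 360)),                     # narrowed aspect ratio
]

# Every source frame yields several training images, one reason the data
# set balloons towards the 100 million images Baji mentions.
print(f"1 frame -> {1 + len(variants)} training images")
```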

That number of images would previously have taken weeks to process on a traditional CPU-based system. Baji gives the example of a 16-core CPU that could cope with 2.5 million images a day, compared with the 43 million a day that GPU systems can process. But faster training is only part of the road to artificial intelligence and autonomous functions.
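
Taking the figures quoted here at face value, and assuming a single pass over a 100-million-image training set, the arithmetic is straightforward:

```python
images = 100_000_000     # training set size quoted by Baji
cpu_per_day = 2_500_000  # 16-core CPU throughput
gpu_per_day = 43_000_000 # GPU-system throughput

print(f"CPU: {images / cpu_per_day:.0f} days")               # 40 days
print(f"GPU: {images / gpu_per_day:.1f} days")               # about 2.3 days
print(f"speed-up: roughly {gpu_per_day / cpu_per_day:.0f}x") # roughly 17x
```

Forty days against little more than two; and since training normally takes many passes over the data, the practical gap is wider still.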

Baji said: “There are three pillars that we provide at Nvidia. The first one is Cuda, a parallel programming environment, which we developed from our supercomputing work. Then there is the deep-learning accelerating software library [cuDNN]. And lastly we have the Digits programming support environment, which helps to execute deep learning efficiently and prepares the training data set.”
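
Cuda itself is a C/C++ environment for Nvidia GPUs, but the core idea behind it, applying one operation to many data elements at once instead of looping over them one by one, can be hinted at even in plain NumPy:

```python
import time
import numpy as np

pixels = np.random.rand(2_000_000).astype(np.float32)  # stand-in image data

t0 = time.perf_counter()
serial = [p * 0.5 + 0.1 for p in pixels]  # one element at a time
t1 = time.perf_counter()
vectorised = pixels * 0.5 + 0.1           # the whole array in one operation
t2 = time.perf_counter()

print(f"element by element: {t1 - t0:.2f}s")
print(f"whole array:        {t2 - t1:.4f}s")
```

On a GPU, that whole-array operation becomes a kernel spread across thousands of cores, which is what delivers the training throughput described above.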
