Tesla’s vision-only autonomous driving system will be powered by a supercomputer with 1.8 EFLOPS


In brief: Tesla is at the forefront of self-driving systems. Currently, Tesla cars use cameras, radar, and ultrasonic sensors to collect the data that helps the system navigate safely, but the carmaker now plans to move to a vision-only system built around cameras and a powerful supercomputer hosting a neural network.

Collecting data for a self-driving system using only cameras instead of radar, LiDAR, and other sensors may seem inferior, but the approach has real benefits: cutting the amount of hardware packed into the vehicle reduces both cost and weight.

Moreover, there’s Elon Musk’s argument on vision vs radar: “When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.” He then added that vision is also faster than radar and LiDAR, concluding that as “vision processing gets better, it just leaves radar far behind.”
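Musk's "double down on vision" argument can be illustrated with a toy policy: when two sensors report distances that disagree beyond some threshold, trust the higher-precision source instead of averaging them. This is a minimal sketch of that idea, not Tesla's actual logic; the function name, threshold, and averaging rule are all hypothetical.

```python
def fused_distance(radar_m, vision_m, disagree_threshold=5.0):
    """Toy illustration of 'trust vision on disagreement' vs. naive fusion.

    radar_m, vision_m: distance estimates in meters from each sensor.
    """
    if abs(radar_m - vision_m) > disagree_threshold:
        # Sensors disagree: fall back to the source assumed more precise.
        return vision_m
    # Sensors roughly agree: a naive fusion just averages them.
    return (radar_m + vision_m) / 2.0

print(fused_distance(40.0, 52.0))  # disagreement: vision wins -> 52.0
print(fused_distance(40.0, 41.0))  # agreement: averaged -> 40.5
```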

For the neural network host, Tesla will use a system called Dojo. Dojo is still under development, but at CVPR 2021, Tesla's head of AI, Andrej Karpathy, revealed the prototype cluster that Dojo will eventually replace. This prototype packs 5,760 GPUs delivering up to 1.8 EFLOPS (exaFLOPS), along with 10 PB of NVMe storage and 1.6 TB/s of connectivity. According to Karpathy, the system would rank fifth on the TOP500 supercomputer list.
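A quick back-of-the-envelope check on those numbers: dividing the quoted aggregate throughput by the GPU count gives the implied per-GPU figure, which happens to line up with NVIDIA A100-class FP16 tensor throughput (~312 TFLOPS). The arithmetic below is just that sanity check, not a spec from Tesla.

```python
# Sanity check: aggregate throughput divided by GPU count.
TOTAL_FLOPS = 1.8e18   # 1.8 EFLOPS, as quoted by Karpathy
GPU_COUNT = 5760

per_gpu_tflops = TOTAL_FLOPS / GPU_COUNT / 1e12
print(f"{per_gpu_tflops:.1f} TFLOPS per GPU")  # 312.5 TFLOPS per GPU
```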

As for the cars, each will be equipped with eight cameras capturing footage at 36 FPS. The collected footage is sent to the supercomputer, where it is processed at a speed said to match that of a human driver.
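Eight cameras at 36 FPS means 288 frames per second per car. To turn that into a data rate you need a sensor resolution and pixel depth, neither of which the article gives; the figures below (1280×960, one byte per pixel) are assumptions for illustration only.

```python
CAMERAS = 8
FPS = 36
WIDTH, HEIGHT = 1280, 960   # assumed resolution (not stated in the article)
BYTES_PER_PIXEL = 1         # assumed raw single-channel capture

frames_per_second = CAMERAS * FPS  # 288 frames/s per car
raw_mb_per_second = frames_per_second * WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e6

print(frames_per_second, "frames/s,", round(raw_mb_per_second), "MB/s raw")
```

Under these assumptions, a single car would generate on the order of 350 MB/s of raw imagery, which is why compression and on-vehicle filtering matter before any footage reaches the training cluster.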

Compared to a human driver, the system offers advantages such as 360° awareness, faster reaction times, and a non-distractible entity controlling the car. Karpathy also mentioned cases where the system will take action, including emergency braking to avoid hitting a pedestrian and warning drivers about traffic lights.

Although the neural network side is still a work in progress, Tesla has already stopped equipping Model 3 and Model Y cars built in North America with radar, relying on cameras alone. As it seems, most of the work in Tesla's new self-driving system is done by the cameras, so the missing neural network isn't crucial for the system to function accurately.
