
Vision Systems vs. Lidar for AVs

Graphic of vision systems

Self-driving cars have been a topic of intense discussion and development for several years now. While many companies rely on a range of sensors such as LIDAR, radar, and ultrasonic sensors, some companies insist on a vision-only approach.

From an article by Harvinder Sidhu

What is the LIDAR approach?

Traditionally, autonomous vehicles have been equipped with an array of expensive sensors. LIDAR, short for Light Detection and Ranging, uses lasers to create a 3D map of the car’s surroundings. Radar helps in detecting objects’ speed and distance, and ultrasonic sensors aid in close-range detection. These systems often require a lot of computational power and can significantly drive up the cost of the vehicle.
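At its core, the 3D map LIDAR builds is just geometry: each laser return reports a range and the beam's angles, which can be converted to a point in 3D space. A minimal sketch of that conversion (illustrative only; real sensors add timing, calibration, and motion compensation):

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) into a
    3D point in the sensor's own coordinate frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return 10 m straight ahead at zero elevation maps to (10, 0, 0).
point = lidar_return_to_xyz(10.0, 0.0, 0.0)
print(point)
```

Repeating this for millions of returns per second is what produces the dense point cloud, and part of why the computational load is so high.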

What is the vision approach?

A vision-only approach relies mainly on cameras, coupled with advanced machine learning algorithms, to give the vehicle its “sense” of direction. Cameras serve as the eyes of the car, capturing images that are then processed in real time by the vehicle’s onboard computer. This setup significantly reduces costs and computational requirements.

Tesla is famous for its efforts to crack self-driving with a vision-only approach, but there is also an open-source startup called comma AI with a vision-based system called openpilot; the company also sells the hardware to run openpilot, called the comma 3X in its latest iteration.

How does the computer understand how to drive a car?

By now many of us are familiar with OpenAI’s ChatGPT, a generative AI tool trained on vast amounts of data to help it understand the world. So how does an AI know how to drive a car?

The answer is simple – by watching videos of how cars are driven.

Through this process, the AI driving model learns to recognise patterns and distinguish between a pedestrian, a cyclist, another vehicle, or even road signs and signals.

Once the machine learning model understands the environment, it makes real-time driving decisions. Should the car slow down because a pedestrian is crossing? Should it speed up to merge onto a highway? All of these decisions are made based on the continuous input of visual data and the patterns the model has learned.
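As a rough illustration of that decision step (a toy rule, not Tesla’s or comma’s actual logic; real systems use learned policies, not hand-written conditions), imagine the perception model has already produced a list of detected objects:

```python
def plan_speed(detections, current_speed, merging=False):
    """Toy decision rule: brake for a pedestrian or cyclist in the
    car's path, speed up when merging, otherwise hold speed."""
    hazard = any(
        d["label"] in ("pedestrian", "cyclist") and d["in_path"]
        for d in detections
    )
    if hazard:
        return max(0.0, current_speed - 10.0)  # slow down
    if merging:
        return current_speed + 5.0  # match highway traffic
    return current_speed

detections = [{"label": "pedestrian", "in_path": True}]
print(plan_speed(detections, 50.0))  # 40.0 – the car slows down
```

The real pipeline runs a loop like this many times per second, with each decision driven by the latest camera frames.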

For a human to get better at driving, they need more driving experience. The same goes for the AI model. It improves over time as it is fed more high-quality data.

Where do Comma and Tesla get their driving videos?

Comma AI gathers its training data primarily through crowd-sourcing. When users install Comma AI’s hardware into their cars, the devices collect video data as well as other sensor information while driving. This data is then anonymized and used to improve the machine learning model.
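One common anonymization step in pipelines like this is to strip direct identifiers before clips enter the training set – for example, replacing a device ID with a salted hash so clips can still be grouped without revealing who recorded them. A minimal sketch (hypothetical; Comma’s actual anonymization process is not documented here):

```python
import hashlib

def anonymize_record(record, salt="fleet-salt"):
    """Replace the device identifier with a salted SHA-256 hash so
    video clips can be grouped per device without exposing the ID."""
    out = dict(record)
    digest = hashlib.sha256((salt + record["device_id"]).encode()).hexdigest()
    out["device_id"] = digest[:16]  # truncated pseudonymous ID
    return out

rec = {"device_id": "comma-3x-00042", "clip": "highway_merge.mp4"}
print(anonymize_record(rec)["device_id"])
```

The same device always maps to the same pseudonym, which is what lets the fleet’s data stay useful for training while the original identifier never leaves the pipeline.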

This allows vision systems to be trained on a diverse range of real-world driving conditions, including different types of roads, traffic situations, and even varying weather. The dataset also grows and refreshes naturally, which means the model is continuously refined as more users participate and share their data.

For the complete article on vision systems CLICK HERE.



