Radar has been the go-to sensor in the automotive world for years and powers several advanced driver assistance systems (ADAS): blind-spot monitoring uses radar to detect vehicles before a lane change, adaptive cruise control uses it to maintain a consistent distance between two vehicles on the road, and automatic emergency braking uses it to stop a vehicle before it makes contact with an obstacle. Lidar, however, holds the promise of increased safety.
From an article in Autoweek by Chris Teague.
Lidar promises to improve on those features with more accurate environment mapping and faster updates from its rapid-fire laser pulses. Because it scans a full 360 degrees around the vehicle, lidar should improve the accuracy and quality of safety alerts.
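At its core, lidar ranges by timing laser pulses: a pulse travels out, reflects off a target, and returns, so the distance is d = c·t/2 (half the round trip at the speed of light). A minimal sketch of that conversion, using an illustrative 200 ns return time rather than any specific sensor's output:

```python
# Sketch: how a lidar unit converts pulse time-of-flight into distance.
# d = c * t / 2 -- the pulse travels to the target and back, so halve it.

C = 299_792_458.0  # speed of light in air (approximated as vacuum), m/s

def tof_to_distance(round_trip_s: float) -> float:
    """Convert a round-trip pulse time (seconds) to target distance (meters)."""
    return C * round_trip_s / 2.0

# A return arriving 200 ns after emission corresponds to roughly 30 m.
print(round(tof_to_distance(200e-9), 1))  # -> 30.0
```

Repeating this measurement hundreds of thousands of times per second across many laser channels is what produces the 360-degree point cloud.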
How Lidar Works with AVs
First, it’s important to note that, currently, autonomous or self-driving vehicles don’t exist for sale to everyday consumers. Vehicles such as Teslas or Super Cruise-equipped Cadillacs can be driven hands-free for extended periods of time, but only in extremely limited circumstances, such as on highways and interstates.
When self-driving vehicles do eventually make their way into the wild on a large-scale basis, the amount of data needed and the speed at which it must be collected are staggering. To piece together a decision-making process anywhere near the level of complexity that a human brain can manage, autonomous vehicles need an accurate, real-time picture of the world around them. This is especially true in urban environments, where human drivers encounter other people, animals, and a variety of vehicles in a short period of time.
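The scale of that data can be roughed out from typical spinning-lidar specifications. The numbers below are illustrative, not any particular product's spec sheet: 64 vertical channels, 10 rotations per second, and 0.2-degree horizontal resolution.

```python
# Back-of-envelope estimate of lidar point throughput.
# All parameters are illustrative assumptions, not a specific sensor's specs.

channels = 64          # vertical laser channels
rotation_hz = 10       # full 360-degree sweeps per second
horiz_res_deg = 0.2    # horizontal angular step between firings

points_per_sweep = channels * round(360 / horiz_res_deg)
points_per_second = points_per_sweep * rotation_hz

print(points_per_sweep)   # 115200 points per 360-degree frame
print(points_per_second)  # 1152000 points every second
```

Over a million range measurements per second from a single sensor, each of which must be registered, clustered, and acted on in real time, is what makes the processing burden so heavy.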
What Are the Downsides?
Lidar is considered the standard by many companies working on autonomous vehicles, but the technology is not fully accepted by all automakers. Tesla and its founder Elon Musk have been critical of lidar as the driver for AV awareness, because the technology only re-creates an image of its surroundings rather than capturing a visual representation of what’s going on. Small obstacles in the road are one example: lidar is more than capable of detecting that something in the road needs to be avoided, but it cannot tell exactly what it is looking at. To lidar, a balloon floating in the center of the road looks exactly like a large rock, so a non-threat is sometimes treated with outsized importance while a real threat may go unrecognized. In a vacuum this isn’t a tremendous problem, but in the real world a vehicle that misunderstands what it is looking at is far from ideal.
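The balloon-versus-rock problem follows from what a point cloud actually contains: positions and extents, not material or identity. A hedged sketch of a purely geometric obstacle check (the cluster fields and thresholds are made-up illustrations, not a real perception stack):

```python
# Sketch: a purely geometric obstacle check over clustered lidar returns.
# A cluster carries only shape and position -- there is no "material" or
# "class" field to consult, so a balloon and a rock of the same size are
# indistinguishable and both get flagged.

from dataclasses import dataclass

@dataclass
class Cluster:
    distance_m: float  # range to the cluster centroid
    height_m: float    # vertical extent of the cluster

def is_obstacle(c: Cluster, max_range_m: float = 50.0,
                min_height_m: float = 0.15) -> bool:
    """Flag anything in the lane that is close enough and tall enough."""
    return c.distance_m < max_range_m and c.height_m > min_height_m

balloon = Cluster(distance_m=20.0, height_m=0.4)
rock = Cluster(distance_m=20.0, height_m=0.4)
print(is_obstacle(balloon), is_obstacle(rock))  # -> True True
```

Distinguishing the two requires either appearance information (cameras) or learned classification on top of the geometry, which is precisely the gap the vision-first camp points to.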
Tesla argues, as do others, that a vision-based system of cameras can achieve the same awareness a lidar system brings, with the added assurance that comes from pictures of the actual environment. Tesla’s systems use cameras and learn over time, which makes them better able to deal with unpredictable environments. That capability, combined with the fact that cameras are currently far less expensive than lidar, has led some to question the need for expensive sensors.
Which sensor or camera will prove best for autonomous vehicles is a more complicated question than whether a vehicle can “see.” The tests conducted so far have mostly taken place in limited, somewhat controlled environments that don’t fully represent the conditions an AV might encounter on a daily basis.
For the complete article in Autoweek, CLICK HERE.