
Frame-Based Versus Event-Based Lidar Systems

Graphic: Event-Based Lidar vs. Frame-Based

Leading autonomous-vehicle (AV) companies have mostly relied on frame-based sensors such as LiDAR. However, the technology has historically struggled to detect, track, and classify objects quickly and reliably enough to prevent many corner-case collisions, such as a pedestrian suddenly stepping out from behind a parked vehicle.

From an article in Electronic Design by Pär-Olof Johannesson.

Frame-based systems certainly play a part in the advancement of AVs. But considering the harrowing statistics indicating that 80% of all crashes and 65% of all near-crashes involve driver inattention within the three seconds before the event, the industry needs to work toward a faster, more reliable system that addresses safety at short range, where a collision is most likely.

Frame-based sensors have a role, but they shouldn't be a one-man show when it comes to AV technology. Enter event-based sensors: a technology being developed for autonomous-driving systems that promises enhanced safety where LiDAR falls short.

How Does LiDAR Fall Short?

LiDAR, or light detection and ranging, is the most prominent frame-based technology in the AV space. It uses invisible laser beams to scan objects, and its ability to scan and detect them is extremely fast compared to the human eye. However, in the grand scheme of AVs, when it's a matter of life or death, even LiDAR systems with state-of-the-art perception-processing algorithms are adequate only at distances beyond 30 to 40 meters. They don't act nearly fast enough within that range, where a driver is most likely to crash.
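To see why that sub-40-meter zone is so time-critical, here is a minimal back-of-the-envelope sketch (not from the article); the closing speeds and ranges are illustrative assumptions, with no braking modeled:

```python
# Rough time budget before a vehicle reaches an object that appears at a given range.
# Illustrative assumptions: constant closing speed, no braking or evasive action.

def time_to_reach(range_m: float, speed_kmh: float) -> float:
    """Seconds until the vehicle covers `range_m` at `speed_kmh`."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return range_m / speed_ms

for speed in (30, 50, 60):              # typical urban speeds, km/h
    for rng in (30, 40):                # the range where LiDAR is said to struggle
        print(f"{speed} km/h, object at {rng} m -> {time_to_reach(rng, speed):.1f} s to impact")
```

At 60 km/h, an object 30 meters ahead is reached in roughly 1.8 seconds, which is why every hundred milliseconds of perception latency matters.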

Generally, automotive cameras operate at 30 frames per second (fps), which introduces a roughly 33-ms processing delay per frame. Accurately detecting a pedestrian and predicting his or her path takes multiple frames, so the resulting systems can take hundreds of milliseconds to act; yet a vehicle driving at 60 km/h travels roughly 3.3 meters in just 200 ms. In an especially dense urban setting, the danger of that delay is heightened.
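The arithmetic above can be made concrete with a short sketch (illustrative, not from the article); the frame rate, the number of frames needed per detection, and the 60 km/h speed are assumptions taken from the figures quoted in the paragraph:

```python
# Distance a vehicle covers while a frame-based pipeline is still "thinking".
# Assumptions mirror the paragraph: 30 fps cameras and several frames needed
# to confirm a detection and predict a pedestrian's path.

FPS = 30                                # frames per second
FRAME_PERIOD_MS = 1000 / FPS            # ~33 ms per frame

def travel_during_latency(speed_kmh: float, frames_needed: int) -> tuple[float, float]:
    """Return (latency_ms, distance_m) for a given speed and frame count."""
    latency_ms = frames_needed * FRAME_PERIOD_MS
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return latency_ms, speed_ms * latency_ms / 1000

for frames in (3, 6):                   # multiple passes per detection
    latency, dist = travel_during_latency(60, frames)
    print(f"{frames} frames -> {latency:.0f} ms latency, {dist:.1f} m traveled at 60 km/h")
```

Six frames at 30 fps already adds up to 200 ms, during which a vehicle at 60 km/h covers about 3.3 meters, roughly the width of a traffic lane.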

LiDAR, along with today's camera-based computer-vision and artificial-intelligence navigation systems, is subject to fundamental limits on the speed of perception because it relies on this frame-based approach. To put it simply, a frame-based approach is too slow!

For the complete article, CLICK HERE.

Note – If you liked this post click here to stay informed of all of the 3D laser scanning, geomatics, UAS, autonomous vehicle, Lidar News and more. If you have an informative 3D video that you would like us to promote, please forward to editor@lidarnews.com and if you would like to join the Younger Geospatial Professional movement click here.

 
