Real Time Vehicle Detection Framework for AVs

[Figure: diagram of real-time vehicle detection fusing LiDAR and camera]

Real-time vehicle detection is essential for driverless systems. However, single-sensor detection is no longer sufficient in complex, changing traffic environments. This paper therefore combines a camera with light detection and ranging (LiDAR) to build a vehicle-detection framework characterized by multi-adaptability, real-time performance, and robustness.

First, a multi-adaptive, high-precision depth-completion method is proposed to convert the sparse depth map, produced by projecting the LiDAR point cloud into the image plane, into a dense depth map, aligning the two sensors at the data level. Then, the You Only Look Once Version 3 (YOLOv3) real-time object-detection model is applied to both the color image and the dense depth map.
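The paper's depth-completion method is not detailed here, so as a rough illustration of the idea, the sketch below naively densifies a sparse depth map by repeatedly propagating each valid pixel's depth into empty (zero) pixels via neighborhood averaging. The function name `densify_depth` and all parameters are hypothetical stand-ins, not the authors' algorithm.

```python
import numpy as np

def densify_depth(sparse, iterations=5):
    """Toy depth completion: fill empty (zero) pixels with the mean
    of their valid 3x3 neighbors, repeated for a few iterations.
    A crude stand-in for a proper multi-adaptive completion method."""
    depth = sparse.astype(float).copy()
    h, w = depth.shape
    for _ in range(iterations):
        padded = np.pad(depth, 1, mode="edge")
        # stack the 3x3 neighborhood of every pixel -> shape (9, h, w)
        neigh = np.stack([padded[i:i + h, j:j + w]
                          for i in range(3) for j in range(3)])
        valid = neigh > 0
        counts = valid.sum(axis=0)
        # mean over valid neighbors only; 0 where no neighbor is valid
        fill = np.where(counts > 0,
                        (neigh * valid).sum(axis=0) / np.maximum(counts, 1),
                        0.0)
        # keep measured depths, fill only the holes
        depth = np.where(depth > 0, depth, fill)
    return depth

# a single LiDAR return at the center of a 5x5 patch
sparse = np.zeros((5, 5))
sparse[2, 2] = 10.0
dense = densify_depth(sparse)
```

Real completion methods additionally respect depth discontinuities at object boundaries, which this uniform averaging ignores.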

Finally, a decision-level fusion method based on bounding-box fusion and an improved Dempster–Shafer (D–S) evidence theory is proposed to merge the two detection results from the previous step and obtain the final vehicle position and distance. This improves both the detection accuracy and the robustness of the whole framework.
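The core of classical D–S fusion is Dempster's rule of combination. As a minimal sketch, assuming a two-hypothesis frame of discernment ("vehicle" vs. "not vehicle"), the snippet below combines the mass functions from two detectors; the paper's *improved* D–S variant handles conflict differently and is not reproduced here.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment
    using Dempster's rule. Masses are dicts mapping frozenset
    hypotheses to belief mass; conflicting mass is renormalized away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {h: m / norm for h, m in combined.items()}

# hypothetical per-sensor confidences for one detection
V, N = frozenset({"vehicle"}), frozenset({"not_vehicle"})
camera_mass = {V: 0.7, N: 0.3}
lidar_mass = {V: 0.8, N: 0.2}
fused = dempster_combine(camera_mass, lidar_mass)
```

Note how agreement between the two sources pushes the fused vehicle mass above either individual confidence, which is the intuition behind using evidence theory for sensor fusion.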

We evaluated our method using the KITTI dataset and the Waymo Open Dataset, and the results show the effectiveness of the proposed depth completion method and multi-sensor fusion strategy.

Although LiDAR and cameras can each detect objects on their own, each sensor has its limitations [15]. LiDAR is susceptible to severe weather such as rain, snow, and fog, and its resolution is quite limited compared to a camera's. Cameras, in turn, are affected by lighting, detection distance, and other factors. Therefore, the two sensors need to work together to accomplish object detection in complex and changeable traffic environments.

Object-detection methods based on the fusion of camera and LiDAR can usually be divided, according to the stage at which fusion occurs, into early fusion (data-level or feature-level fusion) and decision-level fusion (late fusion) [16].
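In decision-level fusion, each sensor pipeline produces its own detections, which must then be associated before their evidence can be merged. As an illustrative sketch (the greedy IoU matching and threshold below are assumptions, not the paper's exact procedure), one can pair RGB-image detections with depth-map detections by bounding-box overlap:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(rgb_boxes, depth_boxes, thresh=0.5):
    """Greedy decision-level association: pair each RGB detection with
    its best-overlapping depth-map detection above an IoU threshold."""
    pairs, used = [], set()
    for i, a in enumerate(rgb_boxes):
        best, best_iou = None, thresh
        for j, b in enumerate(depth_boxes):
            if j in used:
                continue
            v = iou(a, b)
            if v > best_iou:
                best, best_iou = j, v
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

# one overlapping pair plus one spurious depth-only detection
pairs = match_detections([(0, 0, 10, 10)],
                         [(1, 1, 11, 11), (50, 50, 60, 60)])
```

Matched pairs would then feed a fusion rule (e.g., box averaging plus evidence combination), while unmatched detections can be kept or discarded depending on per-sensor confidence.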



