
Computer Vision is Key to AI

A point cloud supports computer vision. (Image credit: Wikipedia)

The first human directly killed by AI was Elaine Herzberg. On 18 March 2018, she was crossing a four-lane road with her bicycle when an autonomous vehicle (AV), i.e. one driven by AI, hit her. An AV relies on a complex system of cameras and sensors to make sense of what's around it. In this case, the main failure lay in how the vehicle's LiDAR-based perception system interpreted what it saw.

From an article on SIFY.com by Satyen Bordoloi.

AI cannot think. Or see. Or understand. What we consider 'thinking' is mere mimicry. Thousands of human workers spend months and years training an AI to do what it finally does, in the expectation that once released it'll learn and evolve on its own, something that's proving incorrect. One of the biggest problems in training AI systems, especially in AVs, is how to make onboard computers see. You may think it's easy. But this field, called computer vision, is treacherously tricky.

WHAT IS COMPUTER VISION:

This is basically everything you do to give a digital system the ability to see and interpret the world. ChatGPT, Bard and other generative AI systems are blind. They do not need to see what's around them; they just generate words based on their dataset, training and your query. For Bard or ChatGPT to tell you what the photo behind your chair shows, you'd have to take a snapshot and upload it into their systems, and the AI would analyse it and tell you. Yet it can't tell you how far that photo is from your chair. Or whether the frame is made of wood or metal.

This is fine for Bard or ChatGPT. But if you needed to build a more sophisticated AI system, like an AV or, say, smart glasses for blind people, depth becomes key. The smart glasses a blind user wears have multiple cameras and sensors that not only have to interpret what's in front of her but also tell her how far away the car on the left is, at what speed it's approaching, and whether that leaves her enough time to cross the road.

If she could see, she'd have known without thinking that the car is about 50 meters out, and after watching it for a second she'd have judged its speed well enough to decide whether she could safely cross. All of us make such decisions every moment of our lives without thinking. It is when we try to make AVs 'see' that we realise how complex such small decisions are, and how life-altering they could be.
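That everyday judgement reduces to simple arithmetic once a sensor supplies distances. Here is a minimal Python sketch of the idea: estimate a car's approach speed from two range readings taken a second apart, then check whether the pedestrian's crossing time fits before the car arrives. The function names and every number are illustrative assumptions, not from the article.

    # A sketch of the decision described above: estimate how fast a car is
    # closing from two range readings one second apart, then ask whether a
    # crossing that takes crossing_time_s fits before the car arrives.
    # All names and numbers here are illustrative, not from the article.

    def approach_speed(range_t0_m: float, range_t1_m: float, dt_s: float) -> float:
        """Closing speed in metres per second (positive = approaching)."""
        return (range_t0_m - range_t1_m) / dt_s

    def safe_to_cross(range_m: float, speed_mps: float,
                      crossing_time_s: float, margin_s: float = 2.0) -> bool:
        """True if the car arrives later than the crossing takes, plus a margin."""
        if speed_mps <= 0:  # a stationary or receding car poses no timing problem
            return True
        return range_m / speed_mps > crossing_time_s + margin_s

    # Car seen at 50 m, then 36 m one second later: closing at 14 m/s,
    # so it arrives in about 2.6 s. A 6-second crossing is not safe.
    v = approach_speed(50.0, 36.0, 1.0)
    print(safe_to_cross(36.0, v, crossing_time_s=6.0))  # False

The hard part, of course, is not this arithmetic but producing those range readings reliably in the first place.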

To do this, the AI system would need not only to 'see' but to perceive its surroundings in 3D. This is where LiDAR comes in.

LiDAR - THE EYE OF THE AI:

LiDAR stands for Light Detection And Ranging. It sounds similar to RADAR, Radio Detection And Ranging, because both do similar things: detect the presence, distance and extent of faraway objects. The major difference is their wavelength, and thus what and how much they capture. RADAR uses radio waves and LiDAR uses light waves. RADAR's longer wavelength gives it a wider beam divergence and a longer detection range, while LiDAR's narrower, more focussed beam trades range for far finer detail.
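Both sensors measure distance the same way: time how long a pulse takes to bounce back and halve it, since the pulse travels at the speed of light both ways. The short Python sketch below shows that, plus how beam divergence sets the footprint each sensor paints at range; the divergence figures are ballpark values assumed for illustration, not numbers from the article.

    # Range from time of flight: the pulse goes out and back at the speed of
    # light, so range = c * t / 2. Beam divergence then sets the spot size
    # the sensor illuminates at that range. Divergence values are assumed,
    # ballpark figures: ~3 mrad for a lidar beam, ~5 degrees for a radar one.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def range_from_tof(round_trip_s: float) -> float:
        """Range in metres from a pulse's round-trip time in seconds."""
        return C * round_trip_s / 2.0

    def spot_diameter(range_m: float, divergence_rad: float) -> float:
        """Approximate beam footprint diameter at a given range."""
        return 2.0 * range_m * math.tan(divergence_rad / 2.0)

    r = range_from_tof(667e-9)                            # ~100 m target
    print(round(r, 1))                                    # 100.0
    print(round(spot_diameter(r, 0.003), 2))              # lidar: ~0.3 m spot
    print(round(spot_diameter(r, math.radians(5.0)), 1))  # radar: ~8.7 m spot

The narrow lidar footprint is why a spinning LiDAR can build the dense 3D point clouds an AV's perception system needs, where a radar beam of the same range would smear many objects into one return.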

For the complete article CLICK HERE.

