When we discuss lidar operating principles, we usually refer to the concept of time of flight. This is the operating principle most commonly used in lidar today — but there are other lidar types you might run into from time to time. Here’s a quick rundown.
From a NavVis blog post by Sean Higgins.
Time of flight: You probably know this one already. The sensor fires a laser pulse and measures how long it takes to return (its time of flight). Since the onboard computer knows the speed of light and the laser’s time of flight, it can calculate the distance to the object.
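The calculation described above is simple enough to sketch in a few lines. This is an illustrative example, not code from the article; the function name and the sample timing are assumptions. Note that the measured time covers the round trip, so the distance is half the path the light travels.

```python
# Speed of light in a vacuum, in meters per second.
C = 299_792_458.0

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target from a time-of-flight measurement.

    The pulse travels out and back, so the one-way distance
    is half the total path length.
    """
    return C * round_trip_seconds / 2.0

# A return after ~66.7 nanoseconds corresponds to a target
# roughly 10 meters away.
distance_m = tof_distance(66.7e-9)
```

The nanosecond scale of the example hints at why time-of-flight sensors need very precise timing electronics: a 1 ns timing error already shifts the result by about 15 cm.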
Phase-shift lidar: This type of lidar uses light, just like time-of-flight lidar. The difference is that this system emits an uninterrupted beam rather than pulses, and measures waveforms rather than time to determine distance.
Here’s how: The lidar scanner modulates the beam, reducing and increasing the energy at a constant rate. If you mapped the energy levels over time, the chart would look like a sine wave.
When this energy hits an object, it reflects back to the sensor. By comparing the difference between the original wave and the reflected wave – or their phase difference – the sensor can determine the distance to the object. This is faster than time-of-flight lidar but has weaknesses that we won’t get into here.
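The phase comparison above maps to a standard formula: for a beam modulated at frequency f, a phase difference of Δφ radians corresponds to a distance of c·Δφ / (4π·f). This is a hedged sketch under that textbook relationship — the function name and the 10 MHz example frequency are assumptions, not details from the article.

```python
import math

# Speed of light in a vacuum, in meters per second.
C = 299_792_458.0

def phase_shift_distance(phase_diff_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase difference between the emitted and
    reflected waves of a continuously modulated beam.

    The result is unambiguous only within half a modulation
    wavelength, c / (2 * f); beyond that the phase wraps around.
    """
    return (C * phase_diff_rad) / (4.0 * math.pi * mod_freq_hz)

# With 10 MHz modulation, a quarter-cycle shift (pi/2 radians)
# corresponds to roughly 3.75 meters.
distance_m = phase_shift_distance(math.pi / 2, 10e6)
```

The wrap-around noted in the docstring is one of the weaknesses the article alludes to: a single modulation frequency cannot distinguish a target at d from one at d plus a whole number of half-wavelengths, which is why practical phase-shift scanners often combine multiple modulation frequencies.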
Solid-state lidar: Simply put: Lidar on a microchip.
As I once wrote in an article on the technology, a good way to understand this is to compare lidar sensors to computers. In the early days of computers and lidar, units were big. They were constructed of separate mechanical parts, as well as the transistors, resistors, and capacitors necessary for functioning.
With the advent of the microchip in computing, many of these parts were miniaturized significantly. Solid-state lidar does the same thing for lidar, making it popular for use in applications like robotics and self-driving cars.
Flash lidar: You can think of a flash lidar as a single solid-state lidar chip that acts as both the camera and the flash needed to capture a 3D image.
To capture a frame, every pixel on the chip fires light, which floods the field of view. When the light bounces off objects and returns to the sensor, every pixel also captures 3D data.
In other words, a flash lidar captures its whole field of view with a single pulse, like a snapshot. That’s why Wikipedia helpfully describes it as a camera that captures 3D information instead of colors.
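The snapshot idea can be made concrete with a toy model: treat one flash-lidar frame as a grid of per-pixel round-trip times and convert the whole grid to a depth image at once. This is an assumed illustration, not how any particular sensor's firmware works; the function name and the 2×2 grid are made up for the example.

```python
# Speed of light in a vacuum, in meters per second.
C = 299_792_458.0

def frame_to_depth(return_times):
    """Convert a 2D grid of per-pixel round-trip times (seconds)
    into a depth image (meters), one pixel at a time.

    Every pixel is measured from the same single pulse, which is
    what makes flash lidar a "snapshot" rather than a scan.
    """
    return [[C * t / 2.0 for t in row] for row in return_times]

# A 2x2 "snapshot": nearer surfaces return light sooner.
times = [[66.7e-9, 33.3e-9],
         [66.7e-9, 66.7e-9]]
depth = frame_to_depth(times)  # roughly [[10, 5], [10, 10]] meters
```

The per-pixel arithmetic is the same time-of-flight calculation as before; the distinguishing feature of flash lidar is simply that all pixels share one flash and are read out together, like a camera exposure.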
The complete article on lidar operating principles is available on the NavVis blog.