
Lidar Versus Photogrammetry for Drone Surveys


Remote sensing has become more accessible in recent years due to the rising popularity of drone systems. While platforms such as manned aircraft and observation satellites cover large areas, drones offer flexibility, higher resolution, and lower costs. Drones by themselves are impressive machines, but the sensors attached to them are what turn them into powerful tools for many applications. In this guest blog post we will look at LiDAR versus photogrammetry, identifying five of the major differences between the two when drone mapping.

Before jumping into the differences between these two methods, it is important to understand their similarities:

  • Both LiDAR and photogrammetry sensors collect data remotely
  • Both create point clouds as a first step toward other deliverables
  • Accuracy and precision are comparable using either technique
  • Both methods are used to analyze scenes offsite

Many outputs are similar, although there are several important differences.

Technology and Sensing Methods

LiDAR and photogrammetry differ in how data is collected. LiDAR is an active sensing system, one that generates and records its own energy pulses. Photogrammetry, a passive system, records light that originates from another source, such as the sun, and reflects off the scene.

A LiDAR scanner emits a pulse of laser energy, which reflects off the target and returns to the sensor. Multiplying the elapsed time by the speed of light (and dividing that figure in half, since the energy must travel out and back) allows the system to assign x, y, and z location values to the point in space that reflected the beam. Most of each pulse's energy is scattered or absorbed by features in the scene; only a tiny fraction returns to the sensor. Modern LiDAR systems compensate by emitting hundreds of thousands of pulses per second, allowing top-of-the-line units to detect hundreds of points per square meter. The detected points, each with its x, y, z location values, form the point cloud.
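The range calculation described above is simple enough to sketch in a few lines. The speed of light is a physical constant; the round-trip time used below is an illustrative made-up value, not from any real sensor.

```python
# Convert a LiDAR pulse's round-trip travel time into a one-way range.

SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def pulse_range_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface: elapsed time times the speed
    of light, halved because the pulse travels out and back."""
    return round_trip_time_s * SPEED_OF_LIGHT_M_S / 2

# A surface roughly 75 m below the drone produces a round trip of
# about 500 nanoseconds.
print(round(pulse_range_m(500e-9), 2))  # ~74.95
```

The nanosecond-scale timing is why LiDAR units require very precise clocks: an error of a single nanosecond shifts the computed range by about 15 cm.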

The Greek roots of the word “photogrammetry” combine the words for “light” (photo), “writing” or “drawing” (gram), and “measure” (metria). In its simplest form, it is the process of taking measurements from photographs. Drone photogrammetry uses digital cameras and overlapping images to triangulate x, y, z coordinates in much the same way that our eyes provide us with depth perception. The software also assigns RGB color values to individual pixels, and these pixels become points in the point cloud.
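The triangulation principle behind this can be sketched with the classic simplified stereo (parallax) relation: a point that shifts between two overlapping photos taken a known distance apart can be placed in depth. All numbers below are illustrative, not taken from a real survey, and real photogrammetry software solves a far more general bundle adjustment.

```python
# Simplified two-view parallax depth estimate under the pinhole camera
# model: depth = focal length * baseline / disparity.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a feature seen in two overlapping images.

    focal_px     -- camera focal length expressed in pixels
    baseline_m   -- distance between the two camera positions, in meters
    disparity_px -- how far the feature shifted between the images, in pixels
    """
    return focal_px * baseline_m / disparity_px

# A feature that shifts 40 px between two photos taken 10 m apart with a
# 2000 px focal length sits roughly 500 m from the cameras.
print(depth_from_disparity(2000, 10.0, 40.0))  # 500.0
```

The same relation explains why large image overlap matters in flight planning: more overlap means each ground point appears in more image pairs, giving more disparities to triangulate from.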

A quick note here is that point clouds derived from photogrammetry are automatically colorized (since they originate from pixels) while LiDAR data does not necessarily contain color data. Colorizing a LiDAR point cloud involves synchronizing an additional sensor that captures RGB information and fusing it with the points.

Multiple Returns

The first key difference between LiDAR and photogrammetry is the amount of data stored in the point cloud. Photogrammetry, as already mentioned, contains x,y,z and RGB data. LiDAR files can contain two (or more) extra properties that open new doors to an analyst. One of these is the ability to detect multiple returns, or more than one point per pulse.

It is easy to imagine that a LiDAR system is similar to a laser pointer that sweeps quickly back and forth. The reality is closer to a flashlight: as a LiDAR system emits a pulse, that pulse spreads out to a footprint somewhere in the range of 5-25 cm for most drone LiDAR systems.

Imagine acquiring data over a suburban neighborhood. The scene contains houses, grass, trees, sidewalks, roads, swimming pools, and so on. If part of a beam hits the roof of a house, that part reflects back to the sensor while the rest of the beam continues on its way. When the remaining energy hits the ground and also returns to the sensor, the system records that pulse as having multiple returns. Systems vary in the number of returns they record, with top systems capturing three to seven returns from a single pulse.

Analysts use these multiple returns to generate different deliverables; last returns are especially helpful in developing a bare-earth model of an area even in dense vegetation.
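Filtering a point cloud down to last returns is a natural first step toward that bare-earth model. The sketch below uses hand-made points; in practice the return fields would be read from a .las/.laz file with a point cloud library.

```python
# Keep only "last return" points, the best candidates for ground hits
# under vegetation. Each point carries its return number and the total
# number of returns recorded for its pulse.

from typing import NamedTuple

class LidarPoint(NamedTuple):
    x: float
    y: float
    z: float
    return_number: int      # which return of the pulse this point is (1-based)
    number_of_returns: int  # total returns recorded for the pulse

points = [
    LidarPoint(0.0, 0.0, 12.4, 1, 2),  # first return: tree canopy
    LidarPoint(0.0, 0.0, 0.3, 2, 2),   # last return of same pulse: ground
    LidarPoint(1.0, 0.0, 0.1, 1, 1),   # open ground, single return
]

# A point is a last return when its return number equals the pulse total.
last_returns = [p for p in points if p.return_number == p.number_of_returns]
print(len(last_returns))  # 2
```

Note that a last return is only a candidate for ground: a pulse fully blocked by a dense roof or canopy still ends on that surface, so production workflows apply additional ground-classification filtering afterward.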

Intensity Values

Along with reading multiple returns, a LiDAR sensor also measures the strength of a return. The different materials in a scene absorb and reflect LiDAR energy at different rates. The hard surface of a neighborhood road, for example, returns energy to the LiDAR sensor more strongly than grass in a front yard.

LiDAR systems convert this intensity into a raster that can be used for feature extraction. The output raster has the look of a black-and-white image, which provides a good visual reference for locating features that might otherwise be hard to decipher. Photogrammetry cannot provide such intensity data in its point clouds.
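Producing that grayscale look typically starts with rescaling the raw intensity values into an 8-bit range. The sketch below assumes a simple linear stretch; the sample intensities are invented, since real sensors report intensity in sensor-specific units.

```python
# Linearly rescale raw LiDAR intensity values to 0-255 so they can be
# written into an 8-bit grayscale raster.

def normalize_intensity(values):
    """Map the observed intensity range onto 0-255."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Flat input: every cell gets the same gray value.
        return [0 for _ in values]
    return [round(255 * (v - lo) / (hi - lo)) for v in values]

# Hard surfaces such as asphalt tend to return more energy than grass,
# so they come out brighter in the intensity raster.
samples = [1200, 300, 900, 300]  # e.g. road, grass, sidewalk, grass
print(normalize_intensity(samples))  # [255, 0, 170, 0]
```

Real pipelines often use a percentile stretch instead of min/max so that a few unusually bright returns do not wash out the rest of the raster.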

Photogrammetry Wins When…

It’s easy to sing the praises of LiDAR technology. Photogrammetry does have an important advantage, however, when it comes to visualization of 3D models. Both types of point clouds are used to produce digital surface models (DSM). DSMs on their own provide plenty of data for analysis, but photogrammetry software uses the DSM to calculate distortion in the source images on a pixel-by-pixel basis.

The result is a planimetrically correct orthorectified image (photo map). The process of orthorectification removes distortion such as that from camera angles and elevation differences. LiDAR systems by themselves cannot generate orthophotos.

When price is a factor, startup costs for photogrammetry systems are lower than those for LiDAR. Mid-level LiDAR sensors, drone excluded, easily cost in the tens of thousands of dollars, and they must be carried by relatively large drones, adding tens of thousands more. In contrast, a solid (if not spectacular) photogrammetry-capable drone system can be obtained for only a few thousand dollars.

LiDAR Wins When…

LiDAR has two major advantages over photogrammetry. The first is that its pulses are more capable of detecting the ground. Many kinds of analysis, such as hydrology, require a bare-earth elevation model, which is hard to get with photogrammetry. Second, LiDAR tends to reconstruct thin objects better, such as transmission wires or cell towers.

Photogrammetric point clouds can only produce one point per pixel, and pixels can only be so small. LiDAR points, though, have no such size restriction; points can be bunched more tightly, allowing the system to resolve thin objects well.

Conclusion

As significant as the differences between LiDAR and photogrammetry can be, many industries can benefit from a combination of both methods. Construction managers need both bare-earth elevation models and high-resolution orthophotos to manage their sites. Farmers, too, use bare-earth elevation models to model the flow of water across their fields and orthoimagery to pinpoint crop locations that need extra attention. Both technologies have been adapted to 3D printing and CNC machining.

Ultimately, the decision to use LiDAR or photogrammetry depends on budget and end goal, and the potential applications for both technologies are limited only by human creativity.

Thanks to Doug Walker, Digital Marketing Specialist at Fictiv for this guest blog post.
