This post, from friend and colleague Lewis Graham, President and CTO of GeoCue Corporation, is a rebuttal to a post of mine about upgrading from a camera to a UAS lidar system.
#####
My friend and colleague Dr. Gene Roe recently published a column in his LIDAR News blog entitled “UAS Lidar – Not a Simple Upgrade from Photogrammetry” (published May 3, 2020). Now Gene and I usually agree on remote sensing issues, but on this one I have to take strong objection. I think I can safely do so because we fly drone camera/LIDAR systems on a daily basis and have a good handle on the complexity of both. If you read no further, my observation is this: doing accurate drone LIDAR is considerably easier than doing accurate drone photogrammetry. The operative word here is accurate.
Gene’s article made me think deeply about our own GeoCue experience. Gene tries to make the case that drone-based photogrammetry is pretty easy but drone LIDAR is very hard and fraught with risk. The GeoCue True View 410, our current 3D Imaging System (3DIS), fully provides both technologies – photogrammetry from its dual, oblique 20 MP photogrammetric cameras and LIDAR from its Quanergy M8 Ultra scanner (Figure 1). We have flown literally thousands of missions using drone photogrammetry (particularly with our Loki direct geopositioning system) and hundreds with the True View 410 and other drone LIDAR systems. In every single flight, we do detailed metric accuracy analysis. We know a lot about this technology.
Really thinking through our experience with these systems, I have to say that photogrammetry is, without a doubt, more challenging than LIDAR. Folks who have not used LIDAR might find this statement surprising, so allow me to elaborate.
First of all, I am not talking about getting a decent-looking visualization orthophoto from a drone camera system; this is actually quite easy. You will be successful with a Phantom 4 Pro (yes, they are once again available) and a cloud-hosted processing solution. It will look great! What I am talking about are 3D point clouds derived from photogrammetry (the process is often called Structure from Motion, or SfM, though this is only partially correct) where you are trying to minimize ground control and will do checks for both planimetric and vertical accuracy. An example of where you will get bitten very badly if these things are not exactly right is cut-and-fill volumetrics. Cut and fill requires flights separated in time, and time-separated flights that will be used in differencing operations require exacting attention to network accuracy.
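To see why network accuracy matters so much for cut and fill, consider a minimal sketch of the differencing operation itself. This is not GeoCue's processing pipeline, just an illustration using NumPy with a hypothetical `cut_fill_volumes` helper: any systematic vertical bias between the two flights appears directly as phantom cut or fill spread across every cell of the site.

```python
import numpy as np

def cut_fill_volumes(dem_before, dem_after, cell_size):
    """Cut and fill volumes from two co-registered DEM grids.

    dem_before, dem_after: 2-D elevation arrays on the same grid and
    vertical datum. cell_size: the cell edge length, same linear unit
    as the elevations. Returns (cut, fill) in cubic units.
    """
    dz = dem_after - dem_before            # elevation change per cell
    cell_area = cell_size ** 2
    fill = dz[dz > 0].sum() * cell_area    # material added
    cut = -dz[dz < 0].sum() * cell_area    # material removed
    return float(cut), float(fill)

# Toy example on a 1 m grid: one cell lowered 2 m, one raised 1 m.
before = np.array([[10.0, 10.0], [10.0, 10.0]])
after = np.array([[8.0, 10.0], [11.0, 10.0]])
print(cut_fill_volumes(before, after, 1.0))  # (2.0, 1.0)
```

Note that a mere 5 cm vertical offset between the two epochs, integrated over a hectare, would masquerade as 500 cubic meters of material, which is why time-separated surveys demand tight network accuracy.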
In both cases (photogrammetry and LIDAR) you need a good reference strategy (e.g. a local base station), so that is a wash in the comparison. In both cases you are going to need to understand some geodesy so you won’t get embarrassed delivering ellipsoidal heights when the client wanted orthometric (geoid-based) heights. Again, a wash in the comparison. It is also a good idea to fully understand the nuances of Root Mean Square Error (RMSE) and which system factors contribute to its two constituents, bias and deviation.
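The bias-versus-deviation distinction is worth making concrete, because the remedy differs: bias points to a systematic problem (datum, boresight, base-station coordinates) while deviation points to random noise. A small sketch, using a hypothetical `error_stats` helper rather than any particular package, shows the standard decomposition RMSE² = bias² + deviation² (with the population form of the standard deviation):

```python
import math

def error_stats(errors):
    """Decompose check-point errors into (bias, deviation, RMSE).

    errors: iterable of (measured - reference) values at check points.
    Uses the population (1/n) standard deviation so that
    RMSE^2 = bias^2 + deviation^2 holds exactly.
    """
    errors = list(errors)
    n = len(errors)
    bias = sum(errors) / n                                  # systematic shift
    var = sum((e - bias) ** 2 for e in errors) / n          # random scatter
    rmse = math.sqrt(sum(e * e for e in errors) / n)        # total error
    return bias, math.sqrt(var), rmse

# Example: vertical check-point errors dominated by a ~5 cm shift.
bias, dev, rmse = error_stats([0.04, 0.05, 0.06, 0.05])
print(round(bias, 4), round(dev, 4), round(rmse, 4))  # 0.05 0.0071 0.0505
```

Here the RMSE of about 5 cm is almost entirely bias; chasing it by tightening the sensor or flying lower would be wasted effort when the fix is in the control network.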
For the complete article
Note – If you liked this post click here to stay informed of all of the 3D laser scanning, geomatics, UAS, autonomous vehicle, Lidar News and more. If you have an informative 3D video that you would like us to promote, please forward to editor@lidarnews.com and if you would like to join the Younger Geospatial Professional movement click here.