One of the most important concepts in surveying is the principle that survey control should be significantly more accurate than the survey it supports. Some say twice as accurate, others say an order of magnitude. In many cases this principle is being violated with 3D laser scanners, where the desired relative positional accuracy is effectively the same as that of the control survey: an eighth to a quarter of an inch, or a few millimeters.
Let’s take a quick look back for some perspective. Some 50 years ago, when I began my surveying career, we used a steel tape, a plumb bob and a one-minute transit. We could measure and lay out one point at a time to an accuracy of plus or minus an eighth to a quarter of an inch at least 95% of the time, provided the crew was experienced and cared about the work.
Most projects involved a closed traverse so that you could determine how accurately your work closed back on the starting point. Sideshots were often taken with stadia, which was accurate to approximately a tenth of a foot. The NGS horizontal control network closure accuracy standards ranged from 1:2,500 for construction, or fourth-order, surveys all the way up to 1:100,000 for first order. Third order, Class I, which required 1:10,000, was the norm for boundary surveys.
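For readers who have not worked with closure ratios, a short sketch makes them concrete: the allowable linear misclosure is simply the total length of the traverse divided by the ratio denominator. The 5,000-foot traverse length below is an assumed example, not from any particular project.

```python
def allowable_misclosure_ft(traverse_length_ft, ratio_denominator):
    """Allowable linear misclosure for a closed traverse at a given closure standard."""
    return traverse_length_ft / ratio_denominator

# Assumed 5,000 ft traverse, evaluated at the standards mentioned above.
for label, denom in (("fourth order (1:2,500)", 2_500),
                     ("third order, Class I (1:10,000)", 10_000),
                     ("first order (1:100,000)", 100_000)):
    print(label, "->", round(allowable_misclosure_ft(5_000, denom), 3), "ft")
```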
The airborne lidar industry has largely been constrained by the fact that its accuracy is limited by the technology, which leaves room for ground control targets to be established with survey control an order of magnitude more accurate. The USGS and ASPRS have developed specifications and standards that support the principle of higher-order survey control for lidar surveys. Some UAV lidar mapping projects, however, are now beginning to make accuracy claims that threaten to violate this principle.
Mobile lidar surveys from vehicle-mounted platforms routinely promise accuracies that are essentially equal to the accuracy of the control. For an excellent discussion of mobile lidar survey accuracy, click here.
What is the practical result of this misuse of survey control? Someone attempting to check the accuracy of the initial survey (which I do not think is being done as a standard practice) will not be able to determine whether the error lies in the original scan or in the control. If both are supposedly plus or minus an eighth to a quarter of an inch, or a few millimeters, it is impossible to tell where the error is coming from.
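To make that concrete, here is a minimal sketch of how independent uncertainties combine in quadrature. The millimeter values are assumed for illustration only: when the control is no better than the scan, a check residual reflects both sources about equally and cannot be attributed to either.

```python
import math

def expected_check_discrepancy(sigma_control_mm, sigma_scan_mm):
    """Combined 1-sigma uncertainty of a scan-versus-control comparison.

    Independent errors add in quadrature, so the discrepancy seen at a
    check point reflects BOTH the control and the scan.
    """
    return math.sqrt(sigma_control_mm**2 + sigma_scan_mm**2)

# Control as accurate as the scan (both ~6 mm, assumed values):
print(expected_check_discrepancy(6.0, 6.0))   # ~8.5 mm -- which side caused it? Unknowable.

# Control an order of magnitude better (0.6 mm vs 6 mm):
print(expected_check_discrepancy(0.6, 6.0))   # ~6.0 mm -- the residual is almost entirely the scan's.
```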
Add to this the issue of 3D accuracy versus 2D. Since laser scanners capture data in three dimensions, shouldn’t we specify their accuracy using the principles of spherical trigonometry rather than 2D circles, or 1D linear and angular measurements? Contributing to the problem is the fact that laser scanner manufacturers continue to force customers to treat their instruments like black boxes. Instead of a 3D accuracy specification, we are given one-dimensional accuracy statements such as horizontal angular error and range error.
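As an illustration of the gap between 1D specifications and a 3D figure, here is a hedged sketch that propagates an assumed range error and assumed angular errors (data-sheet-style numbers, not from any particular instrument) into a single 3D point uncertainty, using the spherical measurement geometry and a small-angle approximation.

```python
import math

def point_sigma_3d(range_m, sigma_range_mm, sigma_h_arcsec, sigma_v_arcsec):
    """Approximate 3D point uncertainty from 1D scanner specs.

    The range error acts along the line of sight; each angular error acts
    transverse to it, scaled by the range (small-angle approximation).
    """
    arcsec_to_rad = math.pi / (180.0 * 3600.0)
    sr = sigma_range_mm
    sh = range_m * 1000.0 * sigma_h_arcsec * arcsec_to_rad  # mm
    sv = range_m * 1000.0 * sigma_v_arcsec * arcsec_to_rad  # mm
    return math.sqrt(sr**2 + sh**2 + sv**2)

# Assumed specs: 2 mm range error, 8 arc-second angular errors.
for r in (10, 50, 150):
    print(r, "m ->", round(point_sigma_3d(r, 2.0, 8.0, 8.0), 1), "mm (3D, 1-sigma)")
```

The point of the sketch is that the 3D figure grows with range even though the quoted 1D specs do not, which is exactly the information a black-box specification hides.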
The interstate highway system was built with a steel tape and a plumb bob. We put a man on the moon with a slide rule. We built the Empire State Building, offshore oil platforms and an incredible number of tunnels with 2D surveying, but there were nationally specified, industry-accepted standards for those surveys. We need the same established procedures for 3D surveys, but unfortunately the laser scanning industry is not willing to invest in developing these standards, and consumers are not willing to demand them.
As measuring technologies have become more integrated and the observing platforms more dynamic, it has become more challenging to evaluate accuracy. Further, if we could look at the software driving these devices we would find functional models that make certain assumptions, as well as statistical models that estimate quality using sampling criteria aimed at productivity. There is nothing inherently wrong with that; however, as the application of a given technology approaches the edges of its underlying models, its accuracy can degrade. Being able to detect changes in performance depends on adequate control.
Recently I have been using a Trimble SX-10 to generate data on structural members of a 1,200’ x 125’ building that is rising out of a 30’ deep hole. It is a very busy site, and each day there are more obstructions to work around. And each day some control is being destroyed; on a bad day it is just disturbed a little, and it can take time to figure that out. At the end of the day, all that matters is that the basement walls are constructed within geometric tolerance. The lidar itself is accurate enough. The logistics of getting scan coverage on the surface of the wall tend to consume most of the error budget. At the core of this situation is Euclid’s geometry. From surveying we know that certain geometric shapes possess properties that can be beneficial to the task at hand, and a bit of projective geometry offers insights into managing the shape of a point cloud. Out of this, some effective rules of thumb can be fashioned to deliver an acceptable practical outcome.
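As an illustration of why coverage geometry, rather than the instrument, eats the error budget, here is a hedged sketch with assumed numbers (not SX-10 specifications): the component of a point’s uncertainty normal to the wall grows as the shot becomes more grazing, because the range error contributes only its cosine while the angular error contributes range times sine.

```python
import math

def sigma_normal_to_wall_mm(range_m, incidence_deg, sigma_range_mm, sigma_angle_arcsec):
    """1-sigma point uncertainty along the wall normal for a single scan point.

    incidence_deg is the angle between the line of sight and the wall normal
    (0 = looking square at the wall, 80+ = grazing shot down the wall).
    """
    theta = math.radians(incidence_deg)
    sigma_transverse = range_m * 1000.0 * sigma_angle_arcsec * math.pi / (180 * 3600)
    along_normal_from_range = sigma_range_mm * math.cos(theta)
    along_normal_from_angle = sigma_transverse * math.sin(theta)
    return math.hypot(along_normal_from_range, along_normal_from_angle)

# Assumed 2 mm range error and 5 arc-second angular error: compare a
# square-on shot at 30 m with a grazing shot 120 m down the wall.
print(round(sigma_normal_to_wall_mm(30, 10, 2.0, 5.0), 2), "mm  (near-normal)")
print(round(sigma_normal_to_wall_mm(120, 80, 2.0, 5.0), 2), "mm  (grazing)")
```

The sketch ignores beam-footprint elongation, which also grows roughly with 1/cos of the incidence angle and makes grazing coverage worse still; hence the rule of thumb of adding setups rather than stretching coverage.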
Most man-made features have geometric tolerances lurking in the background. There is a cost to achieving them, and there is also a cost to ignoring them. When measurements matter, and they usually do, there is a need for a geospatial professional.
Spot on. No pun intended.
This is a cultural problem and a practical problem all wrapped up in one. No one (equipment manufacturers, practitioners, software vendors) wants to say the accuracy of any data is worse than the design capture specs. But as noted, the accuracy is a function of the control, and it should only be reported on the basis of a TEST against higher-order points, not against the control itself. And let’s face it, who is actually testing their data? As also noted, we are talking about millimeters on the design data capture end, so the control (and any check points) would need to be in the sub-millimeter range. At some point there is a convergence where the error budget on the control simply can’t handle it and the whole exercise becomes moot. The bottom line, however, is that error is simply not being reported correctly, and it is likely often being badly underestimated. If the best you can say authoritatively according to current standards of practice (e.g., ASPRS 2014) is that the reportable accuracy of your data is actually 3x worse than the design capture spec (because you measured your control and checks at the same accuracy as the project data), then people will simply stop reporting accuracy according to the standards and default to the manufacturers’ specs. Which is where we are now. And that could become a serious problem.
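For readers who have not run such a test, here is a minimal, hedged sketch of checkpoint-based reporting in the spirit the comment describes. The 1.9600 vertical and 1.7308 horizontal factors follow the 95%-confidence conventions of the ASPRS 2014 positional accuracy standards; the checkpoint residuals are assumed values for illustration only.

```python
import math

def rmse(residuals):
    """Root-mean-square error of a set of residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def report_95(dx, dy, dz):
    """Accuracy at 95% confidence from independent, higher-order check points."""
    rmse_r = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)   # horizontal radial RMSE
    return {"horizontal_95": 1.7308 * rmse_r,           # ASPRS 2014 horizontal factor
            "vertical_95": 1.9600 * rmse(dz)}           # ASPRS 2014 vertical (NVA) factor

# Assumed residuals (scan minus check point), in millimeters.
dx = [2.1, -1.4, 0.8, -2.6, 1.9]
dy = [-1.1, 2.4, -0.6, 1.7, -2.2]
dz = [3.0, -2.2, 1.5, -2.8, 2.4]
print(report_95(dx, dy, dz))
```

The catch, as the comment points out, is that these figures only mean something if the check points themselves are substantially better (commonly cited as at least three times more accurate) than the data being tested; check points measured at the same accuracy as the project data inflate the reported RMSE, and the attribution problem returns.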
Well said, I could not agree more.