Automated Feature Extraction – The Search for the Holy Grail

Key Concepts:

  1. Automated feature classification/extraction is not an easy problem to solve.
  2. A pixel by pixel approach is not the way to go.
  3. A multi-sensor, data fusion approach seems to hold more promise.
  4. This is actually a remote sensing, image processing problem

As noted in my last blog, the goal is not to be in the laser scanning business; it is to use this disruptive technology to solve real-world problems more efficiently and, hopefully, with superior 3D results. In this regard, one of the most important challenges facing our industry is automating the classification and extraction of features, or more specifically 3D objects, from the point cloud. As more than one industry expert has told me, this is akin to the “Search for the Holy Grail”.

Most of the approaches to solving this image processing problem, not only in the laser scanning world but in the larger universe of remote sensing, tackle it on a pixel-by-pixel basis. One company I used to work for created an ERDAS Imagine plug-in called the “Sub-Pixel Classifier”. Their theory was that all pixels are “mixed” pixels, and the software attempted to determine the materials that made up each pixel. Since they would not reveal their methodology, the product had limited success.

In a previous blog from my recent trip to the ILMF conference I noted the unique approach being used by Definiens. Instead of interrogating the scene pixel by pixel, they use an iterative approach to develop an understanding of each pixel in context with its surrounding pixels. A key part of their strategy is to make use of data from other sensors in the analysis. This can include orthophotography (including infrared bands) as well as multi- and hyperspectral imagery. I was told they can use any other available image information.
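To make the contrast concrete, here is a minimal sketch of the idea: instead of labeling a cell from its own value alone, look at it in context with its neighbors, and fuse a second sensor into the decision. The grids, thresholds, and class names below are all hypothetical illustrations, not Definiens' actual method.

```python
# Hypothetical example: fuse LiDAR-derived elevation context with image
# intensity to label a grid cell, rather than thresholding one pixel alone.

def neighborhood_mean(grid, r, c):
    """Mean of a cell and its 8 neighbors (edge cells use fewer)."""
    rows, cols = len(grid), len(grid[0])
    vals = [grid[rr][cc]
            for rr in range(max(0, r - 1), min(rows, r + 2))
            for cc in range(max(0, c - 1), min(cols, c + 2))]
    return sum(vals) / len(vals)

def classify(elevation, intensity, r, c):
    """Two 'sensors': LiDAR height context plus an image intensity band.
    Thresholds (3.0 m, 100) are made up for illustration."""
    if neighborhood_mean(elevation, r, c) > 3.0 and intensity[r][c] > 100:
        return "building"
    return "ground"

elevation = [[0, 0, 0], [0, 8, 8], [0, 8, 8]]          # meters above ground
intensity = [[40, 40, 40], [40, 160, 160], [40, 160, 160]]
print(classify(elevation, intensity, 1, 1))  # context says elevated: building
print(classify(elevation, intensity, 0, 0))  # low, dark neighborhood: ground
```

A real system would of course iterate this over the whole scene and learn the decision rules from training data rather than hard-coding them; the point is only that the neighborhood and the second sensor both enter the decision.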

I believe in this multi-sensor, data fusion approach to automating the feature classification and extraction process. I think it is critical to achieving the kind of results that customers have a right to expect, and that it would be of great benefit in advancing the use of laser scanning. A colleague at a large aerial LIDAR mapping firm tells me that the state of the art for extracting planimetrics on a LIDAR project is still heads-up digitizing.

This is fertile ground for software development, and I don’t think we should look to the equipment vendors to create these solutions. At the core, this is a remote sensing/image processing problem, not a laser scanning problem. As customers we need to create the demand for these next-generation solutions, hopefully with a user interface that the average technician can understand.


2 Responses to Automated Feature Extraction – The Search for the Holy Grail

  1. Harold Rempel says:

    I agree with the idea that some type of sensor fusion approach will, in the end, yield the best results with the least amount of error.

A flexible solution must also take into account a host of other issues, and even the extra benefits that LiDAR brings:

1. A good solution must have “what if” options: what if the only available data is the point cloud? Can radar assist? Is there any benefit to having the full point cloud rather than just a surface of classified ground points? Is IR better than color or B&W?

2. Data from the LiDAR point cloud, such as intensity, elevation values, slope calculations, etc., could factor into the algorithms.

3. Will there always be a need for manual intervention to weed out temporal differences in data fusion sources? For example, the imagery could have been flown two years before the LiDAR. Can some intelligence be built into the “Holy Grail” to flag temporal differences?
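On point 2 above, the slope feature is easy to derive once the point cloud has been gridded into an elevation model. A minimal sketch, using a hypothetical 1 m DEM and the standard central-difference slope formula:

```python
# Hypothetical example: derive slope (degrees) from a gridded LiDAR
# elevation model so it can feed a classifier alongside intensity.
import math

def slope_degrees(dem, r, c, cell_size=1.0):
    """Slope at an interior cell via central differences on the DEM."""
    dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell_size)
    dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cell_size)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

dem = [[10, 10, 10],
       [10, 11, 12],
       [10, 12, 14]]  # made-up elevations in meters, 1 m cells
print(round(slope_degrees(dem, 1, 1), 1))  # prints 54.7
```

Stacked next to intensity and raw elevation, a derived layer like this gives an algorithm something closer to the "macro view" discussed below than any single pixel value can.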

I agree that the mainstream approach to research thus far seems to be pixel-based, but a macro view is going to have to come into this. There is research going on to determine the best way to extract 3D buildings from LiDAR but, as you state, there is a lot of focus on just pixel processing, or on just the LiDAR data, instead of a combined approach.

    Lastly, you hit the nail on the head when you stated that we shouldn’t look to the equipment vendors for this. Such software will only come out of demand and from the production floors and universities.
