In many cases, what people are looking for are surfaces, such as pavement, curbs, and sidewalks, not millions of random 3D points. Manually fitting a surface to a point cloud requires experience and time. VirtualGrid has been working on automating the creation of surfaces and break lines from point clouds, with impressive results.
Their software, VRMesh, is best known for its unique point cloud classification method, extensive feature extraction functions, and accurate triangular meshing capabilities. With this new release, Version 10.0, they have added a Construction Module that enables you to automatically fit polygonal surfaces to noisy point clouds for detecting road edges, curbs, traffic barriers, tunnels, and more.
The new Construction Module contains the following features:
Fit a polygonal curb/edge/pipe to point clouds
Create a polygonal surface by fitting a profile to point clouds
Create break lines automatically
Extract planes from point clouds to create a low-poly building
Remove small parts from point clouds to get desired surface points
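VRMesh's algorithms are proprietary, but the core idea behind features like "extract planes from point clouds" can be illustrated with a classic RANSAC plane fit: repeatedly sample three points, propose a plane, and keep the plane with the most inliers. The sketch below is a minimal, self-contained illustration of that technique on synthetic data, not VRMesh's actual method.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane to a noisy point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Pick 3 random points and form a candidate plane through them.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip it
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Keep the candidate with the most points within `threshold`.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Synthetic test: a noisy horizontal "pavement" plane plus scattered clutter.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, 500),
                          rng.uniform(0, 10, 500),
                          rng.normal(0.0, 0.02, 500)])
clutter = rng.uniform([0, 0, 1], [10, 10, 5], (100, 3))
cloud = np.vstack([ground, clutter])
normal, d, inliers = ransac_plane(cloud, threshold=0.1, rng=1)
print(inliers.sum())  # nearly all of the 500 ground points recovered
```

Production tools add a great deal on top of this (region growing, constrained triangulation, break-line tracing), but the sample-and-vote loop is the standard starting point for pulling surfaces out of noisy scans.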
These surfaces can be exported to MicroStation and AutoCAD to support design calculations and improvements. You can download the software for a free 30-day trial.
This innovative business model – data-as-a-service (DaaS) – is being used in the area around Perth, Australia to offer local governments lidar-derived 3D data via a recurring subscription. This can make a lot of sense from a budget planning perspective while helping to ensure that the data remains current and accessible.
Mapping specialist Pointerra (ASX: 3DP) is partnering with Total Earth Solutions (TES) by signing its first “data marketplace and business partnership” agreement with the Australian multidisciplinary geoscience group.
The deal marks a significant milestone for Pointerra, which has designed a cloud service platform that provides access to very large 3D datasets without the need for high-performance computing.
According to the terms of the deal, TES will use its lidar-equipped aircraft to conduct a local government area survey in and around Perth in WA, producing a highly accurate aerial 3D dataset.
Pointerra will be responsible for managing the raw data obtained by TES and making it available for sale via its cloud platform using a subscription revenue model.
“When we conceived the business model for Pointerra back in 2015 we envisaged a data marketplace where Pointerra customers could offer access to 3D data. We also thought that entrepreneurial and forward-thinking capture companies would seek to leverage our innovative cloud platform for 3D data by owning and selling access to 3D data via a recurring subscription model,” said Ian Olson, Managing Director of Pointerra. The two firms will split the revenue.
This could be a glimpse into the future as this data is needed to support smart cities and driverless vehicles.
This is a tremendous opportunity for entities that are interested in indoor mapping and location-based services. NIST is encouraging public/private partnerships to form and submit grant applications.
PSCR conducted an initial webinar to provide general information and discuss important considerations for the Point Cloud City NOFO. PSCR will now be conducting a second webinar on March 15, 2018 to review the changes in the amendment to the PC2 Notice of Funding Opportunity released on March 9, 2018. Registration is available here.
The previous webinar provided information and guidance for applications, questions regarding eligibility requirements, evaluation and award criteria, the selection process, and general characteristics of a competitive application.
A recording of either or both webinars may be requested by sending an email to PSCR@nist.gov. Please include “Point Cloud City” in the subject line.
This article by Chuck Boyer, Director of Geospatial Solutions at Aerial Services, Inc., is an excellent introduction to the importance of ground control in supporting lidar missions.
When discussing LiDAR accuracy, the most important phrase a LiDAR consultant can end their sentence with is “with proper ground control”. Without proper ground control, the measurable and definable accuracy of LiDAR point cloud data cannot be known. In the capable hands of a professional mapper, accuracy can be inferred, but without ground control it cannot be objectively measured.
Airborne LiDAR sensors fire pulses of light 500,000 or more times per second. These pulses hit the ground, reflect back to the sensor, and are recorded as distances. In a typical mapping mission, billions of points are collected, forming what we refer to as a “point cloud”. Each point in the cloud has an associated 3D coordinate. It is a marvel of modern science that an aircraft moving at high speed can emit and then capture so many points of reflected light every second, and that the origin of each point can be accurately determined.
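The basic geometry behind each of those coordinates can be sketched in a few lines. The example below is a deliberately simplified illustration: it assumes a perfectly known sensor position and a single cross-track scan angle, whereas real systems fuse GNSS/IMU trajectory data, boresight calibration, and atmospheric corrections (which is exactly where ground control earns its keep).

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(sensor_xyz, heading_deg, scan_angle_deg, round_trip_s):
    """Convert one time-of-flight measurement into a ground coordinate.

    Simplified model: known sensor position, flat terrain, one scan angle.
    """
    rng = C * round_trip_s / 2.0          # one-way range from two-way time
    scan = math.radians(scan_angle_deg)
    hdg = math.radians(heading_deg)
    # Decompose the range into vertical and cross-track components,
    # then rotate the cross-track offset into map coordinates.
    down = rng * math.cos(scan)
    cross = rng * math.sin(scan)
    x = sensor_xyz[0] + cross * math.cos(hdg)
    y = sensor_xyz[1] - cross * math.sin(hdg)
    z = sensor_xyz[2] - down
    return (x, y, z)

# A pulse fired straight down (0 degree scan angle) from 1,500 m altitude
# returns after roughly 10 microseconds and lands directly below.
pt = pulse_to_point((0.0, 0.0, 1500.0), heading_deg=90.0,
                    scan_angle_deg=0.0, round_trip_s=2 * 1500.0 / C)
print(pt)  # approximately (0.0, 0.0, 0.0), directly beneath the aircraft
```

Every error in the assumed sensor position or attitude propagates directly into the computed point, which is why independently surveyed ground control points are needed to verify the result.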
Artificial perception pioneer AEye (great name) recently announced it has been awarded foundational patents with numerous claims for its solid state, MEMS-based agile LiDAR and embedded AI technology that are core to AEye’s iDAR™ perception system.
Of all of the automotive lidar start-ups, AEye’s approach to solving the artificial perception problem may hold the most promise, at least on paper.
AEye’s iDAR (Intelligent Detection and Ranging) perception system mimics how a human’s visual cortex focuses on and evaluates potential driving hazards. Using embedded AI within a distributed architecture, iDAR critically and dynamically assesses general surroundings, while applying differentiated focus to track targets and objects of interest. As a scalable, integrated system, iDAR delivers more accurate, longer range, and more intelligent information faster. AEye’s first iDAR-based product, the AE100 artificial perception system, will be available this summer to OEMs and Tier 1s launching autonomous vehicle initiatives.
That is different.
“AEye’s groundbreaking iDAR system is the first to use intelligent data capture to enable rapid perception and path planning,” said Elliot Garbus, former Vice President of Transportation Solutions at Intel. “Most LiDAR systems function at only 10Hz, while the human visual cortex processes at 27Hz. Autonomous vehicles need perception systems that work at least as fast as humans. iDAR is the first and only perception system to consistently deliver performance of at least 30-50Hz. Better quality information, faster. This is a game changer for the autonomous vehicle market.”
I have seen a requirement recently for two to three times the speed of human recognition in order to get humans to feel safe about adopting the new technology. The human brain is a beautiful thing.
Airscope, a Perth, Australia-based inspections and asset visualization company, uses the Intel Falcon 8+ drone to capture highly accurate images to create 3-D models. The Intel Falcon 8+ drone is a multirotor-style drone that, through pre-programmed flight plans, is able to capture hundreds of aerial images per flight. (Credit: Intel Corporation)
If you think that Intel is just pursuing promotional events, like the recent Olympics, with their Drone Team, then you will likely be interested to know that they are actually selling UAVs and working with partners to demonstrate their capabilities. And I bet you thought they were just a chip company.
For example, Intel recently worked with Australia-based Airscope to map a facility off the northwest shelf of Australia with its patented, V-shaped Falcon 8+ system. The purpose was to generate a complete and accurate 3D model of the hydrocarbon processing facility.
“We made the transition to asset visualization because UAV inspection only gave clients a fraction of the story; without context, the full potential of images captured cannot be realized,” says Chris Leslie, Airscope’s director and a certified commercial airline pilot. “So now we create a virtual canvas of the entire site using airborne photogrammetry, ground photogrammetry and laser scanning. Once the virtual canvas is created, you can paint any operational or planning data on it, to serve as a human medium to access and interact with big data.”
Intel is also offering a fixed wing UAS, a quadcopter and a computer kit that enables you to develop your own UAS platform. They certainly have the resources to become a major player in this market.
Mike Tully, CEO of Aerial Services, Inc. has written an important introduction to the topic of positional accuracy. Here’s a preview of the article.
Since the FAA Part 107 rules were released, the future of UAS commercial data collection has been looking up. Although the size of the UAS market under Part 107 is a mere fraction of what it will become once beyond visual line of sight (BVLOS) flying becomes a reality, there are already some mapping applications well suited to drones, and new applications tailored for drones are being developed.
It goes without question that drones have dramatically lowered the barriers to entry into remote sensing and mapping. Increasing numbers of new practitioners flying drones are unfamiliar with photogrammetry and other principles of remote sensing and mapping. As with any new technology in its “wild west” stage of development, practitioners make unfounded, erroneous assumptions about the positional accuracy of their mapping products.
This article is a discussion about the principles of remote sensing and mapping and how they apply to drones. Specifically, it will answer the question “What is positional accuracy and how do I know when I have it with my UAS map products?”
When asked about the idea behind creating the API, Wang responded:
“The largest bottleneck for the development of high-performing perception algorithms is access to high quality labeled data for training. With the launch of Scale’s Sensor Fusion API, we’re delivering the only solution that’s able to handle full sensor fusion labeling in 3D, which is extremely valuable for any autonomous vehicle or robotics company.
It’s very easy to send data to Scale API automatically using our Sensor Fusion and Image Annotation APIs. The data will then be populated for our customers automatically via callbacks. Some of our customers have hooked up their integrations so that once a disengagement occurs on one of their vehicles, that data gets automatically sent to us to label. Once the data is sent back, a trigger signals the retraining of the algorithms. Companies like Voyage and Embark have been waiting for this technology and are incredibly excited that we can partner with them to provide it.”
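The disengagement-to-retraining loop described in the quote is essentially an event-driven pipeline. The sketch below illustrates that flow in the abstract; every class and function name here is hypothetical, and it does not reproduce Scale's actual Sensor Fusion API endpoints or payloads, which are defined in Scale's own documentation.

```python
# Illustrative sketch of the loop: disengagement -> submit data for
# labeling -> callback with labels -> retrain. All names are hypothetical
# stand-ins, not Scale's real API.

class LabelingPipeline:
    """Minimal event-driven pipeline: send data out, retrain on callback."""

    def __init__(self, label_service, retrain_fn):
        self.label_service = label_service   # stand-in for an API client
        self.retrain_fn = retrain_fn
        self.pending = []

    def on_disengagement(self, sensor_bundle):
        # A disengagement event triggers an automatic labeling request.
        task_id = self.label_service.submit(sensor_bundle)
        self.pending.append(task_id)

    def on_labels_ready(self, task_id, labeled_data):
        # Callback fired when labeled data comes back; kick off retraining.
        self.pending.remove(task_id)
        self.retrain_fn(labeled_data)

# Fake service and trainer to show the flow end to end.
class FakeLabelService:
    def __init__(self):
        self.counter = 0
    def submit(self, bundle):
        self.counter += 1
        return f"task-{self.counter}"

retrained_with = []
pipe = LabelingPipeline(FakeLabelService(), retrained_with.append)
pipe.on_disengagement({"lidar": "...", "camera": "..."})
pipe.on_labels_ready("task-1", {"boxes": 12})
print(retrained_with)  # [{'boxes': 12}]
```

The design point worth noting is the callback: the vehicle fleet never blocks waiting for labels, and retraining is triggered only when labeled data actually arrives.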
Once again this points out the need for high-accuracy base maps as the starting point for training and navigating. As they like to say in the GIS world – “It’s all about the data.”
It is still very much in the lab experiment stage, but researchers at Stanford believe they have solved one of the two key problems with using reflected laser returns to effectively “see around corners.” Where non-line-of-sight capability could be of real potential value is with autonomous vehicles in city environments.
“Despite recent advances, [non-line-of-sight] imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light,” the research paper abstract reads in part.
“A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3-D structure of the hidden object from the noisy measurements,” said grad student David Lindell, co-author of the paper, in a Stanford news release.
They believe they have solved the time required for the computational part, but they have not tested their algorithm with a commercial lidar sensor operating in the real world. It’s certainly a step in the right direction and hopefully a key piece of the puzzle.
The first thing that gets your attention with the Wingtra is that it supports vertical take-off and landing (VTOL) – impressive.
The VTOL capability allows the WingtraOne to ascend and move like a helicopter. For the mapping mission it transitions into forward cruise flight and matches the endurance and speed of fixed-wing airplanes. In order to land, the WingtraOne switches back to hover flight and descends vertically.
The camera is easily swapped in the unit, which is supplied with WingtraPilot for mission planning, collection and processing. Wingtra claims that ground control is not required with the PPK (post-processed kinematic) option. Most of the research that I have seen says you are better off using at least some ground control, but this is worth investigating.