During this past week I published my 3,000th In the Scan blog post. That’s almost one per day for the past 9 years, but enough about me.
This is really a story about you, the readers. You are the reason that I keep doing this. Lidar News has an international following of 3D subject matter experts that are interested in a wide variety of subjects related to 3D laser scanning and lidar. That’s what keeps things interesting and challenging.
A lot has changed over the past 9 years, but I think we are still very early in the adoption of 3D technology. In a sense we have all been laying the groundwork for what is certain to be a much wider use of lidar technology in the future.
It’s been a fun ride and I hope to continue to earn your interest and support as this industry and community continues to grow and prosper. Thank you for that support and please let us know how we can be of service.
Researchers at Purdue University and Stanford University believe they have developed a novel laser-light sensing technology that is more robust and less expensive than what is currently available, with a wide range of uses including guiding fully autonomous vehicles.
The researchers say their innovation is orders of magnitude faster than conventional leading-edge laser beam steering devices that use phased antenna-array technology. The beam steering being tested at Purdue and Stanford is based on light-matter interaction between a silicon-based metasurface and short light pulses produced, for example, by a mode-locked laser with a frequency-comb spectrum. Such a beam-steering device can scan a large angle of view in nanoseconds or picoseconds, compared with the microseconds current technology requires.
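To put "orders of magnitude faster" in perspective, here is a back-of-the-envelope sketch in Python. The repetition rate and phased-array sweep time are illustrative assumptions of mine, not figures from the researchers; the idea is simply that a comb-based device completes one angular sweep per pulse period.

```python
# Rough comparison of full-field sweep times (hypothetical numbers;
# the article gives only orders of magnitude).
rep_rate_hz = 100e6              # assumed mode-locked laser repetition rate
comb_sweep_s = 1 / rep_rate_hz   # one sweep per pulse period -> 10 ns
phased_array_sweep_s = 10e-6     # assumed microsecond-class phased-array sweep

speedup = phased_array_sweep_s / comb_sweep_s
print(f"comb sweep: {comb_sweep_s*1e9:.0f} ns, speedup ~{speedup:.0f}x")
```

With these assumed numbers the comb approach completes a sweep roughly a thousand times faster, which is consistent with the nanoseconds-versus-microseconds claim above.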
“This technology is far less complex and uses less power than existing technologies,” said Amr Shaltout, a post-doctoral research fellow in Materials Science and Engineering at Stanford who conceived the idea for the method. “The technology merges two different fields of nanophotonic metasurfaces and ultrafast optics.”
Shaltout said the use of photonic metasurfaces was key to the new advance. Metasurfaces, he said, provide simple, compact and power-efficient solutions to photonics design, and combining the two technologies yields a much simpler approach.
We invite you to participate in the needed and important dialogue, Surveying: A Foundation to Sustainable Infrastructure Development. We will have exhibitors, special plenary sessions and keynote speakers, and a special event hosted by Cal Poly Pomona featuring an exciting venue in Southern California!
Reasons to attend:
First ASCE-related surveying conference since 1992!
Thought provoking interdisciplinary education, inspiring and enlightening keynote speakers, and networking opportunities with respected peers and leaders in surveying & geomatics.
Network with your peers, earn up to 18 PDHs (adjustable) and expand your knowledge base to enhance the success of your projects and research.
The focus of this conference will be on surveying engineering, a profession that has not been given its due. It is the first conference under the new ASCE institute – UESI. The Utility Engineering & Surveying Institute offers professionals working within the utility, pipeline engineering, and surveying/geomatics communities the opportunity to network with others and shape the future of the industry by participating in technical activities, conferences, and the development of internationally recognized standards.
Hope to see you in sunny California for what is the first of many important and valuable UESI events.
It’s difficult to use a video to explain 3D point cloud workflows, but this one from the UK does an excellent job of visually demonstrating how tree volumes can be calculated from 3D scanned data. Like many challenges in remote sensing, calculating volume has been one of those difficult-to-achieve goals, but it seems fair to say that if you are willing to put in the time, you can finally get excellent results.
Forest Research, Tampere University of Technology and Université Grenoble Alpes have been working together to optimize measurements from forest sample plots using terrestrial LiDAR in order to produce more accurate, more detailed, timely and harmonized information that can be fed into national and international forest information systems.
This animation shows details of the data processing chain for tree volume and biomass assessments.
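As a rough illustration of the idea behind such workflows, here is a minimal Python sketch that estimates trunk volume from a terrestrial point cloud by slicing it into horizontal layers, fitting a radius to each slice, and stacking cylinders. This is my own simplified sketch, not the processing chain shown in the animation; the real quantitative structure models used by these research groups are far more sophisticated, and the synthetic data here is made up for illustration.

```python
import numpy as np

def trunk_volume(points, slice_h=0.1):
    """Estimate trunk volume by slicing the point cloud into horizontal
    layers, fitting a mean radius to each, and stacking cylinders."""
    z = points[:, 2]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_h):
        sl = points[(z >= z0) & (z < z0 + slice_h)]
        if len(sl) < 3:
            continue
        cx, cy = sl[:, 0].mean(), sl[:, 1].mean()      # slice centroid
        r = np.hypot(sl[:, 0] - cx, sl[:, 1] - cy).mean()  # mean radius
        volume += np.pi * r**2 * slice_h               # cylinder segment
    return volume

# Synthetic check: a 5 m tall, 0.2 m radius cylindrical "trunk".
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5000)
zs = rng.uniform(0, 5, 5000)
pts = np.column_stack([0.2 * np.cos(theta), 0.2 * np.sin(theta), zs])
print(f"estimated {trunk_volume(pts):.3f} m^3 vs true {np.pi*0.04*5:.3f} m^3")
```

The stacked-cylinder approach handles taper naturally because each slice gets its own radius, which is part of why terrestrial lidar outperforms the traditional diameter-at-breast-height approximations.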
Are you looking for a detailed overview of mobile lidar as it applies to transportation applications? Then I have some good news for you. As part of the team that developed NCHRP Report No. 748, we were asked to create this eLearning website to stimulate interest in the technology and to help managers, in particular, come up to speed on the appropriate use of and expectations for mobile lidar.
The website provides access to the Report itself as well as a number of references, such as state and national survey specifications, a detailed literature review and, perhaps most helpful, a series of eLearning modules with test questions at the end of each section.
There is still a long way to go for mobile lidar and 3D technology to become accepted and utilized to the fullest extent in major transportation organizations, but progress is being made. In some respects, collecting the data is the easy part. It’s turning that data into actionable information that is the challenge.
The Fairfield Foundation launched a new digital historic preservation initiative in 2017 using drones, photogrammetry, and 3D printing. At the conclusion of each layer of the dig, the drone was used to capture the exposed objects and terrain. The photos were used to create a digital surface model, which was then 3D printed and assembled to demonstrate how an archaeologist investigates and preserves a historic site.
In this case it was the Fairfield Plantation site in Gloucester County, Virginia. The 1694 manor house has been the focus of excavations since 2000 by a team of professional archaeologists involving hundreds of volunteers each year. The process of excavation, and the historical discoveries made to date, inform and educate everyone involved and the Foundation believes it is crucial that this outdoor classroom and laboratory experience be accessible to all.
They use a DJI Phantom 4 Pro drone to photograph the surface after the completion of every excavated layer. Agisoft PhotoScan transforms the photographs into a highly detailed digital elevation model, creating a virtual landscape alongside an archaeological archive that is far more detailed than standard documentation would produce, but this is only half of the process.
Much of the interest and appeal of archaeology is derived from tactile experiences. 3D printing technology now allows the team to take the individual polygons and print, paint, and assemble them like a 3D puzzle. This can be achieved with older field documentation as well, although some additional manipulation is necessary to compensate for fewer excavation photos and less elevation data.
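The DEM-to-print half of the workflow can be sketched in a few lines of Python: triangulate the height grid and write an ASCII STL that any slicer can print. This is a minimal sketch of the general technique, not the Foundation's actual pipeline (which runs through PhotoScan exports and considerably more processing); the tiny grid and file name are made up for illustration.

```python
import numpy as np

def dem_to_stl(height, cell=1.0, path="layer.stl"):
    """Write a height grid to an ASCII STL surface: two triangles per
    grid cell, vertices at (x, y, elevation)."""
    h, w = height.shape
    with open(path, "w") as f:
        f.write("solid dem\n")
        for i in range(h - 1):
            for j in range(w - 1):
                a = (j * cell, i * cell, height[i, j])
                b = ((j + 1) * cell, i * cell, height[i, j + 1])
                c = (j * cell, (i + 1) * cell, height[i + 1, j])
                d = ((j + 1) * cell, (i + 1) * cell, height[i + 1, j + 1])
                for tri in ((a, b, c), (b, d, c)):  # split cell into 2 triangles
                    f.write(" facet normal 0 0 1\n  outer loop\n")
                    for x, y, z in tri:
                        f.write(f"   vertex {x} {y} {z}\n")
                    f.write("  endloop\n endfacet\n")
        f.write("endsolid dem\n")

# A 2x2 toy elevation grid -> one printable cell (two triangles).
dem_to_stl(np.array([[0.0, 0.1], [0.2, 0.3]]))
```

A real print would also need side walls and a base to make the surface a closed solid, which is exactly the kind of cleanup a slicer or mesh tool handles before printing each excavation layer.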
The result is stunning and the preliminary outreach programs suggest that it not only connects users with the space, but also provokes discussions regarding archaeological methods and interpretation.
In an excellent article in Wired the author points out the mistakes being made in the design and execution of testing programs for self-driving vehicles. He quotes Martyn Thomas CBE, professor of information technology at Gresham College and fellow of the Royal Academy of Engineering, who states that, “In a scientific experiment you have a hypothesis, and you try to prove it. At the moment, all they are doing is conducting a set of random experiments with no structure around what exactly they need to achieve and why these experiments would deliver on those goals.”
“If your system is learning all the time, then it’s changing all the time, and any change could make things worse as well as better. So you could completely undermine the work you are doing,” Thomas explains. All these factors leave the testers with too many variables to effectively work through any problems buried deep in a car’s systems.
This, Thomas says, is the problem with the regulators, who are unwilling to devise the necessary criteria for testing and licensing for fear of discouraging innovative companies from setting up in their jurisdictions.
In response to this accident, Thomas says that the public should put pressure on regulators to set manufacturers appropriate standards for autonomous cars, so the manufacturers can then shape their tests around them and provide the necessary evidence that their technology is safe enough for use on public roads. “If we don’t have a debate about what level of evidence is going to be needed and make the regulations fit for purpose, then I think we’re heading off down the wrong path.”
Furthermore, he wants to see autonomous cars made very easy to spot out on the roads: “if these cars are going to be moving around on streets where there are pedestrians, then the pedestrians need to have a decent chance of realising they are coming and take extra care around them.”
Pix4D recently announced Pix4Dfields, its first fully dedicated product for agriculture. A beta program is now open. The vision for the new product is “To give fast and accurate maps while in the field, with a simple yet powerful interface fully dedicated to agriculture.”
When Pix4D decided to create a fully dedicated product for the agriculture market, they wanted to go beyond research and development by building a product that actually understands agriculture at the customer level. So in July 2017, Pix4D opened a new office in Berlin fully dedicated to doing exactly that: understanding the agriculture industry, listening to users, and creating a product that caters to all the main agricultural practices. The result is Pix4Dfields.
Equipped with a new fast-processing capability that provides accurate and instant results, plus an easy-to-use interface with tools tailored to agricultural workflows, Pix4Dfields gives users what they need to cover everything from simple to complex scenarios.
Pix4Dfields has so far been available only as a closed beta, which Pix4D is now opening to select users to test and provide feedback on. The product is expected to evolve at a fast pace, with new and updated features added in every iteration.
Pix4Dfields is currently available for macOS only. The next iterations will include Windows support as well.
If you would like to join the beta program or find out more ahead of the commercial release click here.
My son and I just spent a few days in Yosemite, Kings Canyon and Sequoia National Parks. Truly spectacular and inspiring locations. We were a little early in the season, but we had the best luck in Yosemite.
In a bit of serendipity, I just came across an announcement that the Yosemite Conservancy is funding some $12.5 million in improvements, which will include better facilities for viewing Bridalveil Fall (188 m drop) as well as an airborne lidar survey of the entire park. The latter will be used to create a 3D model of this incredible location. It is certainly going to present some topographic challenges given the steepness and rapid changes in the terrain.
Yosemite has to be on your bucket list, and I hear Kings Canyon is a close second.
It’s difficult to separate the hype from the innovation when it comes to automotive lidar, but that doesn’t mean we aren’t going to try.
Meanwhile Velodyne is taking the position that the problem in last week’s fatal crash is Uber’s, but more on that in a moment.
At a recent imaging sensors conference Oren Rosenzweig, co-founder of Israeli lidar system maker Innoviz Technologies, said that the cost of today’s lidar sensors is prohibitive and the performance is not good enough. Sounds like an honest man.
Rosenzweig believes that a lidar sensor needs to be able to detect objects 200 meters away, while also sensing small obstacles in the road at an angular resolution of around 0.1 x 0.1 degrees. Perhaps more importantly the cost has to be in the hundreds of dollars, not in the thousands.
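A quick bit of trigonometry shows what that requirement means on the ground; this short Python sketch is simply the arithmetic behind the numbers Rosenzweig quotes, nothing more.

```python
import math

# Lateral footprint of one 0.1-degree angular resolution cell at 200 m --
# the arithmetic behind the detection requirement quoted above.
range_m = 200.0
res_deg = 0.1
footprint_m = 2 * range_m * math.tan(math.radians(res_deg) / 2)
print(f"~{footprint_m:.2f} m per 0.1-degree cell at 200 m range")
```

Each 0.1-degree cell covers roughly 35 cm at 200 m, which is about the scale of the small road obstacles (tires, debris) a sensor needs to resolve at highway stopping distances.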
Innoviz’s technology is a solid-state lidar sensor combining a MEMS scanner based on a micro-mirror designed by the company. The Innoviz One has a 250-meter detection range, an angular resolution of 0.1 x 0.1 degrees, a frame rate of 25 fps, and a depth accuracy of 3 cm. “The device is based on 905nm laser light; 1,550nm would cost too much for the lasers and detectors,” Rosenzweig said.
Now to Velodyne. “Our LiDAR sensor is capable of clearly imaging Elaine and her bicycle in this situation. However, our LiDAR doesn’t make the decision to put on the brakes or get out of her way,” said Marta Hall, who is the wife of David Hall, Velodyne’s CEO, founder and inventor of its spinning, multi-laser beam LiDAR units. “We don’t know what sensors were on the Uber car that evening, if they were working, or how they were being used.”
Even with a low cost, highly accurate lidar sensor, there will still be many issues to deal with. In the end the consumer will decide.