Two More Auto Lidar Firms Emerge From Stealth Mode

Autonomous car lidar has to be one of the hottest niche markets for venture capitalists, at least in Silicon Valley. One has to wonder where all of the senior talent is coming from to staff these start-ups and, more importantly, when the bubble is going to burst.

Innovusion, a privately held company based in Silicon Valley, emerged from stealth mode this week and is introducing its groundbreaking, image-grade LiDAR technology with the following industry-leading features and functionality:

1. Resolution: provides near-picture quality, with over 300 lines of resolution and several hundred pixels in both the vertical and horizontal dimensions.
2. Range: detects both light and dark objects at distances of up to 150 meters, allowing cars to react and make decisions at freeway speeds and in complex driving situations.
3. Sensor fusion: fuses raw LiDAR data with camera video in the hardware layer, which dramatically reduces latency, increases computing efficiency and creates a superior sensor experience.
4. Accessibility: enables a compact design that allows easy, flexible integration without impairing vehicle aerodynamics.
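Innovusion has not published its fusion design, but the fusion described in item 3 typically starts by projecting lidar range points into the camera's image plane so each pixel can be tagged with a range. A minimal sketch of that projection step, using an assumed intrinsic matrix and a placeholder extrinsic transform:

```python
import numpy as np

def project_lidar_to_image(points, K, T):
    """Project 3D lidar points (N, 3) into a camera image.

    points: lidar points in the lidar frame (N, 3)
    K:      3x3 camera intrinsic matrix
    T:      4x4 extrinsic transform from the lidar frame to the camera frame
    Returns pixel coordinates (M, 2) plus the indices of the points in
    front of the camera, so lidar ranges can be attached to camera pixels.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    front = cam[:, 2] > 0
    cam = cam[front]

    # Perspective projection through the intrinsics.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, np.flatnonzero(front)

# Toy example: one point 10 m straight ahead of an identity-aligned camera.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
uv, idx = project_lidar_to_image(np.array([[0.0, 0.0, 10.0]]), K, T)
print(uv)   # the point lands on the principal point: [[640. 360.]]
```

Doing this association in the hardware layer, as Innovusion claims, would avoid shuttling both raw streams to a host CPU before they are combined.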

Innovusion’s products leverage components available from mature supply chain partners, enabling fast time-to-market, affordable pricing and mass production.

Ouster Inc., a San Francisco-based hardware startup backed by $27 million in funding, also launched this week from stealth mode to carve out a niche for itself in the crowded LiDAR market.

Ouster offers a compact LiDAR system called OS1. Shaped like a flat cylinder, the device weighs just 250 grams and “matches the resolution of the highest performing automotive LiDAR technology” in the industry, according to the startup. The details on its website suggest that this refers to the HDL-64E model from market leader Velodyne LiDAR Inc.

The only thing hotter than auto lidar is bitcoin.


Posted in Autonomous vehicles, Consumer, Sensors | Leave a comment

An Application Guide for Generating an Enhanced Forest Inventory

Press Release – Airborne Laser Scanning (ALS) data enables the accurate three-dimensional characterization of vertical forest structure. ALS data have proven to be an information-rich asset for forest managers, enabling the generation of highly detailed digital elevation models and the estimation of a range of forest inventory attributes (e.g., height, basal area, and volume). Good practice guidance synthesizes current knowledge from the scientific literature and practical experience to provide non-experts more detailed information about complex topics.

With this Model and Application Guide, the goal is to inform and enable readers interested in using ALS data to characterize, in an operational forest inventory context, large forest areas in a cost-effective manner. This Guide focuses specifically on the data requirements and different modelling approaches associated with implementing an area-based approach to estimate forest inventory attributes using ALS data combined with ground plot measurements.
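The area-based approach the Guide covers pairs per-plot ALS metrics with ground measurements to fit a predictive model, which is then applied wall-to-wall across the gridded ALS coverage. A minimal sketch of that workflow, with entirely illustrative metric and volume values (not from the Guide):

```python
import numpy as np

# Hypothetical per-plot ALS metrics derived from a normalized point cloud:
# mean return height (m), 95th percentile height (m), canopy cover fraction.
X = np.array([
    [12.1, 18.4, 0.71],
    [ 8.3, 13.0, 0.55],
    [15.6, 22.9, 0.83],
    [ 6.2, 10.1, 0.42],
    [10.7, 16.5, 0.64],
])
# Field-measured stem volume for the same plots (m^3/ha), illustrative values.
y = np.array([210.0, 120.0, 310.0, 80.0, 175.0])

# Fit a linear model y = b0 + b1*mean_h + b2*p95 + b3*cover by least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted model is then applied to every grid cell whose ALS metrics are
# computed the same way as the plot metrics.
new_cell = np.array([1.0, 11.0, 17.0, 0.66])
predicted_volume = float(new_cell @ coef)
print(predicted_volume)
```

In practice the Guide's modelling choices (metric selection, transformation, validation) replace this bare least-squares fit, but the plot-to-landscape structure is the same.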

The Guide is not intended to be prescriptive, as forest environments vary considerably and the technology is evolving rapidly. Rather, it is intended to support the reader in making informed decisions regarding the various modelling approaches available. The additional detail provided in this document complements the more general overview in a previous best practices guide published in 2013; together, the two documents offer comprehensive guidelines for generating a forest inventory using ALS data and an area-based approach.


Posted in airborne LiDAR, Environmental, Forestry, Government, remote sensing, Research | Leave a comment

Monitoring Subsea Assets

Press Release – 3D at Depth Inc., a global provider of advanced subsea LiDAR systems and solutions, has partnered with iQ3Connect Inc. (iQ3) to deliver a new data visualization tool to help clients build, maintain, map and monitor subsea assets, environments and resources.

The partnership leverages the unique features of 3D at Depth’s subsea LiDAR data and the power of iQ3’s innovative augmented reality (AR) and virtual reality (VR) platform to transform the value of offshore survey data. Through iQ3’s cloud-based software platform, customers can access 3D at Depth’s subsea LiDAR data via a secure, optimized web-based portal. The technology creates an immersive VR environment with true 1-to-1 3D scale models generated from repeatable, millimetric subsea LiDAR data inputs.

Using a laptop, desktop or smart device, geographically dispersed teams can collaborate within a single VR session and be present in the same scene. iQ3Connect’s underlying technology runs from a computer aided design (CAD) based platform allowing simulation of structures overlaid with subsea infrastructure. The ability to explore and experience 3D content as if you were actually onsite with a digital representation of physical assets or the surrounding environment creates a seamless workflow environment from reality capture to virtual immersion.

Subsea LiDAR 3D data can be viewed and discussed with minimal changes to currently installed geographic information systems (GIS) or big data hosting platforms. From data acquisition through data visualization and analysis, the central challenge for subsea survey programs is connecting actionable data to the right resource at the right time. 3D at Depth’s Subsea LiDAR VR Platform “Powered by iQ3” lowers that barrier, allowing multiple users and key decision makers to “see” the same data together for greater insight and better decisions.

Sounds like a powerful combination.

Posted in 3D Modeling, Augmented reality, virtual reality | Leave a comment

$100 Million in Lidar Data – Free

Lidar data sets are among the data most often used by coastal communities, and NOAA’s Digital Coast provides these data at no cost, saving users time and supporting innovation, particularly by small businesses that likely could not afford the data otherwise.

The 600-plus lidar data sets on Digital Coast cover 550,000 square miles and represent the efforts of many organizations and agencies, such as the U.S. Army Corps of Engineers and NOAA’s National Geodetic Survey. More than 30,000 individuals have downloaded the customized data, with many users working in private-sector engineering firms.

Posted in airborne LiDAR, Data, Government, Mapping, remote sensing | Leave a comment

Solid State Lidar Sensor to Debut at CES 2018

It’s going to take a combination of the right technology and the right business model to become a leading provider of lidar sensors to Tier-1 automotive suppliers. LeddarTech believes it can deliver both.

LeddarTech will be presenting the LeddarCore LCA2 at CES 2018. The company claims this is the industry’s first 3D solid-state LiDAR (SSL) integrated circuit (IC) enabling mass production of automotive LiDARs, and believes this combination will deliver the required technology at an attractive price.

Looks like the consumer electronics industry agrees with them. The LeddarCore LCA2 has been named a CES 2018 Best of Innovation Awards Honoree in two categories: Vehicle Intelligence and Self-Driving Technology, and Embedded Technologies.

Backed by a recent investment of US$101 million from strategic partners, in addition to multiple commercial agreements with key automotive industry players, LeddarTech sets itself apart from other LiDAR suppliers with its unique business model based on patented Leddar technology. Through this approach, LeddarTech delivers its core proprietary LiDAR know-how to Tier-1 manufacturers embedded within LeddarCore ICs, and partners with them to develop custom SSL reference designs that meet the specifications of individual OEMs. The strategic investors, who have also signed commercial agreements with LeddarTech, include Delphi, IDT, Magneti Marelli and Osram.

They are making a strong case.

Posted in Autonomous vehicles, Consumer, Sensors, solid state | Leave a comment

Long Term Monitoring of Ice Loss on the Helheim Glacier

Leigh Stearns, a geologist with the University of Kansas, is working with a RIEGL VZ-6000 ultra long range terrestrial laser scanner, incorporated into an ATLAS (Autonomous Terrestrial Laser Scanning) system to monitor rates of ice loss on the Helheim Glacier in Greenland, a tidewater glacier undergoing large-scale changes due to global climate change.

“LiDAR is an emerging technology for the earth sciences because it produces an incredibly detailed 3-D view of features,” said the KU researcher. “Repeat LiDAR scanning reveals small-scale changes with very high precision. These systems are now used to measure how bridges are sagging, how tectonic faults propagate and now how glaciers flow. The ATLAS systems are unique because they’re designed to scan the glacier terminus every six hours, year-round. That’s not a trivial task when there’s no sunlight in the winter, winds are high and it’s very cold.”
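Repeat scanning reveals change by differencing gridded surfaces from successive epochs. A toy sketch of that idea using synthetic points (not ATLAS data or its actual processing chain):

```python
import numpy as np

def grid_mean_elevation(points, cell=1.0, extent=4.0):
    """Average z per grid cell for points (N, 3) over a square extent."""
    n = int(extent / cell)
    grid = np.full((n, n), np.nan)
    ix = (points[:, 0] / cell).astype(int)
    iy = (points[:, 1] / cell).astype(int)
    for i in range(n):
        for j in range(n):
            sel = (ix == i) & (iy == j)
            if sel.any():
                grid[i, j] = points[sel, 2].mean()
    return grid

# Two synthetic epochs: a flat 100 m surface, then the same surface
# lowered 2 m in one corner cell (standing in for local ice loss).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 4, size=(400, 2))
z1 = np.full(400, 100.0)
z2 = z1.copy()
z2[(xy[:, 0] < 1) & (xy[:, 1] < 1)] -= 2.0

epoch1 = np.column_stack([xy, z1])
epoch2 = np.column_stack([xy, z2])
change = grid_mean_elevation(epoch2) - grid_mean_elevation(epoch1)
print(change[0, 0])   # -2.0 in the lowered cell, ~0 elsewhere
```

Real glacier processing must also co-register the scans and separate ice motion from melt, which this sketch ignores.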

Read the entire article over at the KU website to learn more about the important work Stearns is doing.

From the Riegl Newsroom

Posted in Environmental, Mapping, Research | Leave a comment

What’s the Problem? Bicycles

It turns out that one of the tougher feature identification problems for autonomous vehicles is detecting bicycles in an urban environment. This may not seem like a major problem here in the U.S., but it certainly is in other countries.

A grad student at Northeastern University is trying to better understand the problem, but he is not getting much help from the private sector. Perhaps one of our readers can provide some data.

Detecting bicycles is particularly challenging for a number of reasons. First is their relatively transparent, thin profile. Second is the fact that the profile is constantly changing as the bicycle moves. In addition, bicycles can maneuver quickly in cluttered urban environments, generating inaccurate tracking models and faulty prediction estimates.

Significant work has been done in sensor and algorithm development to solve the bicycle detection, tracking, and prediction problem, yet progress is hampered because datasets and algorithm analyses are not accessible to academic researchers; this information is instead considered proprietary. Of the published work in this field, most approaches use idealized datasets that do not accurately represent real-world conditions, which inflates the reported quality of their results.

To further the development of LiDAR sensors and algorithms, the research paper introduces the first open LiDAR dataset collected in real-world environments. The author presents realistic datasets taken with affordable sensors, along with qualitative performance results of leading algorithms.

Easy access to this dataset and analysis allows researchers and developers to create systems and algorithms that perform in real world scenarios.

Posted in 3D Modeling, Autonomous vehicles | 1 Comment

Solid State HD 4D Lidar

Press Release: TetraVue, the leader in high definition 4D LIDAR™ technology, today announced that KLA-Tencor, Lam Research and Tsing Capital have joined existing investors Robert Bosch Venture Capital GmbH, Samsung Catalyst Fund and Nautilus Ventures in providing additional funding to the company.

TetraVue technology is radically different from current LIDAR approaches, merging the resolution of HD video with range data to enable the industry’s first long-range, 4D motion capture. It is the first 4D camera technology that captures real-time images with depth perception down to each pixel, allowing it to transform markets including autonomous vehicles, machine vision and factory automation.

Today’s digital video cameras accurately capture high-resolution 2D images over time, but are unable to see the depth of objects. Competing LIDAR solutions can detect the depth of a set of points, but at a resolution too low to be discernible as an image. TetraVue cameras uniquely merge digital video with LIDAR technology by capturing multi-megapixel images at up to 30 frames per second with accurate depth for each individual pixel. As a result, a TetraVue camera can process 100x more real-time data describing object location and motion in the surrounding environment.
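The "100x" claim is easy to sanity-check with back-of-envelope arithmetic. The figures below are illustrative assumptions, not TetraVue or Velodyne specifications:

```python
# A spinning automotive lidar returns on the order of 1.3 million range
# points per second (approximate figure for a 64-channel unit).
scanning_rate = 1.3e6

# A flash "4D camera" capturing a 4-megapixel depth image at 30 frames
# per second yields a range sample for every pixel of every frame.
camera_rate = 4_000_000 * 30

ratio = camera_rate / scanning_rate
print(f"{camera_rate:.0f} range samples/s, ~{ratio:.0f}x a scanning lidar")
```

Under those assumed numbers the per-pixel-depth camera produces roughly 90x the range samples of a scanning unit, which is at least the right order of magnitude for the claim.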

“We were impressed with the novel approach that TetraVue brings to solving the high-resolution depth sensing problem currently facing automotive makers who plan to deploy autonomous vehicles,” said David Fisher, Senior VP, Corporate Business Development, KLA-Tencor. “Their uniqueness convinced us to bring both our financial and engineering resources to bear to help them realize the vision.”

“HD video with accurate depth per pixel is the ‘Holy Grail’ of machine vision, with the potential to transform emerging markets ranging from autonomous vehicles to augmented and virtual reality and smart factories,” said Hal Zarem, CEO, TetraVue. “TetraVue technology is unmatched in the industry, capturing all four dimensions, 3D plus time, each in high resolution.”

With more accurate information, autonomous vehicles should be able to make faster and safer driving decisions.

This technology certainly has the potential to be a game changer.

Posted in 3D Modeling, Autonomous vehicles, Consumer | 1 Comment

4th Annual Lidar Art Contest

The Winter Solstice will soon be here in New England. In celebration of the return to the light, Lidar News is hosting the 4th annual Lidar Art contest.

We are looking for images with artistic appeal that ideally relate to the solstice and/or the holidays.

Please send us your submission by December 12th.  We will choose the top 5 entries and then turn the contest back to you, our readers, asking you to vote for your favorite, thereby determining the final winner of this year’s contest.  The winner will be announced as we celebrate the Winter Solstice on December 21.

We can’t wait to see what you have in store for us this year!  Please send JPEG submissions to

Here’s to the Solstice!

The Team at Lidar News

Posted in The Industry | Leave a comment

Apple Research Revealed

In a move that makes one wonder about the intent behind it, two Apple scientists have published a paper revealing their research into improving real-time object recognition. One is an AI researcher and the other specializes in machine learning. They are working on the accurate detection of objects, including those derived from what they refer to as sparse lidar 3D point clouds.

They are using a novel, voxel-based approach that they describe as “removing the need for manual feature engineering by using VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network.”

From the paper, “Our approach can operate directly on sparse 3D points and capture 3D shape information effectively. We also present an efficient implementation of VoxelNet that benefits from point cloud sparsity and parallel processing on a voxel grid.”
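The voxel grouping step that gives VoxelNet its name can be sketched simply; the learned feature encoding and detection head that follow it are the paper's real contribution and are omitted here. A minimal illustration of partitioning a sparse cloud into occupied voxels (parameter values are arbitrary, not the paper's):

```python
import numpy as np

def voxelize(points, voxel_size=0.5, max_points_per_voxel=35):
    """Group a sparse point cloud (N, 3) into occupied voxels.

    Returns a dict mapping voxel index (i, j, k) -> array of the points
    inside it, randomly subsampled when a voxel is over-full (VoxelNet
    caps points per voxel the same way to bound computation).
    """
    idx = np.floor(points / voxel_size).astype(int)
    buckets = {}
    for key, pt in zip(map(tuple, idx), points):
        buckets.setdefault(key, []).append(pt)

    rng = np.random.default_rng(0)
    voxels = {}
    for key, pts in buckets.items():
        pts = np.array(pts)
        if len(pts) > max_points_per_voxel:
            keep = rng.choice(len(pts), max_points_per_voxel, replace=False)
            pts = pts[keep]
        voxels[key] = pts
    return voxels

# Toy cloud with two clusters, so only two voxels are occupied.
cloud = np.array([[0.1, 0.1, 0.1], [0.2, 0.3, 0.2],
                  [5.0, 5.1, 0.0], [5.2, 5.0, 0.1]])
voxels = voxelize(cloud)
print(len(voxels))   # 2 occupied voxels
```

Because only occupied voxels are stored, the cost scales with the points actually returned rather than the full grid, which is the sparsity benefit the authors emphasize.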

The robotics community has been working on this problem for 20+ years. It will be interesting to see if the autonomous vehicle funding can deliver on the performance needed to support highway speed automation.

You can read the full paper here.

Posted in Uncategorized | Leave a comment