Waymo Database Access for Academic Research

Academic researchers have now been granted access to a curated slice of the driving data amassed by Waymo's fleet of self-driving cars.

The move is a way to “give back to the community,” says Drago Anguelov, Waymo’s head of research, “not an admission in any way that we have problems” that the company can’t solve on its own. Deadlines for self-driving cars have come and gone, and though Waymo still seems clearly in the lead, fully driverless cars still appear to be many years away.

Waymo’s competitors are free to register and look at the data set, so long as they don’t use it to build commercial self-driving cars.

It took Waymo months to select, annotate, and polish the files, which consist of 1,000 driving sequences, each lasting 20 seconds, for a total of 200,000 frames. The sequences were assembled from the lidar, radar, and camera sensors onboard Waymo vehicles in 25 locations, including busy cities such as San Francisco and suburban areas such as Chandler, Ariz., outside Phoenix.
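
Each sequence ships as a TFRecord file of serialized Frame protos that can be read with the open-source waymo-open-dataset Python package. Here is a minimal sketch of walking one sequence, assuming that package's published layout; the filename is a placeholder:

```python
# Minimal sketch: iterate over the frames of one driving sequence.
# Assumes the waymo-open-dataset package; the filename is hypothetical.
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

FILENAME = 'segment-XXXX_with_camera_labels.tfrecord'  # placeholder path

dataset = tf.data.TFRecordDataset(FILENAME, compression_type='')
for record in dataset:
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(record.numpy()))
    # Each Frame holds one synchronized lidar/camera capture; a
    # 20-second sequence works out to roughly 200 frames (10 Hz).
    print(frame.context.name, frame.timestamp_micros)
```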

Interleaved with all of that imagery are 12 million 3D labels, each one marking an object in a point cloud, the 3D data generated by a lidar sensor. Waymo's cars each carry five lidars: a main unit that sweeps the full 360 degrees and four short-range devices. There are also 1.2 million 2D labels for camera-generated images, which show only the visible parts of a scene and which mesh tightly with the 3D lidar clouds.
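
A short sketch of what those labels look like inside a single frame, again assuming the waymo-open-dataset Frame proto; the field names follow its published schema:

```python
# Hedged sketch: walk the 3D lidar labels and 2D camera labels of one
# frame, assuming the waymo-open-dataset Frame proto layout.
from waymo_open_dataset import dataset_pb2 as open_dataset

def summarize_labels(frame: open_dataset.Frame) -> None:
    # 3D labels: one 7-DOF box (center, size, heading) per object,
    # defined in the lidar point cloud and tracked by a persistent id.
    for label in frame.laser_labels:
        box = label.box
        print(label.id, label.type,
              (box.center_x, box.center_y, box.center_z),
              (box.length, box.width, box.height),
              box.heading)
    # 2D labels: per-camera boxes covering only what that camera sees.
    for camera in frame.camera_labels:
        print(camera.name, len(camera.labels))
```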

That close coupling of lidar and camera data, together with data from other sensors such as radar, is known as sensor fusion. Waymo proudly asserts that it is best at the job in large part because it alone has designed the entire package of hardware and software in-house.
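
In practice, fusing the two modalities means mapping 3D lidar points into each camera's image using per-camera calibration, which the dataset ships. The sketch below uses a textbook pinhole model purely as illustration; it is not Waymo's calibration pipeline, and the parameter names are assumptions:

```python
# Illustrative pinhole projection: map a lidar point from the vehicle
# frame into pixel coordinates. Not Waymo's actual pipeline; the dataset
# provides real extrinsic/intrinsic calibration per camera.
import numpy as np

def project_to_camera(p_vehicle: np.ndarray,
                      cam_from_vehicle: np.ndarray,  # 4x4 extrinsic
                      fx: float, fy: float, cx: float, cy: float):
    """Return (u, v) pixel coordinates, or None if the point lies
    behind the camera and is therefore invisible to it."""
    p = cam_from_vehicle @ np.append(p_vehicle, 1.0)
    if p[2] <= 0.0:
        return None
    return fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy
```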

Among the research problems that such a tight-knit data set may help to solve is “re-identification,” in which a continuously tracked object is recognized again after having been briefly obscured. If, say, a pedestrian walks past a tree and re-emerges on the other side, a system ought to be quick to recognize that it’s the same pedestrian.
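
A toy version of that logic, with all names illustrative rather than drawn from Waymo's stack: keep the lost track's last (or motion-predicted) box and re-associate the first new detection that overlaps it strongly enough.

```python
# Toy re-identification by box overlap: when a tracked object disappears
# behind an occluder, match re-emerging detections against its last known
# (or predicted) box. Illustrative only.

def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0.0 else 0.0

def reidentify(lost_box, detections, threshold=0.3):
    """Return the detection most likely to be the re-emerged object,
    or None if nothing overlaps the lost track well enough."""
    best = max(detections, key=lambda d: iou(lost_box, d), default=None)
    if best is not None and iou(lost_box, best) >= threshold:
        return best
    return None
```

A real tracker would also predict the box forward during the occlusion (with a Kalman filter, for instance) and lean on appearance features, but the association step reduces to a matching problem like this one.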

For the complete article, click here.

Not to be left out, Lyft announced this week:

“We’re thrilled to share a comprehensive, large-scale dataset featuring the raw sensor camera and LiDAR inputs as perceived by a fleet of multiple, high-end, autonomous vehicles in a bounded geographic area. This dataset also includes high quality, human-labelled 3D bounding boxes of traffic agents, an underlying HD spatial semantic map.”

