Civil Maps, a Silicon Valley startup, recently received $6 million in funding from a group that includes Ford for its ability to produce 3D base maps that are much smaller than raw scans, because they contain only the critical navigation information needed for autonomous operation.
The problem is that LIDAR, like your eyeballs, doesn’t notice just the relevant stuff. It sees lane lines and stop signs, sure. But it also records windows on buildings, leaves on trees, garbage cans in driveways. That makes for a cluttered map. “It’s not very usable,” says Civil Maps CEO Sravan Puttagunta.
This is the problem Civil Maps thinks it’s solved. Its software reads all that data and, with the help of machine learning, fishes from a sea of dots the salient points, line strings, and polygons humans see as traffic lights, lane lines, and crosswalks. (LIDAR can actually read signs: it measures the strength of the laser signals coming back, so it can tell the black numbers on a speed limit sign from the more reflective white space.)
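The sign-reading trick rests on return intensity: retroreflective white paint bounces the laser back strongly, while black lettering absorbs it. A minimal sketch of that idea, assuming each LIDAR return carries an x, y, z position plus a normalized intensity (the threshold and data layout here are illustrative, not Civil Maps’ actual pipeline):

```python
import numpy as np

def split_sign_points(points, intensity_threshold=0.6):
    """Partition returns from a sign face into bright background
    vs. dark lettering, using reflected-signal strength.

    points: (N, 4) array of x, y, z, normalized intensity in [0, 1].
    Returns (background, lettering) arrays.
    """
    intensity = points[:, 3]
    background = points[intensity >= intensity_threshold]
    lettering = points[intensity < intensity_threshold]
    return background, lettering

# Four hypothetical returns from a speed-limit sign face.
scan = np.array([
    [1.0, 2.0, 1.5, 0.90],  # white background: strong return
    [1.0, 2.1, 1.5, 0.10],  # black digit: weak return
    [1.1, 2.0, 1.4, 0.85],  # white background
    [1.1, 2.1, 1.4, 0.15],  # black digit
])
background, lettering = split_sign_points(scan)
print(len(background), len(lettering))  # 2 2
```

In a real pipeline the low-intensity points would then be clustered and matched against known digit shapes, but the separation step is where the intensity channel does its work.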
The software uses that data to create a semantic map that includes a definition for each feature. An arrow pointing to the right and sitting between two solid lines is translated for the robot: If you’re in this lane, you must turn right.
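One way to picture such a semantic map entry is as a feature record paired with the driving rule it implies. The structure and names below are assumptions for illustration, not Civil Maps’ actual format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MapFeature:
    """One feature in a hypothetical semantic map."""
    feature_type: str   # e.g. "turn_arrow", "lane_line", "crosswalk"
    geometry: list      # salient points, line string, or polygon vertices
    attributes: dict = field(default_factory=dict)

def rule_for(feature: MapFeature) -> Optional[str]:
    """Translate a painted symbol into an instruction for the vehicle."""
    if feature.feature_type == "turn_arrow":
        direction = feature.attributes.get("direction")
        if direction and feature.attributes.get("between_solid_lines"):
            return f"If you're in this lane, you must turn {direction}."
    return None  # no rule attached to this feature type

# A right-turn arrow painted between two solid lane lines.
arrow = MapFeature(
    feature_type="turn_arrow",
    geometry=[(10.0, 4.2), (10.0, 5.0)],
    attributes={"direction": "right", "between_solid_lines": True},
)
print(rule_for(arrow))  # If you're in this lane, you must turn right.
```

The point of the “definition for each feature” is exactly this pairing: the geometry tells the car where the marking is, and the attached semantics tell it what the marking obliges it to do.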
But in the dash to map the world for self-driving cars, originality may not matter very much. The important thing is moving quickly to scale up, perfect the process, and get the four-wheeled scouts on the road. Now that it’s got a fresh pile of cash, Puttagunta says, Civil Maps is in the race.