Sensor Fusion Annotation

Are you looking for a way to jump-start your machine learning perception project?

Scale claims to deliver a sensor-independent API that provides a number of powerful object recognition functions. It works with cameras, lidar, and radar to accelerate the development of perception algorithms for autonomous vehicles.
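To make the idea of a sensor-independent API concrete, here is a minimal sketch of what bundling time-synchronized frames from several sensor types into a single annotation task might look like. The field names and structure are purely illustrative assumptions, not Scale's actual API schema.

```python
# Hypothetical sketch: a sensor-independent annotation task that bundles
# camera, lidar, and radar frames under one labeling request. Field names
# are illustrative, not Scale's actual API schema.
import json

def build_annotation_task(task_id, frames, label_classes):
    """Bundle time-synchronized sensor frames into one labeling task."""
    return {
        "task_id": task_id,
        "type": "sensor_fusion_annotation",
        "label_classes": label_classes,
        # Each frame carries its sensor modality so downstream tooling
        # can pick the right viewer (image canvas vs. point-cloud viewer).
        "frames": [
            {"sensor": sensor, "uri": uri, "timestamp": ts}
            for (sensor, uri, ts) in frames
        ],
    }

task = build_annotation_task(
    "task-001",
    [
        ("camera", "s3://bucket/cam_front/000001.jpg", 1700000000.00),
        ("lidar", "s3://bucket/lidar/000001.pcd", 1700000000.05),
        ("radar", "s3://bucket/radar/000001.bin", 1700000000.02),
    ],
    ["car", "pedestrian", "cyclist"],
)
print(json.dumps(task, indent=2))
```

Keeping the modality on each frame, rather than in the task type, is what lets one request format cover all three sensors.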

Dozens of automobile OEMs and self-driving car companies (such as GM Cruise and Voyage) already use Scale API’s comprehensive Image Annotation APIs to produce premium training datasets for their computer vision algorithms.

Scale API leverages machine learning, statistical models and human-generated data to deliver “…object recognition, capable of accurately analyzing millions of camera images, LIDAR frames, and RADAR data each month.”

The combination of human and artificial intelligence yields rigorously tested training data that helps autonomous vehicles learn more quickly to navigate independently while accurately identifying road markers, vehicles, and other objects in real time.
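One common way such human-plus-machine pipelines are organized (a general pattern, not a description of Scale's internal implementation) is to auto-accept high-confidence model detections and route the rest to human annotators. The threshold and field names below are assumptions for illustration.

```python
# Illustrative human-in-the-loop routing: auto-accept detections the model
# is confident about, and queue the uncertain ones for human review.
def route_detections(detections, auto_accept_threshold=0.95):
    """Split detections into auto-accepted labels and a human-review queue."""
    auto, review = [], []
    for det in detections:
        if det["score"] >= auto_accept_threshold:
            auto.append(det)
        else:
            review.append(det)
    return auto, review

detections = [
    {"label": "car", "score": 0.99},
    {"label": "pedestrian", "score": 0.62},
    {"label": "lane_marker", "score": 0.97},
]
auto, review = route_detections(detections)
print(len(auto), len(review))  # prints "2 1"
```

The human-corrected labels can then feed back into model training, which is what lets the machine side take on a growing share of the work over time.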

It would seem that this combined approach is also applicable to the initial extraction of features from lidar data when creating an HD base map.
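As a toy sketch of one such feature-extraction step, the snippet below separates likely ground points from a lidar point cloud with a simple height threshold, a crude stand-in for the plane-fitting and lane-marker extraction a real HD mapping pipeline would use. The threshold values are assumed for illustration.

```python
# Toy lidar feature extraction: classify (x, y, z) points as "ground" if
# their height is close to an assumed road surface, else as "elevated".
# Real HD mapping pipelines use robust plane fitting (e.g. RANSAC) instead.
def split_ground_points(points, ground_z=-1.5, tolerance=0.2):
    """Partition points by distance from the expected road-surface height."""
    ground, elevated = [], []
    for x, y, z in points:
        if abs(z - ground_z) <= tolerance:
            ground.append((x, y, z))
        else:
            elevated.append((x, y, z))
    return ground, elevated

cloud = [
    (1.0, 2.0, -1.5),  # road surface
    (3.0, 1.0, -1.4),  # road surface
    (2.0, 2.0, 0.3),   # vehicle or sign
    (5.0, 0.0, -1.6),  # road surface
]
ground, elevated = split_ground_points(cloud)
print(len(ground), len(elevated))  # prints "3 1"
```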



1 Response to Sensor Fusion Annotation

  1. surendra jena says:

    I am from India, presently involved in creating high-content, engineering-grade maps of certain roads in the USA from lidar point clouds. My email:
    jena_surendra . I can be contacted for any association.
