Autonomous vehicles have the potential to redefine transportation. When fully realized, this technology promises to unlock a myriad of societal, environmental, and economic benefits.
In July 2019, we’re releasing a comprehensive, large-scale dataset featuring the raw camera and lidar sensor inputs perceived by a fleet of high-end autonomous vehicles in a bounded geographic area. The dataset will also include high-quality, human-labeled 3D bounding boxes of traffic agents, an underlying HD spatial semantic map, and a large collection of crowd-sourced imagery collected by camera-equipped ride-sharing vehicles.
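To make the shape of such a dataset concrete, here is a minimal sketch of what one annotated sample might look like. All class and field names below are hypothetical illustrations, not the actual dataset schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Box3D:
    """A human-labeled 3D bounding box for a traffic agent (hypothetical schema)."""
    center: Tuple[float, float, float]  # (x, y, z) in metres
    size: Tuple[float, float, float]    # (width, length, height) in metres
    yaw: float                          # heading angle in radians
    label: str                          # e.g. "car", "pedestrian", "cyclist"

@dataclass
class Sample:
    """One synchronized snapshot of the vehicle's raw sensor inputs."""
    timestamp_us: int                       # capture time in microseconds
    camera_images: Dict[str, str]           # camera name -> image file path
    lidar_points: List[Tuple[float, float, float, float]]  # (x, y, z, intensity)
    boxes: List[Box3D] = field(default_factory=list)

sample = Sample(
    timestamp_us=1_562_000_000_000_000,
    camera_images={"front": "images/front_000001.jpg"},
    lidar_points=[(10.2, -3.1, 0.4, 0.8)],
    boxes=[Box3D(center=(10.0, -3.0, 0.7), size=(1.9, 4.5, 1.6),
                 yaw=0.0, label="car")],
)
print(sample.boxes[0].label)  # -> car
```

A real release would additionally carry sensor calibration and pose information linking each sample to the HD semantic map.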
With this release, we aim to empower the community, stimulate further development, and share our insights into future opportunities from the perspective of an advanced industrial autonomous vehicle program.
Upcoming Tutorials and Competitions
Join us at CVPR for a tutorial covering practical tips for building a Perception & Prediction system for autonomous driving. The tutorial will strike a balance between applied research and engineering.
Monday, June 17th 1:30 - 4:30 PM PT, Room 104C
We’ll review the challenges involved in building a system that must operate without a human driver, and how to push state-of-the-art neural network models into production. Audience members will learn about the different kinds of labeled data needed for Perception & Prediction, and how to combine classical robotics and computer vision methods with modern deep learning approaches.
The performance of real-time perception, prediction, and planning systems can be improved by prior knowledge of the environment, traffic patterns, and expected anomalies. We show how a large-scale fleet of camera-phone-equipped vehicles can help generate those priors and discover infrequent events, increasing overall prediction performance. Finally, we will walk the audience through a set of hands-on sessions on building the basic blocks of a self-driving stack, the challenges involved, and how to use the presented dataset for development and evaluation.
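As a minimal sketch of the idea of fleet-generated priors, the example below aggregates hypothetical crowd-sourced observations into a per-map-cell probability over maneuvers; the cell and maneuver names are invented for illustration, not drawn from the actual system:

```python
from collections import Counter, defaultdict

def build_maneuver_prior(observations):
    """Aggregate fleet observations into a per-cell maneuver prior.

    `observations` is a hypothetical list of (cell_id, maneuver) pairs,
    e.g. drives through an intersection logged by camera-equipped
    vehicles. Returns, per map cell, the empirical probability of each
    maneuver, which a prediction system could consume as prior knowledge.
    """
    counts = defaultdict(Counter)
    for cell_id, maneuver in observations:
        counts[cell_id][maneuver] += 1
    priors = {}
    for cell_id, c in counts.items():
        total = sum(c.values())
        priors[cell_id] = {m: n / total for m, n in c.items()}
    return priors

obs = [("intersection_17", "left_turn"),
       ("intersection_17", "straight"),
       ("intersection_17", "straight"),
       ("intersection_17", "straight")]
print(build_maneuver_prior(obs)["intersection_17"]["straight"])  # -> 0.75
```

The same counting scheme also surfaces infrequent events: maneuvers with very low empirical probability in a cell are candidates for targeted review.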
- Luc Vincent, EVP of Autonomous Technology
- Peter Ondruska, Director of Engineering
- Ashesh Jain, Head of Perception
- Sammy Omari, Head of Prediction & Planning
- Vinay Shet, Director of Product Management
- Perception for autonomous driving
- Prediction for autonomous driving
- Large scale data collection for autonomous driving
- Dataset launch and description
- Hands-on perception and prediction session on dataset
This fall, we’ll host a competition on 3D object detection over semantic maps. Stay tuned: more competition details are on the way!
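3D detection benchmarks are typically scored by the overlap between predicted and ground-truth boxes. As a minimal sketch of that overlap measure, assuming axis-aligned boxes (real benchmarks usually handle yaw-oriented boxes, and the competition's actual metric may differ):

```python
def iou_3d_axis_aligned(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes.

    Each box is a (xmin, ymin, zmin, xmax, ymax, zmax) tuple. This
    axis-aligned version only illustrates the idea behind 3D detection
    scoring; rotated-box IoU requires polygon intersection in the plane.
    """
    # Overlap extent along each axis (zero if the boxes are disjoint)
    dx = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
    dy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
    dz = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0

# Two unit cubes offset by 0.5 along x: intersection 0.5, union 1.5
print(iou_3d_axis_aligned((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1)))
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a fixed threshold, and precision/recall are aggregated from there.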
Sign up to be notified when the dataset becomes available for download.