arxiv:2403.09630

Generalized Predictive Model for Autonomous Driving

Published on Mar 14

Abstract

In this paper, we introduce the first large-scale video prediction model in the autonomous driving discipline. To remove the restriction of high-cost data collection and strengthen the generalization ability of our model, we acquire massive data from the web and pair it with diverse, high-quality text descriptions. The resultant dataset accumulates over 2000 hours of driving videos, spanning areas all over the world with diverse weather conditions and traffic scenarios. Inheriting the merits of recent latent diffusion models, our model, dubbed GenAD, handles the challenging dynamics in driving scenes with novel temporal reasoning blocks. We show that it generalizes to various unseen driving datasets in a zero-shot manner, surpassing both general and driving-specific video prediction counterparts. Furthermore, GenAD can be adapted into an action-conditioned prediction model or a motion planner, holding great potential for real-world driving applications.
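The abstract mentions temporal reasoning blocks that let a latent diffusion model handle dynamics across video frames. The paper itself defines these blocks; purely as an illustrative sketch of the general idea (not GenAD's actual architecture), temporal self-attention over video latents — each spatial location attending to itself across frames — might look like this, with all names and shapes being assumptions for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(latents, W_q, W_k, W_v):
    """Toy self-attention along the time axis of video latents.

    latents: array of shape (T, P, D) — T frames, P spatial patches,
    D channels. Attention is computed independently per patch, so each
    spatial location attends to the same location across all T frames.
    """
    q = latents @ W_q                      # (T, P, D)
    k = latents @ W_k
    v = latents @ W_v
    # Move patches to the batch axis: (P, T, D).
    q, k, v = (np.transpose(a, (1, 0, 2)) for a in (q, k, v))
    scores = q @ np.transpose(k, (0, 2, 1)) / np.sqrt(q.shape[-1])  # (P, T, T)
    out = softmax(scores) @ v                                       # (P, T, D)
    return np.transpose(out, (1, 0, 2))    # back to (T, P, D)

# Toy usage on random latents.
rng = np.random.default_rng(0)
T, P, D = 4, 6, 8
x = rng.standard_normal((T, P, D))
Wq, Wk, Wv = (0.1 * rng.standard_normal((D, D)) for _ in range(3))
y = temporal_attention(x, Wq, Wk, Wv)
```

In practice such a block would sit inside a diffusion U-Net between spatial layers, with learned projections and residual connections; the sketch only shows the time-axis mixing that distinguishes video models from per-frame image models.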

Community

Paper author
edited Aug 17

OpenDV dataset available here: https://github.com/OpenDriveLab/DriveAGI

Our follow-up work:
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
arXiv: https://arxiv.org/abs/2405.17398
Open release: https://github.com/OpenDriveLab/Vista
Video demo: https://vista-demo.github.io/

