Causal Imitative Model for Autonomous Driving


Mohammad Reza Samsami (1)    Mohammadhossein Bahari (2)    Saber Salehkaleybar (1)    Alexandre Alahi (2)

(1) Sharif University of Technology    (2) Ecole Polytechnique Fédérale de Lausanne

        [Preprint]         [GitHub Code]



Imitation learning is a powerful approach for learning autonomous driving policies by leveraging data from expert driver demonstrations. However, driving policies trained via imitation learning that neglect the causal structure of expert demonstrations exhibit two undesirable behaviors: inertia and collision. In this paper, we propose the Causal Imitative Model (CIM) to address the inertia and collision problems. CIM explicitly discovers the causal model and utilizes it to train the policy. Specifically, CIM disentangles the input into a set of latent variables, selects the causal variables, and determines the next position by leveraging the selected variables. Our experiments show that our method outperforms previous work in terms of inertia and collision rates. Moreover, by exploiting the causal structure, CIM shrinks the input dimension to only two and can therefore adapt to new environments in a few-shot setting.
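
To make the three-step pipeline in the abstract concrete, the sketch below shows one way the steps could be wired together in PyTorch: encode the observation into disentangled latents, keep only the latents treated as causal, and predict the next position from them. The class name, layer sizes, and the fixed causal indices are illustrative assumptions standing in for the paper's causal-discovery step, not the released implementation.

```python
import torch
import torch.nn as nn

class CausalImitativeModel(nn.Module):
    """Minimal sketch of the CIM pipeline described in the abstract:
    (1) disentangle the observation into latent variables,
    (2) select the latents assumed to be causal (a fixed index mask here,
        standing in for the paper's explicit causal-discovery step),
    (3) predict the next position from the selected latents.
    Sizes and names are illustrative assumptions.
    """

    def __init__(self, obs_dim=64, latent_dim=8, causal_idx=(0, 1)):
        super().__init__()
        # Encoder mapping raw observations to a latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim)
        )
        # Indices of the latent variables treated as causal (assumed known here).
        self.register_buffer("causal_idx", torch.tensor(causal_idx))
        # Policy head: predicts the next (x, y) position from the causal latents,
        # so the effective policy input dimension is only two.
        self.policy = nn.Linear(len(causal_idx), 2)

    def forward(self, obs):
        z = self.encoder(obs)                            # disentangled latents
        z_causal = z.index_select(-1, self.causal_idx)   # causal subset (dim = 2)
        return self.policy(z_causal)                     # next position

# Usage: a batch of 4 dummy observations -> predicted next positions.
model = CausalImitativeModel()
positions = model(torch.randn(4, 64))
print(positions.shape)  # torch.Size([4, 2])
```

Because only the small causal subset feeds the policy head, adapting to a new environment in a few-shot setting would amount to fine-tuning the low-dimensional head rather than the full encoder.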


BibTeX

@misc{samsami2021causal,
   title={Causal Imitative Model for Autonomous Driving}, 
   author={Mohammad Reza Samsami and Mohammadhossein Bahari and Saber Salehkaleybar and Alexandre Alahi},
   year={2021},
   eprint={2112.03908},
   archivePrefix={arXiv},
   primaryClass={cs.RO}
}