Modern Manufacturing Engineering ›› 2024, Vol. 522 ›› Issue (3): 70-78.doi: 10.16731/j.cnki.1671-3133.2024.03.010


End-to-end autonomous driving model based on multi-modal intermediate representations

KONG Huifang, LIU Runwu, HU Jie   

  1. School of Electrical Engineering and Automation, Hefei University of Technology, Hefei 230009, China
  • Received: 2023-05-09  Online: 2024-03-18  Published: 2024-05-31

Abstract: An accurate understanding of the driving environment is a prerequisite for autonomous driving. To improve the scene-understanding ability of autonomous vehicles, an end-to-end autonomous driving model based on multi-modal intermediate representations of semantic segmentation, horizontal disparity, and angle coding was proposed. The model used deep learning to build a perception-planning network. The perception network took RGB and depth images as inputs and generated the multi-modal intermediate representations, describing the spatial distribution of the road environment and surrounding obstacles. The planning network used these intermediate representations to extract road-environment features and predict waypoints. Model training and performance testing were conducted on the CARLA simulation platform. The results showed that the model can understand urban road scenes and effectively reduce collisions; its driving performance index is 31.47% better than that of a baseline model based on a single-modal intermediate representation.
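The perception-planning pipeline described in the abstract can be outlined as follows. This is a minimal pure-Python sketch, not the paper's implementation: all names (`IntermediateRepresentation`, `perception`, `planning`), the toy rules used to derive each modality, and the waypoint heuristic are illustrative assumptions; the actual model uses trained deep networks on CARLA data.

```python
# Illustrative sketch of the perception -> planning structure: the perception
# stage turns RGB + depth into three intermediate modalities (semantic
# segmentation, horizontal disparity, angle coding), and the planning stage
# consumes them to predict waypoints. All logic here is a toy stand-in.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class IntermediateRepresentation:
    semantic: List[List[int]]     # per-pixel class ids (scene understanding)
    disparity: List[List[float]]  # horizontal disparity (depth cue)
    angle: List[List[float]]      # angle coding of pixel direction

def perception(rgb: List[List[int]],
               depth: List[List[float]]) -> IntermediateRepresentation:
    """Toy stand-in for the perception network: derives the three
    modalities from RGB and depth inputs with trivial hand-written rules."""
    h, w = len(rgb), len(rgb[0])
    # "obstacle" where the pixel is bright (placeholder for segmentation)
    semantic = [[1 if rgb[i][j] > 128 else 0 for j in range(w)] for i in range(h)]
    # disparity is inversely related to depth
    disparity = [[1.0 / (depth[i][j] + 1.0) for j in range(w)] for i in range(h)]
    # angle coding: signed horizontal offset from the image center
    angle = [[(j - w / 2) / w for j in range(w)] for i in range(h)]
    return IntermediateRepresentation(semantic, disparity, angle)

def planning(ir: IntermediateRepresentation,
             n_waypoints: int = 4) -> List[Tuple[float, float]]:
    """Toy stand-in for the planning network: steers the waypoints away
    from the side with more near obstacles."""
    h, w = len(ir.semantic), len(ir.semantic[0])
    # lateral bias: obstacle- and nearness-weighted mean of the angle coding
    num = sum(ir.semantic[i][j] * ir.disparity[i][j] * ir.angle[i][j]
              for i in range(h) for j in range(w))
    den = sum(ir.semantic[i][j] * ir.disparity[i][j]
              for i in range(h) for j in range(w)) or 1.0
    bias = num / den
    # waypoints extend forward (y), steering opposite the obstacle bias (x)
    return [(-bias * (k + 1), float(k + 1)) for k in range(n_waypoints)]

rgb = [[200, 50], [50, 200]]      # tiny 2x2 "image"
depth = [[1.0, 2.0], [2.0, 1.0]]
waypoints = planning(perception(rgb, depth))
print(waypoints)
```

The point of the structure, per the abstract, is that planning never sees raw pixels: it only consumes the three intermediate modalities, which is what makes the representation "multi-modal" and interpretable.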

Key words: autonomous driving, scene understanding, imitation learning, trajectory planning

