Dr. Ahmad El Sallab

Personal Page

End-To-End Multi-Modal Sensors Fusion System For Urban Automated Driving

Ahmad A. Al Sallab, Ibrahim Sobh, Khaled El Medawy, Loay Amin, Mahmoud Gamal, Mostafa Gamal, Omar Abdeltawab, and Sherif Abdelkarim. "End-To-End Multi-Modal Sensors Fusion System For Urban Automated Driving." Neural Information Processing Systems (NIPS), Machine Learning for Intelligent Transportation Systems (MLITS) workshop, Dec. 5–10, 2018, Canada.

In this paper, we present a novel framework for urban automated driving based on multi-modal sensors: LiDAR and camera. Environment perception through sensor fusion is key to the successful deployment of automated driving systems, especially in complex urban areas. Our hypothesis is that a well-designed deep neural network can learn, end to end, a driving policy that fuses LiDAR and camera sensory input, achieving the best of both. To improve the generalization and robustness of the learned policy, semantic segmentation is applied to the camera images, in addition to our new LiDAR post-processing method, Polar Grid Mapping (PGM). The system is evaluated on the recently released urban driving simulator CARLA, with performance measured by how well a policy trained in one environment generalizes to another. The experimental results show that the best performance is achieved by fusing the PGM and semantic-segmentation representations.
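The abstract does not spell out the PGM formulation, but a polar grid map is typically a spherical projection of the point cloud onto a 2D azimuth-by-elevation depth grid, a common LiDAR front-view representation. The sketch below assumes that interpretation; the function name `polar_grid_map`, the bin counts, and the vertical field of view are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def polar_grid_map(points, h_bins=360, v_bins=64, v_fov=(-24.9, 2.0)):
    """Project a LiDAR point cloud onto a 2D polar (azimuth x elevation)
    grid that stores the range of the nearest return per cell.

    points: (N, 3) array of x, y, z coordinates in the sensor frame.
    v_fov:  assumed vertical field of view in degrees (hypothetical values).
    Returns a (v_bins, h_bins) depth image; empty cells hold 0.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                  # range to each point
    azimuth = np.arctan2(y, x)                       # horizontal angle, [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-6))   # vertical angle, guard r=0

    # Discretize both angles into grid indices.
    h_idx = ((azimuth + np.pi) / (2 * np.pi) * h_bins).astype(int) % h_bins
    v_lo, v_hi = np.radians(v_fov)
    v_idx = ((elevation - v_lo) / (v_hi - v_lo) * v_bins).astype(int)
    v_idx = np.clip(v_idx, 0, v_bins - 1)

    # Keep the closest return per cell: write far points first so that
    # nearer points overwrite them.
    grid = np.zeros((v_bins, h_bins), dtype=np.float32)
    order = np.argsort(-r)
    grid[v_idx[order], h_idx[order]] = r[order]
    return grid
```

The resulting 2D range image can then be stacked channel-wise with the camera's semantic segmentation map and fed to the end-to-end policy network, which is presumably the spirit of the fusion the abstract describes.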