Abstract—We propose a deep learning approach based on convolutional long short-term memory (ConvLSTM) networks to perform occupancy-grid-cell-based semantic segmentation from LIDAR measurements. The input consists of scan points from multiple LIDAR sensors surrounding the vehicle, each composed of multiple layers of 360° scanning beams, providing 3D scan images. The output is an occupancy grid map with a predicted class label for each cell. The experimental setup uses the Gazebo simulator, running under the Robot Operating System (ROS), to generate the ground truth. The simulation scenarios cover a wide range of real-world situations, including scenes with multiple objects. We further evaluate the proposed model on data from a Velodyne laser scanner mounted on a real vehicle, with ground truth obtained by manual annotation. Several evaluation criteria are used to compare the network predictions against the simulated ground truth, and several deep learning models are compared to select the best architecture. The average precision, recall, and F1-score measures demonstrate the effectiveness of the proposed network.
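As a rough illustrative sketch only (not the authors' exact model), a ConvLSTM network that maps a short sequence of bird's-eye-view LIDAR grids to per-cell class labels could be written in Keras as follows; the grid size, sequence length, input channels, number of classes, and layer widths are assumptions chosen for the example.

    # Illustrative sketch, not the paper's architecture: ConvLSTM layers aggregate
    # temporal context from consecutive LIDAR sweeps while preserving the grid's
    # spatial structure; a 1x1 convolution then classifies each cell.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    SEQ_LEN, H, W, C_IN, N_CLASSES = 5, 128, 128, 2, 4  # assumed values

    inputs = layers.Input(shape=(SEQ_LEN, H, W, C_IN))
    x = layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                          return_sequences=True)(inputs)
    x = layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                          return_sequences=False)(x)
    # Per-cell classification head: softmax over semantic classes for each grid cell.
    outputs = layers.Conv2D(N_CLASSES, kernel_size=1, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()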