Cloning Driving Behavior for AI

Neural networks play a crucial role in modern computer vision, especially in the field of autonomous vehicles. This article covers using Udacity’s Self Driving Car Simulator to train a neural network to clone the steering behavior of a driver, allowing the car to steer itself around a track in the simulator.

The code to support this article is available here.

Reader Level: Assumes beginner-level knowledge of Python programming and of neural networks. The article explains how to set up the simulator and collect data.

Note: The objective of this article and its code is to cover the design, training, and development of a network that predicts steering angles. The article does not cover other aspects, e.g. the Flask app, simulator parameters, and the communication between the Flask app and the simulator; these are treated as one large black box.


  1. Udacity Self Driving Car Simulator: Get the appropriate version for your platform; the screenshots in this article are from Windows.
  2. Set up a virtual environment using the requirements.txt from the repo. A GPU is recommended for faster training and responsive inference at runtime.

Theory —Behavioral Cloning Neural Network

My deep convolutional network to predict a steering angle

The neural network laid out here predicts the steering angle from a frame captured by the car's front camera. The Cropping2D layer crops the input image to segment out the road from the rest of the frame, helping the network focus on the road section only. The convolutional layers extract and learn features, which activate the respective neurons in the following layers. The dropout layers reduce the complexity of the model to help prevent overfitting, while the dense layers reduce the dimensions of the network stepwise down to a single regression output: the steering angle.
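As a minimal sketch of such an architecture in Keras (the layer counts and sizes below are illustrative assumptions, not the repository's exact network), the crop, convolution, dropout, and dense stages described above could be laid out like this:

```python
# Illustrative Keras sketch of a steering-angle network.
# Layer sizes and crop margins are assumptions, not the repo's exact values.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Cropping2D, Lambda, Conv2D,
                                     Dropout, Flatten, Dense)

def build_model(input_shape=(160, 320, 3)):
    model = Sequential([
        # Crop the sky (top 60 px) and hood (bottom 20 px) to keep the road
        Cropping2D(cropping=((60, 20), (0, 0)), input_shape=input_shape),
        # Normalize pixel values to roughly [-0.5, 0.5]
        Lambda(lambda x: x / 255.0 - 0.5),
        # Convolutional layers extract visual features from the road section
        Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
        Conv2D(64, (3, 3), activation="relu"),
        Dropout(0.5),           # reduce model complexity to limit overfitting
        Flatten(),
        # Dense layers step the dimensions down to one regression output
        Dense(100, activation="relu"),
        Dense(50, activation="relu"),
        Dense(10, activation="relu"),
        Dense(1),               # single output: the steering angle
    ])
    return model
```

Note that the final layer has no activation, since the steering angle is an unbounded regression target rather than a class probability.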

Data Collection

The training data will consist of images and corresponding steering angles. We will be collecting training image data using the simulator. The goal is to collect data of good driving behavior.
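The simulator logs each recorded frame together with its telemetry in a driving_log.csv file. Assuming the usual column layout (center, left, and right image paths, then steering, throttle, brake, speed, with no header row), parsing it into (image path, steering angle) samples is straightforward:

```python
import csv

def read_driving_log(csv_path):
    """Parse the simulator's driving_log.csv into (image_path, steering)
    samples.

    Assumes the column order: center, left, right, steering, throttle,
    brake, speed, with no header row.
    """
    samples = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            center_img = row[0].strip()   # path to the center-camera frame
            steering = float(row[3])      # recorded steering angle
            samples.append((center_img, steering))
    return samples
```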

Step 1
Start the simulator with settings appropriate to your screen resolution and system configuration. Note that you will be recording data as well, so plan on having system resources available while the simulator is running.

Step 1. Choose screen resolution and graphics quality, then proceed to choose “Training Mode” with the left track selected

Step 2a
Clicking the Record button pops up a dialog box asking where you would like to save the data. It is recommended that you save it within the downloaded repository for simple, relative access at runtime. Now drive around the track, making sure the car never leaves the track and stays centered, so that the data being collected represents good driving behavior.

Step 2a. Setting up data collection directories
Step 2b. Ending the data collection phase

Step 2b
Click the Pause button to end the data collection phase. The simulator will replay the entire driving session while displaying the percentage of the total data saved so far. Once the simulator has replayed the session and saved the data, it returns to the point where you paused. You may now close the simulator.


The heart of behavioral cloning here is the architecture of the network. A good architecture coupled with good data will determine whether the car stays on the track in autonomous mode.

The theory behind the network has already been discussed. The training code includes a generator function that feeds images to the network in batches, in case the entire dataset does not fit in memory. The generator also acts as an augmentation engine: it includes images from the left and right cameras with a correction angle applied to the steering label, allowing them to be used for training as though they were viewed from the center camera.

Once training is complete, the code exports a trained model. The saved model file can then be used for inference.

The training parameters are a batch size of 32, a single epoch (which can be increased with more data), and the Adam optimizer.
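With those parameters, compiling and fitting the model might look like the sketch below. The mean-squared-error loss is an assumption consistent with a single-angle regression, and the tiny stand-in model and random data exist only to keep the snippet self-contained:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Stand-in model; in practice this is the convolutional network
# described earlier in the article.
model = Sequential([Flatten(input_shape=(160, 320, 3)), Dense(1)])

# Adam optimizer and MSE loss for single-angle regression (loss assumed)
model.compile(optimizer="adam", loss="mse")

# Tiny random stand-in data; real training feeds the batch generator
X = np.random.rand(32, 160, 320, 3)
y = np.random.uniform(-1, 1, size=32)

# Batch size 32, one epoch, matching the parameters above
history = model.fit(X, y, batch_size=32, epochs=1, verbose=0)
```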

Note: The demo model in the repository was trained on a very small amount of data, and the parameters were set accordingly. As a result, you may observe in the video (at the end of this article) that the car swerves outward while maneuvering difficult curves.


The objective of the inference script is to load the trained model, save images from an inference run, and handle the Flask app, which is used as the communication mechanism between the model and the simulator.

In order to begin an inference session:

Step 1
Execute the inference script with the path to the saved model as an argument, then wait until the model has loaded and the Flask app has been initiated.
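Conceptually, this step boils down to loading the saved model and predicting one angle per incoming camera frame. A minimal sketch (the `model.h5` file name and frame shape are assumptions; here a tiny stand-in model is saved first so the snippet is self-contained):

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Flatten, Dense

# Stand-in for a previously trained, exported model (file name assumed)
Sequential([Flatten(input_shape=(160, 320, 3)), Dense(1)]).save("model.h5")

# Load the trained model from disk, as the inference script would
model = load_model("model.h5")

def predict_steering(frame):
    """Return a steering angle for a single camera frame."""
    # The model expects a batch dimension, so wrap the frame in an array
    angle = model.predict(np.expand_dims(frame, axis=0), verbose=0)
    return float(angle[0, 0])

frame = np.zeros((160, 320, 3))  # placeholder for a simulator frame
steering = predict_steering(frame)
```

In the real loop, each frame arriving over the socket is preprocessed the same way as during training before being passed to the model.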

Step 2
Start the simulator and initiate autonomous mode. If step 1 completed successfully and the socket for communicating model output to the simulator has been set up, you will see the car drive itself around the simple track.