Experiments with Q learning – A reinforcement learning technique

Q learning is a form of reinforcement learning based on reward and penalty; there are no neural networks or deep learning methods involved. For every state (a state corresponds to a possible location the bot can occupy) there is a Q value for each action (left, right, up, down). In any state the bot takes the action with the highest Q value and then updates the Q values depending on the outcome, so the Q table is updated every time the bot moves.
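
As a rough standalone sketch of the idea (this is not the code from the repository linked below; the action names, learning rate, discount factor and number of states are chosen purely for illustration):

import random

# Illustrative constants, not taken from the actual Learner.py
ACTIONS = ["left", "right", "up", "down"]
N_STATES = 100  # assumed number of grid cells
ALPHA = 0.1     # learning rate
GAMMA = 0.9     # discount factor
EPSILON = 0.1   # exploration probability

# Q table: one value per (state, action) pair, initialised to zero
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}

def choose_action(state):
    # Mostly pick the action with the highest Q value, occasionally explore
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update_q(state, action, reward, next_state):
    # Q-learning update: nudge Q towards reward + discounted best future value
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])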

The Python code opens a GUI with a bot that uses Q learning to reach its destination; all credit for the code goes to PhilippeMorere. I edited the code to understand it, added comments at each step, made the canvas bigger and made it harder for the bot to traverse. The states are numbered at their centres and visualised so that editing the GUI is easier, and the Q values and rewards are printed out so the updated values can be seen. The code for the video below can be found here. To run the code, extract the zip downloaded from GitHub and run:

python Learner.py

The training took only a few minutes; the video shows that the bot learned to reach the green square after training. The arrow colours show the relative Q values along each direction: the more reddish the arrow, the lower the reward and the lower the Q value.

The simulation speed can be changed by adjusting the sleep time in Learner.py. A value of 5 (5 seconds) lets you watch the various values being printed in the terminal. It is currently set to .005 seconds for faster simulation.


# MODIFY THIS SLEEP IF THE GAME IS GOING TOO FAST.
time.sleep(.005)

Walls, penalty areas and reward areas can be changed or added in World.py.


# black wall locations as (x, y) grid coordinates
walls = [(0, 2)....(3, 5), (5, 4), (8, 0), (1, 1)]
# green and red square locations as (x, y, colour, reward)
specials = [(4, 1, "red", -1)... (6, 1, "red", -1)]

Video inferencing with OpenCV on a neural network trained using NVIDIA DIGITS

I have been playing with the inferencing code for some time. Here is real-time video inferencing that uses OpenCV to capture video and step through the frames. The overall frame rate is low because the system is slow. In the video, ‘frame’ is the normalised image the Caffe network sees after subtracting the mean image file, and ‘frame2’ is the input image.

The Caffe model was trained in NVIDIA DIGITS using GoogLeNet (SGD, 100 epochs); it reached 100% accuracy by epoch 76.
NVIDIA DIGITS GoogLeNet Caffe inferencing

Here is the inferencing code.


import numpy as np
import caffe
import time
import cv2

# Open the default camera for video capture
cap = cv2.VideoCapture(0)

MODEL_FILE = './deploy.prototxt'
PRETRAINED = './snapshot_iter_4864.caffemodel'
MEAN_IMAGE = './mean.jpg'
# Caffe setup: load the mean image and initialise the classifier
mean_image = caffe.io.load_image(MEAN_IMAGE)
caffe.set_mode_gpu()
net = caffe.Classifier(MODEL_FILE, PRETRAINED,
                       channel_swap=(2, 1, 0),
                       raw_scale=255,
                       image_dims=(256, 256))
# OpenCV capture loop
while True:
    start = time.time()
    ret, frame = cap.read()
    resized_image = cv2.resize(frame, (256, 256)) 
    cv2.imwrite("frame.jpg", resized_image)
    IMAGE_FILE = './frame.jpg'
    im2 = caffe.io.load_image(IMAGE_FILE)
    inferImg = im2 - mean_image
    #print ("Shape------->",inferImg.shape)
    #Inferencing
    prediction = net.predict([inferImg])
    end = time.time()
    pred=prediction[0].argmax()
    #print ("prediction -> ",prediction[0]) 
    if pred == 0:
       print("cat")
    else:
       print("dog")
    #Opencv display
    cv2.imshow('frame',inferImg)
    cv2.imshow('frame2',im2)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Inferencing on the trained Caffe model from NVIDIA DIGITS

With this post I will explain how to do inferencing from the command line on the trained network created with NVIDIA DIGITS. The link to the previous post is here.

In the DIGITS UI, we have to upload a file to the model webpage to do inferencing. This is time consuming and not practical for real-world applications; we need to deploy the trained model as a standalone Python application.
To achieve this, download the trained model from the NVIDIA DIGITS model page. This will download a .tgz file to your computer. Extract the .tgz file using this command:

tar -xvzf filename.tgz
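
The extracted folder typically contains files along these lines (the exact file names, especially the snapshot iteration number, depend on your training run):

deploy.prototxt
mean.binaryproto
labels.txt
snapshot_iter_480.caffemodel
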
caffe model NVIDIA DIGITS

Save the ‘Image mean’ image file from the datasets page of NVIDIA DIGITS to your computer.

NVIDIA DIGITS inferencing

Provide the paths for the following in the Python script below:

'Image mean' file    -> e.g. '/home/catsndogs/mean.jpg'
deploy.prototxt      -> e.g. '/home/catsndogs/deploy.prototxt'
caffemodel           -> e.g. '/home/catsndogs/snapshot_iter_480.caffemodel'
input image to test  -> e.g. '/home/catsndogs/image_to_test.jpg'

import numpy as np
import matplotlib.pyplot as plt
import caffe
import time
from PIL import Image

MODEL_FILE = '/home/catsndogs/deploy.prototxt'
PRETRAINED = '/home/catsndogs/snapshot_iter_480.caffemodel'
MEAN_IMAGE = '/home/catsndogs/mean.jpg'
# load the mean image
mean_image = caffe.io.load_image(MEAN_IMAGE)
#input the image file need to be tested
IMAGE_FILE = '/home/catsndogs/image_to_test.jpg'
im1 = Image.open(IMAGE_FILE)
# Tell Caffe to use the GPU
caffe.set_mode_gpu()
# Initialize the Caffe model using the model trained in DIGITS
net = caffe.Classifier(MODEL_FILE, PRETRAINED,
                       channel_swap=(2, 1, 0),
                       raw_scale=255,
                       image_dims=(256, 256))
# Display the input image
plt.imshow(im1)
# Resize, convert to a float array in [0, 1] (to match the loaded mean image) and subtract the mean
start = time.time()
inferImg = im1.resize((256, 256), Image.NEAREST)
inferImg = np.array(inferImg, dtype=np.float32) / 255.0
inferImg -= mean_image
# Run the model to make a class prediction
prediction = net.predict([inferImg])
end = time.time()
print(prediction[0].argmax())
pred=prediction[0].argmax()
if pred == 0: 
  print("cat")
else: 
  print("dog")
# Display the total time taken to perform inference
print('Total inference time: ' + str(end - start) + ' seconds')

Run the file with

python catsndogs.py

for inferencing.
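
When it runs, the script prints the index of the predicted class, then ‘cat’ or ‘dog’, and finally the total inference time.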

Train a neural network with custom images using NVIDIA DIGITS on your machine

It was a surprise that the NVIDIA DIGITS framework was up and running on my 10-year-old machine in no time, so I decided to start my blog with a tutorial about it.

Until now I was hand-coding neural networks in TensorFlow/TFLearn and feeding data directly into the network. That leaves less time to play with the network and optimise it, because most of the time is spent just getting it to run.

NVIDIA DIGITS is a framework that gives you a nice UI for creating your network without even touching the code, unless you really want to. It does the heavy lifting, like parsing the train/test data and creating the input database. I have no idea how it arranges the data; if you have used sklearn, you are used to doing the hard work of arranging the data into arrays yourself, so you understand each and every bit. I will extend the tutorial as I figure things out. DIGITS then creates models/networks from existing models, and these can be customised.

I will explain here how to create and train a network on your machine that can detect cats and dogs in images, using data from Kaggle; download the data here. Extract the tar file and arrange the data as described in Step 2.

Step 1: Sign up for the NVIDIA developer network, then download and install NVIDIA DIGITS.

I downloaded DIGITS 5 since I had a previous installation of CUDA 8. At the time of writing, there was no local installer for DIGITS 6. A follow-up tutorial will come when I move to DIGITS on Docker, which will happen when I upgrade my machine. Currently I use an old Pentium D desktop paired with a GTX 1050 Ti, running Ubuntu 16.04.3. I am waiting to upgrade to Intel Kaby Lake, so I will have to upgrade the motherboard, RAM and CPU. When that happens I will need a fresh installation of everything (CUDA/cuDNN/Python 3/OpenCV/TensorFlow/Anaconda/Arduino etc.), so there will be more tutorials to follow.

NVIDIA DIGITS

This will download a deb file; install it using the commands below.
sudo dpkg -i nv-deep-learning-repo-ubuntu1604-ga-cuda8.0-digits5.0_1-1_amd64.deb
sudo apt-get update
sudo apt-get install digits

This will start DIGITS on a local webserver, which can be accessed by going to http://localhost

NVIDIA DIGITS framework

Step 2: Create a database for training from the images you have.

There should be train and validate directories. Under train/validate, there should be a directory for each class, containing that class's images; here, cats and dogs are the two classes.
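
For example, the layout on disk might look like this (the top-level folder name is only illustrative):

catsndogs/
    train/
        cat/       (cat training images)
        dog/       (dog training images)
    validate/
        cat/       (cat validation images)
        dog/       (dog validation images)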

Click on the DIGITS webpage -> Datasets -> Images dropdown -> Classification.
Fill in the paths to the training and validation images, give the dataset a name and click Create. This will parse the images and create a database for feeding into the neural network model.

NVIDIA DIGITS new dataset creation

Step 3: Create a model and train it. That's all!

Click on the DIGITS webpage -> Models -> Images dropdown -> Classification.
Choose AlexNet or GoogLeNet, give the model a name and click Create. This will create your neural network in the Caffe framework and start training. The parameters in ‘Solver options’ and ‘Data transformations’ can be changed to try different optimizers, and the network layers can be edited by clicking ‘Customize’.

NVIDIA DIGITS new model creation

I got an out-of-memory ERROR, but it was resolved when I recreated the model. If anyone knows the reason, please let me know in the comments. My system swap space was set to 30 GB, so it should not fail; I assume it was because only 4 GB of GPU memory was available. Maybe it should be possible to spill over into system memory, or I have to upgrade my GPU. Anyway, to my surprise, it worked.

NVIDIA DIGITS Error out of memory

The CPU utilisation shoots up to 164%!! I have no idea why; maybe the CPU is a severe bottleneck.

NVIDIA DIGITS cats and dogs training

Step 4: Test the network with a random image.

While training, we can observe the accuracy and loss in the graph. Once training is over, upload a random image and click ‘Classify One’ to run inference.

NVIDIA DIGITS inference

And it works!

NVIDIA DIGITS inference

This is so fast considering the amount of debugging time I spend when I write everything by myself. The whole process took me about 20 minutes from start to output. In the following post I will try to explain how to make this a standalone Python program.