Introduction

Artificial intelligence is developed to automate tasks. With that vision in mind, I am writing this post to automate the detection of flowers and cats using the Google TensorFlow Object Detection API. Broadly speaking, this post is about custom object detection with the TensorFlow API. Google provides an Object Detection API that ships with several models pretrained on the COCO dataset; you can browse them in the TensorFlow detection model zoo. Before proceeding further, clone the TensorFlow object detection repository using the command below.

git clone https://github.com/tensorflow/models.git  
Image and Prediction

Before proceeding further, I want to discuss the directory structure that I will use throughout this tutorial.

data/ - This folder holds the CSV files and the TFRecord files.

images/ - This folder contains the images along with the train and test annotations.

training/ - This folder contains the trained model, object_detection.pbtxt, pipeline.config, graph.pbtxt and the pretrained model's config file. Remember that object_detection.pbtxt holds the id and corresponding name of each label. The config file can come from any pretrained model that is used to train the customized model, for example ssd_mobilenet_v1_coco, ssd_mobilenet_v1_0.75_depth_coco, ssd_resnet_50_fpn_coco, faster_rcnn_inception_v2_coco, rfcn_resnet101_coco or mask_rcnn_inception_v2_coco.

eval/ - Stores the results of evaluating the trained model. A sketch of this layout is shown below.
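
To make the structure concrete, here is a minimal sketch of the directory tree assumed throughout this tutorial (the CSV, record and checkpoint file names follow the steps described later):

object_detection/
    data/
        train_labels.csv
        test_labels.csv
        train.record
        test.record
    images/
        train/        (training images + XML annotations)
        test/         (test images + XML annotations)
    training/
        ssd_mobilenet_v1_pets.config
        object_detection.pbtxt
    eval/
    ssd_mobilenet_v1_coco_11_06_2017/        (pretrained checkpoint from the model zoo)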

Prerequisite Requirements    

absl-py==0.7.1
astor==0.8.0
attrs==19.1.0
backcall==0.1.0
bleach==3.1.0
contextlib2==0.5.5
cycler==0.10.0
Cython==0.29.7
decorator==4.4.0
defusedxml==0.6.0
entrypoints==0.3
gast==0.2.2
grpcio==1.20.1
h5py==2.9.0
ipykernel==5.1.1
ipython==7.5.0
ipython-genutils==0.2.0
ipywidgets==7.4.2
jedi==0.13.3
Jinja2==2.10.1
jsonschema==3.0.1
jupyter==1.0.0
jupyter-client==5.2.4
jupyter-console==6.0.0
jupyter-core==4.4.0
Keras-Applications==1.0.7
Keras-Preprocessing==1.0.9
kiwisolver==1.1.0
lxml==4.3.3
Markdown==3.1.1
MarkupSafe==1.1.1
matplotlib==3.1.0
mistune==0.8.4
nbconvert==5.5.0
nbformat==4.4.0
notebook==5.7.8
numpy==1.16.3
opencv-python==4.1.0.25
pandas==0.24.2
pandocfilters==1.4.2
parso==0.4.0
pexpect==4.7.0
pickleshare==0.7.5
Pillow==6.0.0
prometheus-client==0.6.0
prompt-toolkit==2.0.9
protobuf==3.7.1
ptyprocess==0.6.0
Pygments==2.4.0
pyparsing==2.4.0
PyQt5==5.12.2
PyQt5-sip==4.19.17
pyrsistent==0.15.2
python-dateutil==2.8.0
pytz==2019.1
pyzmq==18.0.1
qtconsole==4.4.4
Send2Trash==1.5.0
six==1.12.0
tensorboard==1.12.2
tensorflow-gpu==1.12.0
termcolor==1.1.0
terminado==0.8.2
testpath==0.4.2
tornado==6.0.2
traitlets==4.3.2
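
If you save the list above as requirements.txt, everything can be installed in one go (these are the exact versions I used; a virtualenv is recommended so your system Python stays clean):

pip3 install -r requirements.txt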

Steps followed to train the custom object detector

Step 1: Download the images on which you want to train the model. In simple words, we will use transfer learning: a new model is trained on top of a pretrained model downloaded from the TensorFlow model zoo, and I have used SSD MobileNet for this. Make a directory called images and place the downloaded images there. In my case I downloaded 100 images each of cats and flowers.

Step 2: Now it is time to do the annotations. Make two folders inside the images folder (train and test). 180 images (90 cat and 90 flower) will be used for the training annotations and 20 images (10 flower and 10 cat) for the test annotations. I used tzutalin's labelImg tool to draw the bounding boxes, which saves one XML annotation file per image.
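
For reference, an annotation produced this way is a Pascal VOC style XML file that looks roughly like the one below (file name, image size and box coordinates are purely illustrative, and some bookkeeping tags are omitted for brevity):

<annotation>
    <folder>train</folder>
    <filename>flower_001.jpg</filename>
    <size>
        <width>640</width>
        <height>480</height>
        <depth>3</depth>
    </size>
    <object>
        <name>flower</name>
        <bndbox>
            <xmin>48</xmin>
            <ymin>35</ymin>
            <xmax>320</xmax>
            <ymax>410</ymax>
        </bndbox>
    </object>
</annotation>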

Step 3: It is time to create the CSV files from the XML files. To do this, please use the datitran repository. Remember, I save the CSV files in the data folder; if the data folder has not been created yet, please create it. Run python xml_to_csv.py after changing the following code in that file.

Original code

def main():
    image_path = os.path.join(os.getcwd(), 'annotations')
    xml_df = xml_to_csv(image_path)
    xml_df.to_csv('raccoon_labels.csv', index=None)
    print('Successfully converted xml to csv.')

Changed code

def main():
    for i in ['train', 'test']:
        image_path = os.path.join(os.getcwd(), 'images/{}'.format(i))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv('data/{}_labels.csv'.format(i), index=None)
        print('Successfully converted xml to csv.')

Step 4: Convert the CSV files into TFRecords. The next step is to convert the two CSV files into test.record and train.record; for this, use the generate_tfrecord.py script from the same datitran repository. Both record files go into the data folder. Please change the code below in that file.

Original code

if row_label == 'raccoon':
    return 1
else:
    None

New code

if row_label == 'flower':
    return 1
elif row_label == 'cat':
    return 2
else:
    None
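
With that change in place, the two record files can be generated with commands along these lines (the --csv_input and --output_path flags follow the datitran script; depending on the version of the script, an --image_dir flag pointing at the image folders may also be required):

python3 generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=data/train.record
python3 generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=data/test.record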

Step 5: Download the pretrained model from the TensorFlow model zoo along with its configuration file. Please create a new folder called training, place the config file there, and also create a new file named object_detection.pbtxt in the same folder. Open object_detection.pbtxt and put the label map below into it.

item {
  id: 1
  name: 'flower'  # class 1 is flower
}

item {
  id: 2
  name: 'cat'  # class 2 is cat
}
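
For reference, the SSD MobileNet checkpoint used in Step 6 is distributed as a tar.gz archive; assuming the download link listed in the model zoo at the time of writing, fetching and unpacking it looks like this:

wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_11_06_2017.tar.gz
tar -xzvf ssd_mobilenet_v1_coco_11_06_2017.tar.gz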

Please also make the required changes in the config file, such as the number of classes, the locations of the record files and the label map, and the path to the pretrained model checkpoint; a sketch of the relevant fields is shown below.
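
As a rough guide, the fields to edit in ssd_mobilenet_v1_pets.config look like the snippet below (the paths follow the directory layout above, and the exact field names can differ slightly between model configs):

num_classes: 2
fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
train_input_reader: {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "training/object_detection.pbtxt"
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "training/object_detection.pbtxt"
}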

Step 6: Move the model folder (ssd_mobilenet_v1_coco_11_06_2017), training, data and images inside models/research/object_detection and run the command below.

python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

If any error comes up, please run the following line from inside models/research:

export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
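
If import errors persist, the protobuf definitions of the Object Detection API may not have been compiled yet; the usual fix, also run from models/research, is:

protoc object_detection/protos/*.proto --python_out=.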

Remember, before creating the tf.record files, please run this statement from inside models/research:

sudo python3 setup.py install

Step 7: Once training is finished, you need to export your customized model. If anything goes wrong here, please go back to the installation steps and check whether a step is missing. Run the command below to create the frozen inference graph.

python3 export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix training/model.ckpt-xxx \
    --output_directory flower_inference_graph

Step 8: Now let us predict the cats and flowers using the frozen model. Just make the required changes in research/object_detection/object_detection_tutorial.ipynb so that it points to the exported graph and the new label map; a stripped-down version of the inference code is sketched below.
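
This is a minimal TF 1.x sketch of what the notebook does, assuming the exporter from Step 7 wrote flower_inference_graph/frozen_inference_graph.pb and that there is a test image at images/test/sample.jpg (the image name and the 0.5 score threshold are just placeholders):

import numpy as np
import tensorflow as tf
from PIL import Image

# Path produced by export_inference_graph.py in Step 7.
PATH_TO_FROZEN_GRAPH = 'flower_inference_graph/frozen_inference_graph.pb'

# Load the frozen graph once.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        od_graph_def.ParseFromString(fid.read())
        tf.import_graph_def(od_graph_def, name='')

with detection_graph.as_default(), tf.Session(graph=detection_graph) as sess:
    # images/test/sample.jpg is a placeholder name; use any of your test images.
    image = np.expand_dims(np.array(Image.open('images/test/sample.jpg')), axis=0)
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')
    boxes_out, scores_out, classes_out = sess.run(
        [boxes, scores, classes], feed_dict={image_tensor: image})
    # Print detections above an arbitrary confidence threshold.
    for box, score, cls in zip(boxes_out[0], scores_out[0], classes_out[0]):
        if score > 0.5:
            print('class id: {}, score: {:.2f}, box: {}'.format(int(cls), score, box))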

I hope this has given you a good overview of the steps needed to train your own object detection model on top of a TensorFlow pretrained model using transfer learning.

Please spare some time to go through the actual implementation of this model at the link below.

http://practice.aisolutions.ai/object_detection

For more technical articles, please read the articles below.

Build, ship and run any app anywhere without dependencies | Docker

What is docker volume and how to create it

A small introduction to docker swarm orchestration

A simple and brief introduction to docker-compose

Django Deployment on Nginx and Gunicorn

For queries, please email us at info@paradisetechsoft.com