Custom Object Detection using TensorFlow — Part 3 (Google Colab)

Harshil Patel
3 min read · Aug 30, 2023


In the previous parts we saw how to train a custom object detection model locally; now we will see how to train the model in Google Colab.

Part 1 — https://harshilp.medium.com/custom-object-detection-using-tensorflow-part-1-from-scratch-41114cd2b403

Part 2 — https://harshilp.medium.com/custom-object-detection-using-tensorflow-part-2-train-a-custom-model-a229d5898f51

Now upload the models folder we used in the previous part to your Google Drive. This may take some time and a fair amount of storage.
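For reference, the commands later in this part assume a Drive layout roughly like the sketch below (the training folder holds the .config file, TFRecords and checkpoints from Part 2; adjust the names if yours differ):

My Drive/
└── models/
    └── research/
        └── object_detection/
            ├── training/            (your .config, records and checkpoints from Part 2)
            ├── train.py
            └── export_inference_graph.py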

Install TensorFlow and Numpy

!pip install --upgrade pip
!pip install --upgrade protobuf
Expected output (trimmed):

Collecting pip
  Downloading pip-20.1.1-py2.py3-none-any.whl (1.5MB)
Installing collected packages: pip
  Found existing installation: pip 19.3.1
    Uninstalling pip-19.3.1:
      Successfully uninstalled pip-19.3.1
Successfully installed pip-20.1.1
Requirement already up-to-date: protobuf in /usr/local/lib/python3.6/dist-packages (3.12.2)
Requirement already satisfied, skipping upgrade: six>=1.9 (1.12.0)
Requirement already satisfied, skipping upgrade: setuptools (49.1.0)

Make sure you are using TensorFlow version 1.15.

# the %tensorflow_version magic only accepts major versions (1.x / 2.x)
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
!pip install numpy

GPU Status

device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))

Importing Libs

# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize

import psutil
import humanize
import os
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and it isn't guaranteed
gpu = GPUs[0]

def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))

printm()

Mounting Gdrive

from google.colab import drive
drive.mount('/content/gdrive')

# change to the working TensorFlow directory on the drive
%cd '/content/gdrive/My Drive/models/'

When you run this, you will be asked for a unique authorization code to connect the notebook to your Google Drive.
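Once mounted, a quick check that the notebook is really sitting in the uploaded folder (optional):

!pwd
!ls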

Install the protobuf compiler, compile the protos, and run setup.py

!apt-get install protobuf-compiler python-pil python-lxml python-tk
!pip install Cython
!pip install gekko

%cd /content/gdrive/My Drive/models/research/
!protoc object_detection/protos/*.proto --python_out=.

import os
os.environ['PYTHONPATH'] += ':/content/gdrive/My Drive/models/research/:/content/gdrive/My Drive/Tensorflow/models/research/slim'

!python setup.py build
!python setup.py install

Check remaining GPU time

import time, psutil

# Colab sessions are limited to roughly 12 hours
Start = time.time() - psutil.boot_time()
Left = 12*3600 - Start
print('Time remaining for this session (hours): ', Left/3600)

Start training

!pip install tf_slim
%cd /content/gdrive/My Drive/models/research/object_detection
os.environ['PYTHONPATH'] += ':/content/gdrive/My Drive/models/research/:/content/gdrive/My Drive/Tensorflow/models/research/slim'

!python train.py --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config --logtostderr

You will be able to see the loss for each step. When the loss drops to around 1, press CTRL+C (in Colab, interrupt the running cell) to stop training.
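If you prefer a loss curve over console output, TensorBoard can be pointed at the same training/ directory while train.py runs (an optional extra on top of the steps above):

%load_ext tensorboard
%tensorboard --logdir training/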

Export inference graph

# the .ckpt number needs to be updated every time to match the last .ckpt generated
# the .config needs to be updated when changing the model
!python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix training/model.ckpt-6537 --output_directory new_graph
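To find the number of the last checkpoint for --trained_checkpoint_prefix, you can either look inside the training/ folder or ask TensorFlow directly (a small convenience snippet, not part of the original steps):

import tensorflow as tf

# prints something like training/model.ckpt-6537
print(tf.train.latest_checkpoint('training/'))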

Zip file in Google Drive

!zip -r model_graph.zip new_graph
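The zip is created in the working directory on Drive, so you can grab it from My Drive as usual; if you would rather pull it straight to your machine from the notebook, Colab's files helper also works (optional):

from google.colab import files
files.download('model_graph.zip')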

Now download the zipped new_graph folder from Drive, go to models > research > object_detection on your local machine, and place the new_graph folder there. Then open PyCharm, run the web_detection.py file, and test your model.
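web_detection.py from the earlier parts does the full webcam demo; if you only want a quick sanity check that the exported graph loads and detects something, a minimal TF 1.x script along these lines works (the image name test.jpg is a placeholder, and it assumes the standard tensor names produced by export_inference_graph.py):

import numpy as np
import tensorflow as tf
import cv2

PATH_TO_FROZEN_GRAPH = 'new_graph/frozen_inference_graph.pb'  # exported above
TEST_IMAGE = 'test.jpg'  # placeholder: any image containing your object

# load the frozen graph exported by export_inference_graph.py
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        od_graph_def.ParseFromString(fid.read())
        tf.import_graph_def(od_graph_def, name='')

with detection_graph.as_default(), tf.Session() as sess:
    image = cv2.imread(TEST_IMAGE)                  # image as a numpy array (BGR)
    image_expanded = np.expand_dims(image, axis=0)  # add batch dimension

    # standard input/output tensors of graphs exported by the object detection API
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')

    out_boxes, out_scores, out_classes = sess.run(
        [boxes, scores, classes], feed_dict={image_tensor: image_expanded})

    # print the top few detections (class id and confidence)
    for cls, score in zip(out_classes[0][:5], out_scores[0][:5]):
        print('class', int(cls), 'score', float(score))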

Source Code

https://github.com/Bengemon825/TF_Object_Detection2020/blob/master/ModelTrainingOnColab.ipynb

Do check the previous parts if you are facing any errors; I have given solutions to common errors there.

Happy Coding !!!
