Playing Chrome’s Dinosaur Game using OpenCV

What do you see in your Chrome browser when there is no internet connection? Yes, everybody knows the dinosaur game that appears on screen. We are going to use Python and OpenCV to play it. Here I'm using Windows and my webcam for the camera feed, but most of the code works on Linux and macOS as well.

We are going to play the Chrome Dinosaur game using hand movements detected in the camera feed.

Demo

A glimpse of what our final code does.


Let’s Start

import numpy as np
import cv2
import math
import pyautogui

PyAutoGUI is a Python module for programmatically controlling the mouse and keyboard, so we can send key presses without any user interaction.
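For example, a single call is enough to tap a key; this is exactly what the main loop does later to make the dino jump (a minimal sketch):

import pyautogui

# Simulate one press-and-release of the space bar
pyautogui.press('space')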

Opening Camera and setting up the rectangle

# Open Camera
capture = cv2.VideoCapture(0)

while capture.isOpened():

    # Capture frames from the camera
    ret, frame = capture.read()

    # Get hand data from the rectangle sub window
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 0)
    crop_image = frame[100:300, 100:300]

We blur the image to smooth out some edges and convert the blurred image from BGR (Blue, Green, Red) to HSV (Hue, Saturation, Value), because skin tones are easier to filter in HSV than in BGR.

We then set upper and lower limits for the H, S, and V values so that only skin colors, i.e. orange-ish tints, are included (feel free to play with these values).

This is called thresholding: it converts every pixel to either white or black. Every pixel within the upper and lower limits becomes white and every other pixel becomes black; such images are referred to as 'masks'.

    # Apply Gaussian blur
    blur = cv2.GaussianBlur(crop_image, (3, 3), 0)

    # Change color-space from BGR -> HSV
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)

    # Create a binary image where white marks skin-colored pixels and the rest is black
    mask2 = cv2.inRange(hsv, np.array([2, 0, 0]), np.array([20, 255, 255]))

    # Kernel for morphological transformation
    kernel = np.ones((5, 5))

    # Apply morphological transformations to filter out the background noise
    dilation = cv2.dilate(mask2, kernel, iterations=1)
    erosion = cv2.erode(dilation, kernel, iterations=1)

    # Apply Gaussian Blur and Threshold
    filtered = cv2.GaussianBlur(erosion, (3, 3), 0)
    ret, thresh = cv2.threshold(filtered, 127, 255, 0)

Next, we find and draw the contours. Each contiguous section of white pixels is referred to as a contour, so this draws an outline around every white 'section' of the mask.

We now need to find the center of our contour (the hand); cv2.moments gives the centroid of a contour, and it is extracted from the moments as shown in the sketch below. In the main loop we then detect an open hand by counting the convexity defects of the largest contour. When the hand is detected, SPACE is pressed and the dino jumps; if you don't have the dino window open, you will still see SPACE being pressed every time the hand is detected.
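A small illustrative sketch of the moments-based centroid (assuming contour is the largest contour found in the loop below; the jump logic itself relies on the convexity defects instead):

# Compute image moments of the hand contour and derive its centroid
M = cv2.moments(contour)
if M['m00'] != 0:
    cx = int(M['m10'] / M['m00'])  # x-coordinate of the centroid
    cy = int(M['m01'] / M['m00'])  # y-coordinate of the centroid
    cv2.circle(crop_image, (cx, cy), 5, (255, 0, 0), -1)  # mark the center of the hand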

    # Find contours (in OpenCV 3.x, findContours also returns the image:
    # image, contours, hierarchy = cv2.findContours(...))
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    try:
        # Find contour with maximum area
        contour = max(contours, key=lambda x: cv2.contourArea(x))

        # Create bounding rectangle around the contour
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(crop_image, (x, y), (x + w, y + h), (0, 0, 255), 0)

        # Find convex hull
        hull = cv2.convexHull(contour)

        # Draw contour and hull
        drawing = np.zeros(crop_image.shape, np.uint8)
        cv2.drawContours(drawing, [contour], -1, (0, 255, 0), 0)
        cv2.drawContours(drawing, [hull], -1, (0, 0, 255), 0)

        # Find convexity defects
        hull = cv2.convexHull(contour, returnPoints=False)
        defects = cv2.convexityDefects(contour, hull)

        # Use the cosine rule to find the angle at the far point between the start and
        # end points (the fingertips) for all defects
        count_defects = 0
        for i in range(defects.shape[0]):
            s, e, f, d = defects[i, 0]
            start = tuple(contour[s][0])
            end = tuple(contour[e][0])
            far = tuple(contour[f][0])

            a = math.sqrt((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2)
            b = math.sqrt((far[0] - start[0]) ** 2 + (far[1] - start[1]) ** 2)
            c = math.sqrt((end[0] - far[0]) ** 2 + (end[1] - far[1]) ** 2)
            angle = (math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)) * 180) / math.pi

            # If angle <= 90, count it as a gap between fingers and draw a circle at the far point
            if angle <= 90:
                count_defects += 1
                cv2.circle(crop_image, far, 1, [0, 0, 255], -1)

            cv2.line(crop_image, start, end, [0, 255, 0], 2)

        # Press SPACE if the condition is met
        if count_defects >= 4:
            pyautogui.press('space')
            cv2.putText(frame, "JUMP", (115, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 2)
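
The try block above needs a matching except, and the loop needs a way to display the frames and exit. A minimal sketch of the closing lines, assuming the variable names used above (the full version is in the repository linked below):

    except:
        pass

    # Show the camera feed and the thresholded mask
    cv2.imshow("Gesture", frame)
    cv2.imshow("Thresholded", thresh)

    # Quit when ESC is pressed
    if cv2.waitKey(1) == 27:
        break

capture.release()
cv2.destroyAllWindows()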

Execution

Run the program, switch over to Chrome, and open chrome://dino.

Keep your hand inside the rectangle as you start. Move your hand to make the dino jump.

Full Code

Github Link — https://github.com/its-harshil/handdino-opencv

References:

https://medium.com/@sulphurgfx/playing-chromes-dinosaur-game-using-computer-vision-105da2f3114f

https://github.com/SinghalHarsh/OpenCV-Projects/blob/master/Playing_Chrome_Dinosaur_Game.ipynb

Written by

Tech Lead and Founder at @XenonStudio. #Mobile #AI. Visit: https://xenonstudio.in
