What do you see in your Chrome browser when there is no internet connection? Yes, everybody knows the dinosaur game that appears on screen. We are going to use Python and OpenCV to play it. Here I'm using Windows and my webcam for the camera feed, but most of the code works on Linux and macOS as well.
We are basically going to play the Chrome Dinosaur game using hand movements from the camera feed.
A glimpse of what our final code does.
You need the following libraries:
import cv2
import numpy as np
import math
import pyautogui
PyAutoGUI is a Python module for programmatically controlling the mouse and keyboard without any user interaction.
Opening Camera and setting up the rectangle
# Open Camera
capture = cv2.VideoCapture(0)
# Capture frames from the camera
ret, frame = capture.read()
# Get hand data from the rectangle sub window
cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 0)
crop_image = frame[100:300, 100:300]
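As a quick sanity check of the slice above (using a synthetic NumPy array in place of a real webcam frame, since the actual size depends on your camera), the crop keeps a 200 by 200 region:

```python
import numpy as np

# Stand-in for a webcam frame: a 480x640 BGR image (size is an assumption;
# real dimensions depend on your camera)
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Same slice as in the article: rows 100..299, columns 100..299
crop_image = frame[100:300, 100:300]

print(crop_image.shape)  # (200, 200, 3)
```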
Blur the image to smooth out some edges, then convert it from BGR (Blue, Green, Red) to HSV (Hue, Saturation, Value), because skin tones are easier to filter out in HSV than in BGR.
Set upper and lower limits on the H, S and V values so that only skin colors, orange-ish tints, are included (feel free to play with these values).
This is called thresholding: every pixel within the lower and upper range becomes white, and every other pixel becomes black. Such binary images are referred to as 'masks'.
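Conceptually, cv2.inRange performs a per-pixel range test; a minimal NumPy sketch of that contract (an illustration only, not the call used below) looks like this:

```python
import numpy as np

def in_range(hsv, lower, upper):
    """Per-pixel range test: 255 where lower <= pixel <= upper on every
    channel, 0 elsewhere -- the same contract as cv2.inRange."""
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

hsv = np.array([[[10, 120, 200],    # hue 10 is inside [2..20]  -> white
                 [40, 120, 200]]])  # hue 40 is outside         -> black
mask = in_range(hsv, np.array([2, 0, 0]), np.array([20, 255, 255]))
print(mask)  # [[255   0]]
```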
# Apply Gaussian blur
blur = cv2.GaussianBlur(crop_image, (3, 3), 0)
# Change color-space from BGR -> HSV
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
# Create a binary image where white is skin color and the rest is black
mask2 = cv2.inRange(hsv, np.array([2, 0, 0]), np.array([20, 255, 255]))
# Kernel for morphological transformation
kernel = np.ones((5, 5), np.uint8)
# Apply morphological transformations to filter out the background noise
dilation = cv2.dilate(mask2, kernel, iterations=1)
erosion = cv2.erode(dilation, kernel, iterations=1)
# Apply Gaussian Blur and Threshold
filtered = cv2.GaussianBlur(erosion, (3, 3), 0)
ret, thresh = cv2.threshold(filtered, 127, 255, 0)
Finding and drawing contours. Each contiguous region of white pixels is referred to as a contour. This will draw an outline around every white region of the mask. The output will look something like this.
We now need to find the center of our contour (the hand). To do this we use cv2.moments to compute the centroid of the contour; the centroid is extracted from the moments as shown below. When a hand is detected, SPACE is pressed and the dino jumps. Even if you don't have the dino window open, SPACE will be pressed every time a hand is detected.
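For reference, the centroid extraction described above uses the image moments M00, M10 and M01. Sketched here in plain NumPy on a small binary mask (cv2.moments returns a dict with these same keys, and the centroid formula is identical):

```python
import numpy as np

# 5x5 binary mask with a 3x3 white block centered at (2, 2)
mask = np.zeros((5, 5))
mask[1:4, 1:4] = 1

ys, xs = np.mgrid[0:5, 0:5]
M00 = mask.sum()         # zeroth moment: area of the white region
M10 = (xs * mask).sum()  # first moment in x
M01 = (ys * mask).sum()  # first moment in y

# Centroid, exactly as it is extracted from cv2.moments:
cx, cy = int(M10 / M00), int(M01 / M00)
print(cx, cy)  # 2 2
```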
# Find contours
# In OpenCV 3.x, findContours returns three values; use the line below instead:
# image, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Find contour with maximum area
contour = max(contours, key=lambda x: cv2.contourArea(x))
# Create bounding rectangle around the contour
x, y, w, h = cv2.boundingRect(contour)
cv2.rectangle(crop_image, (x, y), (x + w, y + h), (0, 0, 255), 0)
# Find convex hull
hull = cv2.convexHull(contour)
# Draw contour
drawing = np.zeros(crop_image.shape, np.uint8)
cv2.drawContours(drawing, [contour], -1, (0, 255, 0), 0)
cv2.drawContours(drawing, [hull], -1, (0, 0, 255), 0)
# Find convexity defects
hull = cv2.convexHull(contour, returnPoints=False)
defects = cv2.convexityDefects(contour, hull)
# Use cosine rule to find angle of the far point from the start and end point i.e. the convex points (the finger
# tips) for all defects
count_defects = 0
for i in range(defects.shape[0]):
s, e, f, d = defects[i, 0]
start = tuple(contour[s][0])
end = tuple(contour[e][0])
far = tuple(contour[f][0])
a = math.sqrt((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2)
b = math.sqrt((far[0] - start[0]) ** 2 + (far[1] - start[1]) ** 2)
c = math.sqrt((end[0] - far[0]) ** 2 + (end[1] - far[1]) ** 2)
angle = (math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)) * 180) / math.pi
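A quick check of the cosine-rule step on a known triangle: with b = 3, c = 4 and the opposite side a = 5 (a right triangle), the angle at the far point comes out to 90 degrees. This sketch uses math.degrees, which is equivalent to the multiply-by-180-over-pi in the line above:

```python
import math

a, b, c = 5.0, 3.0, 4.0  # side a is opposite the angle we want
angle = math.degrees(math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)))
print(angle)  # 90.0
```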
# if angle <= 90, count it as a defect and draw a circle at the far point
if angle <= 90:
count_defects += 1
cv2.circle(crop_image, far, 1, [0, 0, 255], -1)
cv2.line(crop_image, start, end, [0, 255, 0], 2)
# Press SPACE if the condition is met
if count_defects >= 4:
pyautogui.press('space')
cv2.putText(frame, "JUMP", (115, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 2)
Run the program, switch over to Chrome, and open chrome://dino.
Keep your hand within the rectangle as you start. Move your hand to make dino jump.
GitHub link: https://github.com/its-harshil/handdino-opencv