Update! You can check this newer post to calibrate a camera directly from Colab:
The code in this older post may not work after February 2023 with newer OpenCV versions.
When using a camera to capture fiducial markers, such as ArUco markers, we need to consider the distortion our camera lens may introduce into the captured image. Given a known shape, it is possible to calculate the distortion of a camera by comparing the captured images of that shape to the ground truth. The OpenCV documentation shows an example in which existing images are used to calculate the distortion, iterating through the contents of a folder.
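As a minimal sketch of that folder-based workflow (the folder name and file pattern below are assumptions, not taken from the documentation), it boils down to detecting the board in each stored image and feeding the point pairs to the calibration routine:
# Sketch of folder-based calibration; "./calibration_images/*.jpg"
# and the 7x7 inner-corner board are assumptions.
import glob
import cv2
import numpy as np

CHECKERBOARD = (7, 7)
objp = np.zeros((CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob('./calibration_images/*.jpg'):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate against the ideal board shape:
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)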
Suppose that we have a webcam we want to calibrate and an image of a chessboard at hand. Using OpenCV in Python, we can obtain the calibration values for our camera without capturing any intermediate files.
First of all, we need the image of the chessboard to rest against a white backdrop; any picture will do as long as the corners separating the squares of the board are visible and the background is white, something like this:
Make sure your image or your real-life board is flat; that is, that it forms a good plane.
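If you do not have a printed board at hand, a synthetic chessboard can be generated and displayed full screen on the monitor. This is only a sketch; the 100-pixel square size and the white margin are arbitrary choices:
# Sketch: generate an 8x8 chessboard (7x7 inner corners) on a white canvas.
import cv2
import numpy as np

square = 100
board = np.kron(np.indices((8, 8)).sum(axis=0) % 2,
                np.ones((square, square))).astype(np.uint8) * 255
canvas = np.full((board.shape[0] + 2 * square,
                  board.shape[1] + 2 * square), 255, np.uint8)
canvas[square:-square, square:-square] = board
cv2.imwrite('chessboard.png', canvas)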
We need to import the following modules for the calibration part:
# Import required modules:
import cv2
import numpy as np
import os
from time import sleep
We will define a 7x7 grid of inner corners so that OpenCV does not try to fit the outer edges of the chessboard. We will collect a minimum of 50 calibration points, or perspectives. Each corner refinement will stop when an accuracy of 0.001 is reached or when a maximum number of iterations has occurred. We will store the 3D points and the 2D points in vectors, and the ideal checkerboard shape (the platonic shape) in a NumPy array:
# Define the dimensions of the checkerboard (inner corners):
CHECKERBOARD = (7, 7)
MIN_POINTS = 50
RECORD = True

# Stop the iteration when the specified
# accuracy, epsilon, is reached or the
# specified number of iterations is completed:
criteria = (cv2.TERM_CRITERIA_EPS +
            cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Vector for the 3D points:
threedpoints = []

# Vector for the 2D points:
twodpoints = []

# 3D points in real-world coordinates:
objectp3d = np.zeros((1, CHECKERBOARD[0]
                      * CHECKERBOARD[1],
                      3), np.float32)
objectp3d[0, :, :2] = np.mgrid[0:CHECKERBOARD[0],
                               0:CHECKERBOARD[1]].T.reshape(-1, 2)

prev_img_shape = None
Next, we need to open the video capture using OpenCV's built-in tools and obtain the frames per second of the capture. If we want to record the calibration procedure, we will also open a video writer. This latter video writer is not needed to perform the calibration itself:
cap = cv2.VideoCapture(0)
FPS = cap.get(cv2.CAP_PROP_FPS)

# Check if the webcam is opened correctly:
if not cap.isOpened():
    raise IOError("Cannot open webcam")

if RECORD:
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter('calibration.mp4',
                             cv2.VideoWriter_fourcc(*'DIVX'),
                             FPS,
                             (width, height))
Now, as we capture each frame of the video, we turn it into a grayscale image and look for the correct number of corners; the locator should find the 7x7 grid inside the chessboard. Once the board corners are located, we append the ideal 3D object points and the corresponding detected 2D points to our point vectors. When we have enough points, we can close the stream and calculate the distortion between the ground truth for our checkerboard and the obtained points.
while True:
    ret, img = cap.read()
    image = img
    grayColor = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners;
    # if the desired number of corners is
    # found in the image, then ret = True:
    ret, corners = cv2.findChessboardCorners(
        grayColor, CHECKERBOARD,
        cv2.CALIB_CB_ADAPTIVE_THRESH
        + cv2.CALIB_CB_FAST_CHECK
        + cv2.CALIB_CB_NORMALIZE_IMAGE)

    # If the desired number of corners is detected,
    # refine the pixel coordinates and display
    # them on the image of the checkerboard:
    if ret == True:
        threedpoints.append(objectp3d)

        # Refine the pixel coordinates
        # for the given 2D points:
        corners2 = cv2.cornerSubPix(
            grayColor, corners, CHECKERBOARD, (-1, -1), criteria)
        twodpoints.append(corners2)

        # When we have the minimum number of data points, stop:
        if len(twodpoints) > MIN_POINTS:
            cap.release()
            if RECORD: writer.release()
            cv2.destroyAllWindows()
            break

        # Draw and display the corners:
        image = cv2.drawChessboardCorners(image,
                                          CHECKERBOARD,
                                          corners2, ret)

    cv2.imshow('img', image)
    if RECORD:
        writer.write(image)

    # Wait for the ESC key to exit and terminate the feed:
    k = cv2.waitKey(1)
    if k == 27:
        cap.release()
        if RECORD: writer.release()
        cv2.destroyAllWindows()
        break
OpenCV's camera calibration tool will calculate the camera matrix and the distortion coefficients, along with the rotation and translation vectors, since we know the theoretical shape of the board:
h, w = image.shape[:2]
# Perform camera calibration by
# passing the 3D points found above (threedpoints)
# and the corresponding pixel coordinates of the
# detected corners (twodpoints):
ret, matrix, distortion, r_vecs, t_vecs = cv2.calibrateCamera(
    threedpoints, twodpoints, grayColor.shape[::-1], None, None)
# Displaying required output
print(" Camera matrix:")
print(matrix)
print("\n Distortion coefficient:")
print(distortion)
print("\n Rotation Vectors:")
print(r_vecs)
print("\n Translation Vectors:")
print(t_vecs)
Our camera matrix and distortion coefficients should be something like this:
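The exact numbers depend on each camera, but the structure of the output is always the same: a 3x3 pinhole camera matrix and, by default, five distortion coefficients. The symbols below are placeholders, not the values captured for our webcam:
# matrix     -> [[fx,  0, cx],
#                [ 0, fy, cy],
#                [ 0,  0,  1]]    focal lengths and principal point, in pixels
# distortion -> [[k1, k2, p1, p2, k3]]    radial (k) and tangential (p) terms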
The text output of the matrices is not very useful; we can use NumPy's array-saving utilities to make them more portable. These arrays are tiny, so there is no problem storing them as CSV files. For fiducial marker detection, we only need the camera matrix and the distortion coefficients; we can still keep the mean values of the translations and rotations in case they are required:
from numpy import savetxt
from numpy import genfromtxt
mean_r = np.mean(np.asarray(r_vecs), axis=0)
mean_t = np.mean(np.asarray(t_vecs), axis=0)
savetxt('rotation_vectors.csv', mean_r, delimiter=',')
savetxt('translation_vectors.csv', mean_t, delimiter=',')
savetxt('camera_matrix.csv', matrix, delimiter=',')
savetxt('camera_distortion.csv', distortion, delimiter=',')
These arrays, when stored as text, can be accessed using:
my_data = genfromtxt('camera_distortion.csv', delimiter=',')
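As a usage sketch, the stored calibration can be loaded back and used to undistort any frame taken with the same camera; the file names match the ones saved above, while the test image name is an assumption:
# Sketch: undistort a frame using the saved calibration files.
import cv2
from numpy import genfromtxt

camera_matrix = genfromtxt('camera_matrix.csv', delimiter=',')
distortion = genfromtxt('camera_distortion.csv', delimiter=',')

frame = cv2.imread('test_frame.png')  # any frame from the same camera (assumed file)
undistorted = cv2.undistort(frame, camera_matrix, distortion)
cv2.imwrite('test_frame_undistorted.png', undistorted)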
As an example, this calibration can be carried out in a few seconds using the monitor of the computer to which our webcam is attached:
We will use this camera calibration to detect non-standard fiducial markers; in the future, we will check how far we can distort existing markers and still obtain reliable detections.
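As a preview of that use, and only as a sketch, the saved calibration plugs directly into ArUco pose estimation. The dictionary, the 5 cm marker side, and the test image name are assumptions, and the calls below use the cv2.aruco API available before OpenCV 4.7, in line with the note at the top of this post:
# Sketch: detect ArUco markers and estimate their pose with the saved calibration.
# DICT_4X4_50, the 0.05 m marker side, and 'markers.png' are assumptions.
import cv2
from numpy import genfromtxt

camera_matrix = genfromtxt('camera_matrix.csv', delimiter=',')
distortion = genfromtxt('camera_distortion.csv', delimiter=',')

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

frame = cv2.imread('markers.png')  # assumed test image containing markers
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=parameters)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.05, camera_matrix, distortion)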
Do not hesitate to contact us if you require quantitative model development, deployment, verification, or validation. We will also be glad to help you with your machine learning or artificial intelligence challenges when applied to asset management, automation, or intelligence gathering from satellite, drone, or fixed-point imagery.