Welcome back to our computer vision blog series! In earlier posts we covered the foundations of image processing as well as advanced OpenCV techniques. Today we turn to feature extraction, an important area of computer vision. Feature extraction is the process of detecting and describing distinctive information in images, and it underpins many applications such as tracking, object detection, and recognition.

In this blog, we will cover:

  1. Edge Detection
  2. Corner Detection
  3. SIFT and SURF
  4. Histograms of Oriented Gradients (HOG)

Edge Detection

The process of finding and identifying abrupt discontinuities in an image is known as edge detection. Usually, these discontinuities correspond to object boundaries. We’ll talk about Canny and Laplacian edge detection methods here.

Canny Edge Detection

The Canny edge detector is renowned for its ability to detect a wide range of edges with high accuracy. It is a multi-stage algorithm: the image is smoothed with a Gaussian filter, intensity gradients are computed, non-maximum suppression thins the candidate edges, and hysteresis thresholding decides which of them to keep.

import cv2

# Read the image
image = cv2.imread('path_to_your_image.jpg', cv2.IMREAD_GRAYSCALE)

# Apply the Canny edge detector (100 and 200 are the lower and upper hysteresis thresholds)
edges = cv2.Canny(image, 100, 200)

# Display the result
cv2.imshow('Canny Edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()

Laplacian Edge Detection

The Laplacian operator is another edge detection technique; it computes the image’s second derivative, so edges show up as zero crossings. Because second derivatives are very sensitive to noise, the image is usually smoothed with a Gaussian filter first.

# Apply Gaussian blur
blurred_image = cv2.GaussianBlur(image, (3, 3), 0)

# Apply the Laplacian edge detector (CV_64F keeps the negative responses)
laplacian = cv2.Laplacian(blurred_image, cv2.CV_64F)

# Convert back to 8-bit so the result displays correctly
laplacian_display = cv2.convertScaleAbs(laplacian)

# Display the result
cv2.imshow('Laplacian Edges', laplacian_display)
cv2.waitKey(0)
cv2.destroyAllWindows()

Corner Detection

Corner detection finds locations in an image where the intensity changes significantly in all directions. These points are frequently utilised in image matching and object recognition. A well-known algorithm for this purpose is the Harris Corner Detector.

Harris Corner Detection

The Harris Corner Detector locates corners by measuring how strongly the intensity varies when a small window is shifted in every direction. Here is how to use it in OpenCV:

import numpy as np

# Read the image
image = cv2.imread('path_to_your_image.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Apply the Harris Corner Detector (block size 2, Sobel aperture 3, k = 0.04)
gray_image = np.float32(gray_image)  # cornerHarris expects a float32 input
harris_corners = cv2.cornerHarris(gray_image, 2, 3, 0.04)

# Result is dilated for marking the corners
harris_corners = cv2.dilate(harris_corners, None)

# Mark responses above 1% of the maximum in red; the threshold may need tuning per image
image[harris_corners > 0.01 * harris_corners.max()] = [0, 0, 255]

# Display the result
cv2.imshow('Harris Corners', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

SIFT and SURF

Two sophisticated feature detection algorithms that are invariant to scale and rotation are Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). This makes them very useful for locating and characterising local features in images.

SIFT (Scale-Invariant Feature Transform)

SIFT detects keypoints and computes descriptors for them, enabling feature matching between images taken from different viewpoints or at different scales.

# SIFT has been part of the main opencv-python package since version 4.4 (its patent expired in 2020)
import cv2

# Read the image
image = cv2.imread('path_to_your_image.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Create a SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors
keypoints, descriptors = sift.detectAndCompute(gray_image, None)

# Draw keypoints on the image
image_with_keypoints = cv2.drawKeypoints(image, keypoints, None)

# Display the result
cv2.imshow('SIFT Keypoints', image_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()

SURF (Speeded-Up Robust Features)

SURF is faster than SIFT, but it is patent-encumbered and is not included in default OpenCV builds: it lives in the xfeatures2d module of opencv-contrib-python and requires OpenCV to be compiled with the OPENCV_ENABLE_NONFREE flag.

# Requires opencv-contrib-python built with OPENCV_ENABLE_NONFREE
import cv2

# Read the image
image = cv2.imread('path_to_your_image.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Create a SURF detector (400 is the Hessian threshold; higher values keep fewer, stronger keypoints)
surf = cv2.xfeatures2d.SURF_create(400)

# Detect keypoints and compute descriptors
keypoints, descriptors = surf.detectAndCompute(gray_image, None)

# Draw keypoints on the image
image_with_keypoints = cv2.drawKeypoints(image, keypoints, None)

# Display the result
cv2.imshow('SURF Keypoints', image_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()

Histograms of Oriented Gradients (HOG)

HOG is employed in object detection and recognition. It describes an image by histograms of gradient orientations computed over small, localised cells, capturing object shape while remaining robust to small deformations.

HOG Feature Extraction

HOG features can be extracted from images using the built-in HOG descriptor in OpenCV.

# Ensure you have the opencv package installed
import cv2

# Read the image
image = cv2.imread('path_to_your_image.jpg')
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Create a HOG descriptor (the default detection window is 64x128 pixels)
hog = cv2.HOGDescriptor()

# Resize to the detection window size so compute() returns a single descriptor
gray_image = cv2.resize(gray_image, (64, 128))

# Compute HOG features
hog_features = hog.compute(gray_image)

print("HOG features shape:", hog_features.shape)

In this blog we looked at a variety of feature extraction methods that are crucial for computer vision applications: edge detection (Canny and Laplacian), corner detection (the Harris Corner Detector), advanced feature detectors (SIFT and SURF), and Histograms of Oriented Gradients (HOG). These methods make it possible to detect and describe distinctive structures in images, which in turn enables tasks such as tracking, object detection, and recognition.

Watch this space for our upcoming blog post, where we’ll continue exploring more advanced computer vision topics. Happy coding!
