Originally developed by Intel, OpenCV is an open-source, cross-platform computer vision library for real-time image processing that has become a standard tool for computer vision applications. The first version of OpenCV was released in 2000; since then, its functionality has been considerably enriched and simplified by the scientific community. In 2012, the nonprofit foundation OpenCV.org took over maintaining a support site for developers and users.
It is available on most operating systems, including Linux, Windows, Android and iOS. The first implementation was in the C programming language. The library now has full interfaces for other languages such as Python, Java and MATLAB/Octave, and wrappers for further languages have been developed to encourage adoption by programmers.
OpenCV covers areas such as segmentation and recognition, object identification, facial recognition, motion tracking, gesture recognition, image stitching, high-dynamic-range (HDR) imaging, augmented reality, and more. It also ships with statistical machine learning functions. In this article, we are going to look at basic image processing with OpenCV and implement several filters using the Python programming language.
Implementing various filters using OpenCV
Import all dependencies:
To get started with OpenCV, install it using the following command. Run it either in a terminal window or in a Jupyter notebook cell prefixed with an exclamation point.
!pip install opencv-python

import cv2
import numpy as np
import matplotlib.pyplot as plt
We define a small helper function built on matplotlib to compare images side by side.
def compare_image(image1, image2):
    plt.figure(figsize=(9,9))
    plt.subplot(1,2,1)
    plt.imshow(image1)
    plt.title('Original')
    plt.axis('off')
    plt.subplot(1,2,2)
    plt.imshow(image2)
    plt.title('Modified')
    plt.axis('off')
    plt.tight_layout()
Load the image:
By default, OpenCV reads images in BGR format; to see the image in its original RGB format, we need to convert it.
img = cv2.imread('/content/original.jpg')
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
compare_image(img2, img)
cv2.filter2D() performs a convolution of a 2D image with a kernel (an N x M matrix). For example, suppose you define a 10 x 10 kernel of ones divided by 100: all the pixels falling under the window are summed and the result is divided by 100, which is nothing more than the average of those pixels. The same operation is then repeated for every pixel of the image.
To learn more about the convolution operation, read this article.
Let’s implement the convolutional filter above.
kernel = np.ones((10,10), np.float32) / 100
cnv = cv2.filter2D(img2, -1, kernel)
compare_image(img2, cnv)
Smoothing an image is obtained by convolving it with different filters. Smoothing is useful for removing high-frequency content such as noise from the image, although it also blurs edges when these filters are applied. OpenCV provides four main smoothing filters: averaging, Gaussian blurring, median filtering, and bilateral filtering.
- Average filtering:

Averaging is done by simply convolving the image with a normalized box filter. The filter takes the average of all the pixels under the kernel window and replaces the central element with that average. This is done as follows:
## Average Filtering
blur = cv2.blur(img2, (10,10))
compare_image(img2, blur)
- Median filtering:
cv2.medianBlur() calculates the median of all pixels under the kernel window and replaces the central value with that median. This filter is widely used to remove noise from an image. To demonstrate how it works, we first add noise to the image using the skimage library and then apply the median filter to the noisy image.
from skimage.util import random_noise

## adding noise
noise_img = random_noise(img2, mode="s&p", amount=0.3)
noise_img = np.array(255 * noise_img, dtype="uint8")

## median filter
median = cv2.medianBlur(noise_img, 5)
compare_image(noise_img, median)
- Bilateral filtering:
The previous filters blur everything, edges included; the bilateral filter blurs the image while preserving the edges between objects. While blurring, the bilateral filter takes into account not only how close neighbouring pixels are but also how close their intensity values are, so it can tell whether a pixel lies on an edge or not; this extra check makes the filter somewhat slow.
In the example below, we can see how the filter has blurred the image while preserving the edges of the cellphones.
## bilateral filtering
img = cv2.imread('/content/history_of_mobile_phones.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
blur = cv2.bilateralFilter(img, 20, 200, 300)
compare_image(img, blur)
Canny edge detection:
The Canny edge detector is a multi-stage algorithm that detects a wide range of edges in an image. The stages are: apply a Gaussian filter to smooth the image, find the intensity gradient of the image, apply gradient-magnitude thresholding to get rid of spurious responses, apply a double threshold to determine potential edges, and track the edges by hysteresis.
cv2.Canny() is used to detect edges:
## edge detection
park = cv2.imread('/content/new-zealand-parks.jpg')
park = cv2.cvtColor(park, cv2.COLOR_BGR2RGB)
edge = cv2.Canny(park, 100, 200)
compare_image(park, edge)
A contour is the curve joining all the continuous points along a boundary that share the same color or intensity. Contours are a very useful tool for object detection and shape analysis.
Letâs find the outlines of our cell phones;
from google.colab.patches import cv2_imshow

## Contours
im = cv2.imread('/content/history_of_mobile_phones.jpg', cv2.IMREAD_COLOR)
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 127, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0,255,0), 3)
cv2_imshow(im)
As you can see, contours are drawn for the shadows of the phones as well; the method picks up even small pixel changes.
Morphological filters are simple operations based on the shape of the image. These filters need two inputs: the image and a kernel, which decides the nature of the operation.
It’s like soil erosion; it erodes the limit, it warns against the limits of foreground objects, ie tries to keep the foreground white. Thus, the operation follows when the kernel slides over the image, and a pixel of the image is only considered as one if all the pixels under the kernel are equal to 1; otherwise, it is eroded.
# erosion
img = cv2.imread('/content/kama_ingredients_updated_600x400_0071_banyan_leaf.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
kernel = np.ones((5,5), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)
compare_image(img, erosion)
After performing the erosion, we can clearly see all the veins of the leaf.
- Dilation:

Dilation is just the opposite of erosion: here a pixel is set to 1 if at least one pixel under the kernel is 1, so it increases the white region in the image. Dilation is also useful for joining broken parts of an object.
# Dilation
dia = cv2.dilate(img, kernel, iterations=1)
compare_image(img, dia)
- Morphological gradient:
This is the difference between the dilation and the erosion of an image; the result looks like the outline of the object.
# gradient
grad = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
compare_image(img, grad)
In this article, we saw how to load an image (by default, OpenCV reads it in BGR format) and explored several of OpenCV's filters: basic convolution, image smoothing, edge detection, contour detection, and morphological transformations. These filters are powerful building blocks for real-world applications; the erosion filter, for instance, can be used to visualize an object in more detail.