This is the 3rd post in a series on OpenCV. Previously:
Canny edge detection takes a colour or grayscale input image and returns a black-and-white image showing all the detected edges. The threshold parameters to cv::Canny() control how much edge detection takes place. Here is an example with 3 different thresholds:
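A minimal sketch of running Canny at three different threshold pairs (the file name and threshold values are illustrative, not taken from the example images):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Load the input as grayscale; "input.jpg" is a placeholder name.
    cv::Mat src = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (src.empty())
        return 1;

    cv::Mat low, mid, high;
    cv::Canny(src, low, 30, 90);     // low thresholds: many edges, more noise
    cv::Canny(src, mid, 80, 160);    // moderate thresholds
    cv::Canny(src, high, 150, 300);  // high thresholds: only the strongest edges survive

    cv::imwrite("edges_low.png", low);
    cv::imwrite("edges_mid.png", mid);
    cv::imwrite("edges_high.png", high);
    return 0;
}
```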
A good starting point for the first threshold is the value returned by cv::threshold() when used with the parameter cv::THRESH_OTSU, as described in the previous post. The second threshold is typically several times larger than the first.
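A sketch of that approach, reusing the grayscale src loaded above (the multiplier of 3 is just an illustrative choice):

```cpp
// Otsu's method picks a threshold automatically; cv::threshold()
// returns that value, which we reuse as the lower Canny threshold.
cv::Mat binary;
double otsu = cv::threshold(src, binary, 0, 255,
                            cv::THRESH_BINARY | cv::THRESH_OTSU);

cv::Mat edges;
cv::Canny(src, edges, otsu, otsu * 3);  // upper threshold a few times the lower
```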
Once edge detection has run, the next step is to find the contours. The input image to cv::findContours() is the binary image from Canny edge detection. The main output is not an image, but a vector of contours, each of which is a vector of points. These points can then be easily drawn to create a new image, or even drawn on top of an existing image. An example will help:
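A sketch of the call, using the edges image from the previous snippet (the retrieval mode and approximation method shown here are common choices, not necessarily the only ones):

```cpp
#include <vector>

// Each contour is a std::vector<cv::Point>; findContours fills a
// vector of such contours from the binary edge image.
std::vector<std::vector<cv::Point>> contours;
cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
```

Note that, depending on the OpenCV version, cv::findContours() may modify its input, so pass a clone of the edge image if it is needed again afterwards.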
If 10 different edges/contours were identified, then contours will contain 10 items, each of which is a vector of points. Some contours will be composed of only a dozen points, while others may contain hundreds or thousands.
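To make the structure concrete, a quick loop (continuing from the snippet above) prints how many points make up each contour:

```cpp
#include <cstdio>

std::printf("found %zu contours\n", contours.size());
for (std::size_t i = 0; i < contours.size(); ++i)
    std::printf("contour %zu has %zu points\n", i, contours[i].size());
```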
Many things can be done with contours. You could calculate the area of each individual contour:
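For example, with cv::contourArea(), continuing from the contours found above:

```cpp
for (std::size_t i = 0; i < contours.size(); ++i)
{
    // Area enclosed by the contour, in pixels.
    double area = cv::contourArea(contours[i]);
    std::printf("contour %zu: area %.1f\n", i, area);
}
```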
But possibly more interesting is to draw the contours on top of the original image so we can visually examine exactly what was detected. One of the parameters to cv::polylines() is a vector of points representing the vertices of the polyline to draw. Conveniently, each contour is exactly that -- a vector of points. So starting with the original colour image, we can create a clone (just because I didn't want to make changes to the original image) and draw each contour:
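Something along these lines (the colour image is loaded again here under a placeholder name; the colour and line thickness are arbitrary):

```cpp
// Load the original colour image; "input.jpg" is a placeholder name.
cv::Mat original = cv::imread("input.jpg", cv::IMREAD_COLOR);

// Work on a clone so the original stays untouched.
cv::Mat annotated = original.clone();
for (std::size_t i = 0; i < contours.size(); ++i)
{
    // Draw each contour as a closed green polyline, 2 pixels wide.
    cv::polylines(annotated, contours[i], true, cv::Scalar(0, 255, 0), 2);
}
cv::imwrite("contours.png", annotated);
```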
This should give results that look similar to this:
False positives can be reduced or eliminated using several different methods, for example blurring the input image before edge detection, or cleaning up the binary edge image with morphological operations such as erosion and dilation (see the sketch below).
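A rough sketch of that kind of pre-processing (kernel size, ordering, and thresholds are illustrative; the attached source code may differ):

```cpp
// Smooth the input to suppress fine texture before edge detection.
cv::Mat blurred;
cv::GaussianBlur(src, blurred, cv::Size(5, 5), 0);

cv::Mat edges;
cv::Canny(blurred, edges, otsu, otsu * 3);

// Dilate then erode to join nearby edge fragments and drop small
// specks before running findContours on the cleaned-up edge image.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
cv::dilate(edges, edges, kernel);
cv::erode(edges, edges, kernel);
```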
See the attached source code which uses a combination of Gaussian blur, erosion, and dilation to generate the following boundaries: