Keywords
Image Segmentation. Clustering. Edge detection. Colour frequencies. Texture.
Image segmentation is a fundamental step in computer vision for object recognition and classification. Although many techniques and algorithms have been proposed, image segmentation remains one of the most challenging research topics because none of them provides a coherent framework for quick and efficient segmentation of all images.1 Two explanations can be given for the complexity of image segmentation. The first is that image segmentation has many solutions: for a single image, there are many equally plausible segmentations. The second is the presence of noise, cluttered background, low signal-to-noise ratio, and non-uniform intensity.2 It is therefore difficult to recommend a single image segmentation method. We can distinguish between two concepts in image segmentation: region-based and contour-based techniques.
Region-based approaches partition the image into different homogeneous regions based on similarities in color, location, and texture.
Contour-based techniques start with an edge detection step, followed by linking the detected edges to form segments.
In this paper, we combine both approaches. We start with the Canny edge detector and form initial regions accordingly. Those regions are then optimized and merged according to similarities in color, location, and texture.
Over recent years, several techniques have been developed to segment images. Wavelet-based segmentation can be found in Ref. 3. Unsupervised image segmentation4 is performed using k-means clustering, which clusters (segments) the image into different homogeneous regions. In Ref. 5, graph theory was employed using greedy decisions. Segmentation using texture is shown in Sagiv et al.6 Shi et al.7 used smoothness and boundary continuity. Ren and Malik8 used contours and textures. In Refs. 9 and 10, the concept of superpixels was used, whereby the redundancy of the image can be greatly decreased. Superpixel methods11,12 have been researched intensively using NCut, mean shift, and graph-based methods. A genetic algorithm was also employed in Ref. 13. Edge detection techniques in image segmentation are shown in Ref. 14.
Image segmentation is the process of dividing an image into multiple partitions. It is typically used to locate objects and change the representation of the image into something more meaningful. It is also used in multiple domains such as medical imaging, object detection, face recognition, and machine vision.
Image segmentation consists of assigning a label to every pixel in an image. Pixels with different labels have different characteristics, while pixels sharing a label share certain characteristics, such as color, intensity, or texture. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image.
Different image segmentation techniques exist like threshold-based, region growth, edge detection, and clustering methods.1
Threshold segmentation15 is one of the most common segmentation techniques. It splits the picture into two or more regions using one or more thresholds. The most commonly used threshold segmentation algorithm is the Otsu method, which selects the optimum threshold by maximizing the between-class variance. Its downside is that it is difficult to get correct results when there is no noticeable grayscale variation, or when grayscale values overlap, in the image.2 Since thresholding uses only the gray-level information of the image, without taking its spatial information into consideration, it is vulnerable to noise and grayscale unevenness; for that reason it is frequently combined with other methods.
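The Otsu criterion described above can be sketched as follows; this is a minimal NumPy implementation that exhaustively searches for the threshold maximizing the between-class variance (the toy bimodal image is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes the between-class variance
    of a uint8 grayscale image (equivalently, minimizes the
    within-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: a dark region at 50 and a bright region at 200.
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
```

On this toy image any threshold between the two modes separates the classes, so `t` falls between 50 and 200.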
The region growing approach16 is a traditional serial segmentation algorithm; its basic concept is to group pixels with similar properties into a region. An arbitrary seed pixel is chosen and compared with neighboring pixels. The region is grown from the seed pixel by adding neighboring pixels that are similar, increasing the size of the region. When the expansion of one region stops, another seed pixel that does not yet belong to any region is chosen and the process is repeated.
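A minimal sketch of this seed-based growth, assuming grayscale input, 4-connectivity, and an illustrative similarity tolerance on intensity:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` by repeatedly adding 4-connected
    neighbors whose intensity differs from the seed by at most `tol`."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Toy image: a bright 3x3 square on a dark background.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 1:4] = 200
region = region_grow(img, (2, 2))   # grows to exactly the 3x3 square
```

In a full segmentation, this loop would restart from a new seed outside all grown regions until every pixel is assigned.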
Edge detection17 is used to find the boundaries of objects in an image by detecting discontinuities in brightness. The most common edge detection technique is the Canny edge detector, which can be described by the following five steps.
1. Smooth the image with a Gaussian filter.
2. Compute the gradient magnitude and gradient angle of the image.
3. Apply non-maximum suppression.
4. Apply double thresholding.
5. Suppress weak edges using hysteresis.
Finally, the image is segmented, and edges are drawn at the boundaries of each object.
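The pipeline above can be sketched with SciPy. This is a simplified, illustrative version: it performs the Gaussian smoothing, gradient computation, double thresholding, and hysteresis steps, but omits non-maximum suppression for brevity (so edges come out thicker than true Canny output); the thresholds are assumptions:

```python
import numpy as np
from scipy import ndimage

def canny_like(img, low=20, high=60, sigma=1.0):
    """Simplified Canny-style edge detection (NMS omitted)."""
    smooth = ndimage.gaussian_filter(img.astype(float), sigma)  # step 1
    gx = ndimage.sobel(smooth, axis=1)                          # step 2
    gy = ndimage.sobel(smooth, axis=0)
    mag = np.hypot(gx, gy)
    strong = mag >= high                                        # step 4
    weak = (mag >= low) & ~strong
    # step 5 (hysteresis): keep weak pixels only when they belong to a
    # connected component that contains at least one strong pixel.
    labels, _ = ndimage.label(strong | weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])

# Toy image with a vertical step edge between columns 4 and 5.
img = np.zeros((10, 10))
img[:, 5:] = 255.0
edges = canny_like(img)   # edge pixels appear near the step only
```

Production code would normally call a library implementation (e.g. OpenCV's `cv2.Canny`) rather than re-deriving the steps.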
Clustering18 is the task of dividing the data points into several groups such that data points within the same group are similar to one another and dissimilar to the data points in other groups. A common clustering algorithm is Fuzzy C-means (FCM).
Fuzzy C-means (FCM) is a clustering method that permits one piece of data to belong to two or more clusters. Based on the distance between the cluster center and the data point, the algorithm determines each data point's membership degree with respect to each cluster center. The algorithm alternates between updating the membership degrees and recomputing the cluster centers until the memberships converge.
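A minimal NumPy sketch of the standard FCM update rules (fuzzifier m, alternating center and membership updates); the toy 1-D data and parameter values are illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Fuzzy C-means on data X of shape (n, d) with c clusters.
    Returns the membership matrix U (n, c) and centers V (c, d)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        # center update: fuzzy-weighted mean of the data
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # guard against division by zero
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated 1-D blobs; hard labels via the maximum membership.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
U, V = fcm(X, c=2)
labels = U.argmax(axis=1)
```

On this data the two recovered centers land near 0.1 and 5.1, and the hard labels split the points into the two blobs.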
The connected component algorithm19 scans an image and groups the pixels into components based on pixel connectivity, i.e. all pixels in a connected component share the same set of intensity values V and are in some way connected to one another. Each pixel is labelled with a gray level or a color (color marking) according to the component to which it has been allocated, until all components have been determined. Connected component labeling works by scanning the image pixel by pixel (from top to bottom and from left to right) to identify connected pixel regions, i.e. regions of adjacent pixels that share the same set of intensity values V. A pixel p is labelled as follows:
• If all four neighbors are zero, give p a new label; otherwise
• If only one neighbor has V = 1, give its label to p; otherwise
• If more than one neighbor has V = 1, give one of the labels to p and note the equivalences.
After this scan, the equivalent label pairs are sorted into equivalence classes and a unique label is assigned to each class. As a final stage, a second scan is performed through the image, during which each label is replaced by the label assigned to its equivalence class.
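The two-pass scheme above can be sketched as follows, using a union-find structure to record the label equivalences (4-connectivity assumed, as in the neighbor rules above):

```python
import numpy as np

def two_pass_label(binary):
    """Two-pass connected-component labeling with 4-connectivity:
    pass 1 assigns provisional labels and notes equivalences,
    pass 2 replaces each label with its equivalence-class representative."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}                       # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    next_label = 1
    for r in range(h):
        for c in range(w):
            if not binary[r, c]:
                continue
            left = labels[r, c - 1] if c > 0 else 0
            up = labels[r - 1, c] if r > 0 else 0
            neighbors = [l for l in (left, up) if l > 0]
            if not neighbors:                 # no labelled neighbor
                labels[r, c] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[r, c] = min(neighbors)
                if len(neighbors) == 2:       # note the equivalence
                    ra, rb = find(neighbors[0]), find(neighbors[1])
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    for r in range(h):                        # second pass
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
lab = two_pass_label(img)   # two components: the L-shape and the column
```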
The objective of texture filters20 is to separate the regions of an image based on their texture content. While smooth regions are characterized by a small range of values in the neighborhood around a pixel, rough texture regions are characterized by a large range of values. Gabor wavelets are band-pass filters that extract important local features of the image. The image is convolved with the filters in order to obtain texture frequency and orientation. We used the outputs of Gabor filters with 8 orientations and 5 wavelengths.
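A sketch of such a filter bank: the kernel below is the standard real Gabor function (a sinusoid under a Gaussian envelope), built with 8 orientations and 5 wavelengths as in the paper; the kernel size, sigma, and the particular wavelength values are illustrative assumptions, and the per-filter mean absolute response is one simple way to form a texture feature vector:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real Gabor kernel: a sinusoid of wavelength `lambd` at orientation
    `theta`, windowed by a Gaussian envelope (band-pass behavior)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lambd + psi))

# Bank of 8 orientations x 5 wavelengths (40 filters).
bank = [gabor_kernel(21, sigma=4.0, theta=np.pi * o / 8, lambd=l)
        for o in range(8) for l in (4, 6, 8, 10, 12)]

def texture_features(img, bank):
    """Texture descriptor: mean absolute filter response per filter."""
    return np.array([np.abs(fftconvolve(img, k, mode='same')).mean()
                     for k in bank])

img = np.zeros((16, 16))
img[:, ::4] = 1.0                     # vertical stripes
feats = texture_features(img, bank)   # 40-dimensional feature vector
```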
The proposed approach is based on obtaining an optimum number of clusters and regions of an image obtained from the Berkeley segmentation dataset. This is done using the following three consecutive steps:
I. Obtaining a good initial set of centers:
• Apply edge detection. This is done using the Canny edge detector.
• Apply the connected component algorithm on the binary image obtained.
• Using the labeled image, find the properties of each region.
• Join similar regions and keep the unique ones.
• Finally, find the center of each region.
Figure 1 illustrates the procedures of step I.
II. Reducing the number of centers
This is done using texture filters as follows:
• Get the feature vectors of each center using Gabor filters.
• Merge the centers according to their Euclidean distances and the results obtained from the Gabor filters.
The Euclidean distance between two centers is given by:

d = √((Xcenter1 − Xcenter2)² + (Ycenter1 − Ycenter2)²)

where (Xcenter1, Ycenter1) are the xy coordinates of the first center and (Xcenter2, Ycenter2) are the xy coordinates of the second center.
• If two centers are close to each other and belong approximately to the same texture, merge them.
The results are shown in Figure 2.
Figure 2 shows that the number of centers was reduced from 246 to 97.
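The merging rule of step II can be sketched as a greedy pass over the centers; the distance and feature thresholds here are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def merge_centers(centers, features, dist_thresh=20.0, feat_thresh=0.1):
    """Greedily merge centers that are both spatially close (Euclidean
    distance below dist_thresh) and texturally similar (Gabor feature
    distance below feat_thresh)."""
    centers = [np.asarray(c, float) for c in centers]
    features = [np.asarray(f, float) for f in features]
    merged_c, merged_f = [], []
    for c, f in zip(centers, features):
        for i, (mc, mf) in enumerate(zip(merged_c, merged_f)):
            if (np.linalg.norm(c - mc) < dist_thresh
                    and np.linalg.norm(f - mf) < feat_thresh):
                # absorb into the existing center: average position/features
                merged_c[i] = (mc + c) / 2
                merged_f[i] = (mf + f) / 2
                break
        else:
            merged_c.append(c)       # no close, similar center: keep it
            merged_f.append(f)
    return merged_c, merged_f

centers = [(10, 10), (12, 11), (100, 100)]
feats = [(0.50,), (0.52,), (0.50,)]
mc, mf = merge_centers(centers, feats)   # first two centers merge
```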
III. Apply the FCM clustering algorithm:
It should be noted that FCM clustering requires the number of clusters to be specified. Since in color image segmentation the similarity used by the FCM is based on the Euclidean distance between RGB pixels, the number of clusters is obtained using color frequencies. The color frequencies21 index is computed in three steps:
1. All the color frequencies of the image are computed and added to an array.
2. The duplicates in the array are removed and only unique frequencies are kept.
3. Finally, only the main colors are kept: for example, if there are multiple shades of a color, only the main color is kept and the others are removed.
The color frequencies index is equal to the size of the array and is given as an input to the FCM function. After this step is applied the number of RGB centroids is reduced from 97 centroids to only 13 (Figure 3). Then the RGB distance is computed between each pixel and the center to determine its corresponding label.
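One simple way to realize the three steps above is to quantize each RGB channel so that multiple shades collapse into one main color, then count the unique colors that remain; the bin size used here is an illustrative assumption:

```python
import numpy as np

def color_frequency_index(img, quant=64):
    """Estimate the number of main colors in an RGB image: quantize each
    channel into bins of size `quant` (merging shades of one color into a
    single main color), then count the unique quantized colors left."""
    reduced = (img // quant).reshape(-1, img.shape[-1])
    return len(np.unique(reduced, axis=0))

# Toy image: two shades of red and two shades of blue -> 2 main colors.
img = np.array([[[250, 0, 0], [200, 10, 5]],
                [[0, 0, 255], [0, 0, 250]]], dtype=np.uint8)
k = color_frequency_index(img)   # would be passed to FCM as the cluster count
```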
Our algorithm is summarized in Figure 4.
Figure 5 shows 3 images and their edge images. Figure 6 shows the edge images and their corresponding initial sets of centers. The optimum number of cluster centers is shown in Figure 7. The final segmented images are shown in Figure 8.
To evaluate this work, the BSDS500 database22 is chosen. It is used for most segmentation techniques. It consists of 500 images of outdoor scenes, landscapes, buildings, animals, and humans. Figure 9 shows sample images from the database.
The following segmentation metrics23 are used to show the effectiveness of our novel approach: accuracy, F-measure, precision, MCC, dice, Jaccard, specificity. Those metrics are computed by comparing the result segmented image with the ground truth of the original image.
Here, TP is the number of true positives, TN the true negatives, FN the false negatives, and FP the false positives.
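The metrics listed above all follow from the confusion-matrix counts; a minimal sketch of their standard definitions:

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics for binary segmentation."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                     # sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "specificity": tn / (tn + fp),
        "f_measure": 2 * precision * recall / (precision + recall),
        "dice": 2 * tp / (2 * tp + fp + fn),    # equals F-measure here
        "jaccard": tp / (tp + fp + fn),
        "mcc": (tp * tn - fp * fn)
               / (((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5),
    }

# Example counts from comparing a segmentation against its ground truth.
m = segmentation_metrics(tp=50, tn=40, fp=5, fn=5)
```

For binary masks, Dice and F-measure coincide, and Jaccard can be recovered from Dice via J = D / (2 − D).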
In this section, the results of the proposed approach are compared with different methods on the same database and using the same classification metrics. For the K-means and the SLIC we experimented with different values of K and chose the value that gave good segmentation results: K = 10 for the K-means and K = 100 for the SLIC.
Graphical Illustration
The following figures illustrate the segmentation results of the K-means, SLIC, and our algorithm. Figure 10 shows the results obtained by the K-means, the SLIC, and our algorithm; the figure shows the superior performance of our approach.
Results of the K-means, the SLIC, and the proposed approach in second, third, and fourth columns respectively.
Comparisons based on the Segmentation metrics
Table 1 shows the segmentation metrics of our algorithm compared to the K-means, the SLIC, and the CAS24 algorithms. The images of the BSDS500 are used and the average segmentation metrics are shown in the table. Table 1 shows the accurate segmentation results of our algorithm compared to the others. It should be noted that our algorithm does not require the number of centers to be specified a priori.
To show the effectiveness of the proposed method, we followed the experiments done in Ref. 3 using two images: the Lena and Cameraman images (Figure 11). We used the SNR and the PSNR as verification indices. Table 2 shows the results obtained; it clearly shows that our approach outperforms the others.
Methods | SNR | PSNR
---|---|---
*Lena* | |
MVSM | 46.0638 | 3.5854
BSM | 45.6782 | 3.1999
VTSM | 46.1026 | 3.6242
WSM | 48.1855 | 5.7071
Ours | 50.89 | 7.78
*Cameraman* | |
MVSM | 45.6261 | 3.0267
BSM | 47.4184 | 4.8190
VTSM | 45.6929 | 3.0935
WSM | 48.1859 | 5.5865
Ours | 50.76 | 7.88
Higher SNR and PSNR imply better segmentation results. For the Lena image, our algorithm gave an SNR of 50.89 and a PSNR of 7.78, both higher than those of the other four algorithms.
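SNR and PSNR definitions vary between papers; as an illustration only, the common decibel forms (signal power over mean squared error, and peak value over mean squared error) can be computed as:

```python
import numpy as np

def snr_psnr(ref, test, peak=255.0):
    """SNR and PSNR in dB between a reference image and a result image,
    using the common MSE-based definitions (conventions differ by paper)."""
    ref = ref.astype(float)
    test = test.astype(float)
    mse = np.mean((ref - test) ** 2)            # assumes ref != test
    snr = 10 * np.log10(np.mean(ref ** 2) / mse)
    psnr = 10 * np.log10(peak ** 2 / mse)
    return snr, psnr

# Toy example: a flat image with a constant offset of 5 gray levels.
ref = np.full((8, 8), 100.0)
noisy = ref + 5.0
snr, psnr = snr_psnr(ref, noisy)
```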
Image segmentation has become an important topic in many fields such as medical imaging, machine vision, and object detection. In this work, a new approach is proposed to improve the accuracy and performance of image segmentation. We combined region-based and contour-based segmentation. Edge detection, color frequencies, and texture measures were used in developing the new algorithm. We started with the Canny edge detector and formed initial regions accordingly. Those regions were optimized and merged according to similarities in color, location, and texture, yielding an optimum number of clusters and regions per image. To show the effectiveness of this work, the BSDS500 database was chosen and different segmentation and clustering measures were used. The results show the improved performance of the proposed technique compared to wavelet-based and other techniques.
All images used in this article were sourced from The Berkeley Segmentation Dataset and Benchmark (BSDS300): https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html#algorithms 22
Zenodo. COMBINING CONTOUR-BASED AND REGION-BASED IN IMAGE SEGMENTATION. https://doi.org/10.5281/zenodo.8319898. 25
This project contains the following extended data:
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).