Keywords
Optic disc localisation, fundus image, vessel masking, Hough transform
This article is included in the Research Synergy Foundation gateway.
Retinal image analysis can be very helpful in providing insights into patients' ocular health. By analysing the retinal image, ophthalmologists can detect various symptoms of ocular diseases, which may help to ensure timely treatment and thus ultimately decrease the risk of patients going totally blind. Most hospitals are now equipped with modern fundus cameras that image the patient's fundus to produce a retinal image. Figure 1 shows a sample fundus image. Nerve fibres from the retina converge to form a round or oval optic disc (OD), through which the image focused onto the retina is transmitted, in the form of electrical impulses, to the part of the brain responsible for visual function. The central part of the retina, known as the macula, is responsible for an important part of the central vision system, while the fovea is the point in the middle of the macula.
With routine retinal screening in place, a huge number of fundus images will need to be analysed daily. This scenario has resulted in a lot of research being conducted on the automatic analysis of fundus images to assist ophthalmologists in efficiently and accurately performing retinal diagnoses.1,2 These studies aim to extract important parameters from a fundus image, mostly related to the important landmarks, including the OD, retinal blood vessels, fovea, macula, and any associated anomalies.
A topic of interest in fundus image analysis is the automatic localisation of the OD in a fundus image. By detecting the OD, parameters such as its position and radius can be used to estimate other parameters such as vessel width or tortuosity. Normally, when measuring these parameters from a fundus image, only vessels within a certain distance of the OD are considered for parameter calculation.3 OD detection also allows identification of the side of the eye from which the image is taken, whether right or left.
A number of studies have been dedicated to automatically detecting the OD in the fundus image,4–8 while others have also attempted to provide a more accurate boundary of the detected OD.9–12 The methods used include circular transformation,11 directional local contrast,13 probability models,6 automatic thresholding3 and deep learning.12,14
Thresholding works well for locating the OD in fundus images with large intensity differences between the OD region and other parts of the image.15–18 When dealing with images containing an OD with low contrast against the retinal background, or images with pathologies, the thresholding method may fail to detect the OD.19 In active contour-based methods for OD detection, a set of points describing the OD boundary is found by minimising an energy function.8,20–22 While this approach may work well, its performance depends heavily on the initial seed points for the contour model. There is also the risk of being trapped in a local minimum when searching for the OD boundary, especially with images containing pathologies. Extensive reviews of existing OD segmentation methods can be found in the literature.19,23
OD localisation focuses on locating the position of the OD centre on the fundus image, which differs from the aim of the OD segmentation procedure. In OD segmentation, the general aim is to identify every pixel that belongs to the OD on the fundus. In most applications, OD localisation precedes the OD segmentation step; hence an accurate estimate of the OD centre from localisation is important for a successful segmentation procedure. Since the OD is usually a bright disc-shaped area on the fundus image, some researchers have investigated the use of the Hough transform to detect this shape and thus estimate the centre of the OD.3,24–27 Many researchers employ methods to remove vessel structures from the fundus image, or vessel masking, to further highlight the OD structure, such as inpainting28 and median filtering.24 Combining the Hough transform with vessel masking is potentially more efficient for localising the OD in a fundus image than using either method separately.24–28
A method combining existing efficient OD localisation techniques, namely thresholding, vessel masking and the Hough transform, is proposed to localise the position of the OD centre in a fundus image.
The proposed OD localisation method takes a colour fundus image as the input. Firstly, the green channel image is extracted from the colour image as part of pre-processing. Next, the green channel image is padded around the original region of interest (ROI; the circular non-black area) with additional pixels matching the pixel values along the border. This pre-processing step is similar to Soares' proposed method29 for retinal vessel segmentation, except that the number of iterations for ROI padding is increased from 20 to 50. This step further minimises the contrast between the ROI and the background so that the background is not falsely detected as the OD centre in the following step. The pre-processed image is then resized to a standardised smaller size for faster computation and converted to a binary image using Otsu's global thresholding method. The binary image highlights most vessel structures in the pre-processed fundus image as white pixels, while the retinal background is in black pixels. The method is implemented in Matlab and could potentially be translated to Scilab as an open-source alternative.
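The pre-processing chain above (green channel extraction, ROI border padding, downscaling and Otsu thresholding) can be sketched in Python. This is a hypothetical NumPy/SciPy re-implementation, not the authors' Matlab code: the ROI mask threshold and the dilation-based border replication are simplifying assumptions standing in for Soares-style padding.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Global Otsu threshold: pick the level maximising between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    omega = np.cumsum(hist) / hist.sum()                # class probabilities
    mu = np.cumsum(hist * np.arange(256)) / hist.sum()  # cumulative means
    valid = (omega > 0) & (omega < 1)
    sigma_b = np.zeros(256)
    sigma_b[valid] = (mu[-1] * omega[valid] - mu[valid]) ** 2 / (
        omega[valid] * (1 - omega[valid]))
    return int(np.argmax(sigma_b))

def preprocess_fundus(rgb, pad_iters=50, scale=0.25):
    green = rgb[:, :, 1]   # green channel gives the best vessel/background contrast
    roi = green > 10       # rough mask of the non-black circular ROI (assumed cutoff)

    # Border replication: each iteration copies values from the ROI edge one
    # pixel outward, reducing the ROI/background contrast (50 iterations in the paper).
    padded, mask = green.copy(), roi.copy()
    for _ in range(pad_iters):
        grown = ndimage.binary_dilation(mask)
        values = ndimage.grey_dilation(padded, size=(3, 3))
        ring = grown & ~mask
        padded[ring] = values[ring]
        mask = grown

    # Downscale for faster computation, then binarise with Otsu's method;
    # vessels are darker than the retina, so darker pixels become white (True).
    small = ndimage.zoom(padded, scale, order=1)
    binary = small < otsu_threshold(small)
    return small, binary
```

The standardised output size and the ROI cutoff value are illustrative; the paper does not state the exact resized dimensions.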
Next, using the vessel pixel information from the binary image, a discrete cosine transform-based smoothing method is applied to the pre-processed image to replace all vessel pixel values with values close to those of the surrounding neighbours. This vessel masking step effectively removes most of the vessel structures from the image, resulting in a vessel-masked image. The Hough transform is then applied to detect the circle representing the OD on the image. Once the circle has been detected, the OD centre and radius can be estimated and used in the estimation of important retinal parameters such as the cup-to-disc ratio, tortuosity and calibre of the retinal vessels.
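The two steps above can be illustrated with a minimal sketch. Note the substitutions: a grey-scale morphological closing stands in for the paper's DCT-based smoothing (both suppress dark, thin vessel structures), and the circular Hough transform is implemented as a bare voting loop; the edge-map threshold and angle count are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def mask_vessels(gray, size=11):
    """Suppress dark, thin vessel structures. Grey-scale closing is a
    simplified stand-in for the paper's DCT-based smoothing of vessel pixels."""
    return ndimage.grey_closing(gray, size=(size, size))

def hough_circle(image, radii, n_angles=60):
    """Minimal circular Hough transform: every edge pixel votes for all
    candidate centres lying at distance r from it; the strongest
    accumulator peak gives the centre and radius estimate."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    edges = mag > 0.5 * mag.max()        # crude edge map for the sketch
    ys, xs = np.nonzero(edges)
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    h, w = image.shape
    best = (0, 0, 0, -1.0)               # (row, col, radius, votes)
    for r in radii:
        cy = np.rint(ys[:, None] + r * np.sin(theta)).astype(int).ravel()
        cx = np.rint(xs[:, None] + r * np.cos(theta)).astype(int).ravel()
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        acc = np.zeros((h, w))
        np.add.at(acc, (cy[ok], cx[ok]), 1)          # accumulate votes
        acc = ndimage.gaussian_filter(acc, 2)        # merge nearby votes
        i = int(np.argmax(acc))
        if acc.ravel()[i] > best[3]:
            best = (i // w, i % w, r, acc.ravel()[i])
    return best[:3]                      # estimated centre row, col and radius
```

In practice the candidate radius range would be bounded by the expected OD size at the standardised image resolution, which keeps the voting loop fast.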
Figure 2 depicts all steps involved in OD localisation from a fundus image and their corresponding sample output images.
For validation, it is not necessary for the fundus images to have ground-truth vessel segmentation images. In a number of previous studies on OD localisation, a database called Methods to Evaluate Segmentation and Indexing Techniques in the field of Retinal Ophthalmology (MESSIDOR) is used for validation.9,11,24,30,31 The MESSIDOR database32 consists of 1200 fundus images captured using a Topcon TRC NW6 non-mydriatic fundus camera with a 45-degree field of view (FOV). In this study, the OD's centre position and radius are estimated on all 40 images from Digital Retinal Images for Vessel Extraction (DRIVE),33 45 images from High-Resolution Fundus (HRF),34 and 1200 images from MESSIDOR, which are all publicly available fundus image databases. The images from another popular benchmark fundus image database, the STructured Analysis of the Retina (STARE) database, are excluded from this evaluation since most of its images do not contain the OD; even for those with the OD in the ROI, the OD is only partially visible.
Table 1 shows sample output images for the main steps in the proposed OD localisation method for the DRIVE, HRF and HUKM databases. The OD-localised image output includes a “+” sign to indicate the estimated OD centre and the green circle denotes the estimated OD radius. Figure 3 shows zoomed-in images of the OD localisation output from HRF images. It can be seen that the proposed method managed to accurately detect the centre and the radius of the OD, regardless of whether the fundus image contains a clean (normal) or noisy (with pathologies) retinal background.
Following previous researchers' method of assessing OD localisation performance, a method is considered to have localised the OD successfully when the estimated OD centre lies within the circumference of the OD itself.31 The proposed OD localisation method achieves a 100% correct detection rate for all images in DRIVE and HRF. For the larger MESSIDOR database, only six out of 1200 images result in either wrong detection or non-detection of the OD, hence a 99.50% success rate. These results are summarised in Table 2 below.
Database | Number of images | Correct output | False output | Detection rate (%)
---|---|---|---|---
DRIVE | 40 | 40 | 0 | 100
HRF | 45 | 45 | 0 | 100
MESSIDOR | 1200 | 1194 | 6 | 99.50
All | 1285 | 1279 | 6 | 99.53
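The success criterion described above reduces to a simple distance check. The sketch below is illustrative; the coordinates are made up, not taken from any of the databases.

```python
import math

def localisation_success(est_centre, true_centre, true_radius):
    """Correct localisation: the estimated OD centre falls within the
    circumference of the ground-truth OD (the criterion of ref. 31)."""
    dx = est_centre[0] - true_centre[0]
    dy = est_centre[1] - true_centre[1]
    return math.hypot(dx, dy) <= true_radius

# Detection rate over a set of images (illustrative coordinates only):
results = [((102, 98), (100, 100), 40),   # inside the OD  -> correct
           ((300, 40), (120, 110), 40)]   # far away       -> false output
rate = 100.0 * sum(localisation_success(c, t, r) for c, t, r in results) / len(results)
```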
Table 3 shows the performance comparison of the proposed method against published methods in the literature. With an overall detection rate of 99.53% across the validated databases, the proposed method outperforms all considered methods except that of Yu et al., which achieved a 99.67% detection rate on MESSIDOR. The processing time for OD localisation is less than one second for every image tested, regardless of the original image resolution. This short processing time is achieved because the proposed method resizes the images; doing so does not compromise the detection rate. The method is therefore both efficient and accurate enough for practical OD localisation in clinical settings.
Author | Database | Detection rate (%)
---|---|---
Proposed | DRIVE | 100
Proposed | HRF | 100
Proposed | MESSIDOR | 99.50
Aquino et al.9 | MESSIDOR | 99.00
Lu11 | MESSIDOR | 98.77
Yu et al.31 | MESSIDOR | 99.67
Salih et al.30 | DRIVE | 100
Salih et al.30 | MESSIDOR | 98.91
Gui et al.5 | DRIVE | 100
Gui et al.5 | MESSIDOR | 99.25
Dietter et al.4 | DRIVE | 100
Dietter et al.4 | HRF | 100
Dietter et al.4 | MESSIDOR | 98.91
In this paper, we have proposed an efficient method for localising the OD in a fundus image. The method uses vessel masking to remove vessel structures from the image and the Hough transform to locate the circular object on the vessel-masked image, which is the OD. The output takes the form of the coordinates of the OD centre together with the estimated radius of the OD, which can also be visualised on the fundus image. Validation of the proposed method on three different public databases, namely DRIVE, HRF and MESSIDOR, resulted in an overall detection rate of 99.53%. The achieved performance is superior to many published methods, with a much-reduced processing time of less than one second per image. The proposed method has only been validated on one type of retinal image, namely fundus images produced by a fundus camera. In the future, retinal images from other imaging modalities, such as angiography or scanning laser ophthalmoscopy, could further validate the proposed OD localisation. Another interesting direction for future research is accurate segmentation of the OD boundary for more accurate parameter estimation. The output of the method may prove useful for diagnosing ocular diseases related to parameters such as the cup-to-disc ratio and vessel width. Automating the OD localisation step can help in developing a fully automated computer-assisted retinal diagnosis system in the future.
The DRIVE database can be accessed at https://drive.grand-challenge.org/, HRF at https://www5.cs.fau.de/research/data/fundus-images/ and MESSIDOR at https://www.adcis.net/en/third-party/messidor/.
We would like to thank our collaborators from the Department of Ophthalmology, Universiti Kebangsaan Malaysia Medical Center, especially Dr Wan Haslina and her team for their valuable inputs for this study.
Is the rationale for developing the new method (or application) clearly explained?
Yes
Is the description of the method technically sound?
Yes
Are sufficient details provided to allow replication of the method development and its use by others?
Partly
If any results are presented, are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions about the method and its performance adequately supported by the findings presented in the article?
Yes
References
1. Zaaboub N, Sandid F, Douik A, Solaiman B: Optic disc detection and segmentation using saliency mask in retinal fundus images. Comput Biol Med. 2022; 150: 106067. PubMed Abstract | Publisher Full Text

Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Human-Computer Interaction, Health Informatics, Artificial Intelligence
Is the rationale for developing the new method (or application) clearly explained?
Partly
Is the description of the method technically sound?
No
Are sufficient details provided to allow replication of the method development and its use by others?
Partly
If any results are presented, are all the source data underlying the results available to ensure full reproducibility?
No source data required
Are the conclusions about the method and its performance adequately supported by the findings presented in the article?
Partly
References
1. Pachade S, Porwal P, Thulkar D, Kokare M, et al.: Retinal Fundus Multi-Disease Image Dataset (RFMiD). IEEE Dataport. 2020. Publisher Full Text

Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Retinal Vessel segmentation, Biomedical Image Processing
Version 1 published 14 Feb 2022.