Encoding Retina Image to Words using Ensemble of Vision Transformers for Diabetic Retinopathy Grading [version 1; peer review: 1 not approved]

Diabetes is one of the top ten causes of death among adults worldwide. People with diabetes are prone to eye diseases such as diabetic retinopathy (DR), which damages the blood vessels in the retina and can result in vision loss. DR grading is an essential step for early diagnosis and effective treatment, and for slowing its progression toward vision impairment. Existing automatic solutions are mostly based on traditional image processing and machine learning techniques, leaving a considerable gap in more generic detection and grading of DR. Various deep learning models, such as convolutional neural networks (CNNs), have previously been utilized for this purpose. To enhance DR grading, this paper proposes a novel solution based on an ensemble of state-of-the-art deep learning models called vision transformers. A challenging public DR dataset proposed in a 2015 Kaggle challenge was used for training and evaluation of the proposed method. This dataset is highly imbalanced and covers five levels of severity: No DR, Mild, Moderate, Severe, and Proliferative DR. The experiments conducted showed that the proposed solution outperforms existing methods in terms of precision (47%), recall (45%), F1 score (42%), and Quadratic Weighted Kappa (QWK) (60.2%), while running with a low inference time (1.12 seconds). The proposed solution can therefore help examiners grade DR more accurately than manual means.


Background
Diabetes mellitus (DM) is a group of metabolic disorders characterized by high levels of blood glucose, caused by deficient secretion of the hormone insulin, its inaction, or both. Chronically high levels of glucose in the blood that come with DM may bring about long-term damage to several different organs, such as the eyes. 1,2 DM is a pandemic of great concern [3][4][5][6] as approximately 463 million adults were living with DM in 2019. This number is expected to rise to about 700 million by the year 2045. [4][8][9][10][11] DR is the leading cause of blindness among adults of working age 12 and has brought about several personal and socioeconomic consequences, 13 as well as a greater risk of developing other complications of DM and of dying. 14 According to a meta-analysis that reviewed 35 studies worldwide from 1980 to 2008, 34.6% of all patients with DM globally have DR of some form, while 10.2% of all patients with DM have vision-threatening DR. 15 A study found that screening for DR and early treatment could lower the risk of vision loss by about 56%, 16 proving that blindness due to DR is highly preventable. Moreover, the World Health Organization (WHO) Universal Eye Health: A Global Action Plan 2014-2019 advocated for efforts to reduce the prevalence of preventable visual impairments and blindness, including those that arise as complications of DM.
Many tests can be used for the screening of DR. While sensitivity and specificity are certainly important, the reported performance data for DR screening tests vary. Researchers employ different outcomes to measure sensitivity, e.g., the ability of a screening test to detect any form of retinopathy versus the ability to detect vision-threatening DR. Additionally, some tests may detect diabetic macular edema better than the different grades of DR (World Health Organization. Diabetic retinopathy screening: a short guide. Copenhagen: WHO Regional Office for Europe). The examiner's skill is also a source of variation in the test results. A systematic review found that the sensitivity of direct ophthalmoscopy (DO) varies greatly when performed by general practitioners (25%-66%) and by ophthalmologists (43%-79%). 17 DR grading is an essential step in the early diagnosis and effective treatment of the disease. Manual grading is based on high-resolution retinal images examined by a clinician. However, the process is time-consuming and prone to misdiagnosis. This paper aims to address the matter by developing a fast and accurate automated DR grading system. Here, a novel solution based on an ensemble of vision transformers was proposed to enhance grading. Moreover, a public DR dataset proposed in a 2015 Kaggle challenge was used for training and evaluation.

Related work
Traditional machine learning (ML) methods have been used to detect DR. Typically, these ML methods require hand-tuned features extracted from small datasets to aid in classification. These traditional methods may involve ensemble learning 18 ; the calculation of the mean, standard deviation, and edge strength 19 ; and the segmentation of hard macular exudates. 20,21 However, these methods require tedious and time-consuming feature engineering steps that are sensitive to the chosen set of features. Work that employs traditional ML methods to detect DR usually yields favorable results on one dataset but fails to achieve similar success when another dataset is used. 18,19 This is a common limitation of handcrafted features.
Deep neural networks, such as CNNs, trained on much larger datasets have also been used for classification tasks in the diagnosis and grading of DR. These methods include CNNs developed from scratch to grade the disease using images of the retinal fundus 22 ; transfer learning based on the Inception-v3 neural network to perform multiple binary classifications (moderate or worse DR, and severe or worse DR) 23 ; and segmentation prior to detection by pixel classification 24 or patch classification. 25 A deep learning (DL)-based framework that uses advanced image processing and a boosting algorithm for grading of DR was also proposed by. 26 This is one of only a handful of works that have effectively employed transfer learning to train large neural networks for this purpose. Recently, ResNet, a deep CNN, was proposed to address the problem brought about by imbalanced datasets in DR grading. 27 Additionally, a bagging ensemble of three CNNs (a shallow CNN, VGG16, and InceptionV3) was used to classify images as DR, glaucoma, myopia, and normal. 28 Previously, the transformer was proposed by Vaswani et al. 29 for natural language processing (NLP) tasks, especially machine translation. Inspired by the successes of transformers in NLP, transformers were transferred to computer vision tasks, e.g., image classification.

Methods
In this section, the DR detection dataset is explored. Additionally, the vision transformer, the DL model that was used on these data, is discussed in detail.

Dataset overview
The DR detection dataset is highly imbalanced and consists of high-resolution images with five levels of severity: No_DR, Mild, Moderate, Severe, and Proliferative_DR. It has significantly more samples for the negative (No_DR) category than for the four positive categories. Table 1 shows the class distribution of the training and testing sets. Figure 1, on the other hand, shows a few samples from each class. The images were captured under different conditions and were labeled with subject IDs, with left and right eye fields provided for every subject. The images were captured by different cameras, which affects the visual appearance of the left- and right-eye images.
The samples of the training set were rescaled to the range [0, 1], cropped to remove their black borders, and augmented by randomly flipping the samples horizontally and vertically, and by randomly rotating the samples by up to 360°. The samples of the test set were only cropped and rescaled. Figure 2 shows a few augmented samples from the training set.
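The preprocessing steps above can be sketched as follows. This is an illustrative sketch, not the paper's actual code: the brightness threshold `tol` is an assumed value, and the arbitrary 360° rotation is simplified to random 90° rotations (an arbitrary-angle rotation would typically use, e.g., `scipy.ndimage.rotate`).

```python
import numpy as np

def crop_black_borders(img, tol=0.05):
    """Crop rows/columns that are (near-)black across all channels.
    `tol` is an assumed brightness threshold, not taken from the paper."""
    gray = img.mean(axis=2)
    mask = gray > tol
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

def augment(img, rng):
    """Random horizontal/vertical flips plus a random 90-degree rotation
    (a simplification of the paper's random 360-degree rotation)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]          # vertical flip
    return np.rot90(img, k=int(rng.integers(0, 4)))

def preprocess_train(img_uint8, rng):
    img = img_uint8.astype(np.float32) / 255.0   # rescale to [0, 1]
    return augment(crop_black_borders(img), rng)
```

The test-set pipeline would apply only `crop_black_borders` and the rescaling, skipping `augment`.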

Vision transformer
A vision transformer is a state-of-the-art DL model used for image classification, introduced by Dosovitskiy et al. 30 Figure 3 shows the architecture of the vision transformer. In this paper, a retinal image was encoded as a sequence of patches, treated like a sequence of words, and applied to the transformer encoder as shown in Figure 3. The patches of the original image were extracted with a fixed patch size (P, P), where P = 16; W is the image width, H is the image height, and N = HW/P² is the number of patches. The extracted patches were flattened, so that each patch x_p belongs to ℝ^(P²·C), where C is the number of channels.
As a result, the 2D image was converted into a sequence of patches x ∈ ℝ^(N×(P²·C)). Each patch in the sequence x was mapped to a latent vector with hidden size D = 768. A learnable class embedding z₀⁰ = x_class was prepended to the embedded patches; its state at the output of the transformer encoder, z_L⁰, serves as the representation y of the image. After that, a classifier was attached to the image representation y. Additionally, a position embedding E_pos was added to the patch embeddings to capture the order of the patches fed into the transformer encoder. Figure 4 illustrates the architecture of the transformer encoder with L blocks, each block containing alternating multi-head self-attention (MSA) 29 and multi-layer perceptron (MLP) layers. Layer normalization (LN) 31 was applied before every block, while residual connections were applied after every block. 30
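The patch-embedding step described above can be sketched as follows. `W_proj`, `E_pos`, and `x_class` stand in for learned parameters; the sketch only illustrates the shapes and ordering, not the paper's implementation.

```python
import numpy as np

P, D = 16, 768                      # patch size and hidden size from the paper
H, W, C = 256, 256, 3               # image dimensions used in the experiments
N = (H // P) * (W // P)             # number of patches: 256 for these settings

def embed_image(img, W_proj, E_pos, x_class):
    """Convert an (H, W, C) image into the transformer's input sequence.
    W_proj, E_pos and x_class are assumed learned parameters (illustrative)."""
    # split into N non-overlapping P x P patches
    patches = img.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
    x = patches.reshape(-1, P * P * C)          # (N, P^2 * C) flattened patches
    z = x @ W_proj                              # project each patch to D dims
    z = np.concatenate([x_class, z], axis=0)    # prepend learnable [class] token
    return z + E_pos                            # add position embeddings
```

With H = W = 256 and P = 16, this yields a sequence of N + 1 = 257 vectors of size D = 768, matching the sequence length reported in the experimental setup.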

Ensemble learning of vision transformers
Ensemble learning is a meta-algorithm that combines several base models. Bagging (bootstrap aggregating) is a type of ensemble learning that uses "majority voting" to combine the outputs of different base models into one optimal predictive model, improving stability and accuracy. 32 The advantage of bagging several transformers is that an aggregation of transformers, each trained on a subset of the dataset, can outperform a single transformer trained on the entire set. In other words, it reduces overfitting by lowering variance for high-variance, low-bias models. To increase the speed of training, the transformers can be trained in parallel, each on its own data, prior to result aggregation, as shown in Figure 5.
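The majority-voting aggregation described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation; `models` is assumed to be a list of callables that each return a predicted class.

```python
from collections import Counter

def bagging_predict(models, x):
    """Aggregate the class predictions of independently trained models
    by majority vote. Each model in `models` is a callable returning a
    class label for input x (an illustrative stand-in for a trained
    transformer)."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]
```

Because each model votes independently, the models can be trained and evaluated in parallel before this single aggregation step, as Figure 5 shows.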

Experimental setup and protocol
The images available in this dataset were resized to H = 256, W = 256; the latent vector hidden size was set to D = 768, the number of transformer layers to L = 12, the MLP size to 3072, the number of MSA heads to 12, and the patch size to its default value P = 16. Thus, the sequence length N was 256. In the experiments conducted, 20% of each class in the training set was selected for validation. All transformers were fine-tuned from the weights of a transformer pre-trained on ImageNet-21K. 33 For optimization, the ADAM algorithm 34 was utilized with a batch size of 8. Furthermore, the mean squared error loss function was used. The training process for each transformer consisted of two stages: 1) All layers in the transformer backbone were frozen while the randomly initialized regression head was left unfrozen; the regression head was then trained for five epochs.
2) The entire model (transformer backbone + regression head) was unfrozen and trained for 40 epochs. Data augmentation, early stopping, dropout, and learning rate schedules were used to prevent overfitting and loss divergence. Figure 6 shows the attention maps of a few samples extracted from the transformer.
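The two-stage fine-tuning schedule above can be sketched framework-agnostically as follows. The `Layer` class is a minimal stand-in for a framework layer with a `trainable` flag; this is not the paper's actual TensorFlow code.

```python
class Layer:
    """Minimal stand-in for a framework layer, used only to illustrate
    the freeze/unfreeze schedule (not the paper's TensorFlow code)."""
    def __init__(self, name):
        self.name, self.trainable = name, True

def two_stage_schedule(backbone_layers, head):
    """Return the (stage, epochs) training plan described in the text,
    mutating the trainable flags the way each stage requires."""
    plan = []
    # Stage 1: freeze the pre-trained backbone, train only the new head.
    for layer in backbone_layers:
        layer.trainable = False
    head.trainable = True
    plan.append(("stage1", 5))        # regression head alone for 5 epochs
    # Stage 2: unfreeze everything and fine-tune end to end.
    for layer in backbone_layers:
        layer.trainable = True
    plan.append(("stage2", 40))       # full model for up to 40 epochs
    return plan
```

Stage 1 lets the randomly initialized head settle before its gradients can disturb the pre-trained backbone weights in stage 2.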
The classification heads of all transformers were removed and replaced by a regression head with a single output node instead of logits. The regression output of a transformer was converted into a category as shown in Table 2.
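A plausible interpretation of the regression-to-category step can be sketched as follows. Plain rounding with clipping is assumed here; the paper's exact interpretation rules are those given in its Table 2.

```python
import numpy as np

def regression_to_grade(y_pred):
    """Map a continuous regression output to one of the five DR grades
    0..4. Rounding-with-clipping is an assumed interpretation; the
    paper's actual rules are listed in its Table 2."""
    return int(np.clip(np.rint(y_pred), 0, 4))
```

Treating the grade as a regression target preserves the ordinal relationship between severity levels, which a plain softmax over five classes would ignore.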
An ensemble of ten transformers with similar architectures and hyperparameters was used. The samples were divided randomly into ten sets, and each transformer was trained on one of them. After interpreting the regression output of each transformer, the predicted classes from the ten transformers were aggregated with "majority voting" to predict the final class.
Training, validation, and testing were carried out using the TensorFlow framework on an NVIDIA Tesla T4 GPU.

Performance metrics
In this section, the results of the proposed ensemble of transformers are discussed. The performance metrics precision, recall, and F1 score were calculated. Additionally, the quadratic weighted kappa (QWK) metric was utilized for this dataset because the data had to be labeled manually by specialists, since the small differences among the classes can only be recognized by specialist physicians. QWK, which lies in the range [-1, +1], measures the agreement between two ratings; here it was calculated between the scores assigned by human raters (doctors) and the predicted scores (models), as shown in Table 3. The dataset has five ratings: 0, 1, 2, 3, and 4.
QWK was calculated as follows:
1) The confusion matrix O between the predicted and actual ratings was calculated.
2) A histogram vector was computed for the ratings in the predictions and for those in the actual labels.
3) The expected matrix E (N × N), the outer product of the two histogram vectors, was calculated.
4) The weight matrix W (N × N), representing the disagreement between ratings as shown in Table 4, was constructed: 35 W_ij = (i − j)² / (N − 1)², where 1 ≤ i ≤ 5 and 1 ≤ j ≤ 5.
5) QWK was defined as follows 35 : κ = 1 − (Σ_{i,j} W_ij O_ij) / (Σ_{i,j} W_ij E_ij), where N is the number of classes.
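The five steps above can be implemented directly. This is a sketch following the text's definitions of O, E, and W, not the authors' code; note that E must be scaled to the same total as O before the ratio is taken.

```python
import numpy as np

def quadratic_weighted_kappa(actual, predicted, n_classes=5):
    """QWK between integer ratings 0..n_classes-1, following the steps
    in the text: confusion matrix O, expected matrix E from the outer
    product of the rating histograms, and quadratic weights W."""
    O = np.zeros((n_classes, n_classes))
    for a, p in zip(actual, predicted):
        O[a, p] += 1
    hist_actual = np.bincount(actual, minlength=n_classes)
    hist_pred = np.bincount(predicted, minlength=n_classes)
    E = np.outer(hist_actual, hist_pred).astype(float)
    E *= O.sum() / E.sum()                       # scale E to the same total as O
    i, j = np.indices((n_classes, n_classes))
    W = (i - j) ** 2 / (n_classes - 1) ** 2      # quadratic disagreement weights
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement yields κ = 1, chance-level agreement yields κ ≈ 0, and the quadratic weights penalize a prediction that is two grades off four times as heavily as one that is a single grade off.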

Experimental results
Table 5 shows the performance metrics of the ten transformers, each trained on a subset of the data. The performance of the individual transformers varies considerably: transformer_1 yielded a Kappa of 55.1%, while transformer_10 yielded a Kappa of only 30.9%. Ensembles of various numbers of transformers, including all ten transformers, four transformers (1, 3, 8, 9), and other configurations, were also evaluated. The best model was an ensemble of two transformers (1, 3), which yielded a Kappa of 60.2%.
This Kappa lies at the boundary between moderate and substantial agreement. These results confirm that the ensemble of transformers (1, 3), trained with fewer training images, outperformed the ensemble of ten transformers trained with five times as many images. Table 6 compares the performance of the ensemble of transformers with an ensemble of ResNet50 CNNs. The ResNet50 CNNs were transferred from ImageNet-1K, and their top layers were replaced by a support vector machine tuned on this dataset. The proposed ensemble of transformers outperformed the ensemble of ResNet50 CNNs significantly, by more than 18% Kappa.
The confusion matrices of each configuration, including ensembles of ten, four, and two transformers and the ensemble of two ResNet50 CNNs, are shown in Figure 7. Confusion matrix (c), which corresponds to the best Kappa of 60.2%, shows that the model was able to distinguish the severe and proliferative DR categories on one side from the No DR and mild DR categories on the other.


Figure 2. A few samples cropped and augmented randomly.

Figure 6. The attention maps of samples: A) No DR, B) Mild, C) Moderate, D) Severe, E) Proliferative DR.


Table 1. Training and testing class distribution in the EyePACS Diabetic Retinopathy Detection dataset.

Table 2. Pseudocode for interpreting the transformer regression output.

Table 4. The weight matrix W, representing the difference between the classes.

Table 5. Performance metrics for various ensemble models.