Research Article

Pancreatic cancer grading in pathological images using deep learning convolutional neural networks

[version 1; peer review: 1 approved, 1 approved with reservations]
PUBLISHED 18 Oct 2021

Abstract

Background: Pancreatic cancer is one of the deadliest forms of cancer. The cancer grade defines how aggressively the cancer is likely to spread and guides doctors in making a proper prognosis and treatment plan. The current method of grading pancreatic cancer, manual examination of the cancerous tissue following a biopsy, is time-consuming and often results in misdiagnosis and thus incorrect treatment. This paper presents an automated system for grading pancreatic cancer from pathology images, developed by comparing deep learning models on two different pathological stains.
Methods: A transfer-learning approach was adopted, in which 14 different ImageNet pre-trained models were fine-tuned and trained on our dataset.
Results: In our experiments, the DenseNet models were the best at classifying the validation set, achieving up to 95.61% accuracy in grading pancreatic cancer despite the small sample set.
Conclusions: To the best of our knowledge, this is the first work on grading pancreatic cancer from pathology images. Previous works have focused either on detection only (benign or malignant) or on radiology images (computerized tomography [CT], magnetic resonance imaging [MRI], etc.). The proposed system can be very useful to pathologists by facilitating an automated or semi-automated cancer grading system, which can address the problems found in manual grading.

Keywords

digital pathology, pancreatic cancer, cancer grading, deep learning, image classification

Introduction

Pancreatic cancer is one of the most lethal malignant neoplasms in the world.1 It develops when cells in the pancreas multiply and grow out of control,2 forming cancer cells as a result of mutations in their genes.3 Doctors commonly perform a biopsy to diagnose the cancer when physical examination or imaging tests such as magnetic resonance imaging (MRI) and computerized tomography (CT) scans are insufficient. In pancreatic cancer, grading is essential for planning treatment but is currently done through meticulous microscopic examination.4 To date, there has been no successful implementation of artificial intelligence (AI) for classifying pancreatic cancer grade. The absence of such work motivates this paper, which uses transfer-learning with 14 deep learning (DL) models to grade pathological pancreatic cancer images. This work can facilitate an automated cancer grading system to address the exhaustive work of manual grading.

Pancreatic cancer and digital pathology

Pancreatic cancer is considered to be under-studied, and improvements in its diagnosis and prognosis have therefore been minor.5 Digital pathology is an image-based environment created by scanning tissue samples mounted on glass slides. The tissue samples are stained, usually with May-Grünwald-Giemsa (MGG) or haematoxylin and eosin (H&E), before digitization into whole-slide images. The cancer grade is identified by the degree of differentiation of the tumour cells,6 ranging from well to poorly differentiated, as described in Table 1.

Table 1. Pancreatic cancer grade.

MGG = May-Grünwald-Giemsa; H&E = haematoxylin and eosin.

Grade        Description
Normal       Benign. Cells are not cancerous and will not spread.
Grade I      Well differentiated. Cancer cells look like normal cells and are not growing rapidly.
Grade II     Moderately differentiated. Cancer cells look abnormal and are growing faster than normal cells.
Grade III    Poorly differentiated. Cancer cells look very abnormal and may spread aggressively.
(Example MGG-stained and H&E-stained image patches for each grade accompany the original table.)

Deep learning and related works

The convolutional neural network (CNN) is a widely used deep learning (DL) algorithm for medical image-based classification and prediction.7 Several methods use CNNs in cancer detection and diagnosis,8 such as Gleason grading of prostate cancer,9-11 colon cancer grading,12 breast cancer detection,13,14 and pancreatic cancer detection15-18 and classification.19 However, grading of pancreatic cancer with DL still requires comprehensive study.

Methodology

This work was carried out at Multimedia University, Cyberjaya, from June 2020 to May 2021. The overall methodology, illustrated in Figure 1, consists of two major stages. In the data preparation stage, pathology images of pancreas tissue samples were obtained from our collaborator and pre-classified by a pathologist into four classes. In the DL model development stage, the images were used to train the DL models, which were then evaluated accordingly. All stages were carried out using Jupyter notebooks in Google Colab. The source code is available from GitHub and archived with Zenodo.24


Figure 1. Flowchart of the research work.

Ethical approval

This work was approved by the Research Ethics Committee of Multimedia University with approval number EA2102021. This article does not contain any studies with human participants or animals performed by any of the authors. Only pathology images were used, and the patients’ personal data were anonymized.

Dataset preparation

Pathology image procurement

A total of 138 high-resolution images with varying dimensions (1600 × 1200, 1807 × 835 and 1807 × 896 pixels) were obtained and pre-classified by the collaborators (see Acknowledgements). Four classes were identified (as shown in Table 2): Normal, Grade-I, Grade-II and Grade-III. Each image showed a tissue sample stained with either MGG or H&E. The image distribution across classes was unequal, with Grade-II having 58 images and Normal only 20. To better capture the cell characteristics that are paramount in determining the grade, and to match the lower-resolution input of the networks, the images were pre-processed into small non-overlapping patches.

Table 2. Number of high-resolution images in the dataset.

MGG = May-Grünwald-Giemsa; H&E = haematoxylin and eosin.

Stain \ Class     Normal   Grade I   Grade II   Grade III   Total
MGG stained           13         4         43          19      79
H&E stained            7        27         15          10      59
Total                 20        31         58          29     138

Image pre-processing

The pre-trained models require low-dimension, square images for training and prediction. A squared slicing method was used, in which smaller non-overlapping patches of approximately 200 × 200 pixels were sampled from the original images. Further processing was then applied to remove unwanted patches, as shown in Figure 2.
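A minimal sketch of this slicing step is given below. The patch size follows the ~200 × 200 pixel setting described above, while the near-white background filter, its threshold and the use of PIL/NumPy are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch: slice a high-resolution pathology image into non-overlapping patches
# and skip patches that are mostly background (non-tissue). Threshold values
# are illustrative assumptions, not the authors' exact settings.
import numpy as np
from PIL import Image

def slice_into_patches(image_path, patch=200, bg_threshold=0.8):
    """Yield non-overlapping patch arrays, discarding mostly-background tiles."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = img[y:y + patch, x:x + patch]
            # Fraction of near-white pixels, treated here as background.
            background = np.mean(np.all(tile > 220, axis=-1))
            if background < bg_threshold:
                yield tile
```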


Figure 2. Process of slicing an image and discarding unwanted non-tissue patches.

Image dataset

A total of 6468 patches were generated by slicing the 138 original images, roughly a 47-fold increase in the number of images. Overall, 50.5% (3267) of the patches containing background and non-tissue information were discarded; the remaining patches are listed in Table 3. Examples of MGG-stained and H&E-stained pathology images are shown in Table 1, with the mixed dataset combining all images from the MGG and H&E stains. As the numbers in Table 3 show, these datasets still had an imbalanced number of patch images per class, but this can be mitigated by employing a weighted average to evaluate the models.

Table 3. Number of sliced images kept for training and validation.

MGG = May-Grünwald-Giemsa; H&E = haematoxylin and eosin.

Stain \ Class     Normal   Grade I   Grade II   Grade III   Total
MGG stained          401       108        983         366    1858
H&E stained          139       606        309         289    1343
Total                540       714       1292         655    3201

Training-validation splitting and K-fold cross-validation

To evaluate the DL models, the images in each dataset were split into training and validation sets at an 80:20 ratio. K-fold cross-validation with K = 5 was used: the MGG, H&E and mixed datasets were each split into five parts, producing five cross-validation sets per dataset (e.g. MGG Set 1 to MGG Set 5 for MGG). Each set used a different 80% of the images for training and the remaining 20% for validation. The average over the five training iterations was used to evaluate performance.
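The sketch below illustrates such a 5-fold split with scikit-learn, where each fold naturally yields an 80/20 training/validation partition. The directory layout, file naming and variable names are assumptions for illustration, not the authors' actual code.

```python
# Sketch: 5-fold cross-validation split of the patch file list (e.g. for MGG).
# The "patches/MGG/<class>/<file>.png" layout is an illustrative assumption.
from pathlib import Path
from sklearn.model_selection import KFold

patch_paths = sorted(Path("patches/MGG").glob("*/*.png"))
labels = [p.parent.name for p in patch_paths]  # class folder name = grade label

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(patch_paths), start=1):
    train_files = [patch_paths[i] for i in train_idx]  # ~80% for training
    val_files = [patch_paths[i] for i in val_idx]      # ~20% for validation
    print(f"MGG Set {fold}: {len(train_files)} train / {len(val_files)} val")
```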

Image data augmentation and normalisation

Image data augmentation was implemented to virtually expand the training set; it was not applied to the validation set. The transformations involved were horizontal flip, vertical flip and rotation within a range of -90° to 90°. Image data normalisation was used to rescale pixel values from the range [0, 255] to [0, 1] so that the input pixels have a similar data distribution.
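Assuming a Keras workflow (the paper reports using the Keras API), the augmentation and rescaling described above can be configured roughly as in the sketch below; the validation generator only rescales and never augments.

```python
# Sketch of the augmentation and normalisation settings described above.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # normalise pixel values from [0, 255] to [0, 1]
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=90,     # random rotation within -90 to 90 degrees
)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation
```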

CNN deep learning model development

A deep CNN algorithm was used to develop a model for classifying pancreatic cancer grade from pathology images.

Transfer-learning

A total of 14 CNN models pre-trained to recognise the 1000 classes of ImageNet were selected from the Keras API20 to find the best model for classifying the four pancreatic cancer grade classes. The selected pre-trained models are listed in Table 4, along with each model's original image input shape and its top-1 accuracy on the ImageNet validation set.

Table 4. ImageNet pre-trained models.

Pre-trained model      Input shape    Top-1 accuracy
Xception               299 × 299      0.790
VGG16                  224 × 224      0.713
VGG19                  224 × 224      0.713
ResNet50V2             224 × 224      0.760
ResNet101V2            224 × 224      0.772
ResNet152V2            224 × 224      0.780
InceptionV3            299 × 299      0.779
InceptionResNetV2      299 × 299      0.803
MobileNetV2            224 × 224      0.713
DenseNet121            224 × 224      0.750
DenseNet169            224 × 224      0.762
DenseNet201            224 × 224      0.773
NASNetMobile           224 × 224      0.744
NASNetLarge            331 × 331      0.825

Fine-tuning

All 14 models were fine-tuned with four newly added layers to extract the features from the pathology images: a flatten layer to convert the extracted feature maps into a 1D vector for the fully connected layers; a dense layer with 256 nodes and ReLU activation; a dropout layer with a rate of 0.4 to regularise the network; and finally another dense layer with four nodes and softmax activation to normalise the prediction probabilities.
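A sketch of this fine-tuned architecture in Keras is shown below, using DenseNet201 as an example base network; the input shape follows Table 4 and the four added layers follow the description above, while everything else (e.g. whether base layers are frozen) is left as in the default setup since the paper does not specify it.

```python
# Sketch: ImageNet pre-trained base with the four newly added layers.
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras import layers, models

base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.Flatten(),                       # feature maps -> 1D vector
    layers.Dense(256, activation="relu"),   # 256-node dense layer
    layers.Dropout(0.4),                    # regularisation
    layers.Dense(4, activation="softmax"),  # 4-class probability output
])
```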

Setup and evaluation parameters

A batch size of 64 was chosen so that 64 patch samples are processed per training and validation step. The Adam optimizer was used with an initial learning rate of α = 0.01 and moment decay rates of β1 = 0.9 and β2 = 0.999. The loss is calculated using categorical cross-entropy for the 4-class classification task. With this setup, the models were compiled and trained for 100 epochs.
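A sketch of this training setup follows. The optimizer settings, loss, batch size and epoch count follow the values stated above; the directory paths and the use of flow_from_directory are illustrative assumptions, and 'model' refers to the fine-tuned network from the previous sketch.

```python
# Sketch of the compile/train step; 'model' comes from the previous sketch.
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model.compile(
    optimizer=Adam(learning_rate=0.01, beta_1=0.9, beta_2=0.999),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Generators reuse the augmentation/normalisation settings from the earlier
# sketch; the directory names are illustrative assumptions.
train_gen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True,
                               vertical_flip=True, rotation_range=90
                               ).flow_from_directory(
    "patches/train", target_size=(224, 224), batch_size=64,
    class_mode="categorical")
val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "patches/val", target_size=(224, 224), batch_size=64,
    class_mode="categorical")

history = model.fit(train_gen, validation_data=val_gen, epochs=100)
```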

The confusion matrix, precision, recall, f1-score and weighted average were used to evaluate the models' performance. The weighted average was used to measure the performance of each individual cross-validation set and is suitable for imbalanced datasets. The weighted average is defined as:

\[ \text{average}_{\text{weighted}} = \sum_{k=1}^{n} P_k \times \frac{N_k}{N} \]

where \(P_k\) is the metric value (precision, recall or f1-score) for class \(k\), \(N_k\) is the number of images in class \(k\), \(N\) is the total number of images in the dataset, and \(n\) is the number of classes.
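The sketch below shows how these metrics, including the support-weighted average, can be obtained with scikit-learn's classification_report; the labels and predictions shown are illustrative, not actual results from the study.

```python
# Sketch: confusion matrix plus per-class and weighted-average metrics.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = np.array([0, 0, 1, 2, 2, 2, 3, 3])  # illustrative ground-truth grades
y_pred = np.array([0, 1, 1, 2, 2, 3, 3, 3])  # illustrative model predictions

print(confusion_matrix(y_true, y_pred))
# The 'weighted avg' row reports precision, recall and f1 weighted by the
# number of images (support) in each class, matching the equation above.
print(classification_report(
    y_true, y_pred,
    target_names=["Normal", "Grade I", "Grade II", "Grade III"]))
```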

Results and discussion

Effect of data augmentation

This experiment was done with the first cross-validation set of the mixed dataset, to observe how data augmentation affects model training performance.23 Tables 5 and 6 show the final accuracy and loss on the training and validation sets after 100 epochs. Without data augmentation (Table 5), overfitting is evident: the models perform very well on the training set but not on the validation set. With data augmentation, the validation accuracy improved, most notably for the VGG19 model (from 54.83% to 77.22%). The training accuracy of the other models decreased slightly with data augmentation (except VGG19), which is expected as the models are learning from newly transformed images. The validation loss was also reduced, as shown in Table 6, for example from 3.36376 to 0.68587 for the NASNetLarge model. Overall, these results show that data augmentation can reduce overfitting and improve model performance, as reported in 10,13,14.

Table 5. Model accuracy without and with image data augmentation after 100 epochs.

Pre-trained model      Without image data augmentation         With image data augmentation
                       Training set (%)  Validation set (%)    Training set (%)  Validation set (%)
Xception                    99.88             83.75                  93.63             85.02
VGG16                       97.35             76.88                  86.48             81.12
VGG19                       64.85             54.83                  77.34             77.22
ResNet50V2                 100.00             82.66                  95.39             86.11
ResNet101V2                 99.92             79.84                  93.44             85.80
ResNet152V2                100.00             81.25                  95.35             86.58
InceptionV3                 99.38             82.65                  91.23             83.15
InceptionResNetV2           99.61             79.06                  90.12             83.78
MobileNetV2                 99.68             82.50                  94.14             85.02
DenseNet121                 99.84             82.81                  94.69             88.14
DenseNet169                 99.84             85.00                  95.98             89.70
DenseNet201                 99.92             85.47                  96.76             88.14
NASNetMobile                99.92             79.69                  91.21             81.75
NASNetLarge                 98.32             73.48                  89.84             79.88

Table 6. Model loss without and with image data augmentation after 100 epochs.

Pre-trained model      Without image data augmentation    With image data augmentation
                       Training set    Validation set      Training set    Validation set
Xception                  0.00241         1.49118             0.16254         0.47146
VGG16                     0.09256         0.87281             0.35639         0.46890
VGG19                     0.95239         1.14815             0.56213         0.60279
ResNet50V2                0.00005         1.40010             0.12776         0.42690
ResNet101V2               0.00287         1.31044             0.17701         0.41832
ResNet152V2               0.00010         1.68771             0.12355         0.40160
InceptionV3               0.01608         1.02646             0.23775         0.44671
InceptionResNetV2         0.02074         1.15162             0.26173         0.47013
MobileNetV2               0.00703         1.03879             0.14739         0.47216
DenseNet121               0.00615         0.92605             0.14379         0.33270
DenseNet169               0.00423         0.93888             0.11209         0.35975
DenseNet201               0.00385         0.94979             0.09399         0.32288
NASNetMobile              0.00338         1.80272             0.22978         0.47335
NASNetLarge               0.05669         3.36376             0.32225         0.68587

Comparison analysis of model performance

The overall performance of all 14 transfer-learning models proposed for this experiment is presented below. Each model was trained on the three datasets with 5-fold cross-validation. Figure 3 illustrates the overall performance in terms of mean f1-score.


Figure 3. Mean F1-score of models.

Comparison between MGG, H&E and the mixed dataset

This comparison shows how a DL model learns from a single stain colour. In Figure 3, all models trained on the H&E dataset obtained higher f1-scores than on MGG and mixed. Most models scored above 0.9 except VGG19 (0.87). When trained on the MGG dataset, all models other than VGG16 and VGG19 performed the lowest compared with H&E and mixed. The performance on the mixed dataset is as expected, since it contains a mixture of both datasets. The VGG16 and VGG19 models, however, performed better on MGG than on mixed, which may be due to the small VGG network architecture and small fully-connected layers making them less able to learn the complex features and patterns in pathology images. The trend in Figure 3 indicates that image patches in the H&E dataset are easier to learn and yield better predictions than MGG.

Comparison between pre-trained models

From the results, the DenseNet architecture was the best at classifying the pathology images, with all three variants taking the top spots among the 14 models on MGG, H&E and mixed. On the mixed dataset, the ResNet models ranked just below the three DenseNet models, in ascending order ResNet101V2, ResNet50V2 and ResNet152V2. This supports the work of Huang et al.,21 where DenseNet was designed to improve on the ResNet architecture. DenseNet201, which is much deeper than the other two DenseNet models, achieved the highest f1-scores of 0.88, 0.96 and 0.89 for MGG, H&E and mixed, respectively. The DenseNet121 and DenseNet169 scores on the three datasets were marginally lower at 0.87, 0.95, 0.89 and 0.87, 0.95, 0.88, respectively. This suggests that a deeper DenseNet can produce more accurate predictions.

Xception21 and InceptionResNetV222 are improvements of InceptionV3 and perform better than their ancestor. The f1-scores of Xception trained on MGG, H&E and mixed are 0.85, 0.94 and 0.86, compared with 0.80, 0.92 and 0.83 for InceptionV3, respectively. InceptionResNetV2, however, is only slightly higher than InceptionV3 (0.93 and 0.83 for H&E and mixed) and lower on MGG (0.80). The VGG models did not follow this pattern of newer models outperforming older ones: VGG19, which is supposed to be an improvement over VGG16, failed to achieve a higher f1-score, with 0.74, 0.87 and 0.65 for MGG, H&E and mixed, respectively, while VGG16 scored higher at 0.80, 0.93 and 0.78. From these results, VGG19 was the worst performing model on our datasets.

This experiment applied transfer-learning to 14 ImageNet pre-trained models to classify pancreatic cancer grades. From the comparisons, the DenseNet201 model is suggested for practical application in a pancreatic grading system using MGG or H&E stains.

Comparison between the best and the worst performing model

Tables 7 and 8 show the precision and recall of VGG19 (the worst) and DenseNet201 (the best) for the three datasets. VGG19 struggles to make predictions for Grade-I patches in MGG, where the precision and recall are 0.00 for CV sets 3, 4 and 5. A similar pattern is seen for Grade-III patches; from our observation, this is because most of the Grade-I and Grade-III patches were wrongly predicted as Grade-II. This is due to the class imbalance in MGG, where Grade-II patches account for 52.9% of the total images whereas Grade-I makes up only 5% and Grade-III 19.7%. This class imbalance caused the VGG19 model to struggle to recall the classes with fewer data.

Table 7. Precision rate of VGG19 and DenseNet201.

Precision                      VGG19                              DenseNet201
Class \ CV set        1      2      3      4      5        1      2      3      4      5

MGG dataset
Normal              0.87   0.91   0.96   0.89   0.85     0.95   0.93   0.99   0.94   0.95
Grade I             0.62   0.64   0.00   0.00   0.00     0.77   0.85   1.00   0.79   0.70
Grade II            0.77   0.80   0.74   0.72   0.77     0.87   0.87   0.91   0.86   0.90
Grade III           0.71   0.56   0.54   0.56   0.64     0.79   0.83   0.80   0.85   0.84
Weighted average    0.77   0.77   0.70   0.68   0.72     0.87   0.87   0.91   0.87   0.89
Mean                            0.7289                               0.8819

H&E dataset
Normal              1.00   0.96   0.96   1.00   0.96     1.00   1.00   1.00   1.00   1.00
Grade I             0.98   0.95   0.97   0.91   0.91     0.98   1.00   0.99   0.97   0.97
Grade II            0.92   0.89   0.84   0.90   0.86     0.93   0.94   0.87   0.95   0.86
Grade III           0.95   0.75   0.81   0.90   0.93     0.92   0.93   0.88   0.98   0.95
Weighted average    0.89   0.87   0.89   0.88   0.88     0.96   0.97   0.94   0.97   0.94
Mean                            0.8802                               0.9565

Mixed dataset
Normal              0.90   0.86   0.69   0.76   0.61     0.94   0.88   0.89   0.96   0.94
Grade I             0.95   0.95   0.90   0.81   0.87     0.93   0.97   0.98   0.96   0.95
Grade II            0.98   0.67   0.65   0.52   0.56     0.85   0.85   0.89   0.88   0.87
Grade III           0.78   0.88   0.88   0.00   0.00     0.83   0.86   0.82   0.91   0.83
Weighted average    0.92   0.81   0.76   0.52   0.52     0.88   0.88   0.90   0.92   0.89
Mean                            0.7055                               0.8935

Table 8. Recall rate of VGG19 and DenseNet201.

Recall                         VGG19                              DenseNet201
Class \ CV set        1      2      3      4      5        1      2      3      4      5

MGG dataset
Normal              0.89   0.93   0.90   0.84   0.88     0.89   0.97   0.93   0.94   0.94
Grade I             0.23   0.41   0.00   0.00   0.00     0.45   0.50   0.55   0.52   0.76
Grade II            0.92   0.90   0.93   0.94   0.93     0.94   0.94   0.95   0.94   0.94
Grade III           0.47   0.42   0.37   0.30   0.44     0.77   0.71   0.88   0.73   0.71
Weighted average    0.78   0.78   0.76   0.74   0.77     0.87   0.88   0.91   0.87   0.88
Mean                            0.7669                               0.8819

H&E dataset
Normal              1.00   0.96   0.96   0.93   0.82     1.00   1.00   1.00   1.00   1.00
Grade I             0.80   0.91   0.93   0.91   0.93     0.99   0.99   0.98   0.99   0.97
Grade II            0.92   0.90   0.92   0.97   0.87     0.87   0.94   0.89   0.95   0.92
Grade III           0.72   0.83   0.76   0.76   0.72     0.95   0.95   0.90   0.93   0.90
Weighted average    0.84   0.87   0.88   0.87   0.86     0.95   0.97   0.94   0.97   0.95
Mean                            0.8651                               0.9571

Mixed dataset
Normal              0.76   0.81   0.92   0.29   0.57     0.94   0.98   0.97   0.98   0.94
Grade I             0.75   0.78   0.74   0.79   0.78     0.90   0.87   0.89   0.92   0.85
Grade II            0.95   0.95   0.92   0.93   0.90     0.90   0.92   0.91   0.94   0.92
Grade III           0.46   0.37   0.11   0.00   0.00     0.78   0.75   0.82   0.81   0.82
Weighted average    0.77   0.77   0.71   0.60   0.63     0.88   0.88   0.90   0.92   0.89
Mean                            0.6981                               0.8933

For the H&E images, however, class imbalance did not affect the performance of VGG19. The recall and precision for the Normal class rank among the highest despite it having the smallest number (10%) of patches. Looking back at Table 1, the H&E Normal images have a noticeably different stain colour from the other classes, which explains the good predictions by both models. This could be a problem, as limited image variation can cause bias: the precision for the Normal class would likely score poorly if the models were tested on a different variation of H&E stain colour, even with the same ground truth, although this can be mitigated if the class contains many different variations of stain colour.

For the mixed dataset, VGG19 also struggled to predict the Grade-III class, especially on CV sets 4 and 5, where it scored 0.00 for both metrics. The reason could be that the Grade-III patches are difficult for the VGG19 model to learn. This is why cross-validation should be performed to rigorously evaluate a DL model. DenseNet201 achieved good recall for the Grade-III patches on both CV sets, confirming its ability to learn complex features in the pathology images.

Conclusion

This paper presents the development of several deep learning models through transfer-learning for classifying pancreatic cancer grade from pathology images. A total of 14 ImageNet pre-trained models were trained on the datasets. Image data augmentation was performed to counter the low number of images and was shown to improve the validation accuracies of all pre-trained models by up to 40%. The evaluation of the 14 pre-trained models shows that the DenseNet models performed best. Most of the models trained on H&E achieved f1-scores above 0.9, while the MGG dataset scored lower f1-scores than the mixed dataset. The highest f1-scores were achieved by DenseNet201, with 0.8786, 0.9561 and 0.8915 for MGG, H&E and mixed, respectively. To the best of our knowledge, no similar work on pancreatic cancer grading has been reported in the literature. With these promising early results, this work can aid pathologists by facilitating an automated pancreatic cancer grading system for better cancer diagnosis and prognosis. This study has not been tested on whole-slide images (WSI), but similar approaches can be applied. Further improvements to the system can potentially be achieved by using future state-of-the-art DL models.

Data availability

Underlying data

Open Science Framework: Dataset for Pancreatic Cancer Grading in Pathological Images using Deep Learning Convolutional Neural Networks. https://doi.org/10.17605/OSF.IO/WC4U9.23

This project contains the following underlying data:

  • Dataset PCGIPI-Original.zip (pancreatic pathological image patches used for our analysis; the stain types are May-Grünwald-Giemsa (MGG) and haematoxylin and eosin (H&E))

  • Dataset PCGIPI-sliced.zip

  • PCGIPI Results.xlsx

  • Slicing Process for Table 3.docx

Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).

Extended data

Analysis code available from: https://github.com/mnmahir/FYProject-PCGIPI

Archived analysis code as at time of publication: https://doi.org/10.5281/zenodo.5532663.24

License: MIT
