Research Article
Revised

Crowd density estimation using deep learning for Hajj pilgrimage video analytics

[version 2; peer review: 3 approved]
PUBLISHED 14 Jan 2022

This article is included in the Research Synergy Foundation gateway.

Abstract

Background: This paper presents advances in crowd-control research, with an emphasis on high-density crowds, particularly Hajj crowds. Video analysis and visual surveillance have become increasingly important for enhancing the safety and security of the pilgrimage in Makkah, Saudi Arabia. Hajj is a particularly distinctive event: hundreds of thousands of people gather in a small space, which makes precise analysis of video footage difficult even with advanced video and computer vision algorithms. This research proposes an algorithm based on a Convolutional Neural Network model developed specifically for Hajj applications. Additionally, the work introduces a system for counting the crowd and then estimating its density.
Methods: The model adopts an architecture that detects each person in the crowd, localizes the head with a bounding box, and performs the counting on our own novel dataset (HAJJ-Crowd).
Results: Our algorithm outperforms the state-of-the-art methods, attaining a remarkable Mean Absolute Error of 200 (an average improvement of 82.0) and Mean Squared Error of 240 (an average improvement of 135.54).
Conclusions: On our new HAJJ-Crowd dataset, used for evaluation and testing, we provide density maps and prediction results for several standard methods.

Keywords

Visual Surveillance, Density Estimation, Crowd Counting, CNN.

Revised Amendments from Version 1

We are happy to submit a revised version of our work, titled "Crowd density estimation using deep learning for Hajj pilgrimage video analytics," which incorporates the reviewers' suggestions. The changes made from version 1 to version 2, and the reasons for them, are outlined below:
In the Results Analysis section, we have described the training, testing and validation in detail, following the first reviewer's comment, and now mention the cross-fold validation and the tuning of hyperparameters.
In the Methods section, we have added new images (Figure 2) with a detailed discussion, following the second reviewer's comment. In the Results section, we have updated the MAE and MSE in response to the same review comment and included the YouTube link of the Mecca Hajj, 2019.
The Abstract, Introduction, Related works, Classification of the box, Annotation technique and Conclusions sections have been updated following the third reviewer's comment.

See the authors' detailed response to the review by Mohamed Uvaze Ahamed
See the authors' detailed response to the review by Md Junayed Hasan
See the authors' detailed response to the review by Saravana Balaji B

Introduction

The Hajj is an occasion for specific religious rituals. It is linked to the life of the Islamic prophet Muhammad, who lived in the seventh century AD, although Muslims believe that the tradition of pilgrimage to Mecca dates all the way back to Abraham's time.1 For four to five days a year, over two million pilgrims from many parts of the world come to Mecca, where they visit the holy sites and perform rituals.2 Each ritual involves a short but challenging route. The Hajj authorities have confirmed that they have difficulty monitoring crowd density, as shown by the tragedies that occurred in September 2015.3 Regression-based approaches are normally used to estimate crowd density, inferring a mapping between low-level features and the crowd estimate.1,2

In this paper, we propose a method for crowd analysis and density estimation using deep learning. The benefit of a Convolutional Neural Network (CNN) model is that it is superior to handcrafted features in identifying crowd-specific characteristics. We propose a framework for crowd counting based on CNNs in this study.2 Our aim is to analyze the density maps of crowd videos and then use visualization for cross-scene crowd analysis in unseen target scenes. To do this, we must overcome the following obstacle: existing crowd analysis resources are insufficient to support comparative research on scene analysis.4-6

The main contributions of this research include:

  • 1. A methodology to accurately perform crowd analysis at arbitrary crowd density and from arbitrary perspectives in a single video.

  • 2. An evaluation of interventions and a comparison of these established methods against recent deep CNN networks.

  • 3. A new dataset based on the Hajj pilgrimage, specifically covering the crowds around the Kaaba area. Crowd datasets such as Shanghai Tech, UCSD, and UCF CC 50 are available for crowd analysis research; however, our dataset contains much larger crowds.

Related works

Early works on the use of detection methods in crowd counting are presented in.7-11 Typically, these approaches apply an individual or head detector through a sliding window over the image. Recently, many excellent object detectors have been presented, including Region-Based Convolutional Neural Networks (R-CNN),12-14 YOLO15 and SSD,16 which can nevertheless have low detection precision in cluttered scenes. Some works, such as Idrees et al.17 and Chan et al.,18 implement regression-based approaches that learn directly from the crowd images in order to minimize these issues. They normally extract global19 (texture, gradient, edge) or local20 characteristics in a first step (SIFT,21 LBP,22 HOG,23 and GLCM21). Then regression techniques such as linear regression24 and Gaussian mixture regression25 are employed to map these features to the crowd count. These approaches handle occlusion and background clutter successfully, but spatial detail is still ignored. Thus, Lempitsky et al.26 developed a framework that focuses on density estimation, learning a linear mapping between local features and density maps. A non-linear mapping based on random forest regression, which trains two separate forests, was proposed to reduce the difficulty of learning a linear mapping.27 Recent models that use CNNs to estimate crowd density28-31 have improved significantly over conventional handcrafted methods. Considering the drawbacks of these conventional methods, we have employed an improved CNN.

Methods

We propose a model that employs state-of-the-art crowd counting algorithms adapted to the Hajj pilgrimage. The algorithms predict specific regions on people's heads in Hajj crowd images. The head size of each individual is identified using a multi-stage procedure. Figure 1 shows the proposed CNN architecture, which is made up of three key components. The first component is frame extraction. For this, we first gathered video clips of Hajj pilgrims; for this experiment we collected the clips from YouTube using video recording software. To develop the model, we used Python 3.6.15 with libraries such as opencv-python 3.4.11.43, NumPy 1.21.2, SciPy 1.21.2 and matplotlib 3.4.3.32 We extracted 30 frames per second and assembled all of the footage into one clip. The second component, spatial feature extraction, extracts features at different resolutions. The resulting CNN prediction maps are routed to a set of multi-scale feedback reasoning networks (MSFRN), where information is fused across the scales and predictions are formed using boxes.32 Finally, the crowd density result is obtained using Non-Maximum Suppression (NMS), which combines several resolutions to arrive at an accurate result. For comparison with our proposed method, the following existing algorithms were used. Adversarial Cross-Scale Consistency Pursuit (ACSCP) was suggested by Zan Shen et al. as a new paradigm for crowd counting (density estimation). A three-part Perspective Crowd Counting Network (PCC Net) was suggested by Junyu Gao et al. Yuhong Li et al. suggested CSRNet, made up of two main parts: a CNN as the front-end for 2D feature extraction and a dilated CNN as the back-end. The CP-CNN developed by Vishwanath A. et al. has four modules: the GCE, the LCE, the DME, and a Fusion-CNN (F-CNN). The change in crowd density across an image may be used to enhance the accuracy and localisation of the predicted crowd count, as suggested by Deepak Babu Sam et al.
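The frame-extraction step above (sampling the collected clips at 30 frames per second with opencv-python) can be sketched as follows; the file-name pattern and the `sampling_step` helper are illustrative assumptions, not the authors' actual script.

```python
def sampling_step(native_fps, target_fps):
    # How many native frames to advance between saved frames.
    return max(int(round(native_fps / target_fps)), 1)

def extract_frames(video_path, out_pattern="frame_%05d.jpg", target_fps=30):
    """Save frames from `video_path` at roughly `target_fps` frames per second.

    `video_path` and `out_pattern` are placeholder names; the paper reports
    extracting 30 frames per second from the collected Hajj clips.
    """
    import cv2  # opencv-python, as used in the paper
    cap = cv2.VideoCapture(video_path)
    native = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = sampling_step(native, target_fps)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # keep every `step`-th frame
            cv2.imwrite(out_pattern % saved, frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```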

54eb5daa-d652-48e7-8338-c3ceab505ae2_figure1.gif

Figure 1. Proposed crowd counting technique based on CNN architecture.

Architecture of CNN layer

Like other CNN detectors, all existing CNN-based detectors are built on a deep backbone feature extractor network, and detection accuracy is likely linked to the consistency of these features. CNN-based networks are often used for crowd counting and give approximately real-time performance.31 The first five CNN convolution blocks, initialized with ImageNet training, form the backbone network's starting point.33 Typically, a CNN design consists of a single input layer, many convolutional and pooling layers, several fully connected layers, and a final output layer, automating the feature extraction process. As input, an RGB crowd image of 224 by 224 pixels is accepted, with the data downsampled by max pooling in each block. Except for the last blocks, which are replicated by the following blocks, every block in the network branches. Resolutions of 0.5, 0.25, 0.125, and 0.166 are used to generate feature maps with the cloned blocks. Figure 2 shows the architecture of the CNN layers in our experiment.
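As a rough illustration of the downsampling path, the sketch below computes the feature-map sizes produced when each convolution block ends in 2 x 2 max pooling on the 224 x 224 input; this halving schedule is an assumption for illustration and only approximates the scale factors listed above.

```python
def feature_map_sizes(input_size=224, n_blocks=5):
    """Return the per-block spatial sizes for a square input, assuming each
    block ends with 2x2 max pooling (stride 2), which halves the resolution.
    The five-block count follows the backbone described in the text; the
    strict halving at every block is an illustrative simplification."""
    sizes = []
    size = input_size
    for _ in range(n_blocks):
        size //= 2
        sizes.append(size)
    return sizes
```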

54eb5daa-d652-48e7-8338-c3ceab505ae2_figure2.gif

Figure 2. Architecture of CNN layers for crowd counting.

Classification of the box

Instead of resizing everything to the same scale, we used a per-pixel categorization approach. The model classifies each head as belonging to one of the bounding-box classes; the scale branches of the model generate a map set showing the confidence level of each pixel for each box class. The final requirement for training the model is knowledge of head sizes, which is not readily available and cannot be reliably derived from typical crowd-sourced databases, so we created a method to estimate head sizes in this research. We used the point annotations available in the crowd dataset as ground truth; these annotations give the coordinates of people's heads. Note that only quadratic (square) boxes are considered. A box is situated approximately at the center of the head, though this may vary drastically depending on the number of people. The same applies to scale: the annotation points indicate not only the scale of each person in the crowd but also the scale of the scene. Assuming a locally homogeneous crowd density, the distance between two neighbouring people may represent the size of the box. In simpler words, a given head size is taken to be the distance to the nearest neighbouring annotation. These boxes are accurate for crowds of medium to high density, but for sparse crowds with distant nearest neighbours the box dimensions may be wrong. On the whole, however, they are experimentally effective, providing an accurate distribution of head sizes over a broad range of densities. The box sizes are chosen as

(1)
β_s^b = β_{s+1}^{n_B},   if s < n_s − 1
β_s^b = 1 + (b − 1)·γ_s,   otherwise

A popular approach is used in choosing the box size β_s^b for each scale s and box b. At the maximum resolution scale (s = n_s − 1), the initial box size (b = 1) is set to one, which increases the ability to handle extremely congested densities. The standard size-increment values at the different scales are γ_s = 4, 2, 1, 1. Note that at the high levels (0.5 and 0.25), where coarse resolution is appropriate (as shown in Figure 1), the larger box sizes are associated with the lower resolutions (0.16 and 0.25).33
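One plausible reading of Equation (1), under which the finest scale starts from a unit box growing by γ_s, and each coarser scale continues from the largest box of the next finer scale, can be sketched as follows; the exact recursion and the number of boxes per scale are assumptions, since the paper does not state them explicitly.

```python
def box_sizes(n_scales=4, n_boxes=3, gammas=(4, 2, 1, 1)):
    """Compute beta[s][b], one reading of Equation (1): at the finest scale
    (s = n_scales - 1) the first box has size 1 and sizes grow by gamma_s;
    each coarser scale continues from the largest box of the next finer scale.
    `n_boxes` and the recursion details are illustrative assumptions."""
    beta = [[0.0] * n_boxes for _ in range(n_scales)]
    for s in range(n_scales - 1, -1, -1):  # finest scale first
        for b in range(n_boxes):
            if s == n_scales - 1 and b == 0:
                beta[s][b] = 1.0  # initial box size set to one
            elif b == 0:
                beta[s][b] = beta[s + 1][n_boxes - 1]  # inherit from finer scale
            else:
                beta[s][b] = beta[s][b - 1] + gammas[s]  # grow by gamma_s
    return beta
```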

Count of heads

For testing the model in Figure 1, a prediction fusion procedure is used. Multi-resolution predictions are made across all branches of the image pipeline. Using these prediction maps, the locations of the boxes are linearly scaled back to the resolution of the input. NMS is then applied to prevent duplicate detections arising from the multi-threshold fusion.
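The NMS step can be illustrated with a minimal greedy implementation over head boxes; the [x1, y1, x2, y2] box format and the IoU threshold are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression over boxes given as [x1, y1, x2, y2].
    Returns indices of kept boxes, highest score first; the threshold value
    is illustrative."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection rectangle between box i and the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping box i too much
    return keep
```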

Data collection

The HAJJ-Crowd dataset was collected from live television broadcasts of the Mecca Hajj 2019 via YouTube. All of the images depict pilgrims performing tawaf around the Kaaba; tawaf involves walking around the Kaaba seven times, moving counter-clockwise. The video frames were extracted and saved as .jpg files for further examination. The dataset contains a total of 1500 crowd images. In all, 1500 images and ten film sequences were captured in several populous areas surrounding the Kaaba (the tawaf region), with some typical crowd scenarios, such as touching the black stone in the Kaaba region and casting stones in the Mina region. All images have a resolution of 1280 × 720 (HD) and the videos have a resolution of 1080p.

Annotation technique

We used Python 3.6.15 and opencv-python 3.4.11.43 as an annotation tool to easily annotate head positions in the crowds. The process involved two types of labelling: point and bounding box. During annotation, the head region can be freely zoomed in and out and split into at most 3 × 3 tiny patches, allowing annotators to mark a head at 5 sizes: 2^x (x = 0, 1, 2, 3, 4) times the original image size. In this study, we developed a technique for estimating head sizes. To obtain the ground truth, we utilized the point annotations available in the crowd dataset; with these annotations, the heads of individuals are located at certain coordinates. It is worth noting that only quadratic (square) boxes are considered. A box is located approximately at the middle of the head, but this might vary significantly depending on the population. The same holds true for scale, which not only represents the size of each individual in the crowd but is also conveyed by the annotation points.
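The nearest-neighbour head-size heuristic described above can be sketched directly from the point annotations; the brute-force pairwise distance computation is illustrative and would be replaced by a spatial index for large images.

```python
import numpy as np

def head_sizes(points):
    """Estimate each head's box size as the distance to its nearest annotated
    neighbour, following the assumption that crowd density is locally uniform.
    `points` is an (N, 2) array of head-centre annotations; this is a sketch,
    not the authors' exact tool."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances between all annotation points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each point's zero self-distance
    return d.min(axis=1)
```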

Experimental design

Firstly, we gathered all images at a size of 1280 × 720 pixels. We then applied a deep learning method to improve the CNN and obtain the best outcomes. Training and analysis were done using the PyTorch 1.9.1 framework with deep learning packages on Ubuntu 18.04.6 LTS and an NVIDIA GEFORCE GTX 1660Ti GPU. For deep learning, we utilized packages such as opencv-python 3.4.11.43, NumPy 1.21.2, SciPy 1.21.2 and matplotlib 3.4.3.

Experimental analysis

The HAJJ-Crowd data collection consists of three sections: testing, validation and training. Counting accuracy is measured with two metrics, the Mean Absolute Error (MAE) and the Mean Squared Error (MSE). The equations are shown below:

(2)
MAE = (1/N) Σ_{i=1}^{N} |y_i − ŷ_i|
(3)
MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²

Here, N is the number of test samples, y_i is the ground-truth count and ŷ_i is the estimated count of sample i. The crowd counts are grouped into the ranges (0), (0, 1000), (1000, 2000) and (2000, 3000). In accordance with the annotated count and the quality of the image, each image is assigned an attribute label. In the test set, MAE and MSE are computed over the matching samples of each class from a particular viewpoint. For example, the luminance attribute yields average MAE and MSE figures for two categories, which demonstrates the counting models' sensitivity to luminance variation.
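Equations (2) and (3) translate directly into code; a minimal sketch over per-image counts:

```python
import numpy as np

def mae(y_true, y_pred):
    """Equation (2): mean absolute error between ground-truth and predicted counts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    """Equation (3): mean squared error between ground-truth and predicted counts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))
```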

Results analysis

Figure 3(a) and Figure 3(c) clearly indicate that there is no significant change in the pixel loss from zero to ten epochs, whereas there is a ten-pixel change from ten to 20 epochs. The pixel loss continues to change from 20 epochs onward, through 40 to 52 epochs; at the end, the training pixel loss is 15.0 at 52 epochs. From this experiment we obtain the true training loss. Moreover, the validation pixel loss is 17 at 40 epochs and 14 at 52 epochs. At the same time, based on the preceding equations, we computed the test MAE. The validation MAE loss and the test MAE are shown in Figure 3(b) and Figure 3(d). For the test MAE, we found that the error is over 600 at epoch zero, coming down to 200.0 after 52 epochs. For the test MSE, the error is over 425 at epoch zero and comes down to 240.0 after 52 epochs. Figure 3 shows the graphical representation of these results.

54eb5daa-d652-48e7-8338-c3ceab505ae2_figure3.gif

Figure 3. Results analysis graph.

MAE = mean absolute error; MSE = mean squared error.

Proposed method comparison with state-of-the-art methods

The HAJJ-Crowd dataset contains very large crowds as well as a density collection. It contains 1050 training images and 450 testing images, all at the same resolution of 1280 × 720 pixels. For our HAJJ-Crowd dataset, we used 80% of the data for training and 20% for testing, and we successfully validated 90% of the data. For our experiment, we used three-fold cross-validation. On the mainstream UCF CC 50 dataset, we compare against the most advanced approaches34-38 in terms of MAE and MSE. Our method and dataset outperform the state-of-the-art methods, attaining a remarkable MAE of 200.0 (an average improvement of 82.0 points) and MSE of 240.0 (an average improvement of 135.54 points). We established the range of feasible values for each hyperparameter, as well as a sampling technique, evaluation criteria, and a cross-validation procedure. MSE is differentiable, which makes the mathematical operations easier than with a non-differentiable function such as MAE. Table 1 shows the comparison with state-of-the-art methods.
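The three-fold cross-validation mentioned above can be sketched with a simple contiguous split of the image indices; the fold assignment below is illustrative, not the authors' actual partition.

```python
def three_fold_splits(n_items, n_folds=3):
    """Partition item indices into `n_folds` contiguous folds; each fold serves
    once as the held-out set while the remaining folds train. A sketch of the
    paper's three-fold cross-validation, not the authors' exact split."""
    fold_size = n_items // n_folds
    splits = []
    for k in range(n_folds):
        start = k * fold_size
        stop = n_items if k == n_folds - 1 else start + fold_size
        test_idx = list(range(start, stop))
        train_idx = [i for i in range(n_items) if i < start or i >= stop]
        splits.append((train_idx, test_idx))
    return splits
```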

Table 1. Error estimation on UCF CC 50 dataset.

MAE = mean absolute error; MSE = mean squared error.

Method             MAE      MSE
ACSCP34            291.0    404.6
PCC Net35          240.0    315.5
Switching-CNN36    318.1    439.2
CP-CNN37           295.8    320.9
CSRNet38           266.1    397.5
Proposed method    200.0    240.0

Conclusions

This paper provides a new approach for crowd density estimation using a convolutional neural network. The proposed model is a convolutional neural network with a multi-column structure and high-level feedback processing that addresses the problems posed by large crowds. The proposed model can recognize moving crowds, which leads to improved performance. We found that crowd analysis prior to crowd counting significantly boosts counting efficiency for extremely dense crowd scenarios. The proposed method outperforms the state-of-the-art methods, with a Mean Absolute Error of 200 and a Mean Squared Error of 240.

Data availability

Underlying data

Due to the ethical and copyright limitations around social media data, the underlying data for this study cannot be disclosed. The original dataset contains a total of 1500 images, all of which were collected from the Mecca Hajj 2019. The dataset contains three classes of crowd density around tawaf area. The Methods section offers extensive information that will enable the research to be replicated. If you have any questions concerning the approach, please contact the corresponding author.

Software availability

Software available from: https://github.com/romanbhuiyan/CrowdCounting.

Archived source code at time of publication: https://doi.org/10.5281/zenodo.5635486.32

License: GPL (https://opensource.org/licenses/gpl-license).

Author contributions

R.B. developed the experimental model, structure of the manuscript, performance evaluation and wrote the preliminary draft. J.A. helped to fix the error code, checked the labelled data and results as well as reviewed the full paper. N.H. gave some important feedback on this paper. F.F. helped with the structured full paper revision. J.U. helped format the full paper. N.A. checked the revised version and added a few paragraphs to the full article. M.A.S. helped with the paper organization. All authors discussed the results and contributed to the final manuscript.

How to cite this article:
BHUIYAN MR, Abdullah DJ, Hashim DN et al. Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.12688/f1000research.73156.2)
NOTE: If applicable, it is important to ensure the information in square brackets after the title is included in all citations of this article.

Open Peer Review

Key to reviewer statuses:
Approved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.
Approved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.
Not approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.
Version 2 (revised), published 14 Jan 2022

Reviewer Report 28 Jan 2022
Mohamed Uvaze Ahamed, Westminster International University in Tashkent, Tashkent, Uzbekistan
Approved
No further comments to make. As per the suggestion, the authors have addressed all …
How to cite this report: Ahamed MU. Reviewer Report For: Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.5256/f1000research.119864.r119888)
Reviewer Report 24 Jan 2022
Saravana Balaji B, Department of Information Technology, Lebanese French University, Erbil, Iraq
Approved
The authors revised the …
How to cite this report: B SB. Reviewer Report For: Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.5256/f1000research.119864.r119887)
Reviewer Report 24 Jan 2022
Md Junayed Hasan, Department of Electrical, Electronics and Computer Engineering, University of Ulsan, Ulsan, South Korea
Approved
I am satisfied with the answers given by the authors. Therefore, I think this paper can be indexed and is suitable for the readers. The most interesting part of the paper is the Hajj crowds. The authors came up …
How to cite this report: Hasan MJ. Reviewer Report For: Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.5256/f1000research.119864.r119886)
Version 1, published 24 Nov 2021

Reviewer Report 20 Dec 2021
Saravana Balaji B, Department of Information Technology, Lebanese French University, Erbil, Iraq
Approved with Reservations
The introduction section is brief: highlight the need for density estimation and the current issues in density estimation. Also, emphasize the short note about the proposed method. The related work section should group the works technically and …
How to cite this report: B SB. Reviewer Report For: Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.5256/f1000research.76787.r101072)
  • Author Response 14 Jan 2022
    MD ROMAN BHUIYAN, FCI, Multimedia University, Persiaran Multimedia, 63100, Malaysia
    1. The introduction section is brief: highlight the need for density estimation and the current issues in density estimation. Also, emphasize the short note about the proposed method.
    Ans: …
Reviewer Report 06 Dec 2021
Mohamed Uvaze Ahamed, Westminster International University in Tashkent, Tashkent, Uzbekistan
Approved with Reservations
The authors proposed a model for crowd density estimation using a convolutional neural network. Overall, the article is a clear, concise, and well-written manuscript. Though the work has been well presented with a neat technical flow, there are certain clarifications that need …
How to cite this report: Ahamed MU. Reviewer Report For: Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.5256/f1000research.76787.r101069)
  • Author Response 14 Jan 2022
    MD ROMAN BHUIYAN, FCI, Multimedia University, Persiaran Multimedia, 63100, Malaysia
    1. In the Methods section, the authors wrote the following statement: “Figure 1 shows the suggested architecture of CNNs, which is made up of three key components”. But actually Figure …
Reviewer Report 29 Nov 2021
Md Junayed Hasan, Department of Electrical, Electronics and Computer Engineering, University of Ulsan, Ulsan, South Korea
Approved with Reservations
This paper focuses on recent developments in crowd control research, with a focus on high-density crowds, particularly those attending the Hajj. In order to improve the safety and security of pilgrimages in Makkah, Saudi Arabia, video analysis and visual surveillance …
How to cite this report: Hasan MJ. Reviewer Report For: Crowd density estimation using deep learning for Hajj pilgrimage video analytics [version 2; peer review: 3 approved]. F1000Research 2022, 10:1190 (https://doi.org/10.5256/f1000research.76787.r101065)
  • Author Response 14 Jan 2022
    MD ROMAN BHUIYAN, FCI, Multimedia University, Persiaran Multimedia, 63100, Malaysia
    1. The details of train, test, and validation is not clear from the manuscript. How much data you have actually used to perform this test? Train, test, and Validation – …
