Keywords
distortion classification, convolutional neural network, laparoscopic video, long short-term memory, multi-label classification, spatio-temporal features
Video quality assessment (VQA) in the medical field is an important task to achieve satisfactory conditions for medical imaging modalities like magnetic resonance imaging (MRI), computed tomography (CT) scans, and laparoscopy. VQA is composed of two stages: distortion classification and quality score evaluation. Laparoscopic surgery videos are prone to distortions that affect a surgeon’s visibility and degrade the vision quality for robot-assisted surgery.1
Laparoscopic videos are often affected by various types of distortions like noise, smoke, uneven illumination, and blur, which are all concomitant artifacts that arise from operating the laparoscopic surgical equipment.2 To enhance the distorted laparoscopic videos, most studies propose solutions that require troubleshooting the equipment.2,3 However, such solutions are time consuming and cannot guarantee high-quality laparoscopy every time.
Recent studies have suggested the use of image or video enhancement methods like de-smoking for laparoscopic surgery,4–6 and joint wavelet decomposition and binocular combination for endoscopic image enhancement.7 In this context, real-time detection of the types of distortion is important for deciding which enhancement methods are appropriate to apply. Real-time distortion classification is a challenging task, and only a few recent studies have addressed it using hand-crafted features.8–12 Existing image quality assessment methods such as BIQI,11 DIIVINE12 and BRISQUE10 rely on non-generic classification and are therefore domain dependent. In addition, a distortion-specific classification approach has been demonstrated that uses a separate hand-crafted feature method for each type of distortion.8
On the other hand, convolutional neural networks (CNNs) overcome the previous limitations and learn features automatically with the same CNN architecture to detect all types of distortions. This paper aims to address the challenge of distortion detection and produce a generic method for distortion classification in laparoscopic videos.
Artificial neural networks (ANNs) have shown significant capability in overcoming the issue of distortion classification by extracting informative features from all kinds of distortions. CNNs are powerful and efficient in several image tasks including classification,13 segmentation,14 enhancement,15 and retrieval.16 Recently, CNNs have also been used in several studies on image distortion classification for various applications.17,18 However, recurrent neural networks (RNNs), and specifically, long short-term memory (LSTM)19 have not yet been investigated for distortion classification in video datasets. This paper aims to highlight the use of CNN-LSTM20 to improve classification accuracy.
In the context of distortion classification in laparoscopic surgery videos, a recent study proposed the use of deep CNNs, such as ResNet, for distortion ranking.21 That method achieved ranking accuracies of 83.3%, 84.7%, and 87.3% using ResNet18, ResNet34, and ResNet50, respectively. However, that work focused only on spatial features extracted from a collection of 20,000 images for image-level distortion ranking.
Another very recent work applied transfer learning from a pre-trained ResNet50 CNN to laparoscopic video frames.22 The spatial features extracted from ResNet50 were fed to four support vector machine classifiers (three binary and one 5-class), with decision fusion used to produce the final distortion lists.22 In contrast, this paper proposes to extract spatiotemporal features using CNN-LSTM for video-level distortion classification.
The key contributions of this paper are:
• Utilization of an RNN model, namely LSTM, with time series of CNN-based features extracted from the frames. To the best of our knowledge, this is the first paper to use CNN-LSTM for no-reference distortion classification in laparoscopic videos.
• An evaluation and comparison between the proposed CNN-LSTM and existing solutions presented for the ICIP2020 challenge.
This paper is structured as follows: Methods describes the proposed method and the experiments including the dataset and the experimental setup. In Results and discussion, the results of the proposed solution and the comparison with existing methods are presented and discussed. Conclusions summarizes the significance of this work and opens doors for further improvement.
In this section, we describe the proposed methodology for distortion classification in laparoscopic videos. The classification problem is formulated as a single multi-label classification task, which is transformed into multiple binary classification problems. In this scenario, each label (distortion) in the dataset is handled by a separate binary classifier, resulting in five binary classifiers in total. The block diagram of the proposed model is shown in Figure 1.
Figure 1. Block diagram of the proposed model. CNN, convolutional neural network; LSTM, long short-term memory; AWGN, additive white Gaussian noise.
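As a hedged illustration of this binary-relevance formulation, the following Python sketch (the label names and example annotation are hypothetical placeholders) converts a multi-label annotation for one video into five separate yes/no targets, one per classifier.

```python
import numpy as np

# Hypothetical fixed ordering of the five distortion labels,
# one binary classifier per label.
DISTORTIONS = ["awgn", "smoke", "uneven_illumination",
               "defocus_blur", "motion_blur"]

def to_binary_targets(video_labels):
    """Map the set of distortions affecting one video to five yes/no targets."""
    return np.array([1 if d in video_labels else 0 for d in DISTORTIONS],
                    dtype=np.int32)

# Example: a video affected by both smoke and defocus blur.
print(to_binary_targets({"smoke", "defocus_blur"}))  # -> [0 1 0 1 0]
```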
Transfer learning with residual network
Usually, very deep CNNs suffer from the gradient vanishing problem, which leads to a drop in accuracy.23 To address this problem, residual network (ResNet) was developed utilizing skip connections instead of direct stacked layers.23 ResNet is a well-known deep neural network with high generalization ability used for image recognition.23 Residual networks have various versions with different numbers of layers, such as ResNet50 with 50 layers and over 23 million trainable parameters.
The transfer learning approach consists of training a deep CNN like ResNet on a large-scale dataset such as ImageNet24 and reusing it on a new small-scale dataset. In this paper, ResNet50,23 pre-trained on ImageNet,24 was transferred to the laparoscopic video dataset and used to extract spatial features from the video frames after removing its top layers. The input images were resized to 224 × 224 and the dimension of each extracted feature vector was 2048.
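A minimal sketch of this feature-extraction step in TensorFlow/Keras follows, assuming the ImageNet-pre-trained ResNet50 with its top layers removed and global average pooling, so that each 224 × 224 frame yields a 2048-dimensional vector; the dummy frames and preprocessing details are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# ImageNet-pre-trained ResNet50 with the classification head removed;
# global average pooling gives one 2048-dimensional vector per frame.
backbone = tf.keras.applications.ResNet50(weights="imagenet",
                                           include_top=False,
                                           pooling="avg")

def extract_frame_features(frames):
    """frames: array of shape (num_frames, 224, 224, 3), RGB values in 0-255."""
    x = tf.keras.applications.resnet50.preprocess_input(frames.astype("float32"))
    return backbone.predict(x, verbose=0)  # shape: (num_frames, 2048)

# Dummy frames standing in for resized laparoscopic video frames.
dummy_frames = np.random.randint(0, 256, size=(16, 224, 224, 3))
print(extract_frame_features(dummy_frames).shape)  # (16, 2048)
```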
Classification with LSTM
LSTM is a special type of RNN that is used for long-range sequence modeling.19 LSTM has a memory cell, which acts as an accumulator of state information, supported by control gates. The advantage of this structure is that it solves the problem of gradient vanishing.19 The CNN-LSTM network was found to capture spatiotemporal correlations better than fully connected LSTM, which handles temporal dependencies but does not exploit spatial structure.20
In this paper, the spatial feature vector extracted from ResNet50 represents one laparoscopic frame. The series of feature vectors extracted from the sequence of frames in one video was applied to a set of five LSTMs, each of which maps the video to two categories. For example, the first LSTM checks whether smoke distortion is present in a video and produces two classes: “yes” and “no.” The pre-trained CNN was used after replacing its top layers with the five LSTM classifiers, whose fully connected layers were tuned during training; in other words, each LSTM fits the extracted features and maps them to the two categories “yes” and “no.”
Each of the five LSTM classifiers therefore consumes the sequence of frame-level ResNet50 feature vectors of one video and outputs one of two categories for its assigned distortion.
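Since the exact layer configuration is not reproduced here, the sketch below is only a hedged illustration of one such binary LSTM classifier in Keras; the sequence length, hidden size, dropout, and two-node softmax output are assumptions, not the configuration reported in the paper.

```python
import tensorflow as tf

SEQ_LEN = 250    # assumed frames per clip (10 seconds at 25 fps)
FEAT_DIM = 2048  # ResNet50 feature dimension per frame

def build_binary_lstm():
    """One of the five per-distortion classifiers ('yes'/'no')."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(SEQ_LEN, FEAT_DIM)),
        tf.keras.layers.LSTM(256),                       # assumed hidden size
        tf.keras.layers.Dropout(0.5),                    # assumed regularization
        tf.keras.layers.Dense(2, activation="softmax"),  # 'yes' / 'no'
    ])

# Five independent classifiers, one per distortion type.
classifiers = {name: build_binary_lstm()
               for name in ["awgn", "smoke", "uneven_illumination",
                            "defocus_blur", "motion_blur"]}
```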
Datasets and experimental setup
The dataset used in this paper is an extended version of the Laparoscopic Video Quality (LVQ) database.8 The database contains 10 reference videos, each 10 seconds in length.8 Each reference video is distorted by five different types of distortions with four different levels, resulting in a total of 200 videos. These videos were extracted from the Cholec80 dataset that comprises 80 different videos of cholecystectomy surgeries.25 The extracted videos were selected considering multiple variations of scene content. The resolution of the videos is 512 × 288 with a 16:9 aspect ratio and a frame rate of 25 fps.
The extended version of LVQ dataset was issued in the ICIP2020 challenge and includes 1000 laparoscopic videos divided into 800 videos for training and 200 videos for testing. The distortions include additive white Gaussian noise (AWGN), smoke, uneven illumination, defocus and motion blur. The numbers of videos for each label or distortion are not balanced (300 videos with AWGN, 320 videos with smoke, 400 videos with uneven illumination, 160 videos with defocus blur, 80 videos with motion blur). The challenge in this dataset is that each video is affected by single or multiple distortions and thus, the problem of distortion classification is formulated as a multi-label classification problem.
The training and testing of the ResNet-LSTM model were carried out using the OpenCV and TensorFlow frameworks and libraries on an NVIDIA GeForce GTX 1080 Ti GPU. The learning rate used to train the LSTM model was set to 0.001, the batch size to 8, and the number of epochs to 150. The categorical cross-entropy loss function was minimized using the Adam optimizer.
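A hedged sketch of the corresponding training configuration follows, reusing build_binary_lstm, SEQ_LEN, and FEAT_DIM from the sketch above; the dummy data stands in for the real feature sequences and one-hot targets, while the optimizer, loss, learning rate, batch size, and epoch count follow the values stated above.

```python
import numpy as np
import tensorflow as tf

model = build_binary_lstm()  # one of the five per-distortion classifiers

# Adam with learning rate 0.001 and categorical cross-entropy loss.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Dummy placeholders for the real ResNet50 feature sequences and labels.
train_sequences = np.random.rand(32, SEQ_LEN, FEAT_DIM).astype("float32")
train_targets = tf.keras.utils.to_categorical(
    np.random.randint(0, 2, size=32), num_classes=2)

model.fit(train_sequences, train_targets, batch_size=8, epochs=150)
```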
To the best of our knowledge, no other papers have utilized this extended version of the laparoscopic video dataset for distortion classification. For this reason, we compared our approach with the best solutions presented in the ICIP2020 challenge, as shown in Table 1.
The description of the baseline solutions was given by the winners at the ICIP2020 challenge presentation event. One of the solutions was based on using a VGG16 CNN26 to extract features. The feature vector was applied to a fully connected neural network that included two hidden layers with 4096 nodes, two batch normalization layers, and two dropout layers. Another solution used a deep multi-task learning model, consisting of one shared VGG-based feature extraction block and five independent binary classifiers (one for each distortion type). Each classifier had two fully connected layers with 512 nodes and an output layer with one node and a sigmoid activation function. The description of the other baseline solutions was not presented, but their results were shown in the challenge dashboard.
The performance of the proposed methodology was evaluated in terms of classification accuracy, F1-score for single distortions, and F1-score for single and multiple distortions, as shown in Table 1. It can be observed that the proposed ResNet50-LSTM achieves the best accuracy of 85.0%, while the baseline methods yielded accuracies between 57% and 81.5%. Additionally, ResNet50-LSTM yielded the best F1-score for single and multiple distortions (94.2%), while the baseline methods yielded F1-scores between 83.2% and 94.1%. Furthermore, the performance of our method for multiple distortions exceeds that for single distortions, which still has room for improvement.
Figure 2 shows the confusion matrix for each distortion category produced by each LSTM. The LSTMs correctly classified 58 videos out of 60, 46 out of 50, 94 out of 95, and 88 out of 95 for AWGN, defocus blur, smoke, and uneven illumination, respectively. On the other hand, the motion blur LSTM gave the worst classification performance, with 29 correct videos out of 45. The reason for this drop is that videos with motion blur have the smallest number of samples, only 80 videos. The performance of the motion blur LSTM could be improved significantly with more samples affected by motion blur distortion. The performance metrics of the proposed method for each class are shown in Table 2.
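As a hedged illustration of how such per-distortion confusion matrices and class-wise metrics can be computed with scikit-learn (the ground-truth and predicted labels below are random placeholders for the 200 test videos):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Placeholder ground-truth and predicted 'no'(0)/'yes'(1) labels for one
# distortion type over the 200 test videos.
y_true = np.random.randint(0, 2, size=200)
y_pred = np.random.randint(0, 2, size=200)

# Rows are true classes, columns are predicted classes ('no', 'yes').
print(confusion_matrix(y_true, y_pred, labels=[0, 1]))
print(classification_report(y_true, y_pred, target_names=["no", "yes"]))
```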
The proposed ResNet50-LSTM is able to run under real-time conditions. The inference time to extract features from one frame using ResNet50 was 0.05 seconds. The features extracted from one frame were appended to those of the preceding frames before being applied to the LSTMs. The inference time for the five LSTMs to produce the five distortion classes was 0.1 seconds. In summary, the proposed model updates the distortion categories every 0.15 seconds and achieves high-speed performance.
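A hedged sketch of how such per-stage inference times can be measured, reusing backbone, classifiers, SEQ_LEN, and FEAT_DIM from the earlier sketches; the dummy inputs are placeholders, and the measured timings naturally depend on the hardware used.

```python
import time
import numpy as np
import tensorflow as tf

# Time feature extraction for a single (dummy) 224 x 224 frame.
frame = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")
start = time.perf_counter()
backbone.predict(tf.keras.applications.resnet50.preprocess_input(frame),
                 verbose=0)
print(f"ResNet50 feature extraction: {time.perf_counter() - start:.3f} s/frame")

# Time the five LSTM classifiers on one (dummy) feature sequence.
sequence = np.random.rand(1, SEQ_LEN, FEAT_DIM).astype("float32")
start = time.perf_counter()
for clf in classifiers.values():
    clf.predict(sequence, verbose=0)
print(f"Five LSTM classifiers: {time.perf_counter() - start:.3f} s")
```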
In this paper, a novel strategy for distortion classification was proposed. A multi-label spatiotemporal deep model, comprising a pre-trained deep ResNet50 CNN and five LSTMs, was used to address the problem of single and multiple distortion classification. The proposed model was tested on a laparoscopic video dataset and the results were promising: our model outperformed existing solutions in terms of accuracy by 4.5% and yielded the best F1-score for single and multiple distortions. In future work, we intend to enhance the performance by tuning more layers of the pre-trained CNN with laparoscopic images affected by distortions to learn more informative features; this step requires collecting a large number of images to achieve promising improvements. Additionally, more recent architectures such as EfficientNet27 and DeiT (Data-efficient Image Transformers)28 are good candidates for extracting informative features. The proposed solution only classifies laparoscopic distortions into five categories; hence, we also plan to rank each category of distortion in terms of distortion intensity, which is a more challenging matter.
The dataset used in this work was created for the ICIP 2020 challenge by researchers from Université Sorbonne Paris Nord, France; Norwegian University of Science and Technology, Norway; and Oslo University Hospital, Norway. It is publicly available under a CC-BY-NC-SA 4.0 license from https://github.com/zakopz/icip2020-lvq-challenge.
This dataset was not generated nor is it owned by the authors of this article; the listed owners are Université Sorbonne Paris Nord, France; Norwegian University of Science and Technology, Norway; and Oslo University Hospital, Norway. Therefore, neither the authors nor F1000Research are responsible for the content of this dataset and cannot provide information about data collection. As this dataset contains potentially identifying images/information, caution is advised when using this dataset in future research.