Evaluation of electrocardiogram: numerical vs. image data for emotion recognition system

Background: The electrocardiogram (ECG) is a physiological signal used to diagnose and monitor cardiovascular disease, usually in 2-D image form. Numerous studies have shown that human emotions can be detected from 1-D ECG data; however, ECG is typically captured as 2-D images rather than as 1-D data. There is still no consensus on the effect of the ECG input format on the accuracy of the emotion recognition system (ERS), and the ERS using 2-D ECG remains inadequately studied. Therefore, this study compared ERS performance using 1-D and 2-D ECG data to investigate the effect of the ECG input format on the ERS. Methods: This study employed the DREAMER dataset, which contains ECG recordings from 23 participants obtained during audio-visual emotional elicitation. Numerical data were converted to ECG images for the comparison. Several approaches were used to obtain ECG features. The Augsburg BioSignal Toolbox (AUBT) and the Toolbox for Emotional feature extraction from Physiological signals (TEAP) extracted features from the numerical data, while features were extracted from the image data using Oriented FAST and Rotated BRIEF (ORB), Scale Invariant Feature Transform (SIFT), KAZE, Accelerated-KAZE (AKAZE), Binary Robust Invariant Scalable Keypoints (BRISK), and Histogram of Oriented Gradients (HOG). Dimension reduction was accomplished using linear discriminant analysis (LDA), and valence and arousal were classified using the Support Vector Machine (SVM). Results: The 1-D ECG-based ERS achieved an accuracy of 65.06% and an F1 score of 75.63% for valence, and an accuracy of 57.83% and an F1 score of 44.44% for arousal. For the 2-D ECG-based ERS, the highest accuracy and F1 score for valence were 62.35% and 49.57%, whereas those for arousal were 59.64% and 59.71%. Conclusions: The results indicate that both inputs classify emotions comparably well, demonstrating the potential of both 1-D and 2-D ECG as input modalities for the ERS.


Introduction
Medical professionals have been actively using electrocardiogram (ECG) wave images as a tool for monitoring [1][2][3] and diagnosing cardiovascular diseases, 4-6 such as heart attacks, dysrhythmia, and pericarditis, with some studies in the past decade reporting accuracies of more than 99%. Fundamentally, ECG measures the electrical activity of the human heart via electrodes attached to the body. This electrical activity, which drives the continual pumping of blood to the body, originates in the sinoatrial node. The electrocardiogram signal is composed of three basic components: P, QRS, and T waves (Figure 1). P waves are produced during atrial depolarization, QRS complexes are produced during ventricular depolarization, and T waves are produced during ventricular recovery.
Today's ECG devices have advanced from large and immobile to compact, wearable, and portable. Additionally, the signal accuracy of portable devices is comparable to that of traditional medical devices, so they can be used for the same purposes, including the study of human emotions. Many studies have shown that ECG, which is associated with the physiological responses of the autonomic nervous system (ANS), can be used to identify human emotions. [8][9][10][11] Different emotions influence heart activity differently; these influences may be hidden in the ECG wave and can be detected through closer monitoring of the main ECG features, namely heart rate (HR) and heart rate variability (HRV).
Previous research on human emotions has primarily relied on either direct analysis of 1-D data [12][13][14] or the conversion of 1-D data to a 2-D spectral image 15 prior to identifying the emotions. Despite this, the majority of portable devices record the ECG signal as 2-D images in a PDF file rather than as raw 1-D numerical data. [16][17][18] An example of a PDF-based 2-D ECG is depicted in Figure 2. Consequently, researchers have been required to convert the PDF file of the ECG into 1-D data before performing further emotion analysis, adding complexity to the pre-processing pipeline. On this account, given the positive results obtained in monitoring and diagnosing cardiovascular-related diseases, the efficacy of 2-D ECG in emotion studies also warrants further investigation.

REVISED Amendments from Version 1
The main points suggested by reviewers were considered in the new version to improve the quality of the manuscript. As suggested by reviewers, some changes have been made to the introduction, where the subsections "emotion model" and "electrocardiogram & emotion" have been removed and an explanation of what ECG images are has been added. We expanded the related works section with Table 1, which compares several previous studies that used 1-D and 2-D ECG. In this new version, we better describe the proposed method for both input formats, including the preprocessing of ECG signals and the transformation from 1-D to 2-D ECG. The results have been updated according to the latest experiment. Additionally, an analysis of the computational complexity was added, together with Table 7. The discussion and conclusion sections have also undergone some modifications to address the comments and suggestions from the reviewers.
Any further responses from the reviewers can be found at the end of the article.

To the best of our knowledge, despite numerous attempts to recognise emotions using ECG signals, the effects of employing different types of ECG inputs in the emotion recognition system (ERS) have yet to be closely studied. In addition, there is no consensus on whether the type of ECG input format affects emotion classification accuracy. Therefore, to address this gap, the contribution of this study is to compare emotion classification performance using 1-D and 2-D ECGs to investigate the effect of the ECG input format on the ERS.
This study analysed ECG data from the DREAMER dataset, a multimodal database. In DREAMER, ECG signals were recorded from 23 participants using 18 audio-visual stimuli for the elicitation of various emotions. The Augsburg BioSignal Toolbox (AUBT) 19 and the Toolbox for Emotional Feature Extraction from Physiological Signals (TEAP) 20 were used to extract features from the 1-D ECG. Prior to emotion classification, the dimension of the extracted ECG features was reduced using linear discriminant analysis (LDA). The 2-D ECG, in turn, was obtained by converting the 1-D ECG, and six different feature extractors were used to extract features from it, namely Oriented FAST and Rotated BRIEF (ORB), Scale Invariant Feature Transform (SIFT), KAZE, Accelerated-KAZE (AKAZE), Binary Robust Invariant Scalable Keypoints (BRISK), and Histogram of Oriented Gradients (HOG). A Support Vector Machine (SVM) classifier was used, and the ERS results for both ECG inputs were compared to examine the effect of the signal input on ERS performance. The findings indicate no substantial difference between the two ECG inputs, since both produce promising outcomes within the same range of accuracy for emotion recognition.
The next section discusses related works, and the following section describes the dataset and the proposed methods in depth. The results are then presented, and the study is concluded in the final section.

Related works
Researchers in the emotion recognition field have proposed multiple approaches using electrocardiogram signals. For instance, Minhad, Ali, and Reaz 21 used 1-D ECG to classify emotions of happiness and anger, achieving 83.33% accuracy with the SVM classification method. Tivatansakul and Ohkura 22 used 1-D ECG from the AUBT dataset to detect emotions for an emotional healthcare system; K-Nearest Neighbour (KNN) successfully classified three emotions (joy, anger, and sadness) with accuracies of 85.75%, 82.75%, and 95.25%, respectively. The MPED database for ERS was proposed by Song et al., 23 using ECG numerical data to recognise discrete emotions (joy, humour, disgust, anger, fear, sadness, and neutrality). Attention Long Short-Term Memory (A-LSTM) was used as a feature extractor to extract frequency- and time-domain features from the physiological signals, and also as a classifier alongside SVM, KNN, and Long Short-Term Memory (LSTM). On average, A-LSTM achieved better results (40% to 55%) than the other classifiers.
Katsigiannis and Ramzan 13 suggested that ERS should use low-cost, off-the-shelf devices to collect ECG signals in numerical format. Their dataset, called DREAMER, is adopted here; their classification using an SVM with a radial basis function kernel achieved 62.37% for valence and arousal. Numerous other researchers have also used the ECG signals from the DREAMER dataset for emotion recognition. For instance, Wenwen He et al. 24 utilized 1-D ECG data from the DREAMER dataset and suggested an approach for emotion recognition using ECG contaminated by motion artefacts; the proposed approach improved classification accuracy by 5% to 15%. Pritam and Ali 25 also employed 1-D ECG from the DREAMER dataset to develop a self-supervised deep multi-task learning ERS framework, which consists of two stages of learning: ECG representation learning and emotion classification learning. The accuracy gained in that study was greater than 70%. Hasnul et al. 12 likewise used the 1-D ECG from the DREAMER dataset to compare the performance of two feature extractor toolboxes, noting that the dataset's size and the type of emotion classified might affect the suitability of the extracted features.
As mentioned before, the 2-D ECG has been widely used for a variety of other purposes, including human authentication, ECG classification, and cardiovascular-related diseases. Table 1 summarises these works, including the reference to the work, the dataset details (number of participants, number of stimuli), the signal used, the ECG input, the purpose of the work, the features extracted, the classifiers, and their accuracy; accuracies denoted by an asterisk (*) refer to works that do not mainly focus on ERS.
Although considerable research has been conducted using ECG for ERS, the majority of research has focused on 1-D ECG analysis rather than 2-D ECG analysis, despite the fact that systems based on 2-D ECG have achieved excellent results in detecting cardiovascular-related diseases and human authentication. Additionally, no comparison of 1-D and 2-D ECG input was found in the emotion studies. As a result, it is unknown whether the ECG input format has an effect on the ERS's emotional classification accuracy. The significance of this study is that it compares emotion classification performance between 1-D and 2-D ECGs to determine the effect of the ECG input format on the ERS.

Methods
In this section, the details of the dataset are described, and the experimental setup for 1-D and 2-D ECGs is explained. The current study began in September 2020. MATLAB version 9.7 was utilized for data conversion and feature extraction, whereas Python version 3.8.5 was used for feature dimension reduction (for 1-D ECG) and classification. The analysis code used in this study is available from GitHub and archived with Zenodo. 47

The dataset (DREAMER)
This study used ECG signals from the DREAMER dataset by Katsigiannis and Ramzan. 13 The DREAMER dataset is a freely accessible database of electroencephalogram (EEG) and electrocardiogram (ECG) signals used in emotion research. The EEG signals were excluded from this study because the primary focus is on ECG signals. The ECG was recorded using the SHIMMER ECG sensor at 256 Hz and stored in 1-D format. The DREAMER dataset contains 414 ECG recordings from 23 subjects who were exposed to 18 audio-visual stimuli designed to evoke emotion. Each participant assessed their emotions on a scale of 1 to 5 for arousal, valence, and dominance. However, because this study was primarily concerned with arousal and valence ratings, the dominance ratings were discarded. The DREAMER dataset is summarised in Table 2.

Experimental setup
1) 1-D ECG
The proposed ERS for 1-D ECG consists of three stages: feature extraction, feature dimension reduction, and emotion classification. The structure of the proposed 1-D ECG-based ERS is illustrated in Figure 3. Two open-source toolboxes, namely, Augsburg BioSignal Toolbox (AUBT) 19 and Toolbox for Emotional feature extraction from Physiological signals (TEAP), 20 were employed to facilitate feature extraction from the ECG signals. AUBT provides tools for the analysis of physiological signals such as the ECG, RESP, EMG, and GSR. These tools are available for Windows with MATLAB 7.1. On the other hand, TEAP is compatible with the MATLAB and Octave software packages operating on Windows and can analyse and compute features from physiological data such as EEG, GSR, PPG, and EMG.
The AUBT and TEAP feature extractors include a low-pass filter (LPF), a filter meant to reject undesirable high-frequency components in a signal. The LPF is one of the most widely used filters applied before computing statistical features from physiological signals. 31,32 Accordingly, automated 1-D ECG pre-processing using an LPF was performed in this study to reduce muscle and respiratory noise in the ECG signals.
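As an illustration only (AUBT and TEAP apply their own internal filtering, whose exact design is not detailed here), this kind of low-pass filtering can be sketched with a Butterworth design; the 40 Hz cutoff and the filter order below are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256.0      # DREAMER ECG sampling rate (Hz)
CUTOFF = 40.0   # assumed cutoff; rejects high-frequency muscle noise

def lowpass(ecg, fs=FS, cutoff=CUTOFF, order=4):
    """Zero-phase Butterworth low-pass filter (forward-backward pass)."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, ecg)

# Toy signal: a slow "heartbeat-like" component plus 80 Hz noise.
t = np.arange(0, 2, 1 / FS)
ecg = np.sin(2 * np.pi * 1 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)
clean = lowpass(ecg)  # the 80 Hz component is strongly attenuated
```

The zero-phase `filtfilt` pass avoids shifting the PQRST waves in time, which matters when statistical features are later computed per wave.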
AUBT extracted 81 features in the time and frequency domains from each 1-D ECG signal, including the mean, median, and standard deviation of each PQRST wave, HRV, frequency spectrum range, and signal amplitude. TEAP extracted sixteen (16) statistical features, including the mean, IBI, HRV, and multiscale entropy in the time and frequency domains. Table 3 and Table 4 provide abbreviations and descriptions of the AUBT and TEAP features, respectively. Additionally, to prevent the "curse of dimensionality", dimensionality reduction was employed to map the high-dimensional features to a low-dimensional space. The feature dimensions were reduced using linear discriminant analysis (LDA), a well-known dimensionality reduction approach. 33 LDA is a supervised algorithm that reduces dimensionality while retaining as much class-discriminative information as possible. The low-dimensional features were then fed into a Support Vector Machine (SVM) classifier for emotion classification, as outlined in a later section.
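The 1-D pipeline stages (features, then LDA, then SVM) can be sketched as follows; the feature matrix and labels are random stand-ins, not the actual AUBT/TEAP features or DREAMER ratings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(414, 81))     # 414 recordings x 81 AUBT-style features
y = rng.integers(0, 2, size=414)   # synthetic high/low valence labels

# With a binary target, LDA projects onto at most one discriminant axis,
# so the SVM receives a single low-dimensional feature per recording.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      SVC(kernel="rbf"))
model.fit(X, y)
preds = model.predict(X)
```

Because LDA is supervised, it must be fitted on the training portion only; placing it inside the pipeline ensures it is refitted within each cross-validation fold.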

2) 2-D ECG
The duration of the ECG recording varies according to the duration of the video (average = 199 seconds). As Katsigiannis and Ramzan proposed, this study analysed the final 60 seconds of each recording to allow time for a dominant emotion to emerge. 13 Following that, the 1-D ECG was pre-processed using a simple MATLAB function 34 to eliminate baseline wander caused by breathing, electrically charged electrodes, or muscle noise. The signal was then divided into four segments of 15 seconds each. Then, using MATLAB version 9.7, the 1-D ECG was transformed into a 2-D ECG (Figure 4). The image has a width of 1920 pixels and a height of 620 pixels.
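A minimal sketch of this conversion step follows (the paper used MATLAB; the rendering choices below, such as the line colour, hiding the axes, and the stand-in sine segment, are assumptions):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

FS = 256  # DREAMER sampling rate (Hz)
# Stand-in 15-second "ECG" segment; a real segment would come from DREAMER.
segment = np.sin(2 * np.pi * 1.2 * np.arange(15 * FS) / FS)

fig = plt.figure(figsize=(19.2, 6.2), dpi=100)  # 19.2 in x 100 dpi = 1920 px
plt.plot(segment, color="black", linewidth=0.8)
plt.axis("off")  # keep only the waveform itself
fig.savefig("ecg_segment.png", dpi=100)  # 1920 x 620 pixel image
plt.close(fig)
```

Fixing the figure size and dpi guarantees every segment is rendered at the same 1920×620 resolution, so downstream resizing and feature extraction see uniform inputs.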
Because the converted 2-D ECG has a rectangular shape, it is not easy to resize the images to the standard input image sizes of 224×224 and 299×299. Therefore, the converted 2-D ECG was resized to 60% of its original size using Python version 3.8.5. This scale percentage was chosen after considering the quality of the image, the type of feature extractor used, and the computational cost the system can afford. The coloured images were converted into greyscale images, and the images were then binarized using Otsu's automatic image thresholding method. 35 This method determines the optimal threshold from pixel values of 0 to 255 by calculating and evaluating their within-class variance. 36 The area of interest in a 2-D ECG lies on the PQRST waves, making peak detectors a suitable approach. Therefore, six feature extractors that can extract peaks, edges, or corners were applied to the 2-D ECGs using Python version 3.8.5:

1. ORB 37 : ORB combines the FAST keypoint detector with the BRIEF descriptor, adding an orientation component so that the extracted features are rotation invariant.

2. SIFT 38 : SIFT identifies feature points by searching for local maxima on the images using Difference-of-Gaussians (DoG) operators. The description approach generates a 16×16 neighbourhood around each identified feature and sub-blocks the region. SIFT is also rotation and scale invariant.
3. KAZE 39 : KAZE is based on the scale of the normalised determinant of the Hessian Matrix, with the maxima of detector responses being captured as feature points using a moving window. Additionally, KAZE makes use of non-linear space via non-linear diffusion filtering to reduce noise while keeping the borders of regions in images.
4. AKAZE 40 : AKAZE is a more sophisticated version of KAZE that is based on the determinant of the Hessian Matrix. Scharr filters are employed to improve rotation invariance, rendering AKAZE features rotation- and scale-invariant.
5. BRISK 41 : While searching for maxima in the scale-space pyramid, BRISK detects corners using the AGAST algorithm and filters them using the FAST Corner Score. Additionally, the BRISK description is based on the recognised characteristic direction of each feature, which is necessary for rotation invariance.
6. HOG 42 : HOG is a feature descriptor that computes the gradient value for each pixel. The image shape is represented by the edge or gradient structure derived from the local distribution of gradient intensities.
All of the extractors successfully extracted the ECG features, including the peaks, edges, and corners of the PQRST waves. The extracted features were then given to the classifier (SVM) to classify the emotions. Figure 5 illustrates the structure of the proposed 2-D ECG-based ERS.

Support vector machine
Emotion classification was performed using SVM. The SVM works by separating the class data points with a boundary called the hyperplane, which serves as a decision boundary determining on which side of it each class resides. As reported in previous studies, SVM has a low computational cost and shows excellent performance in classifying emotions. 13,21,24,43,44

Experimental setting
The scale of self-assessed emotions, which ranges from 1 (lowest) to 5 (highest), was binarized using a middle-point threshold (the average rating was 3.8): scales four and five were assigned to the high class, while the remaining scales were assigned to the low class. This results in an imbalanced distribution of DREAMER classes: valence has 39% high valence and 61% low valence, while arousal has 44% low arousal and 56% high arousal. The hyperparameters for SVM were tuned using an exhaustive parameter search tool, GridSearchCV, from Scikit-learn, which automates the tuning procedure. 45 This study tuned only the parameters with a high relative tuning risk and left the remainder at their default values, as these are the least sensitive to the hyperparameter tuning process, as suggested by Weerts, Mueller, and Vanschoren. 46 The dataset was split into training and testing sets to evaluate the model's performance on new, unseen data. This study used a stratified 80:20 train-test split, which preserves the proportion of samples in each class in both sets.
Additionally, given the small dataset size, this study applied K-Fold cross-validation with the number of folds set to 10, the most commonly used value in prior research, to improve ERS performance. The experimental setting is tabulated in Table 5.
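The experimental setting described above can be sketched as follows; the feature matrix, the labels, and the SVM parameter grid are illustrative assumptions, not the values used in the study:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(414, 16))            # stand-in TEAP-style features
ratings = rng.integers(1, 6, size=414)    # self-assessed 1-5 ratings
y = (ratings >= 4).astype(int)            # scales 4 and 5 -> high class

# Stratified 80:20 split preserves the class proportions in both sets.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Exhaustive search over an assumed grid, with 10-fold cross-validation.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X_tr, y_tr)
test_acc = search.score(X_te, y_te)  # held-out accuracy of the best model
```

`GridSearchCV` refits the best configuration on the full training split, so the final evaluation on the 20% test set never touches data seen during tuning.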

Results
The testing performance of the ERS in classifying emotions using the two types of ECG input, 1-D and 2-D, is summarised in Table 6. The result denoted by an asterisk (*) corresponds to the original DREAMER publication, 13 whereas the best accuracy and F1 score for classifying valence and arousal are bolded and shaded.
For the 1-D input, the features extracted using the TEAP feature extractor yield the best valence performance, with an accuracy of 65.06% and an F1 score of 75.63%. The best arousal performance is obtained using features extracted by the AUBT feature extractor, with an accuracy of 57.83% and an F1 score of 44.44%.
For the 2-D input, the KAZE feature extractor achieves the best valence performance, with 62.35% accuracy and a 49.57% F1 score. Meanwhile, the AKAZE feature extractor achieves the best arousal performance, with 59.64% accuracy and a 59.71% F1 score.
For comparison purposes, the computation time for both ECG inputs was recorded and reported in Table 7. The average computation time for 1-D is 1.58 ± 0.07 seconds, whereas that for 2-D is 3377.425 ± 3138.875 seconds. Thus, the 2-D input requires substantially more computation time than the 1-D input.

Discussion & conclusions
The results indicate that both inputs work comparably well in classifying emotions: the best valence performance was obtained using the 1-D ECG, and the best arousal performance was obtained using the 2-D ECG. Additionally, the ERS with 1-D ECG was combined with LDA dimensionality reduction, which improved performance for valence but not for arousal. In terms of computational cost, 1-D ECG is preferable to 2-D ECG since it requires less computation time.
However, it is worth mentioning that the results obtained using 2-D ECG demonstrated its potential for use as an input modality for the ERS. Additionally, 2-D ECGs are appealing because the format enables the use of a variety of image-based methods, such as image augmentation to increase the data size, convolutional neural networks (CNN), and transfer learning from models trained on large data. To summarise, the ERS performance of the two ECG inputs is comparable, since both yield a promising outcome for emotion recognition.

Marios Fanourakis
University of Geneva, Geneva, Switzerland

After evaluating the author's revisions and replies to my previous comments, I would like to upgrade the status to Approved.
The authors have carefully addressed the issues previously identified with only some minor clarifications needed.
What method was used to resize/rescale the 2D ECG images (which python function)? I see in the github code that the following method was used "cv2.resize(img, dim, interpolation = cv2.INTER_AREA)", please explain what the interpolation method cv2.INTER_AREA is.
As mentioned the 2D image is 1920x620 pixels, a 60% rescaling doesn't result in either 224x224 nor 299x299. Some information is still missing here. Furthermore, why is 224x224 and 299x299 a "standard size"? Is it that these sizes are common in other ML models? Other reasons?
When splitting the data between the training and test set, was an individual participant's data also split between the train and test sets? For example, suppose there is a participant "A", would one find some data of participant "A" in the training set and some other data of the same participant "A" in the test set, or was it split such that if there was data of participant "A" in the training set there was no data from participant "A" in the test set and vice versa? This could help to further interpret the results.
From a quick glance at the github code, it seems that participants' data could be present in both training and test sets. This is not inherently bad and I don't believe it invalidates the results and conclusions presented, especially since the work's main contribution is to compare the performance of 1D ECG signals to 2D ECG images as inputs to ML models. Nonetheless, it is important to mention.
A short definition of "stratified train-test split" could be helpful for readers.
The ECG signals used in this work contain some baseline wandering. It should be removed before further analysis. A simple technique with Matlab code is described in Rahman et al (2019) 1 to remove baseline wander. I am expecting a result comparing the emotion recognition rate before and after the baseline wander removal from the ECG signal.

In addition to that, is it possible to use LSTM to the ECG time series to find the accuracy of the emotion recognition from ECG signals?

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Partly
Competing Interests: No competing interests were disclosed.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Author Response 20 May 2022

Sharifah Noor Masidayu Sayed Ismail, Multimedia University, Bukit Beruang, Malaysia
The paper titled "Evaluation of electrocardiogram: numerical vs. image data for emotion recognition system" looks interesting. Although it is a short paper, I found it interesting and am positive about this work, but it should go through some modification:

We appreciate your feedback and suggestions. We have revised the manuscript as needed. Thank you for your suggestions. As per your recommendations, we have redrawn the figure, and an additional note has been added in the legend. As several modifications were made to the manuscript, the original Figure 2 was changed to Figure 1. This figure can be found on page 4.

The ECG signals used in this work contain some baseline wandering. It should be removed before further analysis. A simple technique with MATLAB code is described in Rahman et al (2019)1 to remove baseline wander. I am expecting a result comparing the emotion recognition before and after the baseline wander removal from the ECG signal.
Thank you for your suggestion. We did use the method suggested as cited accordingly. However, your expectation of having a result comparing the emotion recognition before and after the baseline wander removal from the ECG signal is inappropriate for our work because the focus of this paper is to compare emotion classification performance using 1-D and 2-D ECGs to investigate the effect of the ECG input format on the ERS.

In addition to that, is it possible to use LSTM to the ECG time series to find the accuracy of the emotion recognition from ECG signals?
As far as we know, it is possible to use LSTM on the ECG time series to find the accuracy of emotion recognition from ECG signals. The work by Song et al. used LSTM for the same purpose as your concern. Below is the paper, which you can take a look at. We hope it answers your question.

Please provide detailed information about transforming the signal to image conversion.
Thank you for pointing this out. We have revised the 2-D ECG subsection, where the preprocessing process until the transformation from 1-D to 2-D is explained in detail. This subsection can be found on pages 11 and 12, which can be read as follows: "The duration of the ECG recording varies according to the duration of the video (average = 199 seconds). As Katsigiannis and Ramzan proposed, this study analysed the final 60 seconds of each recording to allow time for a dominant emotion to emerge 13 . Following that, 1-D ECG was pre-processed using a simple MATLAB function by 34 to eliminate baseline wander caused by breathing, electrically charged electrodes, or muscle noise. The signal was then divided into four segments corresponding to 15 seconds each. Then, using MATLAB version 9.7, the 1-D ECG was transformed into a 2-D ECG (Figure 4). The image has a width of 1920 pixels and a height of 620 pixels.
Due to the fact that the 2-D ECG was converted to a rectangle shape, it is not easy to resize the photos to the standard input image sizes of 224×224 and 299×299. As a result, the converted 2-D ECG was resized to 60% of its original size using Python version 3.8.5. This scale percentage was chosen after considering the quality of the image, the type of feature extractor used, and the computational cost the system can afford. The coloured images were converted into greyscale images. Then, binarization of the image using Otsu's automatic image thresholding method 35 was done. This method ascertains the optimal threshold values from pixel values of 0 to 255 by calculating and evaluating their within-class variance 36 "

The reference-related comparison and discussion should be completed in the results and discussion. If possible, avoid referencing in the Conclusion section.
Thank you for bringing this to our attention. We found your comments extremely helpful and have revised the text accordingly. The revised text reads as follows: "The results indicate that both inputs work comparably well in classifying emotions. This finding is demonstrated by the fact that the best valence performance was obtained using a 1-D ECG, and the best arousal performance was acquired using a 2-D ECG. Additionally, ERS with 1-D ECG was combined with dimensionality reduction, called LDA. The presence of LDA improved the ERS performance in valence emotion but not in arousal. In terms of computational cost, 1-D ECG is preferable to 2-D ECG since it requires less computation time.
However, it is worth mentioning that the results obtained using 2-D ECG demonstrated potential for use as an input modality for the ERS. Additionally, 2-D ECGs are appealing because the format enables the use of a variety of image-based methods such as image augmentation to increase the data size, convolutional neural networks (CNN), and the application of transfer learning from models trained using large data. To summarise, the ERS performance of the two ECG inputs is comparable since both yield a promising outcome for emotion recognition."

Overall, the structure of the report is not coherent, the related work is incomplete, and the motivation for this work is not convincing. Furthermore, many important details are missing about the dataset and the methods used, which makes it difficult to trust the results and the comparisons they make with other works.
Detailed comments: Structure: several improvements to be made, information seems to be scattered throughout the article. For example, information about the dataset is present in both the introduction and methods sections. It is best to keep this information in the same section. Same for the related works.
Authors should give a brief explanation on what are ECG wave images in the introduction, otherwise readers in the emotion recognition field might confuse them with spectrograms which are more widely used in the field. It may also be better to change the term "numerical data" to "time-series data" or "1D data" (2D being an image).

Typo on page 3: EGC instead of ECG.
From the references pertaining to the use of ECG images (1-6, 23,24) only half actually use ECG wave images (2,4,23,24). The rest either use time-series or convert the time-series to spectrograms. The ones that do use ECG images, mainly analyze individual beats and not the entire ECG wave in order to detect medical heart issues. For the emotion recognition use-case it is necessary to analyze significantly larger regions of the signal than individual beats.
In emotion recognition, it is common to transform the ECG signal into a spectrogram image. The authors do not cite any work mentioning this method.

In the Emotion model paragraph (page 3): it is still unclear which model of emotions was used, Ekman or Russell? The authors say "the latter" (referring to Russell) but then mention binary classification, which may be confusing since the arousal/valence space is continuous, making it a regression problem or at least multiclass and not binary. The authors should re-word this part to make it clear.
Referring to wearable ECG devices, the authors state: "However, most of these devices store the ECG as images instead of raw numerical data". No references or other market analysis is provided to show that this is the case. The use of ECG wave images for emotion recognition is not properly motivated. I fail to see any advantage of using ECG wave images over the time-series data unless ECG time-series data is not available.
How was the data annotated in the DREAMER dataset? Were they continuous annotations or a single annotation per video clip? This is very relevant to include in the article. A few more words about the dataset are needed like a short description of the experimental protocol.
Arousal/valence rating values are ranging from 1 to 5 in the DREAMER dataset. The authors never explain how they split them into two classes (high/low) for the binary classification.
Authors do not include sufficient information on how the time-series ECG data was converted to an image (resolution, compression, windowing), or how the data was treated in general. Did the authors use the entire ECG signal for each video? Was there any windowing? Since the authors did not properly summarize the dataset (how were the videos annotated?), it is difficult to grasp or guess how the data was processed.
None of the cited literature used any of the image feature extraction methods that the authors used, and the authors did not discuss their reasoning for why they selected those image feature extraction methods and not the ones established in the ECG image analysis literature that they cited. Some more illustrations of these features may be useful besides the one in Figure 5 in order to convince the readers.

Support vector machine: It is not clear what preprocessing steps were applied to the data. For example, in the DREAMER dataset baseline they only use the last 60s of data for each film clip.
Data was divided 80:20 and 10-fold cross-validation was used. The authors do not specify exactly how the data was split (see DREAMER dataset paper section V as an example). From reading this part, I can only assume that all data from all participants was used for each fold (general model), something that is diverging from how the data was split in the DREAMER dataset paper (they made models for each individual participant). Therefore any comparisons to the results of the DREAMER baseline are invalid.
Results: If the classes are unbalanced (as the DREAMER dataset paper indicates), accuracy is not valid on its own; include the F1 score and/or Cohen's kappa.
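The reviewer's point can be made concrete with a toy example (hypothetical counts, not the DREAMER class distribution): a majority-class predictor on a 90/10 split scores high accuracy while its F1 score for the minority class is zero.

```python
# Toy illustration: a majority-class predictor on imbalanced labels.
# 90 "low" samples, 10 "high" samples; the classifier always predicts "low".
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# F1 for the positive ("high") class: 2*TP / (2*TP + FP + FN)
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

print(accuracy)  # 0.9, despite the classifier being useless
print(f1)        # 0.0 for the minority class
```

This is why the F1 score (or Cohen's kappa) should accompany accuracy whenever the class distribution is skewed.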
Discussion: "Ref. 11" has nothing to do with the statement in that paragraph. LDA was actually applied in the DREAMER dataset paper and they reported that there were no significant differences in performance.

reasons outlined above.
Author Response 20 May 2022
Sharifah Noor Masidayu Sayed Ismail, Multimedia University, Bukit Beruang, Malaysia

The authors use the DREAMER dataset to compare the emotion recognition performance of features extracted from the time-series ECG signal versus features extracted from images of the ECG wave signal. An SVM model is used as the classifier.
Overall, the structure of the report is not coherent, the related work is incomplete, and the motivation for this work is not convincing. Furthermore, many important details are missing about the dataset and the methods used which makes it difficult to trust the results and the comparisons they make with other works.
Thanks for taking the time to review our manuscript. We have revised the manuscript based on your comments and suggestions accordingly.

Structure: several improvements to be made, information seems to be scattered throughout the article. For example, information about the dataset is present in both the introduction and methods sections. It is best to keep this information in the same section. Same for the related works.
Thanks for pointing this out. We agree with your suggestion and have attempted to address these issues by revising each section accordingly. The revised parts can be found on pages 3 to 13.
Authors should give a brief explanation on what are ECG wave images in the introduction, otherwise readers in the emotion recognition field might confuse them with spectrograms which are more widely used in the field. It may also be better to change the term "numerical data" to "time-series data" or "1D data" (2D being an image).
Thank you for drawing our attention to this. As you mentioned, we have added an explanation of what ECG wave images are in the Introduction section that reads as follows: "Fundamentally, ECG is used to measure electrical activity in the human heart by attaching electrodes to the human body. Due to the continual blood pumping action to the body, the electrical activity of the heart can be found in the sinoatrial node. The electrocardiogram signal is composed of three basic components: P, QRS, and T waves ( Figure 1). P waves are produced during atrium depolarization, QRS complexes are produced during ventricular depolarization, and T waves are produced during ventricle recovery.
Despite this, majority of the portable devices record the ECG signal as images (2-D images) in a PDF file rather than as raw numerical data (1-D data) 16 -18 . The example of a PDF-based 2-D ECG is depicted in Figure 2. Due to this problem, researchers were required to convert the PDF file of the ECG into 1-D data before performing further emotion analysis, adding complexity to the pre-processing process. On this account, given the positive results obtained in monitoring and diagnosing cardiovascular-related diseases, the efficacy of 2-D ECG in emotion studies also warrants further investigation." Additionally, we also added Figure 2 that shows the snippet of the 2-D ECG from the PDF file. This figure can be found on page 4.
Furthermore, as per the suggestion, we have changed the terms "numerical data" to "1-D ECG" and "wave images" to "2-D ECG".

Typo on page 3: EGC instead of ECG.
Thank you so much for catching these glaring and confusing errors, which we have now corrected.

From the references pertaining to the use of ECG images (1-6, 23, 24), only half actually use ECG wave images (2, 4, 23, 24). The rest either use time-series or convert the time-series to spectrograms. The ones that do use ECG images mainly analyze individual beats and not the entire ECG wave in order to detect medical heart issues. For the emotion recognition use-case it is necessary to analyze significantly larger regions of the signal than individual beats.
Based on your comment, we have revised our Related Works section and corrected these points accordingly. Additionally, we added a summary of existing works that use 1-D and 2-D ECG input with their purposes, tabulated in Table 1. The revisions for these issues can be found on pages 4 to 8.

In emotion recognition, it is common to transform the ECG signal into a spectrogram image. The authors do not cite any work mentioning this method.
Thanks for pointing this out. We agree with your comments. Therefore, we have revised our Related Works section and provided improvements through the fourth paragraph on page 6: "Despite rising popularity among medical practitioners in assessing patients' cardiac disease, 2-D ECG remains inadequate compared to 1-D ECG usage as a type of input in emotion recognition studies. As a result, the number of studies employing 1-D ECG in ERS is higher than that utilizing 2-D ECG in ERS. However, rather than employing a printout-based 2-D ECG, emotion researchers classified human emotions using 2-D ECG spectral images. For example, ref. 15 determines the R-peaks of the electrocardiogram prior to generating the R-R interval (RRI) spectrogram. Following that, CNN was used to classify the emotions, with an accuracy rate greater than 90%. Elalamy et al. 30 used ResNet-50 to extract features from a 2-D ECG spectrogram. Then, Logistic Regression (LR) was employed as a classifier and achieved an accuracy of 78.30% in classifying emotions." Additionally, in Table 1, the ECG input was listed as either 1-D or 2-D ECG, where 2-D was further categorised into standard 2-D ECG or spectral 2-D ECG.
Authors do not include any references to other works that use the DREAMER dataset ECG signals for emotion recognition, here are some:

Pritam Sarkar et al. 2020 Self-supervised ECG Representation Learning for Emotion Recognition
I also came across another publication of some of the co-authors which would be advantageous to reference:

Muhammad Anas Hasnul et al. 2021 Evaluation of TEAP and AuBT as ECG's Feature Extraction Toolbox for Emotion Recognition System.
Thank you for the papers suggested. We have added the references according to your recommendation, which you can find on page 6 as follows: "Additionally, numerous other researchers have also used the ECG signals from the DREAMER dataset to perform emotion recognition. For instance, 1-D ECG data from the DREAMER dataset was utilized by Wenwen He et al. 24 , who suggested an approach for emotion recognition using ECG contaminated by motion artefacts. The proposed approach improved classification accuracy by 5% to 15%. Additionally, Pritam and Ali 25 also employed 1-D ECG from the DREAMER dataset to develop a self-supervised deep multi-task learning framework for ERS, which consists of two stages of learning: ECG representation learning and emotion classification learning. The accuracy gained in this study was greater than 70%. Hasnul et al. 12 also used the 1-D ECG from the DREAMER dataset to compare the performance of two feature extractor toolboxes. They noted that the dataset's size and the type of emotion classified might affect the suitability of the extracted features."

In the Emotion model paragraph (page 3): it is still unclear which model of emotions was used, Ekman or Russell? Authors say "the latter" (referring to Russell) but then mention binary classification, which may be confusing since the arousal/valence space is continuous, making it a regression problem or at least multiclass and not binary. Authors should re-word this part to make it clear.
Thank you for pointing this out. The reviewer is correct; the original phrase is confusing. Therefore, we have removed this part to avoid any further confusion. Furthermore, this concern has been changed and revised to make it more straightforward and understandable. The revised text is located in the experimental setting subsection under the Method section on page 13: "The scale of self-assessed emotions, which ranges from 1 (lowest) to 5 (highest), was classified using a five-point scale with middle-point thresholds (an average rate of 3.8). As a result, scales four and five were assigned to the high class, while the remaining scales were assigned to the low class."

Referring to wearable ECG devices, the authors state: "However, most of these devices store the ECG as images instead of raw numerical data". No references or other market analysis is provided to show that this is the case. The use of ECG wave images for emotion recognition is not properly motivated. I fail to see any advantage of using ECG wave images over the time-series data unless ECG time-series data is not available.
Thank you for drawing our attention to this, and we agree with the comments. Therefore, we have cited the necessary references and revised the statement as per the suggestion.
The revised text can be found on page 4 as follows: "Previous research on human emotions has primarily relied on either direct analysis of 1-D data 12 -14 or the conversion of 1-D data to a 2-D spectral image 15 prior to identifying the emotions. Despite this, the majority of portable devices record the ECG signal as images (2-D images) in a PDF file rather than as raw numerical data (1-D data) 16 -18 . An example of a PDF-based 2-D ECG is depicted in Figure 2. Due to this problem, researchers were required to convert the PDF file of the ECG into 1-D data before performing further emotion analysis, adding complexity to the pre-processing process. On this account, given the positive results obtained in monitoring and diagnosing cardiovascular-related diseases, the efficacy of 2-D ECG in emotion studies also warrants further investigation."

How was the data annotated in the DREAMER dataset? Were they continuous annotations or a single annotation per video clip? This is very relevant to include in the article. A few more words about the dataset are needed, like a short description of the experimental protocol.
Thank you for bringing this issue to our attention. The short description of the DREAMER dataset has been improved to address this issue. The revised text is located in the Method section under a subsection called "The dataset (DREAMER)" on page 8: "This study used ECG signals from Katsigiannis and Ramzan 13 called DREAMER. The DREAMER dataset is a freely accessible database of electroencephalogram (EEG) and electrocardiogram (ECG) signals used in emotion research. However, EEG signals were excluded from this study because the primary focus is on ECG signals. The ECG was recorded using the SHIMMER ECG sensor at 256 Hz and stored in 1-D format. The DREAMER dataset contains 414 ECG recordings from 23 subjects who were exposed to 18 audio-visual stimuli designed to evoke emotion. Each participant assessed their emotions on a scale of 1 to 5 for arousal, valence, and dominance. However, because this study was primarily concerned with arousal and valence ratings, participants' evaluations of dominance were discarded."

Arousal/valence rating values are ranging from 1 to 5 in the DREAMER dataset. The authors never explain how they split them into two classes (high/low) for the binary classification.
Thank you for bringing this to our attention. We have improved this part by adding an explanation of how we categorise emotions into two classes. The explanation can be found on page 13: "The scale of self-assessed emotions, which ranges from 1 (lowest) to 5 (highest), was classified using a five-point scale with middle-point thresholds (an average rate of 3.8). As a result, scales four and five were assigned to the high class, while the remaining scales were assigned to the low class."

Authors do not include sufficient information on how the time-series ECG data was converted to an image (resolution, compression, windowing), or how the data was treated in general. Did the authors use the entire ECG signal for each video? Was there any windowing? Since the authors did not properly summarize the dataset (how were the videos annotated?), it is difficult to grasp or guess how the data was processed.
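The high/low mapping described in the response above (ratings 4 and 5 to the high class, 1 to 3 to the low class) can be sketched as follows; the function name and the example ratings are illustrative, not taken from the manuscript:

```python
def to_binary_class(rating: int) -> str:
    """Map a 1-5 self-assessment rating to a binary class.

    Ratings 4 and 5 are treated as "high"; 1-3 as "low",
    following the mid-point threshold described in the response.
    """
    if not 1 <= rating <= 5:
        raise ValueError(f"rating must be in 1..5, got {rating}")
    return "high" if rating >= 4 else "low"

# Hypothetical DREAMER-style valence ratings for a few clips.
ratings = [1, 2, 3, 4, 5]
labels = [to_binary_class(r) for r in ratings]
print(labels)  # ['low', 'low', 'low', 'high', 'high']
```

Note that under this threshold a neutral rating of 3 falls into the low class, which is one source of the class imbalance discussed later.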
We agree with this comment. Therefore, we improved the explanation of the pre-processing of the 2-D ECG. The revised text can be found in the first and second paragraphs of subsection 2-D ECG on page 11.
"The duration of the ECG recording varies according to the duration of the video (average = 199 seconds). As Katsigiannis and Ramzan proposed, this study analysed the final 60 seconds of each recording to allow time for a dominant emotion to emerge 13 . Following that, the 1-D ECG was pre-processed using a simple MATLAB function from ref. 34 to eliminate baseline wander caused by breathing, electrically charged electrodes, or muscle noise. The signal was then divided into four segments of 15 seconds each. Then, using MATLAB version 9.7, the 1-D ECG was transformed into a 2-D ECG (Figure 4). The image has a width of 1920 pixels and a height of 620 pixels.
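The windowing described in the quoted passage (the final 60 seconds of each recording at 256 Hz, split into four 15-second segments) can be sketched as below; the constant names and the synthetic signal are illustrative, not from the manuscript:

```python
FS = 256          # DREAMER ECG sampling rate (Hz)
WINDOW_S = 60     # analyse the final 60 seconds of each recording
SEGMENT_S = 15    # four non-overlapping 15-second segments

def segment_ecg(signal):
    """Return four 15-s segments taken from the last 60 s of a 1-D ECG."""
    tail = signal[-WINDOW_S * FS:]      # last 60 s = 15360 samples
    seg_len = SEGMENT_S * FS            # 3840 samples per segment
    return [tail[i:i + seg_len] for i in range(0, len(tail), seg_len)]

# Synthetic recording of about 199 s (the dataset's average clip length).
ecg = list(range(199 * FS))
segments = segment_ecg(ecg)
print(len(segments), len(segments[0]))  # 4 3840
```

Each 15-second segment then becomes one 1920×620 image in the conversion step that follows.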
Due to the fact that the 2-D ECG was converted to a rectangular shape, it is not easy to resize the images to the standard input image sizes of 224×224 and 299×299. As a result, the converted 2-D ECG was resized to 60% of its original size using Python version 3.8.5. This scale percentage was chosen after considering the quality of the image, the type of feature extractor used, and the computational cost the system can afford. The coloured images were converted into greyscale images. Then, binarization of the image using Otsu's automatic image thresholding method 35 was done. This method ascertains the optimal threshold values from pixel values of 0 to 255 by calculating and evaluating their within-class variance 36 ."

None of the cited literature used any of the image feature extraction methods that the authors used, and the authors did not discuss their reasoning for why they selected those image feature extraction methods and not the ones established in the ECG image analysis literature that they cited. Some more illustrations of these features may be useful besides the one in Figure 5 in order to convince the readers.
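Otsu's method mentioned in the quoted passage selects the 0-255 threshold that minimises the within-class variance, which is equivalent to maximising the between-class variance. A minimal pure-Python sketch on a toy bimodal pixel distribution (illustrative only; the manuscript does not specify which implementation was used):

```python
def otsu_threshold(pixels):
    """Return the 0-255 threshold maximising between-class variance.

    Equivalent to minimising the within-class variance, as in Otsu's method.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w0 = 0      # background weight (pixel count below/at threshold)
    sum0 = 0    # background intensity sum
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Toy bimodal "image": dark ECG-trace pixels vs. bright background.
pixels = [20] * 500 + [230] * 1500
t = otsu_threshold(pixels)              # lands between the two modes
binary = [0 if p <= t else 255 for p in pixels]
```

On a real greyscale ECG image, the same threshold would separate the dark wave trace from the light paper background.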
Thank you for pointing this out. The cited literature either uses its own algorithms to detect peaks on the PQRST waves or automatically extracts ECG features using a deep learning system. However, we have included our reasons for employing these image feature extraction methods, which we believe add a valuable contribution to the state of the art. Additionally, we removed Figure 5 and replaced it with a description of each feature extractor to help convince the readers. The revised text can be found on pages 11 and 12: "The area of interest for 2-D ECG lies on the PQRST waves, making peak detectors the best approach to employ. Therefore, six different feature extractors that could extract peaks, edges, or corners were applied to extract features from 2-D ECGs using Python version 3.8."

Thank you for this comment. The pre-processing part can be found in the Methods section under the 1-D ECG and 2-D ECG subsections. The pre-processing for 1-D ECG reads as follows: "The AUBT and TEAP feature extractors were included with the Low Pass Filter (LPF), a filter meant to reject all undesirable frequencies in a signal. The LPF was one of the most widely used filters before the computation of statistical features for physiological signals 31, 32 . As a result, automated 1-D ECG pre-processing utilizing LPF was performed in this study to reduce muscle and respiratory noise in ECG signals."

The pre-processing for 2-D ECG reads as follows: "The duration of the ECG recording varies according to the duration of the video (average = 199 seconds). As Katsigiannis and Ramzan proposed, this study analysed the final 60 seconds of each recording to allow time for a dominant emotion to emerge 13 . Following that, the 1-D ECG was pre-processed using a simple MATLAB function from ref. 34 to eliminate baseline wander caused by breathing, electrically charged electrodes, or muscle noise. The signal was then divided into four segments of 15 seconds each. Then, using MATLAB version 9.7, the 1-D ECG was transformed into a 2-D ECG."

Data was divided 80:20 and 10-fold cross-validation was used. The authors do not specify exactly how the data was split (see DREAMER dataset paper section V as an example). From reading this part, I can only assume that all data from all participants was used for each fold (general model), something that is diverging from how the data was split in the DREAMER dataset paper (they made models for each individual participant). Therefore, any comparisons to the results of the DREAMER baseline are invalid.
Thank you for pointing this out. We have addressed this issue by adding a new subsection under Method, namely, Experimental Setting. This subsection is located on pages 13 and 14, and reads as follows: "The hyperparameters for SVM were tuned using an exhaustive parameter search tool, GridSearchCV, from Scikit-learn that automates the tuning procedure 45 . This study tuned only the parameters with a high relative tuning risk and left the remainder at their default values because they are the least sensitive to the hyperparameter tuning process, as suggested by Weerts, Mueller, and Vanschoren 46 .
The dataset was split into a reasonable proportion of training and testing sets to evaluate the model's performance on new, unseen data. This study used a stratified train-test split of 80:20 for the training and testing sets. This strategy guarantees that the class proportions of the dataset are preserved in both sets.
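A stratified 80:20 split, as described in the quoted passage, draws the same fraction of each class into the test set. A minimal pure-Python sketch of the idea (a simplified stand-in for library routines such as scikit-learn's train_test_split with stratify; function name and toy labels are illustrative):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_ratio=0.2, seed=42):
    """Return (train_idx, test_idx) preserving per-class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)

    train_idx, test_idx = [], []
    for lab in sorted(by_class):            # deterministic class order
        idxs = by_class[lab]
        rng.shuffle(idxs)
        n_test = round(len(idxs) * test_ratio)
        test_idx.extend(idxs[:n_test])
        train_idx.extend(idxs[n_test:])
    return train_idx, test_idx

# Toy labels: 80 "low" vs. 20 "high", mimicking an imbalanced class split.
labels = ["low"] * 80 + ["high"] * 20
train_idx, test_idx = stratified_split(labels)
# Both sets keep the 4:1 ratio: 64 low + 16 high in train, 16 low + 4 high in test.
```

Without stratification, a random 20% draw from such imbalanced labels could easily leave the test set with almost no minority-class samples.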
Additionally, as we had a small dataset size, this study applied K-Fold cross-validation, with the number of folds set to 10, the most commonly used number in prior research, to improve ERS performance."

… that need attention. Hence, the following suggestions should be addressed to enhance the quality of the manuscript. The literature survey is poor; more related state-of-the-art works should be cited.
1. There is no citation of work taking into account both ECG image and ECG numerical data.
2. It has been mentioned that the converted ECG images have been resized to 60% of the original size to reduce the computational time. What is the reason for choosing 60%?
3. There is no analysis of the computational complexity of the proposed method.
4. In Table 4, emotion classification accuracies for both ECG image and ECG numerical data have been provided. Can these accuracy values be accepted in practical applications?
5. There is no comparison of the proposed method with state-of-the-art methods.
…scale percentage was chosen after considering the quality of the image, the type of feature extractor used, and the computational cost the system can afford."

There is no analysis of the computational complexity of the proposed method.
Thank you for pointing out this matter. We have included the analysis of the computational complexity under the Results section, tabulated in Table 7, which discusses the time analysis of both inputs used to train and test the ERS model. The analysis is located on page 14: "For comparison purposes, the computation time for both ECG inputs was recorded and reported in Table 7. The average time required to compute 1-D is 1.58 ± 0.07 seconds. In comparison, the average computation time for 2-D is 3377.425 ± 3138.875 seconds. Therefore, according to the observation, 2-D took the longest computation time, whereas 1-D obtained the shortest." Additionally, the analysis of the computational time was touched on in the Discussion and Conclusion section as follows: "In terms of computational cost, 1-D ECG is superior to 2-D ECG since it requires less computation time."

In Table 4, emotion classification accuracies for both ECG image and ECG numerical data have been provided. Can these accuracy values be accepted in practical applications?
Thank you for pointing this out. Our results have been updated according to the latest experiment and are tabulated in Table 6 on page 13. Based on these results, the accuracy and F1-score achieved in this study are on par with existing works, which shows that our ERS model can be accepted in practical applications. However, this does not rule out the possibility of improving this result, as there is much more room to improve the ERS performance to develop a more robust ERS in the future.
There is no comparison of the proposed method with state-of-the-art methods.
A comparison with existing work (DREAMER) has been provided for 1-D ECG. This comparison can be found in the result section on page 15, tabulated in Table 6. However, no existing work using the 2-D ECG of the DREAMER dataset has been reported. Therefore, no comparison can be made.
Thank you for bringing our attention to this. We have revised the references and citation part accordingly and completed it in all respects and in a uniform style as per suggestions.
Competing Interests: The authors declare that they have no conflict of interest.