Keywords
smartphone, one-dimensional motion signal, activity recognition, stacking deep network, discriminant learning
Human activity recognition (HAR) can be categorized into vision-based and sensor-based approaches. In vision-based HAR, an image sequence, in the form of video, recording the human activity is captured by a camera.1 This sequence is analysed to recognize the nature of an action. Such systems are applied in surveillance, human-computer interaction and healthcare monitoring. In sensor-based HAR, human activities are captured by inertial sensors, such as accelerometers, gyroscopes or magnetometers. Among these approaches, sensor-based methods are more favourable due to their lightweight nature, portability and low energy usage.2 With the advancement of mobile technology, smartphones are equipped with high-end components; the accelerometer and gyroscope sensors embedded in a smartphone make it a feasible acquisition device for HAR. Smartphone-based HAR has therefore been an active area of research in recent years.3–6 In this work, we categorize smartphone-based HAR as part of sensor-based HAR, where activity inertial signals are collected through smartphone sensors.
Hand-crafted approaches using manually computed statistical features have been proposed.4,7 These works applied various machine learning techniques, such as decision trees, logistic regression, multilayer perceptrons, naïve Bayes and Support Vector Machines, to classify the detected activities. The performance of handcrafted approaches may suffer in complex scenarios owing to their limited feature representation capability, and the algorithms can easily become trapped in a local minimum instead of reaching the global optimum.
Hence, various deep neural networks (DNNs) were explored in HAR owing to their capability of extracting informative features. A DNN is a machine learner that automatically unearths data characteristics hierarchically, from lower to deeper levels.8,9 The works of Ronao and Cho (2016),10 Lee et al. (2017)11 and Ignatov (2018)12 explored deep convolutional neural networks by exploiting the activity characteristics in the one-dimensional time-series signals captured by smartphone inertial sensors. The empirical results substantiated that the extracted deep features were crucial for data representation, with promising recognition performance.
Zeng et al. (2014) proposed a modified convolutional neural network to extract scale-invariant characteristics and local dependency of the acceleration time-series signal.13 The weight-sharing mechanism in the convolutional layer was modified: unlike the vanilla model, where the local filter weights are shared across all positions of the input space, the authors incorporated a more relaxed weight-sharing strategy (partial weight sharing) to enhance the performance.
The Recurrent Neural Network (RNN) was proposed to process sequential data by analysing previously inputted data and processing it linearly over time. To address the vanishing gradient problem of RNNs, the Long Short-Term Memory (LSTM) network was introduced. Chen et al. (2016) explored the feasibility of LSTM in predicting human activities.14 Empirical results demonstrated an encouraging performance of LSTM in HAR. Further, an enhanced version of LSTM, i.e. the bidirectional LSTM, was proposed.15 Unlike LSTM, the bidirectional LSTM exploits both past and future information during feature analysis, so a richer description of features can be extracted for classification.
A cascade ensemble learning (CELearning) model was proposed for smartphone-based HAR.16 There are multiple layers in this aggregation network, and the model goes deeper layer by layer. Each layer contains Extremely Gradient Boosting Trees, Random Forest, Extremely Randomized Trees and Softmax Regression. The CELearning model attains higher performance, and its training process is rather simple and efficient. Besides, the Hierarchical Multi-View Aggregation Network (HMVAN) is another aggregation model.17 It integrates features from various feature spaces in a hierarchical context, with three aggregation modules designed at the feature, position and modality levels. Further, a modified Dynamic Time Warping (DTW) method has been proposed for template selection in human activity recognition.18 Empirical results showed that the modified DTW improved computational efficiency and similarity-measurement accuracy. In view of the time-series and continuous characteristics of sensor data, a two-stage continuous hidden Markov model framework was proposed, taking advantage of the innate hierarchical structure of basic activities.19 This framework could diminish the feature computation overhead by manipulating different feature subsets on different subclasses. Experiments showed that the proposed hierarchical structure drastically boosted the recognition performance.
In DNNs, learning modular components in multiple processing layers provide multiple levels of feature abstraction. These layers are trained based on a versatile learning principle, without requiring manual design by experts.20 DNNs accomplish excellent performance; however, these networks are not well trained when training samples are limited, leading to performance degradation. Furthermore, there is a lack of theoretical ground on how to fine-tune the gigantic set of hyper-parameters.17 The outstanding accomplishment of DNNs can be achieved only if sufficient training data is accessible for fine-tuning the numerous parameters, and a high-specification GPU is needed to train the network on gargantuan datasets. Besides, an impersonal HAR solution is preferable for real-time applications, as it can be applied directly to new users without necessitating model regeneration.
A stacking-based deep learning model for smartphone-based HAR is proposed. Inspired by the hierarchical learning in DNNs, the proposed stacked learning network aggregates multiple learning modules, one after another, in a hierarchical framework. Specifically, a discriminant learning function is implemented in each module for discriminant mapping to generate discriminative features, level by level. The lower (generic) to deeper features are input to a classifier for activity identification. This proposed approach is termed Stacked Discriminant Feature Learning (SDFL).
The contributions of this work are summarized as follows:
1. A deep analytic model is proposed for smartphone-based HAR for quality feature extraction without the need for a gigantic training set and tedious hyper-parameter tuning.
2. An adaptable modular model is developed with a discriminant learning function in each module to extract discriminant deep features, requiring no graphics processing unit (GPU) but only a central processing unit (CPU), with a fast learning rate.
3. An experimental analysis using various performance evaluation metrics (i.e. recall, precision, area under the curve, computational time, etc.) under a subject-independent protocol (no overlap in subjects between training and testing sets) to facilitate an impersonal HAR solution.
Smartphone inertial sensors were used to capture 3-axial linear (total) acceleration and 3-axial angular velocity signals. These signals were pre-processed into time- and frequency-domain features, as listed in Table 1. Next, the pre-processed data were fed into the Stacked Discriminant Feature Learning (SDFL) network for feature learning. The extracted feature template was then fed into a nearest-neighbour (NN) classifier for classification. An overview of the system is illustrated in Figure 1.
SDFL is a pile of multiple discriminant learning layers interleaved with a nonlinear activation unit, as illustrated in Figure 2. By cascading multiple discriminant learning modules, each layer of SDFL learns from the input data together with the learned nonlinear features of the preceding module. The depth of the stack is determined empirically on a database subset: layers are added until the performance stops improving or begins to degrade. In this case, a depth of three layers showed the optimal performance, so we adopted this three-layer architecture. In detail, the first discriminant learning module learns from the input data, and the second learning module learns from an input vector concatenating the input data and the learned features of the first learning module. The third learning process is similar: the third learning module learns from an input vector comprising the input data and the learned features of the second learning module, as sketched below.
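To make the stacking procedure concrete, the following is a minimal sketch (our illustration, not the authors' released code) of the three-layer cascade. Scikit-learn's `LinearDiscriminantAnalysis` is used here as a stand-in for the per-module discriminant mapping formalized in the next subsection; the class and variable names are ours.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def sigmoid(z):
    """Element-wise logistic activation applied after each discriminant module."""
    return 1.0 / (1.0 + np.exp(-z))


class SDFL:
    """Sketch of Stacked Discriminant Feature Learning.

    Each layer learns a Fisher-style discriminant mapping on the raw input
    stacked with the previous layer's nonlinear features, then applies a
    sigmoid; the final template concatenates all layers' nonlinear features.
    """

    def __init__(self, n_layers=3):
        self.n_layers = n_layers
        self.modules_ = []

    def fit_transform(self, X, y):
        n_components = len(np.unique(y)) - 1                  # C - 1 directions per layer
        feats, prev = [], None
        for _ in range(self.n_layers):
            Z = X if prev is None else np.hstack([X, prev])   # stacked layer input
            lda = LinearDiscriminantAnalysis(n_components=n_components)
            h = sigmoid(lda.fit_transform(Z, y))              # nonlinear learned features
            self.modules_.append(lda)
            feats.append(h)
            prev = h
        return np.hstack(feats)                               # final 3*(C-1) template

    def transform(self, X):
        feats, prev = [], None
        for lda in self.modules_:
            Z = X if prev is None else np.hstack([X, prev])
            h = sigmoid(lda.transform(Z))
            feats.append(h)
            prev = h
        return np.hstack(feats)
```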
Let $\mathbf{x}_i \in \mathbb{R}^{d}$ be a transformed data vector of dimension $d$, i.e. $d = 561$, let $y_i$ be the class label of $\mathbf{x}_i$, and let $C$ be the number of training classes, i.e. $C = 6$. Each of the $C$ classes has a class mean $\boldsymbol{\mu}_j$, the total mean vector is $\boldsymbol{\mu}$, and $n_j$ denotes the number of training samples of the $j$th class. In the first learning layer, the input vector is the transformed data $\mathbf{x}_i$. The intrapersonal scatter matrix $\mathbf{S}_w$ and interpersonal scatter matrix $\mathbf{S}_b$ are defined as:

$$\mathbf{S}_w = \sum_{j=1}^{C} \sum_{\mathbf{x}_i \in X_j} \left(\mathbf{x}_i - \boldsymbol{\mu}_j\right)\left(\mathbf{x}_i - \boldsymbol{\mu}_j\right)^{T}, \qquad \mathbf{S}_b = \sum_{j=1}^{C} n_j \left(\boldsymbol{\mu}_j - \boldsymbol{\mu}\right)\left(\boldsymbol{\mu}_j - \boldsymbol{\mu}\right)^{T},$$

where $T$ denotes the transpose operation and $X_j$ is the set of training samples of the $j$th class.
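As an illustration of these definitions, the two scatter matrices can be computed directly with NumPy; the snippet below is our own sketch and the variable names are illustrative.

```python
import numpy as np

def scatter_matrices(X, y):
    """Intrapersonal (within-class) S_w and interpersonal (between-class) S_b.

    X : (n_samples, d) input vectors of the current layer
    y : (n_samples,) integer activity labels, C = 6 classes
    """
    d = X.shape[1]
    mu = X.mean(axis=0)                       # total mean vector
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        X_c = X[y == c]                       # samples of the jth class
        mu_c = X_c.mean(axis=0)               # class mean
        diff = X_c - mu_c
        S_w += diff.T @ diff                  # sum of (x - mu_j)(x - mu_j)^T
        m = (mu_c - mu).reshape(-1, 1)
        S_b += X_c.shape[0] * (m @ m.T)       # n_j (mu_j - mu)(mu_j - mu)^T
    return S_w, S_b
```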
Next, a linear transformation $\mathbf{W}$ is computed by maximizing the Rayleigh coefficient. With this optimization, data from the same class are projected close to each other, while data from different classes are projected as far apart as possible. This optimization function is termed Fisher's criterion,21

$$\mathbf{W}^{*} = \arg\max_{\mathbf{W}} \frac{\left|\mathbf{W}^{T}\mathbf{S}_b\mathbf{W}\right|}{\left|\mathbf{W}^{T}\mathbf{S}_w\mathbf{W}\right|}.$$
The mapping $\mathbf{W}$ is constructed by solving the generalized eigenvalue problem,

$$\mathbf{S}_b \mathbf{w} = \lambda \mathbf{S}_w \mathbf{w}.$$
The learned features are produced by projecting the input data onto the mapping subspace,

$$\mathbf{f}_i = \mathbf{W}^{T}\mathbf{x}_i.$$
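For instance, the mapping can be obtained numerically with `scipy.linalg.eigh`, which solves the generalized eigenproblem. The sketch below reuses `scatter_matrices` from the snippet above, keeps the $C-1$ leading eigenvectors, and adds a small ridge term to $\mathbf{S}_w$ for numerical stability (our choice; the paper does not specify a regularizer).

```python
import numpy as np
from scipy.linalg import eigh

def fit_discriminant_mapping(X, y, eps=1e-6):
    """Return the projection W that maximizes Fisher's criterion."""
    S_w, S_b = scatter_matrices(X, y)          # from the previous sketch
    d = X.shape[1]
    S_w_reg = S_w + eps * np.eye(d)            # ridge term for a well-posed problem
    # Solve S_b w = lambda S_w w; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = eigh(S_b, S_w_reg)
    n_components = len(np.unique(y)) - 1       # C - 1 discriminant directions
    W = eigvecs[:, ::-1][:, :n_components]     # eigenvectors of the largest eigenvalues
    return W

# Learned (linear) features of the current layer:
# F = X @ W    -> shape (n_samples, C - 1)
```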
$\mathbf{x}_i$ is thus transformed from $d$ dimensions to $C-1$ dimensions. We denote by $l$ the index of the modular layer in SDFL; the learned feature vector of the first modular unit is written $\mathbf{f}_i^{(1)}$. A nonlinear input-output mapping is applied to $\mathbf{f}_i^{(1)}$ via a nonlinear activation function. In this study, the sigmoid function, $\sigma(z) = 1/(1+e^{-z})$, is used for the nonlinear projection. To be specific, $\mathbf{h}_i^{(1)} = \sigma\!\left(\mathbf{f}_i^{(1)}\right)$ is the nonlinear learned feature vector of the first modular unit.
For deeper modules, the input vector of the respective module is a stacking vector containing the input data and the learned features of the preceding module, i.e. $\mathbf{x}_i^{(l)} = \left[\mathbf{x}_i;\, \mathbf{h}_i^{(l-1)}\right]$, where $l = 2, 3$ and $\mathbf{h}_i^{(l-1)} = \sigma\!\left(\mathbf{f}_i^{(l-1)}\right)$. The intrapersonal scatter matrix $\mathbf{S}_w^{(l)}$ and interpersonal scatter matrix $\mathbf{S}_b^{(l)}$ are formulated as

$$\mathbf{S}_w^{(l)} = \sum_{j=1}^{C} \sum_{\mathbf{x}_i^{(l)} \in X_j^{(l)}} \left(\mathbf{x}_i^{(l)} - \boldsymbol{\mu}_j^{(l)}\right)\left(\mathbf{x}_i^{(l)} - \boldsymbol{\mu}_j^{(l)}\right)^{T}, \qquad \mathbf{S}_b^{(l)} = \sum_{j=1}^{C} n_j \left(\boldsymbol{\mu}_j^{(l)} - \boldsymbol{\mu}^{(l)}\right)\left(\boldsymbol{\mu}_j^{(l)} - \boldsymbol{\mu}^{(l)}\right)^{T}.$$
Here, $\boldsymbol{\mu}_j^{(l)}$ is the $j$th class mean computed from the input vectors of the $j$th class, and $\boldsymbol{\mu}^{(l)}$ is the total mean vector at the $l$th modular unit. The final feature vector is the concatenation of the nonlinear learned features of every modular layer, with length $3(C-1)$,

$$\mathbf{z}_i = \left[\mathbf{h}_i^{(1)};\, \mathbf{h}_i^{(2)};\, \mathbf{h}_i^{(3)}\right].$$
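As a quick, illustrative sanity check of the dimensions implied by these definitions: with $d = 561$ and $C = 6$, the layer inputs have 561, 566 and 566 dimensions respectively, and the final template has $3(C-1) = 15$ dimensions. Reusing the SDFL sketch given earlier (toy random data, not the UCI HAR signals):

```python
import numpy as np

rng = np.random.default_rng(0)
X_toy = rng.normal(size=(120, 561))      # toy data with the UCI HAR feature dimension
y_toy = rng.integers(1, 7, size=120)     # six activity labels (1..6)

sdfl = SDFL(n_layers=3)                  # class from the earlier sketch
template = sdfl.fit_transform(X_toy, y_toy)
print(template.shape)                    # (120, 15) = 3 * (C - 1) features per sample
```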
We scrutinized how well SDFL could analyse the inertial data and correctly classify the activities in a user-independent scenario. The challenge of the subject-independent protocol is that human gait varies across subjects, so the inertial data patterns differ to a certain degree even when the same activity is performed, as illustrated in Figure 3 (for standing).
In this work, the UCI HAR dataset was partitioned into two sets: 70% of the volunteers were selected to generate the training data and the remaining 30% of the volunteers' data was used as the testing data. There was no subject overlap between the training and test sets. Table 2 records the performance of SDFL and Figure 4 shows the confusion matrix. The performance comparison with other approaches is recorded in Table 3, and the computational time of SDFL is tabulated in Table 4.
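For reference, the following sketch shows how this subject-independent evaluation could be reproduced with the SDFL class sketched earlier and a 1-nearest-neighbour classifier. It assumes the standard folder layout of the public "UCI HAR Dataset" release and is illustrative rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

root = "UCI HAR Dataset"                                   # standard dataset folder
X_train = np.loadtxt(f"{root}/train/X_train.txt")          # (7352, 561) features, training subjects
y_train = np.loadtxt(f"{root}/train/y_train.txt").astype(int)
X_test = np.loadtxt(f"{root}/test/X_test.txt")             # (2947, 561) features, held-out subjects
y_test = np.loadtxt(f"{root}/test/y_test.txt").astype(int)

# Learn stacked discriminant features on training subjects only
sdfl = SDFL(n_layers=3)
F_train = sdfl.fit_transform(X_train, y_train)
F_test = sdfl.transform(X_test)

# 1-NN classification on the learned feature templates
clf = KNeighborsClassifier(n_neighbors=1).fit(F_train, y_train)
print(f"Subject-independent accuracy: {clf.score(F_test, y_test):.4f}")
```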
Metric | Performance
---|---
True Positive (TP) rate | 0.963
False Positive (FP) rate | 0.008
Precision | 0.964
Recall | 0.963
F-score | 0.963
Area Under the Curve | 0.977
Accuracy (%) | 96.2674
Method | Accuracy (%)
---|---
Multiclass Support Vector Machine7 | 96
Dynamic Time Warping18 | 89.00
Hierarchical Continuous Hidden Markov Model19 | 93.18
Deep Belief Network (as reported in9) | 95.80
Group-based Context-aware method for human activity recognition (GCHAR)3 | 94.16
Handcrafted Cascade Ensemble Learning model (CELearning)16 | 96.88
Automated Cascade Ensemble Learning model (CELearning)16 | 95.93
Convolutional Neural Network (CNN)10 | 95.75
Artificial Neural Network (ANN) (as reported in10) | 91.08
Stacked Discriminant Feature Learning (SDFL) | 96.27
Classifier = Nearest Neighbour (NN) classifier.
From the empirical results, we observed that the proposed SDFL demonstrated superior classification performance compared to most of the existing techniques, even though a simple classifier was adopted in the system. The exceptional performance of SDFL reflects its capability to capture the essence of the inertial data without depending heavily on the classifier. Furthermore, SDFL also exhibited its superiority over most of the existing approaches, including deep learning models. To be specific, SDFL obtained an accuracy of 96.3%, whilst the Deep Belief Network's accuracy was 95.8%,9 CNN achieved 95.75%10 and the ANN 91.08%. We also noticed that SDFL obtains a performance comparable to the benchmark method proposed by the authors of the UCI database.7 It is worth noting that the benchmark method uses a multiclass support vector machine for classification, whereas SDFL uses a simpler classifier, i.e. the Nearest Neighbour (NN) classifier.
Last but not least, it was discerned that the performance of SDFL is on a par with the Cascade Ensemble Learning model (CELearning).16 Both approaches are ensemble learning methods with multiple layers for data learning. The key difference between them lies in the analysis algorithms in each layer. CELearning comprises four different classifiers, i.e. Random Forest, Extremely Gradient Boosting Trees, Softmax Regression and Extremely Randomized Trees, and the final classification result is obtained in the last layer via score-level fusion of the four complex classifiers. In SDFL, on the other hand, merely a Rayleigh coefficient optimization is implemented to extract the discriminant deep features, and a simple classifier, i.e. the NN classifier, is adopted. This suggests that the discrimination capability of SDFL stems primarily from the SDFL modular model, which extracts discriminant features without demanding a complex classifier. From Table 4, we can see that SDFL needs only ~ seconds per sample (sps) for the training phase and ~ sps for the testing phase, on average. The fast feature learning of SDFL and its dimensionality reduction, which projects the data onto a lower-dimensional subspace, are the main reasons for such efficient computation.
A cascading learning network for human activity recognition using smartphones is proposed. In this network, a chain of independent discriminant learning modules is aggregated, layer by layer, in a stackable framework. Each layer consists of a discriminant analysis function and a nonlinear activation function to effectively extract rich features from the inertial data. The SDFL network performs well even on small-scale training sample sets, requires less hyper-parameter fine-tuning, and is fast to compute compared with other deep learning networks. In addition to its computational efficiency, the proposed network also demonstrated classification superiority over most of the state-of-the-art approaches, with an accuracy of ~96% in differentiating human activity classes. In future work, the behaviour of SDFL on small-scale databases will be investigated.
All data underlying the results are available as part of the article and no additional source data are required.