Research Article

Stacked deep analytic model for human activity recognition on a UCI HAR database

[version 1; peer review: 2 approved with reservations]
PUBLISHED 15 Oct 2021

Abstract

Background
Owing to low cost and ubiquity, human activity recognition using smartphones is emerging as a trendy mobile application in diverse areas such as assisted living, healthcare monitoring, etc. Analysing this one-dimensional time-series signal is rather challenging due to its spatial and temporal variances. Numerous deep neural networks (DNNs) have been applied to unveil deep features of complex real-world data. However, a drawback of DNNs is that the internal logic by which the network reaches its output is not interpretable. Furthermore, a huge training sample size (i.e. millions of samples) is required to ensure good performance.
Methods
In this work, a simpler yet effective stacked deep network, known as Stacked Discriminant Feature Learning (SDFL), is proposed to analyse inertial motion data for activity recognition. In contrast to DNNs, this deep model extracts rich features without requiring a gigantic training sample set or tedious hyper-parameter tuning. SDFL is a stacking deep network with multiple learning modules arranged in a serialized layout for multi-level feature learning, from shallow to deeper features. In each learning module, Rayleigh coefficient optimized learning is performed to extract discriminant features. A subject-independent protocol is implemented, in which the system model (trained with data from one group of users) is used to recognize data from another group of users.
Results
Empirical results demonstrate that SDFL surpasses state-of-the-art methods, including DNNs such as the Convolutional Neural Network and Deep Belief Network, with ~96.3% accuracy on the UCI HAR database using only thousands of training samples. Additionally, the model training time of SDFL is merely a few minutes, compared with DNNs, which require hours of model training.
Conclusions
The superiority of SDFL in analysing motion data for human activity recognition is corroborated, requiring no GPU but only a CPU, with a fast learning rate.

Keywords

smartphone, one-dimensional motion signal, activity recognition, stacking deep network, discriminant learning

Introduction

Human activity recognition (HAR) can be categorized into vision-based and sensor-based approaches. In vision-based HAR, an image sequence, in the form of video, recording the human activity is captured by a camera.1 This sequence is analysed to recognize the nature of an action. Such systems are applied to surveillance, human-computer interaction and healthcare monitoring. In sensor-based HAR, human activities are captured by inertial sensors, such as accelerometers, gyroscopes or magnetometers. Between these two approaches, sensors are more favourable due to their lightweight nature, portability and low energy usage.2 With the advancement of mobile technology, smartphones are equipped with high-end components. The accelerometer and gyroscope sensors embedded in a smartphone make it feasible as an acquisition device for HAR, and smartphone-based HAR has been an area of contemporary research in recent years.3–9 In this work, we categorize smartphone-based HAR as part of sensor-based HAR, with activity inertial signals collected through smartphone sensors.

Related work

Hand-crafted approaches using manually computed statistical features have been proposed.10–12 These authors applied various machine learning techniques, such as decision trees, logistic regression, multilayer perceptrons, naïve Bayes, Support Vector Machines, etc., to classify the detected activities. The performance of handcrafted approaches may suffer in complex scenarios owing to their limited feature representation capability, and the algorithms can easily become trapped in a local minimum instead of reaching the global optimum.

Hence, various deep neural networks (DNNs) have been explored for HAR owing to their capability of extracting informative features. A DNN is a machine learner that can automatically unearth data characteristics hierarchically, from lower to higher levels.13 The work of Ronao and Cho (2016),14 Lee et al. (2017)15 and Ignatov (2018)16 explored deep convolutional neural networks by exploiting the activity characteristics in the one-dimensional time-series signals captured by smartphone inertial sensors. The empirical results substantiated that the extracted deep features were crucial for data representation, yielding promising recognition performance.

Zeng et al. (2014) proposed a modified convolutional neural network to extract scale-invariant characteristics and local dependencies of the acceleration time-series signal.17 The weight sharing mechanism in the convolutional layer was modified: unlike the vanilla model, where the local filter weights are shared across all positions within the input space, the authors incorporated a more relaxed weight sharing strategy (partial weight sharing) to enhance performance.

The Recurrent Neural Network (RNN) was proposed to process sequential data by analysing previously inputted data in a sequential manner. To overcome the vanishing gradient problem of RNNs, an enhanced variant, Long Short-Term Memory (LSTM), was introduced. Chen et al. (2016) explored the feasibility of LSTM in predicting human activities.18 Empirical results demonstrated an encouraging performance of LSTM in HAR. Further, an enhanced version of LSTM, known as bidirectional LSTM, was proposed.19 Unlike LSTM, bidirectional LSTM exploits both past and future information during feature analysis, so a richer feature description can be extracted for classification.

A cascade ensemble learning (CELearning) model was proposed for smartphone-based HAR.20 There are multiple layers in this aggregation network and the model goes deeper layer by layer. Each layer contains Extremely Gradient Boosting Trees, Random Forest, Extremely Randomized Trees and Softmax Regression. The CELearning model achieves high performance, and its training process is rather simple and efficient. The Hierarchical Multi-View Aggregation Network (HMVAN) is another aggregation model.21 This model integrates features from various feature spaces in a hierarchical context, with three aggregation modules designed at the feature, position and modality levels.

Motivation and contributions

In DNNs, there are learning modules in multiple processing layers for multi-level feature abstraction. These layers are trained based on a versatile learning principle that does not require any manual design by experts.22 Such DNNs achieve excellent performance in pattern recognition. However, these networks are not well trained when training samples are limited, leading to performance degradation. Furthermore, there is a lack of theoretical guidance on how to fine-tune the enormous set of hyper-parameters.21 The outstanding performance of DNNs can only be achieved if sufficient training data is available for fine-tuning the large parameter set, and a high-specification GPU is needed to train the network on gargantuan datasets.

Thus, a stacking-based deep learning model for smartphone-based HAR is proposed. Inspired by the hierarchical learning in DNNs, the proposed stacked learning network aggregates multiple learning modules, one after another, in a hierarchical framework. Specifically, a discriminant learning function is implemented in each module for discriminant mapping to generate discriminative features, level by level. The lower-level (generic) to higher-level (deeper) features are input to a classifier for activity identification. This proposed approach is termed Stacked Discriminant Feature Learning, coined SDFL.

The contributions of this work are three-fold:

  • 1. A deep analytic model is proposed for smartphone-based HAR to extract deep features without the need for a gigantic training set or tedious hyper-parameter tuning.

  • 2. An adaptable modular model is developed with a discriminant learning function in each module to extract discriminant features from lower to higher levels, demanding no graphics processing unit (GPU) but only a central processing unit (CPU), with a fast learning rate.

  • 3. An experimental analysis is conducted using various performance evaluation metrics (i.e. recall, precision, area under the curve, computational time, etc.) under a subject-independent protocol, in which there is no overlap in subjects between the training and testing sets.

Methods

Smartphone inertial sensors were used to capture 3-axial linear (total) acceleration and 3-axial angular velocity signals. These signals were pre-processed into time- and frequency-domain features, as listed in Table 1. Next, the pre-processed data was inputted into the Stacked Discriminant Feature Learning (SDFL) for feature learning. The extracted feature template was fed into the nearest-neighbour (NN) classifier for classification. The overview of the system is illustrated in Figure 1.


Figure 1. Overview of the proposed Stacked Discriminant Feature Learning (SDFL) system.

Table 1. Pre-processed features as input data into the Stacked Discriminant Feature Learning (SDFL) system.

Function | Feature
Mean | Average value
Std dev | Standard deviation
Median | Median absolute value
Max | Largest value in array
Min | Smallest value in array
Sma | Signal magnitude area
Energy | Average sum of squares
Iqr | Interquartile range
Entropy | Signal entropy
ArCoeff | Auto-regression coefficients
Correlation | Correlation coefficient
MaxFreqInd | Largest frequency component
MeanFreq | Frequency signal weighted average
Skewness | Frequency signal skewness
Kurtosis | Frequency signal kurtosis
EnergyBand | Energy of a frequency interval
Angle | Angle between two vectors
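
To make the pre-processing step concrete, the sketch below computes a handful of the time-domain statistics listed in Table 1 for a single fixed-length signal window. It is only a Python/NumPy illustration (the study itself uses the pre-computed feature files distributed with the UCI HAR dataset); the function name window_features and the 128-sample demo window are our own choices.

```python
import numpy as np
from scipy.stats import iqr

def window_features(window):
    """Illustrative subset of the Table 1 time-domain statistics for one
    1-D signal window (a NumPy array)."""
    return {
        "mean": np.mean(window),                                       # average value
        "std": np.std(window),                                         # standard deviation
        "median_abs": np.median(np.abs(window - np.median(window))),   # median absolute value
        "max": np.max(window),                                         # largest value in array
        "min": np.min(window),                                         # smallest value in array
        "sma": np.sum(np.abs(window)) / window.size,                   # signal magnitude area (per axis)
        "energy": np.sum(window ** 2) / window.size,                   # average sum of squares
        "iqr": iqr(window),                                            # interquartile range
    }

# Example: a 128-sample window (2.56 s at 50 Hz, as in the UCI HAR recording protocol).
rng = np.random.default_rng(0)
print(window_features(rng.standard_normal(128)))
```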

SDFL is a pile of multiple discriminant learning layers interleaved with a nonlinear activation unit, as illustrated in Figure 2. By cascading multiple discriminant learning modules, each layer of SDFL learns from the input data together with the learned nonlinear features of the preceding module. The depth of the stack is determined empirically on a subset of the database: layers are added until the performance stops improving or starts to degrade. In this case, a depth of three showed the optimal performance, so we adopted a three-layer architecture. In detail, the first discriminant learning module learns from the input data, and the second learning module learns from an input vector that concatenates the input data and the learned features of the first module. The third learning module proceeds similarly, learning from an input vector comprising the input data and the learned features of the second module.


Figure 2. Stacked Discriminant Feature Learning (SDFL) framework.

Let $\{(x_i, y_i)\}_{i=1}^{N}$ be a set of $N$ transformed data samples, where $y_i$ is the class label of $x_i$ and $C$ is the number of training classes. Each of the $C$ classes has a mean $\mu_j$, and the total mean vector is $\mu = \frac{1}{N}\sum_{j=1}^{C} m_j \mu_j$, where $m_j$ denotes the number of training samples of the $j$th class. In the first learning layer, the input vector is the transformed data $x_i$. The intrapersonal scatter matrix $\Sigma_{intra}$ and interpersonal scatter matrix $\Sigma_{inter}$ are defined as:

(1)
$\Sigma_{intra} = \sum_{j=1}^{C} \sum_{x_i \in C_j} (x_i - \mu_j)(x_i - \mu_j)^{T}$
(2)
$\Sigma_{inter} = \sum_{j=1}^{C} m_j (\mu_j - \mu)(\mu_j - \mu)^{T}$

where $T$ denotes the transpose operation. Next, a linear transformation $\Phi$ is computed by maximizing the Rayleigh coefficient. With this optimization, data from the same class are projected close to each other, while data from different classes are projected as far apart as possible. This optimization function is termed Fisher's criterion,23

(3)
$J(\Phi) = \dfrac{\Phi^{T} \Sigma_{inter} \Phi}{\Phi^{T} \Sigma_{intra} \Phi}$

The mapping Φ is constructed through solving the generalized eigenvalue problem,

(4)
$\Sigma_{inter} \Phi = \lambda \, \Sigma_{intra} \Phi$

The learned features are produced through the projection of the input data xi onto the mapping subspace,

(5)
$\hat{x}_i = \Phi^{T} x_i$

$\hat{x}_i$ is transformed to $C-1$ dimensions. We denote $l$ as the index of the modular layer in SDFL, so the learned feature vector of the first modular unit is denoted $\hat{x}_i^{(l=1)} = \hat{x}_i^{(1)}$. A nonlinear input-output mapping is applied to $\hat{x}_i^{(1)}$ via a nonlinear activation function. In this study, we adopt the sigmoid function, $\check{x}_i = S(\hat{x}_i) = \frac{1}{1 + e^{-\hat{x}_i}}$, for the nonlinear projection. Specifically, $\check{x}_i^{(1)} = \frac{1}{1 + e^{-\hat{x}_i^{(1)}}}$ is the vector of nonlinear learned features of the first modular unit.
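
The first learning module can be summarized compactly in code. The following Python/NumPy sketch builds the scatter matrices of Eqs. (1)-(2), solves the generalized eigenvalue problem of Eq. (4), projects the data as in Eq. (5) and applies the sigmoid activation. It is an illustration under our own assumptions (the authors worked in MATLAB): the function name is hypothetical, and a small ridge term is added to keep $\Sigma_{intra}$ invertible, a detail the paper does not specify.

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_module(X, y, ridge=1e-6):
    """One SDFL learning module (sketch).

    X : (N, d) input vectors, y : (N,) integer class labels.
    Returns the projection Phi of shape (d, C-1) and the
    sigmoid-activated learned features of shape (N, C-1)."""
    classes = np.unique(y)
    C, d = classes.size, X.shape[1]
    mu = X.mean(axis=0)

    Sigma_intra = np.zeros((d, d))
    Sigma_inter = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = Xc - mu_c
        Sigma_intra += diff.T @ diff                            # Eq. (1)
        Sigma_inter += Xc.shape[0] * np.outer(mu_c - mu, mu_c - mu)  # Eq. (2)

    # Generalized eigenvalue problem of Eq. (4); the ridge keeps Sigma_intra positive definite.
    evals, evecs = eigh(Sigma_inter, Sigma_intra + ridge * np.eye(d))
    order = np.argsort(evals)[::-1][: C - 1]                    # keep the top C-1 directions
    Phi = evecs[:, order]

    X_hat = X @ Phi                                             # Eq. (5)
    X_check = 1.0 / (1.0 + np.exp(-X_hat))                      # sigmoid activation
    return Phi, X_check
```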

For deeper modules, the input vector of the respective module is a stacked vector containing the input data and the previously learned features, i.e. $z_i^{(l)} = [\,x_i \;\; \check{x}_i^{(l-1)}\,]$, where $l = 2$ and $3$. The intrapersonal scatter matrix $\Sigma_{intra}^{(l)}$ and interpersonal scatter matrix $\Sigma_{inter}^{(l)}$ are formulated as:

(6)
$\Sigma_{intra}^{(l)} = \sum_{j=1}^{C} \sum_{z_i^{(l)} \in C_j} (z_i^{(l)} - \mu_j^{(l)})(z_i^{(l)} - \mu_j^{(l)})^{T}$
(7)
$\Sigma_{inter}^{(l)} = \sum_{j=1}^{C} m_j (\mu_j^{(l)} - \mu^{(l)})(\mu_j^{(l)} - \mu^{(l)})^{T}$

In this case, $\mu_j^{(l)}$ is the mean of the $j$th class computed from the input vectors $z_i^{(l)} \in C_j$, and $\mu^{(l)} = \frac{1}{N}\sum_{j=1}^{C} m_j \mu_j^{(l)}$ is the total mean vector at the $l$th modular unit. The final feature vector is the concatenation of the nonlinear learned features of every modular layer,

(8)
$\check{x}_i^{final} = [\,\check{x}_i^{(1)} \;\; \check{x}_i^{(2)} \;\; \check{x}_i^{(3)}\,]$
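
Stacking the modules yields the full SDFL feature extractor. The sketch below, which reuses the hypothetical discriminant_module from the previous sketch, trains the layers exactly as described: layer 1 sees the input data, layers 2 and 3 see the input concatenated with the previous layer's sigmoid features, and the final template of Eq. (8) concatenates all three layers' features. A companion function applies the learned projections to unseen test data.

```python
import numpy as np

def sdfl_fit_transform(X, y, depth=3):
    """Sketch of the SDFL stack: returns the learned projections per layer
    and the stacked feature template of Eq. (8) for the training data."""
    projections, layer_feats = [], []
    Z = X
    for l in range(depth):
        Phi, X_check = discriminant_module(Z, y)   # module sketched earlier
        projections.append(Phi)
        layer_feats.append(X_check)
        Z = np.hstack([X, X_check])                # z^(l+1) = [x, x_check^(l)]
    return projections, np.hstack(layer_feats)     # x_check^final = [x^(1) x^(2) x^(3)]

def sdfl_transform(X, projections):
    """Apply the learned projections to unseen (test) data, mirroring training."""
    feats, Z = [], X
    for Phi in projections:
        X_check = 1.0 / (1.0 + np.exp(-(Z @ Phi)))
        feats.append(X_check)
        Z = np.hstack([X, X_check])
    return np.hstack(feats)
```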

Results

We scrutinized how well SDFL could analyse the inertial data and correctly classify the activities. The experimental hardware platform was a desktop with an Intel® Core™ i7-7700 processor running at 4.20 GHz and 48.0 GB of main memory, while the software platform was a 64-bit Windows 10 operating system with MATLAB R2018a (MATLAB, RRID:SCR_001622) (an open-access alternative that provides equivalent functionality is GNU Octave (GNU Octave, RRID:SCR_014398)).

We used the UCI HAR dataset.12 There were 30 subjects, with 7352 training samples and 2947 testing samples. Each subject was required to carry a smartphone (Samsung Galaxy S II) on the waist and perform six different activities: "walking", "walking_upstairs", "walking_downstairs", "sitting", "standing" and "laying".

The generalization level of SDFL was evaluated in a user-independent scenario. SDFL was trained using samples from one group of users, and the model was then applied to new users without collecting additional samples from these new users to retrain the model. In this experiment, the UCI HAR dataset was partitioned into two sets: 70% of the volunteers were selected to generate the training data and the remaining 30% of the volunteers' data was used as the testing data, with no subject overlap between the training and test sets. Table 2 records the performance of SDFL and Table 3 records the performance comparison with other approaches.
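
This subject-independent protocol matches the released train/test partition of the UCI HAR archive, which already keeps subjects disjoint. The hedged Python sketch below loads that partition and verifies the absence of subject overlap; the file paths follow the usual layout of the downloaded archive and may need adjusting.

```python
import numpy as np

root = "UCI HAR Dataset"   # path to the extracted archive (adjust as needed)

# 561-dimensional pre-processed feature vectors and activity labels
X_train = np.loadtxt(f"{root}/train/X_train.txt")
y_train = np.loadtxt(f"{root}/train/y_train.txt", dtype=int)
X_test = np.loadtxt(f"{root}/test/X_test.txt")
y_test = np.loadtxt(f"{root}/test/y_test.txt", dtype=int)

# Subject IDs per sample; the released split keeps subjects disjoint (21 vs. 9 of the 30).
subj_train = np.loadtxt(f"{root}/train/subject_train.txt", dtype=int)
subj_test = np.loadtxt(f"{root}/test/subject_test.txt", dtype=int)
assert not set(subj_train) & set(subj_test), "subject overlap between train and test"

print(X_train.shape, X_test.shape)   # expected: (7352, 561) and (2947, 561)
```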

Table 2. Performance of Stacked Discriminant Feature Learning (SDFL).

Metric | Performance
True Positive (TP) rate | 0.963
False Positive (FP) rate | 0.008
Precision | 0.964
Recall | 0.963
F-score | 0.963
Area Under the Curve | 0.977
Accuracy (%) | 96.2674

Table 3. Performance comparison of Stacked Discriminant Feature Learning (SDFL) with alternative approaches.

Method | Accuracy (%)
Dynamic Time Warping24 | 89.00
Hierarchical Continuous Hidden Markov Model25 | 93.18
Deep Belief Network (as reported in 4) | 95.80
Group-based Context-aware method for human activity recognition (GCHAR)3 | 94.16
Handcrafted Cascade Ensemble Learning model (CELearning)20 | 96.88
Automated Cascade Ensemble Learning model (CELearning)20 | 95.93
Convolutional Neural Network (CNN)14 | 95.75
Artificial Neural Network (ANN) (as reported in 14) | 91.08
Stacked Discriminant Feature Learning (SDFL) | 96.27

Table 4 tabulates the computational time. The computational time of SDFL is benchmarked against the ordinary methodology of performing classification directly on the pre-processed data. Instead of using a multiclass support vector machine as in 12, we adopt the Nearest Neighbour (NN) classifier, because the focus of this work is the feature extraction capability; classification is therefore standardized with the simplest classifier, i.e. NN.
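
As a reference for that standardization, the following sketch shows a plain 1-nearest-neighbour classifier applied to the stacked SDFL features, together with a hypothetical end-to-end usage of the earlier sketches. Euclidean distance is assumed, since the paper does not state the distance metric.

```python
import numpy as np

def nn_classify(train_feats, train_labels, test_feats):
    """1-nearest-neighbour classification with Euclidean distance (sketch)."""
    preds = np.empty(test_feats.shape[0], dtype=train_labels.dtype)
    for i, q in enumerate(test_feats):
        dists = np.linalg.norm(train_feats - q, axis=1)   # distance to every training sample
        preds[i] = train_labels[np.argmin(dists)]         # label of the closest one
    return preds

# Hypothetical end-to-end usage with the earlier sketches:
# projections, F_train = sdfl_fit_transform(X_train, y_train, depth=3)
# F_test = sdfl_transform(X_test, projections)
# y_pred = nn_classify(F_train, y_train, F_test)
# accuracy = np.mean(y_pred == y_test)
```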

Table 4. Computational time of Stacked Discriminant Feature Learning (SDFL) compared with directly performing classification on the pre-processed data.

Classifier = Nearest Neighbour (NN) classifier.

System phase | Step | Pre-processed data + classifier (s) | SDFL + classifier (s)
Training (7352 instances) | Model training | - | 0.468258
Training (7352 instances) | Classification | 53.7 | 2.67
Training (7352 instances) | Total training | 53.7 | 3.138258
Testing (2947 instances) | Data learning | - | 0.017416
Testing (2947 instances) | Classification | 40.42 | 1.22
Testing (2947 instances) | Total testing | 40.42 | 1.237416

Discussion

From the empirical results, we observe that the proposed SDFL demonstrated superior classification performance compared to most of the existing techniques, even though a simple classifier was adopted in the system. This exceptional performance reflects the capability of SDFL to capture the essence of the inertial data without depending heavily on the classifier. SDFL also outperformed most existing approaches, including deep learning models: SDFL obtained an accuracy of 96.3%, whereas the Deep Belief Network achieved 95.8%,4 the CNN achieved 95.75%14 and the ANN achieved 91.08%.14

Last but not least, the performance of SDFL is on a par with that of the Cascade Ensemble Learning model (CELearning).20 Both approaches are ensemble learning methods with multiple layers for data learning; the key difference lies in the analysis algorithms within each layer. CELearning comprises four different classifiers, i.e. Random Forest, Extremely Gradient Boosting Trees, Softmax Regression and Extremely Randomized Trees, and the final classification result is obtained in the last layer via score-level fusion of these four complex classifiers. In SDFL, on the other hand, only Rayleigh coefficient optimization is implemented to extract the lower-to-higher levels of discriminant features, and a simple classifier, i.e. the NN classifier, is adopted. This suggests that the discrimination capability of SDFL stems primarily from the modular model's ability to extract discriminant features, demanding no complex classifier.

From Table 4, we can see that the overall training and testing times of SDFL are much lower than those of the benchmark method. On average, SDFL needs only ~4.3×10⁻⁴ seconds per sample for the training phase (3.138258 s over 7352 instances) and ~4.2×10⁻⁴ seconds per sample for the testing phase (1.237416 s over 2947 instances). The fast feature learning of SDFL and its dimensionality reduction, which projects the data onto a lower-dimensional subspace, are the main reasons for this efficient computation.

Conclusions

A cascading learning network for human activity recognition using smartphones is proposed. In this network, a chain of independent discriminant learning modules is aggregated, layer by layer, in a stackable framework. Each layer consists of a discriminant analysis function and a nonlinear activation function to effectively extract rich features from the inertial data. The proposed SDFL network performs well even on small-scale training sample sets, requires little hyper-parameter fine-tuning, and is fast to compute compared with other deep learning networks. In addition to this computational efficiency, the proposed network also demonstrated classification superiority over most state-of-the-art approaches, with an accuracy of ~96.3% in differentiating human activity classes.

Data availability

All data underlying the results are available as part of the article and no additional source data are required.
