Research Article
Revised

Modelling sentiments based on objectivity and subjectivity with self-attention mechanisms

[version 2; peer review: 1 approved, 2 approved with reservations]
PUBLISHED 17 May 2022

This article is included in the Artificial Intelligence and Machine Learning gateway.

This article is included in the Research Synergy Foundation gateway.

Abstract

Background: The proliferation of digital commerce has allowed merchants to reach out to a wider customer base, prompting a study of customer reviews to gauge service and product quality through sentiment analysis. Sentiment analysis can be enhanced through subjectivity and objectivity classification with attention mechanisms.
Methods: This research uses input corpora of contrasting levels of subjectivity and objectivity from different databases to perform sentiment analysis on user reviews, incorporating attention mechanisms at the aspect level. Three large corpora are chosen as the subjectivity and objectivity datasets: the Shopee user review dataset (ShopeeRD) for subjectivity, together with the Wikipedia English dataset (Wiki-en) and the Internet Movie Database (IMDb) for objectivity. Word embeddings are created using Word2Vec with Skip-gram. Then, a bidirectional LSTM with an attention layer (LSTM-ATT) is imposed on the word vectors. The performance of the model is evaluated and benchmarked against the classification models of Logistic Regression (LR) and Linear SVC (L-SVC). Three models are trained with the subjectivity (70% of ShopeeRD) and objectivity (Wiki-en) embeddings, with ten-fold cross-validation. Next, the three models are evaluated against two datasets (IMDb and 20% of ShopeeRD). The experiments are based on benchmark comparisons, embedding comparison and model comparison with 70-10-20 train-validation-test splits. Data augmentation using AUG-BERT is performed and selected models incorporating AUG-BERT are compared.
Results: L-SVC scored the highest accuracy with 56.9% for objective embeddings (Wiki-en) while LSTM-ATT scored 69.0% on subjective embeddings (ShopeeRD). Improved performance was observed with data augmentation using AUG-BERT, where the LSTM-ATT+AUG-BERT model scored the highest accuracy at 60.0% for objective embeddings and 70.0% for subjective embeddings, compared to 57% (objective) and 69% (subjective) for L-SVC+AUG-BERT, and 56% (objective) and 68% (subjective) for L-SVC.
Conclusions: Utilizing attention layers with subjectivity and objectivity notions has been shown to improve the accuracy of sentiment analysis models.

Keywords

Sentiment analysis, subjectivity, objectivity, attention mechanism, neural nets.

Revised Amendments from Version 1

  1. Additional papers have been cited to provide more context on the deep learning based methods in the Literature Review. 
  2. This model places emphasis on attention mechanisms, which in this context perform better than other baseline models. Remarks about this are added to the Methods.
  3. The attention mechanism allows the model to utilize the most relevant parts of the input sequence in a flexible manner, by a weighted combination of all of the encoded input vectors, with the most relevant vectors being attributed the highest weights. We have not looked into sentence level attention as the focus of this paper is on word-level, and sentence-level will be explored in future work.
  4. A new paragraph has been added to highlight the contribution of this research work. We believe the paragraph format is more appropriate as it follows the format of the journal.
  5. The use of attention mechanisms in this context proves to be beneficial, and this is the advantage over baseline methods.
  6. Seven new references added.
  7. We are of the opinion that once subjectivity is identified, the sentiment accuracy can be further increased, as objective statements should not contribute to it.
  8. As we are using different datasets, we could not compare the results directly. However, we will perform this comparison in future work. In addition, we have taken these papers into consideration and cited them in the paper.
  9. The references have also been corrected.


Introduction

The proliferation of digital commerce, especially in Malaysia, has allowed many local merchants to reach a wider customer base. To attract customers’ attention, merchants compete to offer better prices and higher-quality services. They also pay close attention to customer feedback and reviews in order to gauge service and product quality.1

Exploring the sentiment tendency of customer reviews provides a good reference for other customers before a purchasing decision is made. It also helps merchants improve service quality and customer satisfaction.

Sentiment analysis aims to determine the sentiment and polarity of a piece of text. Language statements normally come in two forms, fact statements and non-fact statements, known categorically as objective and subjective.2 Facts are objective terms such as events, entities, and their properties. A non-fact statement, on the other hand, is subjective and usually relates to an individual’s sentiments, personal beliefs, opinions, perspectives, feelings or thoughts.

This paper adopts an attention segment3 in a neural network, LSTM, creating attention-weighted features in a model named Long Short-Term Memory with Attention (LSTM-ATT).4 It aims to introduce these features at the input level to the neural network so that sentiment classification performance can be increased. This paper explores non-contextual embedding on subjective and objective statements, mainly Word2Vec, which has proven to be fast and accurate.5–7 Logistic Regression (LR) and Linear SVC (L-SVC) are employed as benchmarks to evaluate the effect of the adopted attention mechanism (LSTM-ATT) on sentiment analysis based on subjectivity and objectivity. To increase the size of the dataset for better classification performance, this paper adopts a data augmentation technique using Bidirectional Encoder Representations from Transformers (AUG-BERT) in two sentiment classifiers, namely L-SVC with AUG-BERT and LSTM-ATT with AUG-BERT.

Word embedding

Word embeddings are a scheme to convert human language to a word representation that is understandable by computers. The word representation is in the form of a real-valued vector that encodes the meaning of the word, so that the words that are closer in the vector space are expected to be similar in meaning.

Collobert8 showed that distinct word vectors and proper training can increase the performance of NLP tasks, especially sentiment analysis. Word embedding can be classified into two types: contextual and non-contextual embeddings. Non-contextual embedding does not consider the effects of the arrangement of words in a particular sentence, while contextual embedding does.

For non-contextual embedding, Mikolov et al. initiated Word2Vec.9 The Word2Vec algorithm uses a neural network model to learn word associations from a large corpus of text. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence. Bengio et al.10 and Collobert et al.11 developed the Neural Net Language Model (NNLM). Bojanowski et al.12 enhanced Word2Vec by applying character n-grams, obtaining higher performance in word similarity tasks involving various types of languages and showing big improvements on morphologically rich languages, in particular German datasets such as GUR350 and GUR65,13 and ZG222.14 Bhagat et al.15 applied unigrams to extract individual words from Twitter messages and multiple machine learning techniques to perform sentiment analysis. Ebner et al.16 employed three simple bag-of-words representations, where a text is represented as the bag (multiset) of its words, namely pooling encoders, pre-trained word embeddings, and unigram generative regularization incorporating auxiliary discriminative tasks, which managed to reduce training time and model size while maintaining high performance. Gayatry17 employed CountVectorizer to convert each word into its corresponding vector.

For contextual embedding, Peters et al.18 modified LSTM neural nets to create Embeddings from Language Models (ELMo), which showed better results than the Stanford Sentiment Treebank model (SST-5) from the research work by Socher et al.19 Devlin et al.20 constructed BERT along with Transformers and the attention mechanism.3 The role of BERT is not limited to embedding functions; it also serves as a language model capable of exceeding ELMo on the General Language Understanding Evaluation (GLUE) assignments, per the research outcomes of Wang et al.21 Liu et al.22 enhanced BERT by developing A Robustly Optimized BERT Pre-training Approach (RoBERTa). RoBERTa omits the Next Sentence Prediction task and applies dynamic masking rather than static Masked Language Modelling (MLM).

In terms of sentiment analysis, Sangeetha23 proposed a multi-head attention fusion model of word and context embeddings for student feedback. In addition, Yadav et al. have provided discussions of sentiment analysis,24 with applications in medical reviews25 and disease impacts.26 For models with attention mechanisms, Nguyen et al. implemented language-oriented sentiment analysis based on grammar structure.27

Methods

Ethics approval

Ethical Approval Number: EA1602021 (From Technology Transfer Office (TTO), Multimedia University).

Datasets

Three large corpora were chosen to denote the objectivity and subjectivity datasets. IMDb28 and Wiki-en29 were chosen as the objectivity datasets, while ShopeeRD30 was chosen as the subjectivity dataset.

IMDb consists of 50K movie reviews whose contents are based on the true plot and written with a neutral point of view (NPOV). Wiki-en consists of 4677K records from Wikipedia, whose policy forces articles to be factual and follow NPOV. ShopeeRD consists of 208K customer reviews taken from the Shopee Code League 2020 Data Science and Data Analytics competition. ShopeeRD’s entries are based on customer experiences, which are potentially judgemental and opinionated.

Wiki-en was used as the objectivity corpus for word embedding, while IMDb was used for objectivity sentiment analysis. 70% of the ShopeeRD was used as the subjectivity corpus for word embedding and the remaining 30% for subjectivity sentiment analysis. Figure 1 displays the mapping of datasets.


Figure 1. The mapping of datasets.

Data preparations and word embedding

The reviews and records from the datasets underwent a set of data cleaning steps, including emoji removal, text cleaning such as repeated-character elimination, punctuation (e.g., ‘?’, ‘!’ or ‘,’) elimination, stop word (e.g., ‘becomes’, ‘against’ or ‘at’) elimination, lemmatization, case lowering, and normalization (normalizing non-English writing into English writing).
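A minimal sketch of such a cleaning pipeline is given below, assuming NLTK’s stopword list and WordNet lemmatizer as stand-ins for the authors’ unspecified tools:

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def clean_review(text: str) -> list[str]:
    """Apply the cleaning steps described above to one review."""
    text = text.lower()                           # case lowering
    text = re.sub(r"[^\x00-\x7F]+", " ", text)    # strip emojis / non-ASCII
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)    # collapse repeated characters
    text = re.sub(r"[^\w\s]", " ", text)          # punctuation elimination
    tokens = [t for t in text.split() if t not in STOP_WORDS]  # stop words
    return [LEMMATIZER.lemmatize(t) for t in tokens]           # lemmatization

print(clean_review("Sooo happy!! The seller was greeeat 😀"))
```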

Word embedding is carried out to transform the reviews into floating-point numbers stored in a high-dimensional array, which forms a dictionary from which the computer is able to obtain word vectors. The word embedding must be large enough to represent millions of words, with each word denoted as a high-dimensional vector. In this paper, one word is represented as a 300-dimension vector.

Word2Vec by Mikolov et al.31 is a word embedding method that consists of two architectures, namely Skip-gram and Continuous Bag-of-Words (CBOW). In the CBOW model, the distributed representations of context (surrounding words) are combined to predict the word in the middle, while in the Skip-gram model, the distributed representation of the input word is used to predict the context. The Skip-gram architecture has been shown to produce better results than CBOW.32–34 Hence, this paper utilizes the Word2Vec Skip-gram architecture to perform word embedding.

ShopeeRD and Wiki-en were trained into 300d (300-dimension) embeddings, with a factor of five negative examples, a window dimension of five tokens, and elimination of small sentences. The two embeddings (subjectivity and objectivity) were trained for ten repetitions.
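A minimal sketch of this training step using gensim’s Word2Vec with the hyperparameters stated above; `corpus_sentences`, a list of token lists from the cleaned corpus, is a placeholder:

```python
from gensim.models import Word2Vec

# corpus_sentences: list of token lists produced by the cleaning step,
# e.g. [["glad", "fast", "delivery"], ...] -- placeholder for ShopeeRD/Wiki-en
model = Word2Vec(
    sentences=corpus_sentences,
    vector_size=300,   # 300-dimension vectors as stated above
    sg=1,              # Skip-gram rather than CBOW
    negative=5,        # factor of five negative examples
    window=5,          # window dimension of five tokens
    min_count=5,       # drop rare tokens (an assumed proxy for pruning)
    epochs=10,         # ten training repetitions
)

vector = model.wv["happy"]                          # one 300-d word vector
print(model.wv.most_similar("happy", topn=15))      # nearest neighbours
```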

Models

To prevent over-fitting or one model favouring a particular embedding, two baseline models (LR and L-SVC) were applied in this paper. In general, a sentence vector is produced from the formation of word vectors.35 Nevertheless, this paper assumes that certain tokens might not carry any weight or produce any effect; therefore, an attention layer adopted from Vaswani et al.3 was introduced as a substitute. Self-attention is capable of allocating ‘attention’ to an important vector (keyword). This permits the architecture to highlight the attended vectors.36
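In the formulation of Vaswani et al.,3 which this paper adopts, self-attention scores every word vector against every other, so an important keyword receives a high weight wherever it is relevant:

$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V$$

where $Q$, $K$ and $V$ are query, key and value matrices derived from the word vectors and $d_k$ is the key dimension; the softmax weights are the ‘attention’ allocated to each vector.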

For that reason, a model integrating attention segments is proposed; its structural design is presented in Figure 2. The word vectors pass through the attention layer, creating attention-weighted features. Adapting LSTM neural nets, both the original embedding and the attention-weighted embedding are concatenated to create sentiment features. We are of the opinion that attention mechanisms will improve the accuracy of the sentiment analysis because of the weighted features. The mechanism allows the model to utilize the most relevant parts of the input sequence in a flexible manner, through a weighted combination of all the encoded input vectors, with the most relevant vectors being attributed the highest weights.


Figure 2. Structural design of LSTM-ATT.

The adopted attention-weighted feature model is called Long Short-Term Memory with Attention (LSTM-ATT),4 with the intention of improving sentiment performance. These features enter the neural network at the input level and pass through a few dense layers to flatten the output. Finally, a Rectified Linear Unit (ReLU), a non-linear activation function, is applied to produce the sentiment results. The LSTM-ATT model is then evaluated against LR and L-SVC. The workflow of the sentiment analysis on IMDb and ShopeeRD with multiple models is illustrated in Figure 3.


Figure 3. Sentiment analysis.
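Since the exact layer configuration is not published, the following Keras sketch only illustrates the architecture described above (self-attention over the word vectors, concatenation with the original embedding, an LSTM, and ReLU-activated dense layers); all layer sizes, and the sigmoid output head for binary sentiment, are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, EMB_DIM = 100, 300   # assumed sequence length; 300-d Word2Vec vectors

inputs = layers.Input(shape=(SEQ_LEN, EMB_DIM))      # pre-computed word vectors
# Self-attention over the word vectors, creating attention-weighted features
attended = layers.Attention(use_scale=True)([inputs, inputs])
# Concatenate the original embedding with the attention-weighted embedding
combined = layers.Concatenate()([inputs, attended])
# LSTM over the combined features to create sentiment features
features = layers.Bidirectional(layers.LSTM(128))(combined)
# A few dense layers with ReLU non-linearities to flatten the output
x = layers.Dense(64, activation="relu")(features)
x = layers.Dense(16, activation="relu")(x)
# Binary sentiment head (sigmoid assumed; the paper only states that ReLU
# precedes the sentiment results)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```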

Results and discussion

Design of experiments

The experiments were performed in Python, utilizing the scikit-learn library for machine learning as well as the BERT model architecture. Three models (LR, L-SVC and LSTM-ATT) were trained with the objectivity (Wiki-en) and subjectivity (70% of ShopeeRD) embeddings. Ten-fold cross-validation was applied during the training. After that, the models were tested against the objectivity (IMDb) test set and the subjectivity (20% of ShopeeRD) test set to eliminate bias.

The experiments were based on benchmark comparison, embedding comparison and model comparison with 70-10-20 train-validation-test splits. The validation was carried out to perform parameter tuning, so that the best results among the models could be obtained.
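As an illustration of this benchmark setup, a minimal scikit-learn sketch is given below; the averaged-word-vector sentence representation and the synthetic data are assumptions, since the paper does not publish the exact feature construction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, train_test_split

def sentence_vector(tokens, wv, dim=300):
    """Average a review's 300-d word vectors (an assumed sentence-vector scheme)."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Synthetic stand-ins for the real sentence vectors and sentiment labels
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 300))
y = rng.integers(0, 2, size=500)

# 70-10-20 split: 20% held out for testing; validation comes out of the rest
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

for clf in (LogisticRegression(max_iter=1000), LinearSVC()):
    scores = cross_val_score(clf, X_train, y_train, cv=10)  # ten-fold CV
    clf.fit(X_train, y_train)
    print(type(clf).__name__,
          "CV accuracy:", round(float(scores.mean()), 3),
          "Test accuracy:", round(clf.score(X_test, y_test), 3))
```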

Quality of embeddings

Figures 4 and 5 show the t-distributed Stochastic Neighbor Embedding (t-SNE) plots of the top 15 nearest words to ‘happy’ for the Wiki-en and ShopeeRD embeddings. The t-SNE for both datasets reveals that word similarities are captured in the embeddings; for instance, ‘glad’, ‘pleased’ and ‘excited’ are grouped together with ‘happy’.


Figure 4. t-SNE plots of ‘happy’ on the Wiki-en.


Figure 5. t-SNE plots of ‘happy’ on the ShopeeRD.

The words ‘very’ and ‘good’ being close to ‘happy’ were found only in the t-SNE for Wiki-en, while the words ‘satisfied’ and ‘wonderful’ being close to ‘happy’ were found in the t-SNE for ShopeeRD. Furthermore, outliers like ‘everyone’ and ‘everybody’ appeared in the t-SNE for Wiki-en. This shows that the two embeddings are different in nature.
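A sketch of producing such a plot with scikit-learn, assuming `model` is a trained gensim Word2Vec model from the embedding step above:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Top 15 nearest words to 'happy' in the trained embedding, plus 'happy' itself
words = ["happy"] + [w for w, _ in model.wv.most_similar("happy", topn=15)]
vectors = np.array([model.wv[w] for w in words])

# Project the 300-d vectors to 2-d; perplexity must stay below the sample count
coords = TSNE(n_components=2, perplexity=5, random_state=42).fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), word in zip(coords, words):
    plt.annotate(word, (x, y))
plt.title("t-SNE of the 15 nearest words to 'happy'")
plt.show()
```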

Sentiment analysis

The three models, namely LR, L-SVC and LSTM-ATT, were evaluated in terms of their performance in sentiment analysis. The accuracy of the three models is presented in Table 1. L-SVC obtained the highest accuracy (56.9%) for the objectivity embedding, whereas LSTM-ATT obtained the highest accuracy (69.0%) for the subjectivity embedding. L-SVC performed better than LR probably because L-SVC attempts to maximize the margin between the closest support vectors, whereas LR exploits the posterior class probability.37

Table 1. Accuracy of three classifiers.

Data                  LR      L-SVC   LSTM-ATT
Objective embedding   0.5338  0.5685  0.5604
Subjective embedding  0.6418  0.6892  0.6902

From Table 1, a possible limiting factor is the size of the training data; therefore, the dataset is enlarged through a data augmentation technique.38 As LR has a simpler architecture, data augmentation is not considered for it, and the focus is placed on L-SVC and LSTM-ATT. Table 2 presents the outcome of data augmentation.

Table 2. Accuracy of two classifiers with augmentation technique.

Data                  L-SVC+AUG-BERT  LSTM-ATT+AUG-BERT
Objective embedding   0.5746          0.5991
Subjective embedding  0.6907          0.7004

From Table 2, the accuracy of the models with augmented data is found to be better than that of the models without, although not by much. LSTM-ATT+AUG-BERT was able to beat L-SVC+AUG-BERT on both objective and subjective embeddings.
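The augmentation idea, masking tokens in a review and letting a pre-trained BERT propose in-context replacements, can be sketched with the Hugging Face fill-mask pipeline. This is a simplified stand-in for the AUG-BERT procedure,38 not the authors’ exact implementation:

```python
import random
from transformers import pipeline

# Pre-trained BERT masked-LM as the substitution model
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def augment(review: str, n_masks: int = 1) -> str:
    """Mask random tokens and replace them with BERT's top predictions."""
    tokens = review.split()
    for _ in range(n_masks):
        i = random.randrange(len(tokens))
        tokens[i] = fill_mask.tokenizer.mask_token    # e.g. '[MASK]'
        prediction = fill_mask(" ".join(tokens))[0]   # highest-scoring fill
        tokens[i] = prediction["token_str"]
    return " ".join(tokens)

print(augment("the delivery was fast and the seller was friendly"))
```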

To the best of our knowledge, there is only one sentiment analysis result, from Gayatry’s work,17 that was accepted by the Shopee Code League 2020 Data Science competition.30 Table 3 shows the comparison of our models with Gayatry’s work on ShopeeRD.

Table 3. Comparison of our models with other research work on ShopeeRD.

ShopeeRD                        L-SVC+AUG-BERT [our method]  LSTM-ATT+AUG-BERT [our method]  Multinomial Naïve Bayes17
Training (110K), Testing (36K)  -                            -                               0.58
Training (145K), Testing (41K)  0.69                         0.70                            -

To the best of our knowledge, there is no research work on objectivity sentiment analysis on IMDb that does not involve pre-training data taken from IMDb itself, as existing works use 50% of the dataset for training and the other 50% for testing. In this paper, we trained the models on Wiki-en and tested on IMDb.

Conclusions

This paper has presented word embeddings for both objectivity and subjectivity contexts by applying Word2Vec. Analyzing the embeddings using t-distributed stochastic neighbour embedding plots shows that there are some similarities between the two embeddings, but most of the words are dissimilar. Three models, namely LR, L-SVC and LSTM-ATT, were employed to evaluate the performance of the adopted embedding technique. The adopted attention model was able to perform sentiment analysis well when more data was fed into the model using AUG-BERT data augmentation. Models with differing architectures will be explored in future work.

Data availability

Underlying data

The data are available for personal and non-commercial use, as stipulated by the owner (IMDb).

The data are available under the terms of the Creative Commons Attribution-Share-Alike 3.0 License (Wiki-en).

The data are available for personal and non-commercial use, as stipulated by the owner (Shopee).
