ACL 2017
Linguistically Regularized LSTM for Sentiment Classification
Qiao Qian, Minlie Huang, Jinhao Lei, Xiaoyan Zhu
Dept. of Computer Science, Tsinghua University
aihuang@tsinghua.edu.cn
Outline
• Introduction
• Methodology
• Experiment
• Conclusion
Background
Sentiment Classification:
• positive or negative
• more fine-grained classes
Examples:
• The movie is very interesting. → Strong Positive
• I saw this movie yesterday. → Neutral
• It's a little boring. → Weakly Negative
Related Work
Sentiment Lexicon (Hu and Liu, 2004; Wilson et al., 2005)
• Lexicon-based classification (Turney, 2002; Taboada et al., 2011)
• Automatic construction (Vo and Zhang, 2016; Chen and Skiena, 2014)
• Combination with neural networks (Teng et al., 2016)
Related Work
Negation effect
• Reversing assumption: not bad (−) → good (+) (Polanyi and Zaenen, 2006; Kennedy and Inkpen, 2006)
• Shifting hypothesis: shift by a constant value (Taboada et al., 2011; Liu and Seneff, 2009)
• Dependence on the modified text: negator-specific neural approximation (Zhu et al., 2014)
Related Work
Intensity is not limited to sentiment or intensity words:
• Intensity scales of adjectives such as okay, good, great (Sharma et al., 2015) or gradable terms (e.g., large, huge, gigantic) (Shivade et al., 2015)
• Valence values of content words (Wei et al., 2011)
• Sentiment intensity scores (Wang et al., 2016)
Most Related Models
The negation effect depends on the negator, the modified text, and its sentiment.
Zhu et al. 2014. An empirical study on the effect of negation words on sentiment. In ACL, pages 304–313.
Most Related Models
Sentence sentiment score = weighted sum of the scores of its sentiment words and negators.
Teng et al. 2016. Context-sensitive lexicon features for neural sentiment analysis. In EMNLP.
Motivation
Linguistic knowledge has not been fully employed in neural models.
• not interesting
• very interesting
Tree-LSTMs depend on parse-tree structures and expensive phrase-level annotation:
• Performance drops significantly without sufficient supervision
Linguistically Regularized LSTM
Linguistic resources for sentiment classification:
• Negators: not, never, cannot
• Intensifiers: very, absolutely
• Sentiment resources: sentiment words like interesting, wonderful, etc.
Example: This is not a very interesting movie.
How can we leverage linguistic resources in neural networks?
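The three resource types above amount to simple word lists. A minimal sketch of how they could be looked up per token (the table names and entries here are illustrative stand-ins, not the resources actually used in the paper):

```python
# Hypothetical, minimal linguistic resource tables (entries illustrative)
NEGATORS = {"not", "never", "cannot"}
INTENSIFIERS = {"very", "absolutely"}
SENTIMENT_WORDS = {"interesting": "positive", "wonderful": "positive", "boring": "negative"}

def tag(tokens):
    """Label each token with its linguistic role, or None for plain words."""
    roles = []
    for t in tokens:
        w = t.lower()
        if w in NEGATORS:
            roles.append("negator")
        elif w in INTENSIFIERS:
            roles.append("intensifier")
        elif w in SENTIMENT_WORDS:
            roles.append(SENTIMENT_WORDS[w])
        else:
            roles.append(None)
    return roles

print(tag("This is not a very interesting movie .".split()))
# → [None, None, 'negator', None, 'intensifier', 'positive', None, None]
```

Which regularizer applies at a given position is decided by exactly this kind of role lookup.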
Outline
• Introduction
• Methodology
• Experiment
• Conclusion
Overview
Linguistically Regularized LSTM
Non-Sentiment Regularizer
The sentiment distributions of adjacent positions should be close to each other.
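A minimal sketch of such a regularizer, assuming a hinge on a symmetric KL divergence between the predicted distributions at adjacent positions (the distance function and the margin value are illustrative assumptions, not the paper's exact formulation):

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete distributions."""
    kl = lambda a, b: sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

def non_sentiment_regularizer(p_prev, p_curr, margin=0.5):
    """Hinge loss: when the current word is not a sentiment, negation, or
    intensity word, penalize the prediction only if it drifts from the
    previous position's prediction by more than the margin."""
    return max(0.0, sym_kl(p_prev, p_curr) - margin)
```

Identical adjacent distributions incur zero loss; the hinge tolerates small drifts below the margin.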
Sentiment Regularizer
The sentiment distributions of adjacent positions should drift accordingly.
Each sentiment class has a shifting distribution.
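As a rough illustration, this can be sketched as the same margin loss, but with the previous distribution first shifted by a learned, class-specific drift vector (here `shift_c`; the renormalization, distance function, and margin are assumptions for the sketch, not the paper's exact formulation):

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete distributions."""
    kl = lambda a, b: sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

def sentiment_regularizer(p_prev, p_curr, shift_c, margin=0.5):
    """When the current word is a sentiment word of class c, shift the
    previous prediction by the class-specific drift vector shift_c,
    renormalize, and penalize drift beyond the margin."""
    shifted = [max(a + s, 1e-12) for a, s in zip(p_prev, shift_c)]
    z = sum(shifted)
    shifted = [v / z for v in shifted]
    return max(0.0, sym_kl(shifted, p_curr) - margin)
```

With a zero shift vector this reduces to the non-sentiment case: adjacent predictions are simply expected to stay close.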
Negation Regularizer
The sentiment distribution should be shifted or reversed accordingly.
Each negator has a transition matrix.
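A minimal sketch, assuming the negator-specific transition matrix multiplies the distribution of the negated scope and the result is compared to the current prediction with the same margin loss (matrix parameterization, renormalization, and margin are assumptions here):

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete distributions."""
    kl = lambda a, b: sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

def negation_regularizer(p_scope, p_curr, T_neg, margin=0.5):
    """Transform the scope's distribution with the negator-specific
    transition matrix T_neg (one learned matrix per negator), renormalize,
    and penalize the current prediction if it strays beyond the margin."""
    n = len(p_scope)
    transformed = [sum(T_neg[i][j] * p_scope[j] for j in range(n)) for i in range(n)]
    transformed = [max(v, 1e-12) for v in transformed]
    z = sum(transformed)
    transformed = [v / z for v in transformed]
    return max(0.0, sym_kl(transformed, p_curr) - margin)
```

Because the matrix is learned per negator, "not" can learn a near-reversal while a weaker negator learns only a shift; the intensity regularizer on the next slide takes the same transition-matrix form.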
Intensity Regularizer
An intensifier can change the valence degree of the content word it modifies.
Similar to negators: a word-specific sentiment transition matrix for each intensifier.
Training
We plug the linguistic regularization term L_t into the original cross-entropy loss:
loss = cross entropy + regularizers
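Schematically, the combined objective is just the cross-entropy term plus a weighted sum of the active regularizer terms (the weighting hyperparameter `beta` and the exact combination are assumptions for this sketch):

```python
import math

def regularized_loss(p_pred, label, reg_terms, beta=0.1):
    """Total training loss: cross entropy of the predicted sentence-level
    distribution against the gold label, plus the weighted sum of the
    linguistic regularizer values collected over the sequence.
    beta is a hypothetical weighting hyperparameter."""
    ce = -math.log(p_pred[label] + 1e-12)
    return ce + beta * sum(reg_terms)
```

The regularizers act as soft constraints: they reshape the loss surface during training but add no parameters or computation at prediction time beyond the transition matrices and shift vectors themselves.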
Outline
• Introduction
• Methodology
• Experiment
• Conclusion
Experiment
Datasets:
• Movie Review (pos. / neg.)
• Sentiment Treebank (fine-grained)
Experiment
"Phrase-level" means the model is trained with phrase-level annotation; "sent.-level" means the model uses only sentence-level annotation.
Experiment
Accuracy of our model with regularizer ablation:
• NSR: Non-Sentiment Regularizer
• SR: Sentiment Regularizer
• NR: Negation Regularizer
• IR: Intensity Regularizer
Experiment
Accuracy on the negation sub-dataset (Neg. Sub.), which contains only sentences with negators, and the intensity sub-dataset (Int. Sub.), which contains only sentences with intensifiers.
Transition Matrix Analysis
Outline
• Introduction
• Methodology
• Experiment
• Conclusion
Conclusion
We present linguistically regularized LSTMs for sentence-level sentiment classification.
• Modeling the linguistic roles of sentiment, negation, and intensity words
• Free of expensive phrase-level annotation
Future work:
• Modeling linguistics in neural networks for other tasks
Thanks!