Inferring Emotions From Large-Scale Internet Voice Data
Published in: IEEE Transactions on Multimedia, 2019-07, Vol. 21 (7), pp. 1853-1866
Main Authors:
Format: Article
Language: English
Summary: As voice dialog applications (VDAs, e.g., Siri (http://www.apple.com/ios/siri/), Cortana (http://www.microsoft.com/en-us/mobile/campaign-cortana/), and Google Now (http://www.google.com/landing/now/)) are increasing in popularity, inferring emotions from the large-scale internet voice data generated by VDAs can help give a more reasonable and humane response. However, the tremendous number of users behind large-scale internet voice data leads to a great diversity of accents and expression patterns. Therefore, traditional speech emotion recognition methods, which mainly target acted corpora, cannot effectively handle such massive and diverse internet voice data. To address this issue, we carry out a series of observations, identify suitable emotion categories for large-scale internet voice data, and verify social attributes (query time, query topic, and user location) as indicators for emotion inference. Based on our observations, two different strategies are employed to solve the problem. First, we propose a deep sparse neural network model that takes acoustic information, textual information, and three indicators (a temporal indicator, a descriptive indicator, and a geo-social indicator) as input. Then, to capture contextual information, we propose a hybrid emotion inference model that combines long short-term memory (LSTM) to capture acoustic features with latent Dirichlet allocation (LDA) to extract text features. Experiments on 93 000 utterances collected from the Sogou Voice Assistant (http://yy.sogou.com, a Chinese counterpart of Siri) validate the effectiveness of the proposed methodologies. Furthermore, we compare the two methodologies and discuss their respective advantages and disadvantages.
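The first strategy described above fuses acoustic features, textual features, and the three social-attribute indicators into a single input vector for a deep sparse neural network. A minimal NumPy sketch of that input fusion and forward pass follows; all dimensions, the number of emotion classes, and the random weights are illustrative placeholders (the record does not specify the paper's actual architecture), and the sparsity constraint would in practice be imposed during training, e.g. via an L1 penalty on activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature groups; sizes are assumptions, not the paper's values.
acoustic = rng.normal(size=384)                  # e.g., frame-level acoustic statistics
textual = rng.normal(size=100)                   # e.g., bag-of-words / embedding vector
temporal = np.zeros(24); temporal[13] = 1.0      # temporal indicator: one-hot query hour
descriptive = rng.normal(size=20)                # descriptive indicator: query-topic features
geo_social = np.zeros(34); geo_social[5] = 1.0   # geo-social indicator: one-hot region

# The model concatenates all five feature groups into one input vector.
x = np.concatenate([acoustic, textual, temporal, descriptive, geo_social])

def relu(z):
    return np.maximum(z, 0.0)

# Two hidden layers with placeholder random weights; a sparsity penalty on the
# hidden activations would be added to the loss during training.
W1 = rng.normal(scale=0.05, size=(256, x.size)); b1 = np.zeros(256)
W2 = rng.normal(scale=0.05, size=(64, 256));     b2 = np.zeros(64)
W_out = rng.normal(scale=0.05, size=(5, 64));    b_out = np.zeros(5)  # assume 5 emotion classes

h = relu(W2 @ relu(W1 @ x + b1) + b2)
logits = W_out @ h + b_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                             # softmax over emotion classes
print(probs.round(3))
```

The same concatenated representation could equally feed the second strategy, with the acoustic block replaced by an LSTM encoding and the textual block by an LDA topic vector.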
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2018.2887016