000 03098nam a2200289 u 4500
001 3207
003 IFAC
005 20260129200138.0
008 yymmdds2017 bl 000 1 por d
035 _a3207
040 _aBR-IFAC
942 _cDISS
090 _a006.7
_bG137m
260 _c2017
300 _a73 f. :
_bil. color.
100 1 _aGaio Júnior, Airton
502 _aDissertation (Master's) - Universidade Federal do Amazonas, Programa de Pós-Graduação em Informática
520 _aAbstract: A large number of people share their opinions through videos, generating a huge volume of data. This phenomenon has led companies to become highly interested in extracting from videos the degree of sentiment expressed in people's opinions. It has also become a new trend in the field of sentiment analysis, with important challenges involved. Most studies that address this problem propose solutions based on the combination of data from three different sources: video, audio and text. As a result, these solutions are complex and language-dependent. In addition, they achieve low performance. In this context, this work focuses on answering the following question: is it possible to develop an opinion classification method that uses only video as its data source and still achieves accuracy equivalent or superior to that of current methods that use more than one data source? In response to this question, this work presents a multimodal opinion classification method that combines facial expression and body gesture information extracted from online videos. The proposed method uses a feature coding process to improve data representation and thereby the classification task, predicting the opinion expressed by the user with high precision and independently of the language used in the videos. To test the proposed method, experiments were performed with three public datasets and three baselines. The results show that the proposed method outperforms the baselines by 16% on average in terms of accuracy and precision, even though it uses only video data while the baselines employ information from video, audio and text. To verify whether the proposed method is portable and language-independent, it was trained on instances of a dataset whose language is exclusively English and tested on a dataset whose videos are exclusively in Spanish. The 82% accuracy achieved in this test indicates that the proposed method may be assumed to be language-independent.
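The abstract describes the method only at pipeline level: extract facial-expression and body-gesture features from video, apply a feature-coding step to improve the representation, then classify the opinion. The minimal Python sketch below illustrates that shape only; the feature extractors are random placeholders, and the k-means codebook and linear SVM are assumptions chosen for illustration, not components named by the dissertation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def extract_face_features(video):
    # Placeholder: stands in for a real facial-expression descriptor.
    return rng.normal(size=64)

def extract_gesture_features(video):
    # Placeholder: stands in for a real body-gesture descriptor.
    return rng.normal(size=64)

def encode(descriptor, codebook):
    # Feature coding step: re-represent a raw descriptor by its distances
    # to the codebook centers (one plausible coding scheme among many).
    return np.linalg.norm(codebook - descriptor, axis=1)

# Toy corpus: 20 "videos" with alternating binary opinion labels.
videos = [f"video_{i}" for i in range(20)]
labels = np.tile([0, 1], 10)

# Concatenate the two modalities extracted from the same video source.
raw = np.array([np.concatenate([extract_face_features(v),
                                extract_gesture_features(v)])
                for v in videos])

# Learn a small codebook on the raw descriptors, then encode them.
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(raw).cluster_centers_
X = np.array([encode(d, codebook) for d in raw])

# Train and score a linear classifier; with random placeholder features
# the printed score is meaningless, but the pipeline runs end to end.
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))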
710 1 _aUniversidade Federal do Amazonas
245 1 _aUm método para classificação de opinião em vídeo combinando expressões faciais e gestos
_h[manuscript]
_c/ Airton Gaio Júnior
082 0 _a006.7
500 _aAdvisor: Eulanda Miranda dos Santos
650 0 4 _aMultimodal opinion recognition
650 0 4 _aFacial and body expressions
650 0 4 _aEncoders
700 1 _aSantos, Eulanda Miranda dos
_eadvisor
999 _c31004
_d31004