<div>
<div>
<div>
<p>The goal of our research is to automatically retrieve satisfaction and frustration in real-life call-center conversations.
This study focuses on an industrial application in which customer satisfaction is continuously tracked in order to improve customer
services. To compensate for the lack of large annotated emotional databases, we explore the use of pre-trained speech representations as
a form of transfer learning towards the AlloSat corpus. Moreover, several studies have pointed out that emotion can be detected not only in
speech but also in facial expressions, biological responses, or textual information. In the context of telephone conversations, we can break
down the audio information into acoustic and linguistic components by using the speech signal and its transcription. Our experiments confirm the
large gain in performance obtained with the use of pre-trained features. Surprisingly, we found that the linguistic content is clearly the
major contributor to the prediction of satisfaction and generalizes best to unseen data. Our experiments demonstrate the definitive
advantage of using CamemBERT representations; however, the benefit of fusing the acoustic and linguistic modalities is not as
obvious. With models learnt on individual annotations, we found that fusion approaches are more robust to the subjectivity of the
annotation task. This study also tackles the problem of performance variability and aims to estimate this variability from different
perspectives: weight initialization, confidence intervals, and annotation subjectivity. An in-depth analysis of the linguistic content investigates
interpretable factors able to explain the high contribution of the linguistic modality to this task.
</p>
</p>
</div>
</div>
</div>