Multi-modal depression detection based on emotional audio and evaluation text

2021
Vol 295
pp. 904-913
Author(s):  
Jiayu Ye ◽  
Yanhong Yu ◽  
Qingxiang Wang ◽  
Wentao Li ◽  
Hu Liang ◽  
...  
Author(s):  
Nujud Aloshban ◽  
Anna Esposito ◽  
Alessandro Vinciarelli

Abstract: Depression is one of the most common mental health issues, affecting more than 4% of the world's population according to recent estimates. This article shows that the joint analysis of linguistic and acoustic aspects of speech allows one to discriminate between depressed and nondepressed speakers with an accuracy above 80%. The approach is based on networks designed for sequence modeling (bidirectional Long Short-Term Memory networks) and on multimodal analysis methodologies (late fusion, joint representation, and gated multimodal units). The experiments were performed over a corpus of 59 interviews (roughly 4 hours of material) involving 29 individuals diagnosed with depression and 30 control participants. In addition to an accuracy of 80%, the results show that multimodal approaches perform better than unimodal ones because people tend to manifest their condition through one modality only, a source of diversity across unimodal approaches. The experiments also show that it is possible to measure the "confidence" of the approach and to automatically identify a subset of the test data on which performance exceeds a predefined threshold. Overall, depression can be detected effectively using unobtrusive and inexpensive technologies based on the automatic analysis of speech and language.
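As a concrete illustration of one of the fusion mechanisms named above, the sketch below shows a gated multimodal unit (GMU) combining BiLSTM summaries of the acoustic and linguistic streams. This is a minimal, hypothetical PyTorch sketch rather than the authors' actual architecture: the feature dimensions, layer sizes, class names, and the use of the final BiLSTM hidden states as modality summaries are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class GatedMultimodalUnit(nn.Module):
    """Gated fusion of two modality embeddings: the gate z decides, per
    dimension, how much of each modality flows into the fused vector."""
    def __init__(self, audio_dim, text_dim, fused_dim):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.gate = nn.Linear(audio_dim + text_dim, fused_dim)

    def forward(self, x_audio, x_text):
        h_audio = torch.tanh(self.audio_proj(x_audio))
        h_text = torch.tanh(self.text_proj(x_text))
        z = torch.sigmoid(self.gate(torch.cat([x_audio, x_text], dim=-1)))
        return z * h_audio + (1 - z) * h_text

class BiLstmGmuClassifier(nn.Module):
    """Encodes each modality with a BiLSTM, fuses the summaries with a GMU,
    and outputs a single logit for depressed vs. nondepressed."""
    def __init__(self, audio_feat_dim=40, text_feat_dim=300, rnn_dim=64, fused_dim=64):
        super().__init__()
        self.audio_rnn = nn.LSTM(audio_feat_dim, rnn_dim, batch_first=True, bidirectional=True)
        self.text_rnn = nn.LSTM(text_feat_dim, rnn_dim, batch_first=True, bidirectional=True)
        self.gmu = GatedMultimodalUnit(2 * rnn_dim, 2 * rnn_dim, fused_dim)
        self.classifier = nn.Linear(fused_dim, 1)

    def forward(self, audio_seq, text_seq):
        # Final forward/backward hidden states serve as fixed-size summaries.
        _, (h_a, _) = self.audio_rnn(audio_seq)
        _, (h_t, _) = self.text_rnn(text_seq)
        audio_summary = torch.cat([h_a[-2], h_a[-1]], dim=-1)
        text_summary = torch.cat([h_t[-2], h_t[-1]], dim=-1)
        return self.classifier(self.gmu(audio_summary, text_summary))

# Example call with random tensors (batch of 8, 200 audio frames, 50 tokens):
# logits = BiLstmGmuClassifier()(torch.randn(8, 200, 40), torch.randn(8, 50, 300))
```

Because the gate is learned jointly with the rest of the network, the model can weight the acoustic and linguistic streams differently for each input, which is one way to cope with speakers who manifest their condition through one modality only.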


Author(s):  
Marzieh Mousavian ◽  
Jianhua Chen ◽  
Zachary Traylor ◽  
Steven Greening

Author(s):  
Xuhai Xu ◽  
Prerna Chikersal ◽  
Janine M. Dutcher ◽  
Yasaman S. Sefidgar ◽  
Woosuk Seo ◽  
...  

The prevalence of mobile phones and wearable devices enables the passive capturing and modeling of human behavior at an unprecedented resolution and scale. Past research has demonstrated the capability of mobile sensing to model aspects of physical health, mental health, education, and work performance. However, most of the algorithms and models proposed in previous work follow a one-size-fits-all (i.e., population modeling) approach that looks for common behaviors amongst all users, disregarding the fact that individuals can behave very differently; this results in reduced model performance. Further, black-box models are often used that do not allow for interpretability or an understanding of human behavior. We present a new method that addresses the problems of personalized behavior classification and interpretability, and apply it to depression detection among college students. Inspired by the idea of collaborative filtering, our method is a type of memory-based learning algorithm. It leverages the relevance of mobile-sensed behavior features among individuals to calculate personalized relevance weights, which are used to impute missing data and select features according to a specific modeling goal (e.g., whether the student has depressive symptoms) in different time epochs, i.e., times of the day and days of the week. It then combines the epoch-level predictions via majority voting to obtain the final prediction. We apply our algorithm to a depression detection dataset collected from first-year college students with low data-missing rates and show that our method outperforms the state-of-the-art machine learning model by 5.1% in accuracy and 5.5% in F1 score. We further verify the pipeline-level generalizability of our approach by achieving similar results on a second dataset, with an average improvement of 3.4% across performance metrics. Beyond achieving better classification performance, our novel approach is further able to generate personalized interpretations of the models for each individual. These interpretations are supported by existing depression-related literature and can potentially inspire automated and personalized depression intervention design in the future.
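A rough sketch of the collaborative-filtering-style relevance weighting and the epoch-level majority vote described above follows. It is an illustrative NumPy/scikit-learn sketch under simplifying assumptions (cosine similarity as the relevance measure, weighted-average imputation, one logistic-regression classifier per epoch); it is not the authors' exact algorithm, and all function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def relevance_weights(target, others):
    """Relevance of each other user to the target user: cosine similarity
    over the feature entries that both users have actually observed."""
    weights = []
    for other in others:
        mask = ~np.isnan(target) & ~np.isnan(other)
        if not mask.any():
            weights.append(0.0)
            continue
        a, b = target[mask], other[mask]
        weights.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)))
    # Keep only positive relevance so the imputation weights stay valid.
    return np.clip(np.asarray(weights), 0.0, None)

def impute_missing(target, others, weights):
    """Fill the target user's missing entries with a relevance-weighted
    average of the corresponding entries from the other users."""
    filled = target.copy()
    for j in np.where(np.isnan(target))[0]:
        col, ok = others[:, j], ~np.isnan(others[:, j])
        if ok.any() and weights[ok].sum() > 0:
            filled[j] = np.average(col[ok], weights=weights[ok])
    return filled

def fit_epoch_models(epoch_features, labels):
    """One classifier per time epoch (e.g., morning/evening, weekday/weekend)."""
    return [LogisticRegression(max_iter=1000).fit(X, labels) for X in epoch_features]

def majority_vote(epoch_models, epoch_features_for_user):
    """Final label for one user: majority vote over the per-epoch predictions."""
    votes = [int(m.predict(x.reshape(1, -1))[0])
             for m, x in zip(epoch_models, epoch_features_for_user)]
    return int(sum(votes) > len(votes) / 2)
```

Voting per epoch mirrors the intuition in the abstract that behavior at different times of day and days of the week may carry different signals, so each epoch gets its own model and an equal say in the final decision.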


2021
Author(s):  
Esaú Villatoro-Tello ◽  
Gabriela Ramírez-de-la-Rosa ◽  
Daniel Gática-Pérez ◽  
Mathew Magimai.-Doss ◽  
Héctor Jiménez-Salazar

2021
Vol 36 (6)
pp. 99-105
Author(s):  
Raymond Chiong ◽  
Gregorious Satia Budhi ◽  
Sandeep Dhakal ◽  
Erik Cambria
