Attention to Emotions: Detecting Mental Disorders in Social Media

Author(s):  
Mario Ezra Aragón ◽  
A. Pastor López-Monroy ◽  
Luis C. González ◽  
Manuel Montes-y-Gómez


2021 ◽  
Vol 2 (2) ◽  
pp. 1-31
Author(s):  
Esteban A. Ríssola ◽  
David E. Losada ◽  
Fabio Crestani

Mental state assessment by analysing user-generated content is a field that has recently attracted considerable attention. Today, many people increasingly use online social media platforms to share their feelings and moods. This provides a unique opportunity for researchers and health practitioners to proactively identify linguistic markers or patterns that correlate with mental disorders such as depression, schizophrenia or suicidal behaviour. This survey describes and reviews the approaches that have been proposed for mental state assessment and the identification of disorders from online digital records. The studies presented are organised according to the assessment technology and the feature extraction process employed. We also present a series of studies which explore different aspects of the language and behaviour of individuals suffering from mental disorders, and discuss various aspects related to the development of experimental frameworks. Furthermore, ethical considerations regarding the treatment of individuals’ data are outlined. The main contributions of this survey are a comprehensive analysis of the proposed approaches for online mental state assessment on social media, a structured categorisation of the methods according to their design principles, the lessons learnt over the years and a discussion of possible avenues for future research.


2021 ◽  
Vol 66 (Special Issue) ◽  
pp. 133-133
Author(s):  
Regina Mueller ◽  
Sebastian Laacke ◽  
Georg Schomerus ◽  
Sabine Salloch ◽  
...  

"Artificial Intelligence (AI) systems are increasingly being developed and various applications are already used in medical practice. This development promises improvements in prediction, diagnostics and treatment decisions. As one example, in the field of psychiatry, AI systems can already successfully detect markers of mental disorders such as depression. By using data from social media (e.g. Instagram or Twitter), users who are at risk of mental disorders can be identified. This potential of AI-based depression detectors (AIDD) opens chances, such as quick and inexpensive diagnoses, but also leads to ethical challenges especially regarding users’ autonomy. The focus of the presentation is on autonomy-related ethical implications of AI systems using social media data to identify users with a high risk of suffering from depression. First, technical examples and potential usage scenarios of AIDD are introduced. Second, it is demonstrated that the traditional concept of patient autonomy according to Beauchamp and Childress does not fully account for the ethical implications associated with AIDD. Third, an extended concept of “Health-Related Digital Autonomy” (HRDA) is presented. Conceptual aspects and normative criteria of HRDA are discussed. As a result, HRDA covers the elusive area between social media users and patients. "


10.2196/17758 ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. e17758 ◽  
Author(s):  
Diana Ramírez-Cifuentes ◽  
Ana Freire ◽  
Ricardo Baeza-Yates ◽  
Joaquim Puntí ◽  
Pilar Medina-Bravo ◽  
...  

Background: Suicide risk assessment usually involves an interaction between doctors and patients. However, a significant number of people with mental disorders receive no treatment for their condition, owing to limited access to mental health care facilities, the reduced availability of clinicians, a lack of awareness, and the stigma, neglect, and discrimination surrounding mental disorders. In contrast, internet access and social media usage have increased significantly, providing experts and patients with a means of communication that may contribute to the development of methods for detecting mental health issues among social media users.
Objective: This paper describes an approach to the suicide risk assessment of Spanish-speaking users on social media. We explore behavioral, relational, and multimodal data extracted from multiple social platforms and develop machine learning models to detect users at risk.
Methods: We characterized users based on their writings, posting patterns, relations with other users, and the images they posted. We also evaluated statistical and deep learning approaches to handling multimodal data for the detection of users with signs of suicidal ideation (suicidal ideation risk group). Our methods were evaluated on a dataset of 252 users annotated by clinicians. To assess the performance of our models, we distinguished 2 control groups: users who make use of suicide-related vocabulary (focused control group) and generic random users (generic control group).
Results: We identified statistically significant differences between the textual and behavioral attributes of each control group and those of the suicidal ideation risk group. At a 95% confidence level, when comparing the suicidal ideation risk group with the focused control group, the number of friends (P=.04) and median tweet length (P=.04) differed significantly. The median number of friends for a focused control user (578.5) was higher than that for a user at risk (372.0). Similarly, the median tweet length was higher for focused control users, at 16 words versus 13 words for suicidal ideation risk users. Our findings also show that combining textual, visual, relational, and behavioral data yields higher accuracy than using each modality separately. We defined text-based baseline models built on bag of words and word embeddings; our models outperformed them, with an increase in accuracy of up to 8% when distinguishing users at risk from both types of control users.
Conclusions: The types of attributes analyzed are significant for detecting users at risk, and their combination outperforms the results provided by generic, exclusively text-based baseline models. After evaluating the contribution of image-based predictive models, we believe that our results can be improved by enhancing the models based on textual and relational features. These methods can be extended and applied to different use cases related to other mental disorders.
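To make the kind of analysis described above concrete, the sketch below pairs a non-parametric comparison of one per-user attribute (number of friends) with a minimal bag-of-words text baseline. It is only an illustration of the general technique, not the authors' released code: the toy numbers, the synthetic posts, the choice of a Mann-Whitney U test, and the logistic regression classifier are all assumptions.

```python
# Illustrative sketch only (not the authors' code): a non-parametric comparison
# of one per-user attribute plus a bag-of-words text baseline of the kind the
# abstract uses as a reference point. All numbers and posts below are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy friend counts for a suicidal ideation risk group and a focused control group.
risk_friends = np.array([120, 372, 410, 250, 390])
control_friends = np.array([600, 578, 800, 550, 620])

# Two-sided Mann-Whitney U test (alpha = 0.05) for a difference in distributions,
# analogous in spirit to the reported comparison of the number of friends.
stat, p_value = mannwhitneyu(risk_friends, control_friends, alternative="two-sided")
print(f"friends: U={stat:.1f}, p={p_value:.3f}, "
      f"median risk={np.median(risk_friends)}, median control={np.median(control_friends)}")

# A minimal text-only baseline (bag of words + logistic regression); the paper's
# multimodal models are compared against baselines of roughly this kind.
texts = [
    "i feel hopeless and tired of everything",    # synthetic risk-style post
    "nothing matters anymore",                    # synthetic risk-style post
    "great match tonight, see you all tomorrow",  # synthetic control-style post
    "loved the new album, on repeat all day",     # synthetic control-style post
]
labels = [1, 1, 0, 0]  # 1 = at risk, 0 = control

baseline = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
print("bag-of-words baseline accuracy:",
      cross_val_score(baseline, texts, labels, cv=2, scoring="accuracy").mean())
```

In the study itself such attributes would be drawn from the annotated dataset of 252 users and extended with relational and image-based signals; the sketch only shows the shape of the statistical comparison and of a text baseline.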


2017 ◽  
Vol 57 (6) ◽  
pp. 625-649 ◽  
Author(s):  
Peter Kinderman ◽  
Kate Allsopp ◽  
Anne Cooke

The idea and practice of diagnosis in psychiatry have always been controversial. Controversy came to a head in the period preceding and immediately after the publication of the latest version of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). There was widespread international discussion and debate, not only in scholarly journals but also in mainstream and social media, which led to the formation of an International DSM Response Committee and an International Summit on Psychiatric Diagnosis. This article documents that process and outlines the issues that provoked, and continue to provoke, the most controversy, from the (admittedly personal) perspective of those involved. It ends with suggestions for alternatives to diagnosis that avoid some of these problems, and outlines how these are being taken forward. The next 10 years are likely to see significant change.


Author(s):  
Prof. Narinder Kaur ◽  
Lakshay Monga

“Social Network Mental Disorder Detection”, or “SNMD”, is an approach to analysing user data and retrieving the sentiment it embodies. Twitter SNMD analysis applies sentiment analysis to Twitter data (tweets) in order to extract the sentiments conveyed by users. In this paper, we review research on sentiment analysis on Twitter, describing the methodologies adopted and the models applied, and outline a generalized Python-based approach. A prototype system is developed and tested.
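As a purely illustrative companion to the generalized Python-based approach mentioned above, the snippet below scores a few synthetic tweets with NLTK's rule-based VADER sentiment analyser. The choice of VADER, the example tweets, and the ±0.05 compound-score cut-offs are assumptions, not details taken from the paper.

```python
# Purely illustrative sketch of a simple Python pipeline for tweet sentiment:
# rule-based scoring with NLTK's VADER. The example tweets are synthetic and the
# +/-0.05 compound-score cut-offs are a common convention, not taken from the paper.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the VADER lexicon

tweets = [
    "I can't sleep and I feel completely worthless lately",
    "Had an amazing day with friends, feeling great!",
    "Nothing ever goes right for me anymore",
]

analyzer = SentimentIntensityAnalyzer()
for tweet in tweets:
    scores = analyzer.polarity_scores(tweet)  # dict with neg/neu/pos/compound scores
    compound = scores["compound"]
    label = ("negative" if compound <= -0.05
             else "positive" if compound >= 0.05
             else "neutral")
    print(f"{label:>8}  {compound:+.3f}  {tweet}")
```

VADER is attractive as a first pass because it is rule-based, needs no labelled training data, and was tuned on social media text; a fuller SNMD-style pipeline would feed such sentiment scores, among other features, into a supervised classifier.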


2021 ◽  
Author(s):  
Rami Kanaan ◽  
Batoul Haidar ◽  
Rima Kilany

Author(s):  
Mario Ezra Aragon ◽  
Adrian Pastor Lopez-Monroy ◽  
Luis-Carlos Gonzalez Gonzalez-Gurrola ◽  
Manuel Montes
