Predicting Suicide Risk in Turkey Using Machine Learning

Author(s):  
Elif Şanlıalp ◽  
İbrahim Şanlıalp ◽  
Tuncay Yiğit
2021 ◽  
Author(s):  
Kate Bentley ◽  
Kelly Zuromski ◽  
Rebecca Fortgang ◽  
Emily Madsen ◽  
Daniel Kessler ◽  
...  

Background: Interest in developing machine learning algorithms that use electronic health record data to predict patients’ risk of suicidal behavior has recently proliferated. Whether and how such models might be implemented and useful in clinical practice, however, remains unknown. In order to ultimately make automated suicide risk prediction algorithms useful in practice, and thus better prevent patient suicides, it is critical to partner with key stakeholders (including the frontline providers who will be using such tools) at each stage of the implementation process. Objective: The aim of this focus group study was to inform ongoing and future efforts to deploy suicide risk prediction models in clinical practice. The specific goals were to better understand hospital providers’ current practices for assessing and managing suicide risk; determine providers’ perspectives on using automated suicide risk prediction algorithms; and identify barriers, facilitators, recommendations, and factors to consider for initiatives in this area. Methods: We conducted 10 two-hour focus groups with a total of 40 providers from psychiatry, internal medicine and primary care, emergency medicine, and obstetrics and gynecology departments within an urban academic medical center. Audio recordings of open-ended group discussions were transcribed and coded for relevant and recurrent themes by two independent study staff members. All coded text was reviewed and discrepancies resolved in consensus meetings with doctoral-level staff. Results: Though most providers reported using standardized suicide risk assessment tools in their clinical practices, existing tools were commonly described as unhelpful and providers indicated dissatisfaction with current suicide risk assessment methods. Overall, providers’ general attitudes toward the practical use of automated suicide risk prediction models and corresponding clinical decision support tools were positive. Providers were especially interested in the potential to identify high-risk patients who might be missed by traditional screening methods. Some expressed skepticism about the potential usefulness of these models in routine care; specific barriers included concerns about liability, alert fatigue, and increased demand on the healthcare system. Key facilitators included presenting specific patient-level features contributing to risk scores, emphasizing changes in risk over time, and developing systematic clinical workflows and provider trainings. Participants also recommended considering risk-prediction windows, timing of alerts, who will have access to model predictions, and variability across treatment settings. Conclusions: Providers were dissatisfied with current suicide risk assessment methods and open to the use of a machine learning-based risk prediction system to inform clinical decision-making. They also raised multiple concerns about potential barriers to the usefulness of this approach and suggested several possible facilitators. Future efforts in this area will benefit from incorporating systematic qualitative feedback from providers, patients, administrators, and payers on the use of new methods in routine care, especially given the complex, sensitive, and unfortunately still stigmatized nature of suicide risk.
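One facilitator providers highlighted is seeing which patient-level features drive a given risk score. As a rough illustration of how such a display might work, the sketch below surfaces the top contributors to a predicted risk from a fitted logistic regression; the feature names, data, and model are hypothetical and are not taken from any of the studies listed here.

```python
# Minimal sketch (not from the study): surfacing the top patient-level
# contributors to a risk score from a fitted linear model. Feature names
# and values are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_attempt", "recent_ed_visit", "phq9_score", "age"]  # hypothetical
X_train = np.array([[1, 0, 18, 34], [0, 1, 5, 52], [0, 0, 2, 41], [1, 1, 22, 29]])
y_train = np.array([1, 0, 0, 1])  # toy labels

model = LogisticRegression().fit(X_train, y_train)

def explain_patient(x):
    """Return the risk probability and per-feature contributions to the logit."""
    contributions = model.coef_[0] * x                      # each feature's share of the logit
    risk = model.predict_proba(x.reshape(1, -1))[0, 1]
    top = sorted(zip(feature_names, contributions), key=lambda t: abs(t[1]), reverse=True)
    return risk, top

risk, top = explain_patient(np.array([1, 1, 20, 30]))
print(f"Estimated risk: {risk:.2f}")
for name, c in top[:3]:
    print(f"  {name}: {c:+.2f}")
```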


2020 ◽  
Vol 77 (1) ◽  
pp. 25 ◽  
Author(s):  
Jaimie L. Gradus ◽  
Anthony J. Rosellini ◽  
Erzsébet Horváth-Puhó ◽  
Amy E. Street ◽  
Isaac Galatzer-Levy ◽  
...  


10.2196/24471 ◽  
2021 ◽  
Vol 8 (11) ◽  
pp. e24471
Author(s):  
Stevie Chancellor ◽  
Steven A Sumner ◽  
Corinne David-Ferdon ◽  
Tahirah Ahmad ◽  
Munmun De Choudhury

Background Online communities provide support for individuals looking for help with suicidal ideation and crisis. As community data are increasingly used to devise machine learning models to infer who might be at risk, there have been limited efforts to identify both risk and protective factors in web-based posts. These annotations can enrich and augment computational assessment approaches to identify appropriate intervention points, which are useful to public health professionals and suicide prevention researchers. Objective This qualitative study aims to develop a valid and reliable annotation scheme for evaluating risk and protective factors for suicidal ideation in posts in suicide crisis forums. Methods We designed a valid, reliable, and clinically grounded process for identifying risk and protective markers in social media data. This scheme draws on prior work on construct validity and the social sciences of measurement. We then applied the scheme to annotate 200 posts from r/SuicideWatch, a Reddit community focused on suicide crisis. Results We produced an annotation scheme that is consistent with leading public health information coding schemes for suicide and that extends attention to protective factors. The study showed high internal validity, and the results indicate that our approach is consistent with findings from prior work. Conclusions Our work formalizes a framework that incorporates construct validity into the development of annotation schemes for suicide risk on social media. This study furthers the understanding of risk and protective factors expressed in social media data, which may benefit both public health programming to prevent suicide and computational social science research that relies on high-quality labels for downstream machine learning tasks.
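The reliability of an annotation scheme like this is typically quantified by agreement between independent annotators. The abstract does not give the study's reliability procedure, so the snippet below is only a generic sketch using Cohen's kappa on hypothetical binary labels.

```python
# Illustrative sketch only: measuring agreement between two annotators on
# binary risk/protective labels with Cohen's kappa. Labels are hypothetical,
# not taken from the study's r/SuicideWatch annotations.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # 1 = risk factor present
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement
```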


2021 ◽  
pp. 114118
Author(s):  
Lauren McMullen ◽  
Neelang Parghi ◽  
Megan L. Rogers ◽  
Heng Yao ◽  
Sara Block-Elkouby ◽  
...  

Author(s):  
Junggu Choi ◽  
Seoyoung Cho ◽  
Inhwan Ko ◽  
Sanghoon Han

Investigating suicide risk factors is critical for public health and socioeconomic policy, and many researchers have tried to identify factors associated with suicide. In this study, risk factors for suicidal ideation and suicide attempt were compared, and the contributions of different factors to each outcome were investigated. To reflect the diverse characteristics of the population, the large-scale, longitudinal dataset used in this study included both socioeconomic and clinical variables collected from the Korean public. Three machine learning algorithms (XGBoost classifier, support vector classifier, and logistic regression) were used to detect the risk factors for both suicidal ideation and suicide attempt. Variable importance was determined using the model with the best classification performance, and a novel risk-factor score, calculated from the rank and importance scores of each variable, was proposed. Socioeconomic and sociodemographic factors showed a high correlation with the risk of both ideation and attempt, whereas mental health variables ranked higher for suicide attempts, which carry a relatively higher suicide risk than ideation. These trends held under both the integrated and the yearly dataset conditions. This study provides novel insights into risk factors for suicidal ideation and suicide attempts.
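The abstract names the three classifiers and a risk-factor score derived from each variable's rank and importance, but not the exact formula or data, so the sketch below fills those gaps with assumptions: synthetic data, model selection by cross-validated accuracy, permutation importance (so the same procedure works for all three classifiers), and a placeholder scoring rule that weights normalized importance by inverse rank.

```python
# Sketch under stated assumptions; the study's actual data and scoring formula differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# Synthetic stand-in for the socioeconomic and clinical variables.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)
feature_names = [f"var_{i}" for i in range(X.shape[1])]  # hypothetical variable names

models = {
    "xgboost": XGBClassifier(eval_metric="logloss"),
    "svc": SVC(),
    "logistic": LogisticRegression(max_iter=1000),
}

# Pick the classifier with the best cross-validated accuracy.
cv_scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
best_name = max(cv_scores, key=cv_scores.get)
best = models[best_name].fit(X, y)

# Permutation importance works for any of the three models; the study may have
# used a model-specific importance measure instead.
perm = permutation_importance(best, X, y, n_repeats=10, random_state=0)
importance = np.clip(perm.importances_mean, 0, None)

# Placeholder risk-factor score: normalized importance weighted by inverse rank
# (rank 1 = most important). The paper's actual formula is not given in the abstract.
rank = importance.argsort()[::-1].argsort() + 1
risk_factor_score = (importance / importance.sum()) / rank

for name, score in sorted(zip(feature_names, risk_factor_score), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")
```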


2020 ◽  
Author(s):  
Yaakov Ophir ◽  
Refael Tikochinski ◽  
Christa Asterhan ◽  
Itay Sisso ◽  
Roi Reichart

Background: Detection of suicide risk is a highly prioritized yet complicated task. In fact, five decades of suicide research produced predictions that were only marginally better than chance (AUCs = 0.56-0.58). Advanced machine learning methods open up new opportunities for progress in mental health research. In the present study, Artificial Neural Network (ANN) models were constructed to predict externally valid suicide risk from the everyday language of social media users. Method: The dataset included 83,292 postings authored by 1,002 authenticated, active Facebook users, alongside clinically valid psychosocial information about the users. Results: Using Deep Contextualized Word Embeddings (CWEs) for text representation, two models were constructed: a Single Task Model (STM), to predict suicide risk from Facebook postings directly (Facebook texts → suicide), and a Multi-Task Model (MTM), which included hierarchical, multilayered sets of theory-driven risk factors (Facebook texts → personality traits → psychosocial risks → psychiatric disorders → suicide). Compared with the STM predictions (.606 ≤ AUC ≤ .608), the MTM produced improved prediction accuracy (.690 ≤ AUC ≤ .759), with substantially larger effect sizes (.701 ≤ d ≤ .994). Subsequent content analyses suggest that predictions did not rely on explicit suicide-related themes, but on a wide range of content. Conclusions: Advanced machine learning methods can improve our ability to predict suicide risk from everyday social media activities. The knowledge generated by this research may eventually lead to more accurate and objective detection tools and help individuals get the support they need in time.
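The sketch below illustrates the multi-task idea in PyTorch: a shared encoder over pre-computed contextualized embeddings feeding separate heads for personality, psychosocial, psychiatric, and suicide-risk outputs. Layer sizes, head dimensions, and the parallel (rather than hierarchically chained) heads are assumptions for illustration and do not reproduce the authors' architecture.

```python
# Minimal multi-task sketch; all dimensions and task groupings are assumed.
import torch
import torch.nn as nn

class MultiTaskRiskModel(nn.Module):
    def __init__(self, embed_dim=1024, hidden_dim=256):
        super().__init__()
        # Shared encoder over pre-computed contextualized text embeddings.
        self.encoder = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # One head per level of the theory-driven hierarchy (output sizes assumed).
        self.personality_head = nn.Linear(hidden_dim, 5)   # e.g., Big Five traits
        self.psychosocial_head = nn.Linear(hidden_dim, 3)  # e.g., loneliness, stress, support
        self.psychiatric_head = nn.Linear(hidden_dim, 4)   # e.g., disorder indicators
        self.suicide_head = nn.Linear(hidden_dim, 1)       # suicide risk logit

    def forward(self, x):
        h = self.encoder(x)
        return {
            "personality": self.personality_head(h),
            "psychosocial": self.psychosocial_head(h),
            "psychiatric": self.psychiatric_head(h),
            "suicide": self.suicide_head(h),
        }

# Toy forward pass on random vectors standing in for CWE text representations.
model = MultiTaskRiskModel()
embeddings = torch.randn(8, 1024)               # batch of 8 users
outputs = model(embeddings)
risk_probs = torch.sigmoid(outputs["suicide"])  # per-user suicide risk estimate
print(risk_probs.squeeze().tolist())

# Training would minimize a weighted sum of per-task losses, e.g.
# loss = bce(outputs["suicide"], y_suicide) + aux_weight * sum(auxiliary task losses).
```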


2020 ◽  
Vol 11 ◽  
Author(s):  
André Bittar ◽  
Sumithra Velupillai ◽  
Johnny Downs ◽  
Rosemary Sedgwick ◽  
Rina Dutta

Suicide is a serious public health issue worldwide, yet current clinical methods for assessing a person's risk of taking their own life remain unreliable, and new methods for assessing suicide risk are being explored. The widespread adoption of electronic health records (EHRs) has opened up new possibilities for epidemiological studies of suicide and related behaviour amongst those receiving healthcare. These types of records capture valuable information entered by healthcare practitioners at the point of care. However, much recent work has relied heavily on the structured data of EHRs, whilst much of the important information about a patient's care pathway is recorded in the unstructured text of clinical notes. Accessing and structuring text data for use in clinical research, and particularly for suicide and self-harm research, is a significant challenge that is increasingly being addressed using methods from the fields of natural language processing (NLP) and machine learning (ML). In this review, we provide an overview of the range of suicide-related studies that have been carried out using the Clinical Records Interactive Search (CRIS): a database for epidemiological and clinical research that contains de-identified EHRs from the South London and Maudsley NHS Foundation Trust. We highlight the variety of clinical research questions, cohorts and techniques that have been explored for suicide and related behaviour research using CRIS, including the development of NLP and ML approaches. We demonstrate how EHR data provide comprehensive material to study the prevalence of suicide and self-harm in clinical populations. Structured data alone are insufficient, and NLP methods are needed to more accurately identify relevant information from EHR data. We also show how the text in clinical notes provides signals for ML approaches to suicide risk assessment. We envision increased progress in the decades to come, particularly in externally validating findings across multiple sites and countries, both in terms of clinical evidence and in terms of NLP and machine learning method transferability.
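As a toy illustration of why unstructured notes need NLP rather than structured fields alone, the snippet below flags possible self-harm or suicidality mentions in free text with a crude negation check; it is far simpler than the NLP methods developed for CRIS and is not drawn from that work.

```python
# Toy illustration only (not the CRIS pipeline): a simple rule-based pass over
# free-text clinical notes to flag possible self-harm or suicidality mentions,
# with a crude negation check. Real clinical NLP needs far more robust methods.
import re

MENTION = re.compile(r"\b(self[- ]harm|suicidal|suicide|overdose)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(denies|no evidence of|no history of|not)\b[^.]*$", re.IGNORECASE)

def flag_note(note: str):
    """Return (flagged, matched_terms) for one clinical note."""
    hits = []
    for match in MENTION.finditer(note):
        preceding = note[: match.start()]
        # Skip mentions preceded by a simple negation cue in the same sentence.
        if not NEGATION.search(preceding.split(".")[-1]):
            hits.append(match.group(0))
    return bool(hits), hits

note = "Patient denies suicidal ideation. Previous episode of self-harm in 2018."
print(flag_note(note))  # (True, ['self-harm'])
```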


2019 ◽  
Vol 22 (3) ◽  
pp. 125-128 ◽  
Author(s):  
Daniel Whiting ◽  
Seena Fazel

Prediction models assist in stratifying and quantifying an individual's risk of developing a particular adverse outcome, and are widely used in cardiovascular and cancer medicine. Whether these approaches are accurate in predicting self-harm and suicide has been questioned. We searched for systematic reviews in the suicide risk assessment field, and identified three recent reviews that have examined current tools and models derived using machine learning approaches. In this clinical review, we present a critical appraisal of these reviews, and highlight three major limitations that are shared between them. First, structured tools are not compared with the unstructured assessments that are routine in clinical practice. Second, they do not sufficiently consider a range of performance measures, including negative predictive value and calibration. Third, the potential role of these models as clinical adjuncts is not taken into consideration. We conclude by presenting the view that the role of prediction models for self-harm and suicide is currently not known, and discuss methodological issues and the implications of machine learning and other analytic techniques for clinical utility.
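To make the overlooked performance measures concrete, the sketch below computes sensitivity, specificity, PPV, NPV, and two simple calibration summaries for a hypothetical risk model on synthetic data; the numbers carry no clinical meaning.

```python
# Hedged illustration of the measures the review highlights: negative
# predictive value and calibration, computed for a hypothetical risk model.
import numpy as np
from sklearn.metrics import confusion_matrix, brier_score_loss
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.05, size=2000)                                # rare outcome, ~5% prevalence
y_prob = np.clip(0.05 + 0.3 * y_true + rng.normal(0, 0.1, 2000), 0, 1)   # toy risk scores
y_pred = (y_prob >= 0.2).astype(int)                                     # an arbitrary decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)   # negative predictive value, often overlooked
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")

# Calibration: how well predicted probabilities match observed frequencies.
print(f"Brier score: {brier_score_loss(y_true, y_prob):.3f}")
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
for f, m in zip(frac_pos, mean_pred):
    print(f"predicted {m:.2f} -> observed {f:.2f}")
```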

