AI-enabled suicide prediction tools: a qualitative narrative review

2020 ◽  
Vol 27 (3) ◽  
pp. e100175
Author(s):  
Daniel D’Hotman ◽  
Erwin Loh

Background: Suicide poses a significant health burden worldwide. In many cases, people at risk of suicide do not engage with their doctor or community due to concerns about stigmatisation and forced medical treatment; worse still, people with mental illness (who form a majority of people who die from suicide) may have poor insight into their mental state and not self-identify as being at risk. These issues are exacerbated by the fact that doctors have difficulty identifying those at risk of suicide when they do present to medical services. Advances in artificial intelligence (AI) present opportunities for the development of novel tools for predicting suicide. Method: We searched Google Scholar and PubMed for articles relating to suicide prediction using artificial intelligence from 2017 onwards. Conclusions: This paper presents a qualitative narrative review of research focusing on two categories of suicide prediction tools: medical suicide prediction and social suicide prediction. Initial evidence is promising: AI-driven suicide prediction could improve our capacity to identify those at risk of suicide and, potentially, save lives. Medical suicide prediction may be relatively uncontroversial when it respects ethical and legal principles; however, further research is required to determine the validity of these tools in different contexts. Social suicide prediction offers an exciting opportunity to help identify suicide risk among those who do not engage with traditional health services. Yet efforts by private companies such as Facebook to use online data for suicide prediction should be subject to independent review and oversight to confirm safety, effectiveness and ethical permissibility.
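
The review itself reports no code, but a purely illustrative sketch may help make the "social suicide prediction" category concrete: such systems ultimately rest on text classifiers trained on online posts. Everything below (the pipeline, the toy posts and the labels) is an assumption chosen for illustration, not the method of Facebook or of any study covered by the review.

```python
# Illustrative sketch only: a text-classification pipeline of the kind that
# "social suicide prediction" systems build on. Posts and labels are toy
# placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "feeling hopeless and can't see a way forward",
    "had a great day hiking with friends",
    "I don't want to be here anymore",
    "excited about starting my new job next week",
]
labels = [1, 0, 1, 0]  # 1 = post flagged as containing at-risk language

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# Estimated probability that a new post contains at-risk language
print(clf.predict_proba(["nothing matters anymore"])[0, 1])
```

A real system would require large, ethically sourced, clinically validated training data, and, as the authors stress, independent review of its safety, effectiveness and ethical permissibility before deployment.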

Diabetes ◽  
2018 ◽  
Vol 67 (Supplement 1) ◽  
pp. 2261-PUB
Author(s):  
NANA F. HEMPLER ◽  
VINIE H. LEVISEN ◽  
REGITZE S. PALS ◽  
NAJA RAMSKOV KROGH ◽  
RIKKE H. LAURSEN

Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 265-OR
Author(s):  
MICHAEL FRALICK ◽  
DAVID DAI ◽  
CHLOE POU-PROM ◽  
AMOL A. VERMA ◽  
MUHAMMAD MAMDANI

2021 ◽  
Author(s):  
Nathan Szymanski ◽  
Yan Zeng ◽  
Haoyan Huo ◽  
Chris Bartel ◽  
Haegyum Kim ◽  
...  

Autonomous experimentation driven by artificial intelligence (AI) provides an exciting opportunity to revolutionize inorganic materials discovery and development. Herein, we review recent progress in the design of self-driving laboratories, including...


Author(s):  
Joachim Roski ◽  
Ezekiel J Maier ◽  
Kevin Vigilante ◽  
Elizabeth A Kane ◽  
Michael E Matheny

Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, prior periods of enthusiasm for AI have been followed by periods of disillusionment, reduced investment, and slowed progress, known as "AI Winters." We are now at risk of another AI Winter in health/healthcare because of increasing publicity around AI solutions that fail to deliver the touted breakthroughs, which is eroding users' trust in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies relevant to groups considering designing, implementing, and promoting self-governance. We then describe a process by which a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices, through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Governments could encourage self-governance to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap and construct a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play in advancing practices that maintain trust in AI and prevent another AI Winter.


2016 ◽  
Vol 33 (S1) ◽  
pp. S68-S68
Author(s):  
H. Blasco-Fontecilla

Objective: To explore future directions in the assessment of the risk of suicidal behavior (SB). Methods: Narrative review of current and future methods for improving the assessment of the risk of SB. Results: Predicting future SB is a long-standing goal. Currently, the identification of individuals at risk of SB is based on clinicians' subjective reports. Unfortunately, most individuals at risk of SB do not disclose their suicidal thoughts. In the near future, prediction of the risk of SB will be enhanced by: (1) introducing objective, reliable measures of suicide risk, i.e. biomarkers; (2) selecting the most discriminant variables and developing more accurate measures (i.e. questionnaires) and models for suicide prediction; (3) incorporating new sources of information, e.g. Facebook and online monitoring; (4) applying novel methodological instruments such as data mining or computer adaptive testing; and (5), most importantly, combining predictors from different domains (clinical, neurobiological and cognitive). Conclusions: Given the multi-determined nature of SB, a combination of clinical, neuropsychological, biological and neuroimaging factors, among others, might help overcome current limitations in the prediction of SB. Furthermore, given the complexity of predicting future SB, our efforts should currently be focused on the prevention of SB. Disclosure of interest: The author has not supplied his declaration of competing interest.
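
Point (5), combining predictors from different domains, is the most concrete technical proposal here. The minimal sketch below shows what such a combination could look like in practice; the feature blocks, model choice and data are hypothetical placeholders, not taken from the abstract.

```python
# Minimal sketch (hypothetical features, synthetic data): combining predictors
# from several domains into a single suicide-risk model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500

clinical = rng.normal(size=(n, 5))         # e.g. symptom-scale scores
neurobiological = rng.normal(size=(n, 3))  # e.g. biomarker levels
cognitive = rng.normal(size=(n, 4))        # e.g. neuropsychological test scores

X = np.hstack([clinical, neurobiological, cognitive])  # combined predictor matrix
y = rng.integers(0, 2, size=n)             # placeholder outcome label

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```

With synthetic noise the cross-validated AUROC will hover around 0.5; the point is only the structure, namely heterogeneous feature blocks concatenated into one predictor matrix and evaluated with cross-validation.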


2021 ◽  
pp. bjophthalmol-2021-319365
Author(s):  
Tien-En Tan ◽  
Hwei Wuen Chan ◽  
Mandeep Singh ◽  
Tien Yin Wong ◽  
Jose S Pulido ◽  
...  

2021 ◽  
Author(s):  
Asma Alamgir ◽  
Osama Mousa ◽  
Zubair Shah

BACKGROUND Cardiac arrest is a life-threatening cessation of heart activity. Early prediction of cardiac arrest is important because it provides an opportunity to take the necessary measures to prevent it or intervene at its onset. Artificial intelligence technologies and big data have been increasingly used to enhance the ability to predict and prepare for patients at risk. OBJECTIVE This study aims to explore the use of AI technology in predicting cardiac arrest as reported in the literature. METHODS A scoping review was conducted in line with the guidelines of the PRISMA Extension for Scoping Reviews (PRISMA-ScR). Scopus, ScienceDirect, Embase, IEEE, and Google Scholar were searched to identify relevant studies, and backward reference list checking of included studies was also conducted. Study selection and data extraction were conducted independently by two reviewers, and data extracted from the included studies were synthesized narratively. RESULTS Out of 697 citations retrieved, 41 studies were included in the review, and 6 more were added after backward citation checking. The included studies reported the use of AI in the prediction of cardiac arrest, and their approaches fell into three categories: 26 studies predicted cardiac arrest by analyzing specific patient parameters or variables, 16 studies developed an AI-based warning system, and the remaining 5 studies focused on distinguishing high-risk cardiac arrest patients from those not at risk. Two studies focused on the pediatric population, and the rest focused on adults (n=45). The majority of studies used datasets with a size of less than 10,000 (n=32). Machine learning models were the most prominent branch of AI used to predict cardiac arrest (n=38), and the most commonly used algorithms were neural networks (n=23). K-fold cross-validation was the most frequently reported model evaluation method (n=24). CONCLUSIONS AI is being used extensively to predict cardiac arrest in different patient settings, and the technology is expected to play an integral role in changing cardiac medicine for the better. More reviews are needed to understand the obstacles to implementing AI technologies in clinical settings, along with research on how best to support clinicians in understanding, adapting, and implementing the technology in their practice.
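
As a concrete illustration of the evaluation approach most often reported in the included studies, the sketch below applies stratified k-fold cross-validation to a placeholder cardiac-arrest risk model. The feature set, model choice and data are assumptions made for illustration; no study from the review is being reproduced.

```python
# Minimal sketch: k-fold cross-validation of a (placeholder) cardiac-arrest
# risk model, scored by area under the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 1000

# Hypothetical vital-sign features (e.g. heart rate, systolic BP, SpO2, respiratory rate)
X = rng.normal(size=(n_patients, 4))
# Hypothetical binary label: 1 = cardiac arrest within the prediction window
y = rng.integers(0, 2, size=n_patients)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Stratified 5-fold cross-validation keeps the class balance in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"AUROC per fold: {scores.round(3)}, mean = {scores.mean():.3f}")
```

K-fold cross-validation is attractive in this setting because clinical datasets of this kind are often modest in size, and repeated train/test splits make better use of the available data than a single hold-out split.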


2022 ◽  
pp. 131-148
Author(s):  
Burcu Karabulut Coşkun ◽  
Ezgi Mor Dirlik

In today's world, in which many areas of life are managed by computers and artificial intelligence, online data gathering has become an inevitable way of collecting data. Many researchers have preferred online surveying, considering the advantages of this method over classical ones. Hence, the factors that may affect the response rate of online surveys have become a prominent research topic. In line with the popularity of this issue, the purpose of this chapter was to clarify the concept of online surveys; give information about their types, advantages, and usage; and investigate the factors that affect participants' response behaviors. In addition to discussing the theoretical framework of online surveying, an online survey aiming to determine the factors affecting participation in online surveys was administered to a group of people to investigate response behaviors thoroughly. The findings revealed that various factors might radically affect participants' response behaviors to online surveys.

