Conscience Relevance and Sensitivity in Psychiatric Evaluations in the Youth-span

2020 · Vol 9 (3) · pp. 167-184
Author(s): Matthew Galvin, Leslie Hulvershorn, Margaret Gaffney

Background: While practice parameters recommend assessment of conscience and values, few resources are available to guide clinicians. Objective: To improve the conduct of moral inquiry with youth aged 15 to 24. Method: After documenting available resources for behavioral health clinicians inquiring about their patients' moral lives, we consider our studies of conscience development and functioning in youth. We align descriptions of domains of conscience with neurobiology. We compare youth reared in relative advantage, who show fairly smooth functional progressions across domains, with youth reared in adverse circumstances. We offer the heuristic conscience developmental quotient to help mind the gap between conscience in adversity and conscience in advantage. Next, we consider severity of psychopathological interference as distinct from developmental delay. A case illustration is provided to support that distinction. Results: Our findings support the hypothesis that youth with adverse childhood experiences show evidence of fragmentation, unevenness, and delay in their conscience stage-attainment. We demonstrate proof of concept for conscience-sensitive psychiatric assessment in the youth-span. Conscience-sensitive inquiries improve upon merely conscience-relevant interpretations by affording a better appreciation of moral wounding, in turn setting the stage for moral-imaginative efforts that elicit the latent values of the youth and make them more explicit. Conclusions: A conscience-sensitive approach should be part of both psychiatric and general medical education, supported explicitly by clinical guidelines recommending conscience-sensitive interview techniques that aim to acquire information aligned with current neurobiological terminology.
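The abstract names a conscience developmental quotient but does not give its formula. The sketch below is only an illustration of how such a heuristic could be computed, assuming it works like a classical developmental quotient (attained stage relative to the stage expected for chronological age, scaled to 100); the function name, stage values, and the ratio itself are illustrative assumptions, not the authors' method.

# Hypothetical sketch of a "conscience developmental quotient" (CDQ).
# Assumption: the heuristic is analogous to a classical developmental
# quotient, i.e., attained conscience stage divided by the stage expected
# for the youth's chronological age, scaled to 100. The abstract itself
# does not define the formula.

def conscience_developmental_quotient(attained_stage: float,
                                      expected_stage_for_age: float) -> float:
    """Return a CDQ-style ratio; values below 100 suggest a gap or delay."""
    if expected_stage_for_age <= 0:
        raise ValueError("expected_stage_for_age must be positive")
    return 100.0 * attained_stage / expected_stage_for_age

# Illustrative use: a 17-year-old assessed at stage 3, where stage 4 is
# typical for that age, yields a quotient of 75, flagging a gap to explore.
print(conscience_developmental_quotient(3, 4))  # 75.0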

PLoS ONE · 2019 · Vol 14 (5) · pp. e0216657
Author(s): Anne Marsman, Rosan Luijcks, Catherine Vossen, Jim van Os, Richel Lousberg

DOI: 10.2196/18752 · 2020 · Vol 8 (11) · pp. e18752
Author(s): Nariman Ammar, Arash Shaban-Nejad

Background The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the data. Accordingly, it is a complex problem to build a training set and develop machine-learning models from these studies. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach for promoting credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, combining machine learning with knowledge graphs that bring together “common sense” knowledge, semantic reasoning, and causality models is a potential solution to this problem. Objective In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve mental health surveillance. Methods We used concepts from an ontology that we have developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology. Results To showcase the framework functionalities, we present a prototype design and demonstrate the main features through four use case scenarios motivated by an initiative currently implemented at a children’s hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and the usefulness of the implementation. Conclusions This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners’ ability to provide explanations for the decisions they make.
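As a rough illustration of the knowledge-graph-driven, explanation-producing recommendation idea described above, the sketch below builds a tiny graph of invented ACE-related concepts and derives a recommendation together with the pathway that justifies it. The concepts, relation names, and the use of networkx are stand-in assumptions; the actual prototype relies on the authors' ontology, the Google DialogFlow engine, and third-party graph technology not named here.

# Illustrative sketch only: invented ACE-related concepts and relations in a
# small directed graph, plus a rule that recommends interventions whose
# "mitigates" edge targets a risk factor on a path to the observed problem,
# returning the pathway as the explanation.
import networkx as nx

g = nx.DiGraph()
g.add_edge("parental_incarceration", "toxic_stress", relation="contributes_to")
g.add_edge("toxic_stress", "anxiety_symptoms", relation="increases_risk_of")
g.add_edge("anxiety_symptoms", "school_avoidance", relation="associated_with")
g.add_edge("family_counseling_referral", "toxic_stress", relation="mitigates")

def explainable_recommendation(graph: nx.DiGraph, observed: str) -> list[str]:
    """List interventions that mitigate a factor upstream of the observed
    problem, each paired with the path that explains the recommendation."""
    recommendations = []
    for node in graph.nodes:
        for _, target, data in graph.out_edges(node, data=True):
            if data.get("relation") != "mitigates":
                continue
            if target == observed or nx.has_path(graph, target, observed):
                path = nx.shortest_path(graph, target, observed)
                recommendations.append(
                    f"{node}: addresses {target} (pathway: {' -> '.join(path)})"
                )
    return recommendations

# Example query for an observed problem; prints the recommendation and its
# explanatory pathway through the graph.
print(explainable_recommendation(g, "school_avoidance"))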


2009
Author(s): Caroline Kelly, Katherine Jakle, Anna Leshner, Kerri Schutz, Marissa Burgoyne, ...
