Assessing Ethical Thinking about AI

2020 ◽  
Vol 34 (09) ◽  
pp. 13525-13528
Author(s):  
Judy Goldsmith ◽  
Emanuelle Burton ◽  
David M. Dueber ◽  
Beth Goldstein ◽  
Shannon Sampson ◽  
...  

As is evidenced by the associated AI, Ethics and Society conference, we now take as given the need for ethics education in the AI and general CS curricula. The anticipated surge in AI ethics education will force the field to reckon with delineating, and then evaluating, learner outcomes in order to determine what is working and to improve what is not. We argue that this ethics education should be more descriptive than normative in focus, and we propose the development of assessments that can measure descriptive ethical thinking about AI. Such an assessment tool for measuring ethical reasoning capacity in CS contexts must be designed to produce reliable scores with established validity evidence concerning their interpretation and use.

2018 ◽  
Vol 13 (1) ◽  
pp. 99-106 ◽  
Author(s):  
Maria Cecilie Havemann ◽  
Torur Dalsgaard ◽  
Jette Led Sørensen ◽  
Kristin Røssaak ◽  
Steffen Brisling ◽  
...  

2015 ◽  
Vol 4 (3) ◽  
pp. 180-188
Author(s):  
Sivalingam Nalliah ◽  
Chandramani Thuraisingham ◽  
Su Ping Ong

In a pilot study exploring whether reading fictional works by medical writers could be used to formatively assess learning of Humanism and Bioethics, a medical student on her elective rotation at International Medical University (IMU) was assigned to read a storybook about daily life and suffering written by a medical writer, and then to write a reflective narrative report that was assessed, with guided reflection, by her mentor. Reading fiction by medical writers during students' leisure time may prove a worthwhile and enjoyable way for them to develop higher levels of clinical competence in the realm of humanism and bioethics. The student's report in this pilot study showed that she had gained experiential learning in three areas: self-reflection and self-awareness, empathy, and ethical reasoning skills. Although Bioethics and Professionalism are taught through formal face-to-face teaching in classrooms and the clinical setting across all ten semesters of the medical program, the observations from this pilot study suggest that reading fiction by medical writers could be explored further as an innovative tool for formatively assessing the learning of Humanism and Bioethics.


2021 ◽  
Vol 36 (1) ◽  
Author(s):  
Btissam El Hassar ◽  
Cheryl Poth ◽  
Rebecca Gokiert ◽  
Okan Bulut

Organizations are required to evaluate their programs for both learning and accountability purposes, which has increased the need to build their internal evaluation capacity. A remaining challenge is access to tools that lead to valid evidence supporting internal capacity development. The authors share practical insights from the development and use of the Evaluation Capacity Needs Assessment tool and framework, and implications for using its data to make concrete decisions within Canadian contexts. The article refers to validity evidence generated from factor analyses and structural equation modelling, and describes how the framework can be applied to identify individual and organizational evaluation capacity strengths and gaps, concluding with practice considerations and future directions for this work.
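The factor-analytic evidence mentioned above is typically produced by fitting a factor model to item-level survey responses. The sketch below shows, purely as an illustration, what an exploratory factor analysis might look like in Python with the factor_analyzer package; the file name, item columns, and three-factor choice are assumptions for the example, not details of the Evaluation Capacity Needs Assessment instrument.

```python
# Illustrative sketch only: exploratory factor analysis on survey items.
# The file name, columns, and factor count are hypothetical, not the ECNA's.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("capacity_items.csv")   # one row per respondent, one column per item

# The number of factors would normally be chosen from eigenvalues,
# a scree plot, or parallel analysis rather than fixed in advance.
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))            # item-by-factor loadings
print(efa.get_factor_variance())    # variance explained per factor
```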


2013 ◽  
Vol 20 (3) ◽  
pp. 321-324 ◽  
Author(s):  
Maya S. Iyer ◽  
Sally A. Santen ◽  
Michele Nypaver ◽  
Kavita Warrier ◽  
Stuart Bradin ◽  
...  

2016 ◽  
Vol 2 (3) ◽  
pp. 61-67 ◽  
Author(s):  
Jane Runnacles ◽  
Libby Thomas ◽  
James Korndorffer ◽  
Sonal Arora ◽  
Nick Sevdalis

Introduction: Debriefing is essential to maximise the simulation-based learning experience, but until recently, there was little guidance on an effective paediatric debriefing. A debriefing assessment tool, Objective Structured Assessment of Debriefing (OSAD), has been developed to measure the quality of feedback in paediatric simulation debriefings. This study gathers and evaluates the validity evidence of OSAD with reference to the contemporary hypothesis-driven approach to validity. Methods: Expert input on the paediatric OSAD tool from 10 paediatric simulation facilitators provided validity evidence based on content and feasibility (phase 1). Evidence for internal structure validity was sought by examining reliability of scores from video ratings of 35 postsimulation debriefings; and evidence for validity based on relationship to other variables was sought by comparing results with trainee ratings of the same debriefings (phase 2). Results: Simulation experts' scores were significantly positive regarding the content of OSAD and its instructions. OSAD's feasibility was demonstrated with positive comments regarding clarity and application. Inter-rater reliability was demonstrated with intraclass correlations above 0.45 for 6 of the 7 dimensions of OSAD. The internal consistency of OSAD (Cronbach α) was 0.78. Pearson correlation of trainee total score with OSAD total score was 0.82 (p<0.001), demonstrating validity evidence based on relationships to other variables. Conclusion: The paediatric OSAD tool provides a structured approach to debriefing, which is evidence-based, has multiple sources of validity evidence and is relevant to end-users. OSAD may be used to improve the quality of debriefing after paediatric simulations.
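For readers less familiar with the reliability statistics reported above, the sketch below shows how Cronbach's α and a Pearson correlation between two sets of total scores can be computed from a ratings matrix. The data are simulated placeholders and the variable names are invented; this illustrates the statistics themselves, not the study's analysis code.

```python
# Illustrative sketch of the reported reliability statistics; all data are
# simulated placeholders, not the OSAD study's ratings.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (debriefings x dimensions) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(35, 1))                   # overall quality of each debriefing
noise = rng.integers(-1, 2, size=(35, 7))                 # per-dimension variation
osad = np.clip(base + noise, 1, 5).astype(float)          # 35 debriefings x 7 OSAD dimensions

alpha = cronbach_alpha(osad)

# Relationship to another variable: correlate expert totals with trainee totals.
expert_total = osad.sum(axis=1)
trainee_total = expert_total + rng.normal(0, 2, size=35)  # placeholder trainee ratings
r, p = pearsonr(expert_total, trainee_total)
print(f"alpha = {alpha:.2f}, r = {r:.2f}, p = {p:.3g}")
```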


2020 ◽  
Vol 95 (1) ◽  
pp. 129-135
Author(s):  
Brittany N. Hasty ◽  
James N. Lau ◽  
Ara Tekian ◽  
Sarah E. Miller ◽  
Edward S. Shipper ◽  
...  

2020 ◽  
Vol 8 (10) ◽  
pp. 1-14
Author(s):  
Jose Figueiredo

Engineers need to understand and measure the consequences of their actions in terms of value for the organization and the community, and in terms of sustainability. This simple formulation, however, hides a background of complexity that is not easy to identify. Students can develop an ethical consciousness, but their awareness is usually fragmented; it is not deeply internalized. Yet one cannot exercise ethical thinking only in certain circumstances, attending to some aspects and not the whole. What this paper aims to achieve is to raise awareness so that ethical reasoning becomes possible and ethics becomes a whole system in engineering life. Three aligned methodological approaches guide the work: Actor Network Theory (to formulate the settings and problematize goals), the Bologna framework (to visit the roots of an “innovative” learning breakdown), and narrative (as storytelling, a constructivist way to reason and explore reflections).


2009 ◽  
Vol 86 (3) ◽  
pp. 654-672 ◽  
Author(s):  
Teresa Correa

This study investigated experimentally whether social class of people who appear in news stories influences Chilean journalists' ethical reasoning. Based on schema, social identity, and moral development theories, it found that journalists applied lower levels of ethical reasoning when faced with an ethical dilemma associated with the poor, an effect moderated by participants' involvement in the story. Psychological mechanisms—such as involvement, mental elaboration about stories' subjects, and identification with them—influenced participants' ethical thinking.
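The moderation effect described here is the kind of result that, in a regression framework, is usually tested with an interaction term. The sketch below is a generic illustration in Python with statsmodels; the variable names and data file are hypothetical and do not reproduce the study's experimental design or analysis.

```python
# Generic illustration of testing moderation with an interaction term;
# the file and variable names are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("journalist_responses.csv")  # one row per participant

# A significant social_class x involvement interaction would indicate that
# involvement moderates the effect of the story subject's social class
# on ethical reasoning scores.
model = smf.ols("ethical_reasoning ~ C(social_class) * involvement", data=df).fit()
print(model.summary())
```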


2020 ◽  
Vol 12 (4) ◽  
pp. 447-454
Author(s):  
Cristina E. Welch ◽  
Melissa M. Carbajal ◽  
Shelley Kumar ◽  
Satid Thammasitboon

Background: Recent studies showed that psychological safety is important to resident perception of the work environment, and improved psychological safety improves resident satisfaction survey scores. However, there is no evidence in the medical education literature specifically addressing relationships between psychological safety and learning behaviors or its impact on learning outcomes. Objective: We developed and gathered validity evidence for a group learning environment assessment tool using Edmondson's Teaming Theory and Webb's Depth of Knowledge model as a theoretical framework. Methods: In 2018, investigators developed the preliminary tool. The authors administered the resulting survey to neonatology faculty and trainees at Baylor College of Medicine morning report sessions and collected validity evidence (content, response process, and internal structure) to describe the instrument's psychometric properties. Results: Between December 2018 and July 2019, 450 surveys were administered, and 393 completed surveys were collected (87% response rate). Exploratory and confirmatory factor analyses testing the 3-factor measurement model of the 15-item tool showed acceptable fit of the hypothesized model, with standardized root mean square residual = 0.034, root mean square error of approximation = 0.088, and comparative fit index = 0.987. Standardized path coefficients ranged from 0.66 to 0.97. Almost all absolute standardized residual correlations were less than 0.10. Cronbach's alpha scores showed internal consistency of the constructs. There was a high correlation among the constructs. Conclusions: Validity evidence suggests the developed group learning assessment tool is a reliable instrument to assess psychological safety, learning behaviors, and learning outcomes during group learning sessions such as morning report.
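The three-factor confirmatory model and fit indices reported above can be outlined with a structural equation modelling package. The sketch below uses the Python semopy library purely as an illustration; the item names, the item-to-construct mapping, and the data file are assumptions for the example and do not correspond to the instrument's actual 15 items.

```python
# Illustrative sketch of a 3-factor confirmatory factor analysis with semopy;
# the item names, groupings, and data file are hypothetical placeholders.
import pandas as pd
from semopy import Model, calc_stats

spec = """
PsychSafety   =~ item1 + item2 + item3 + item4 + item5
LearnBehavior =~ item6 + item7 + item8 + item9 + item10
LearnOutcome  =~ item11 + item12 + item13 + item14 + item15
"""

data = pd.read_csv("group_learning_survey.csv")  # one row per completed survey
model = Model(spec)
model.fit(data)

print(calc_stats(model))            # fit indices such as CFI and RMSEA
print(model.inspect(std_est=True))  # loadings with standardized estimates
```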

