Cognitive Systems Engineering
Recently Published Documents

Total documents: 113 (five years: 9)
H-index: 11 (five years: 1)

2021
Author(s): Robert Hoffman, Shane T. Mueller, Gary Klein, Jordan Litman

Trust in automation is of concern in computer science and cognitive systems engineering, as well as in the popular media (e.g., Chancey et al., 2015; Hoff and Bashir, 2015; Hoffman et al., 2009; Huynh et al., 2006; Naone, 2009; Merritt and Ilgen, 2008; Merritt et al., 2013, 2015a; Pop et al., 2015; Shadbolt, 2002; Wickens et al., 2015; Woods and Hollnagel, 2006). Trust is of particular concern as more AI systems are being developed and tested (Schaefer et al., 2016).


Author(s): Philip J. Smith, Karen Feigh, Nadine Sarter, David Woods

One of the impacts of the pandemic has been a rapid increase in the development and offering of online courses focused on cognitive systems engineering. This presents opportunities to:
● Identify and share alternative instructional design strategies, and more specific instructional tactics, tailored to the online environment, and learn from each other's experiences.
● Discuss how lessons learned from the design and offering of online courses can not only inform future offerings of online courses but also generalize to in-person courses.
● Identify the opportunities created by online instruction to reach a broader audience, not only geographically but also in terms of reaching practitioners whose specializations are outside of human factors.
The panelists have expertise and experience in all of these areas. Their perspectives are briefly described below.


Author(s): Jacob Keller, Martijn IJtsma

Human-machine teams (HMTs) in complex work domains need to be able to adapt to variable and uncertain work demands. Computational modeling and simulation can provide novel approaches to the evaluation of HMTs performing complex joint activities, affording large-scale, quantitative analysis of team characteristics (such as system architecture and governance protocols) and their effects on resilience. Drawing from literature in resilience engineering, human-automation interaction, and cognitive systems engineering, this paper provides a theoretical exploration of the use of computational modeling and simulation to analyze resilience in HMTs. Findings from the literature are summarized in a set of requirements that highlight key aspects of resilience in HMTs that need to be accounted for in future modeling and evaluation efforts. These requirements include the need to model HMTs as joint cognitive systems, to account for the interdependent nature of activity and the temporal dynamics of work, and to support formative exploration and inquiry. We provide a brief overview of existing modeling and simulation approaches to evaluating HMTs and discuss further steps for operationalizing the identified requirements.


2020
Author(s): Marc Canellas, Rachel Haga

CITE AS: M. Canellas and R. Haga, "Lost in Translation: Building a Common Language for Regulating Autonomous Weapons," IEEE Technology and Society Magazine, vol. 35, no. 3, pp. 50-58, Sept. 2016, doi: 10.1109/MTS.2016.2593218.

There have been three UN meetings of experts, in 2014, 2015, and 2016, to address autonomous weapons systems (AWS), yet little to no progress has been made. In this article, we argue that the fundamental reason for the stalled discussions is the lack of a unifying, technical language for describing and understanding the problems posed by AWS. Such a language would address two major communication issues facing the discussants of AWS: an inability to identify the sources of conflict and the solutions that have consensus, and an inability to operationalize the regulations that are agreed upon. We propose that the language of cognitive systems engineering can serve as this unifying technical language and provide initial answers to the four key questions at the UN: (1) How do we define autonomy? Use the requirements for effective function allocation to develop standards for human-AWS interaction and meaningful human control. (2) What amount or quality of human control is necessary for lawful use of AWS? Use function allocation's models and metrics to evaluate human-AWS interaction and enforce meaningful human control standards. (3) What would an accountability framework look like for AWS? Use the models and metrics for evaluating authority-responsibility mismatches in function allocation to address the AWS responsibility gap. (4) How do we review and certify permissible AWS? Use the human-automation issues that have been explored and addressed by function allocation to develop case studies and technical standards for human-AWS interaction.


Author(s): John Flach, Peter Reynolds, Libby Duryee, Bryan Young, Jeff Graley

The design of digital information management systems for healthcare presents developers with several formidable engineering challenges. These systems must manage huge amounts of data and support communications across disparate platforms and divisions within a healthcare organization. They must ensure that data is kept private, secure, and available to the right people at the right time. However, as shown in other complex systems (e.g., nuclear power), simply making data available may be insufficient. The goals in designing digital healthcare as a ‘cognitive system’ are to present patient information in more meaningful ways, to help healthcare professionals become more productive, and to support healthcare professionals with clinical decision making. This paper describes how principles of cognitive systems engineering (CSE) and user experience (UX) design were applied to Cardiac Consultant, an interactive cardiovascular risk calculator, with those goals in mind.


Author(s): Penelope Sanderson, Tara McCurdie, Tobias Grundgeiger

Objective: We address the problem of how researchers investigate the actual or potential causal connection between interruptions and medical errors, and whether interventions might reduce the potential for harm. Background: It is widely assumed that interruptions lead to errors and patient harm. However, many reviewers and authors have commented that there is not strong evidence for a causal connection. Method: We introduce a framework of criteria for assessing how strongly evidence implies causality: the so-called Bradford Hill criteria. We then examine four key “metanarratives” of research into interruptions in health care—applied cognitive psychology, epidemiology, quality improvement, and cognitive systems engineering—and assess how each tradition has addressed the causal connection between interruptions and error. Results: Outcomes of applying the Bradford Hill criteria are that the applied cognitive psychology and epidemiology metanarratives address the causal connection relatively directly, whereas the quality improvement metanarrative merely assumes causality, and the cognitive systems engineering metanarrative either implicitly or explicitly questions the feasibility of finding a direct causal connection with harm. Conclusion: The Bradford Hill criteria are useful for evaluating the existing literature on the relationship between interruptions in health care, clinical errors, and the potential for patient harm. In the future, more attention is needed to the issue of why interruptions usually do not lead to harm, and the implications for how we approach patient safety.


Author(s): Neelam Naikar, Ashleigh Brady

This chapter presents a perspective of human expertise in sociotechnical systems based on the phenomenon of self-organization. Consistent with the ideals of the field of cognitive systems engineering, this perspective is based on empirical observations of how work is achieved in complex settings and incorporates an emphasis on design. The proposed perspective is motivated by the observation that workers in sociotechnical systems adapt not just their individual behaviors, but also their collective structures, in ways that are closely fitted to the evolving circumstances, such that these systems are necessarily self-organizing, a phenomenon that is essential for dealing with complexity in the task environment. Accordingly, the chapter explores in depth the theoretical and design implications of the phenomenon of self-organization for understanding and supporting human expertise in sociotechnical systems, and draws attention to the broader implications of this phenomenon for advancing a social basis for human cognition.


2018
Vol. 85, pp. 138-148
Author(s): April Savoy, Laura G. Militello, Himalaya Patel, Mindy E. Flanagan, Alissa L. Russ, ...
