Detecting and Characterizing Archetypes of Unintended Consequences in Engineered Systems

Author(s):  
Hannah S. Walsh ◽  
Andy Dong ◽  
Irem Y. Tumer ◽  
Guillaume Brat

Abstract: When designing engineered systems, the potential for unintended consequences of design policies exists despite best intentions. The effects of risk factors for unintended consequences are often known only in hindsight. However, since historical knowledge is generally associated with a single event, it is difficult to uncover general trends in the formation and types of unintended consequences. In this research, archetypes of unintended consequences are learned from historical data. This research contributes toward the understanding of archetypes of unintended consequences by applying machine learning to a large data set of lessons learned from adverse events at NASA. Sixty-six archetypes are identified, grouped because they share similar sets of risk factors such as complexity and human-machine interaction. To validate the learned archetypes, system dynamics representations of the archetypes are compared to known high-level archetypes of unintended consequences. The main contribution of the paper is a set of archetypes that apply to many engineered systems and a pattern of leading indicators that opens a new path to manage unintended consequences and mitigate the magnitude of potentially adverse outcomes.
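The grouping idea behind the learned archetypes can be conveyed in a few lines. The lesson IDs and risk-factor names below are invented, and the exact-set grouping is only a stand-in for the paper's actual machine-learning pipeline:

```python
# Toy sketch: group lessons-learned records into candidate "archetypes"
# by their shared risk-factor sets. Lesson IDs and factor names are
# hypothetical; the study used machine learning, not exact-set matching.
from collections import defaultdict

lessons = [
    ("LL-0001", {"complexity", "human-machine interaction"}),
    ("LL-0002", {"complexity", "human-machine interaction"}),
    ("LL-0003", {"schedule pressure", "complexity"}),
]

archetypes = defaultdict(list)
for lesson_id, factors in lessons:
    archetypes[frozenset(factors)].append(lesson_id)

for factors, members in archetypes.items():
    print(sorted(factors), "->", members)
```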

BMJ Open ◽  
2019 ◽  
Vol 9 (2) ◽  
pp. e022137 ◽  
Author(s):  
Allison S Letica-Kriegel ◽  
Hojjat Salmasian ◽  
David K Vawdrey ◽  
Brett E Youngerman ◽  
Robert A Green ◽  
...  

Motivation: Catheter-associated urinary tract infections (CAUTI) are a common and serious healthcare-associated infection. Despite many efforts to reduce the occurrence of CAUTI, there remains a gap in the literature about CAUTI risk factors, especially pertaining to the effect of catheter dwell-time on CAUTI development and patient comorbidities. Objective: To examine how the risk for CAUTI changes over time, and to assess whether time from catheter insertion to CAUTI event varied according to risk factors such as age, sex, patient type (surgical vs medical) and comorbidities. Design: Retrospective cohort study of all patients who were catheterised from 2012 to 2016, including those who did and did not develop CAUTIs. Both paediatric and adult patients were included. Indwelling urinary catheterisation is the exposure variable. The variable is interval, as all participants were exposed but for different lengths of time. Setting: Urban academic health system of over 2500 beds. The system encompasses two large academic medical centres, two community hospitals and a paediatric hospital. Results: The study population was 47 926 patients who had 61 047 catheterisations, of which 861 (1.41%) resulted in a CAUTI. CAUTI rates were found to increase non-linearly for each additional day of catheterisation; CAUTI-free survival was 97.3% (CI: 97.1 to 97.6) at 10 days, 88.2% (CI: 86.9 to 89.5) at 30 days and 71.8% (CI: 66.3 to 77.8) at 60 days. This translated to an instantaneous HR of 0.49%–1.65% in the 10–60 day time range. Paraplegia, cerebrovascular disease and female sex were found to significantly increase the chances of a CAUTI. Conclusions: Using a very large data set, we demonstrated the incremental risk of CAUTI associated with each additional day of catheterisation, as well as the risk factors that increase the hazard for CAUTI. Special attention should be given to patients carrying these risk factors, for example, females or those with mobility issues.
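The CAUTI-free survival figures above come from Kaplan-Meier estimation. A minimal sketch of the estimator, run on hypothetical follow-up data rather than the study's cohort, looks like this:

```python
# Minimal Kaplan-Meier estimator for CAUTI-free survival. The follow-up
# days and event flags below are hypothetical, not the study's data.
def kaplan_meier(times, events):
    """times: follow-up in days; events: 1 = CAUTI, 0 = censored.
    Returns (day, survival) steps at each day with at least one event."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve, i = len(times), 1.0, [], 0
    while i < len(order):
        day, d, removed = times[order[i]], 0, 0
        while i < len(order) and times[order[i]] == day:
            d += events[order[i]]
            removed += 1
            i += 1
        if d:  # survival drops only at days with events
            surv *= (at_risk - d) / at_risk
            curve.append((day, surv))
        at_risk -= removed
    return curve

times = [2, 5, 5, 8, 12, 15, 20, 30]
events = [0, 1, 0, 1, 0, 1, 0, 0]
for day, s in kaplan_meier(times, events):
    print(f"day {day}: CAUTI-free survival {s:.3f}")
```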


2011 ◽  
Vol 29 (27_suppl) ◽  
pp. 8-8
Author(s):  
J. L. B. Bevilacqua ◽  
M. W. Kattan ◽  
C. Yu ◽  
S. Koifman ◽  
I. E. Mattos ◽  
...  

Background: Lymphedema (LE) after axillary dissection (AD) is a multifactorial, chronic, and disabling condition that currently affects an estimated 4 million people worldwide. Although several risk factors have been described, it is difficult to estimate the risk in individual patients. We therefore developed nomograms based on a large data set. Methods: Clinicopathological features were collected from a prospective cohort study of 1,054 women with unilateral breast cancer undergoing AD as part of their surgical treatment from 8/2001 to 11/2002. LE was defined as a volume difference of at least 200 mL between arms at 6 months or more after surgery. The cumulative incidence of LE was ascertained by the Kaplan-Meier method, and Cox proportional hazards models were used to predict the risk of developing LE based on the available data at each timepoint: (I) preoperatively; (II) within 6 months from surgery; and (III) 6 months or later after surgery. Results: The 5-year cumulative incidence of LE was 30.3%. Independent risk factors for LE were age, body mass index, ipsilateral arm chemotherapy infusions, level of AD, location of radiotherapy field, development of postoperative seroma, infection, and early edema. When applied to the validation set, the concordance indexes were 0.706, 0.729, and 0.736 for models I, II, and III, respectively. Conclusions: The proposed nomograms can help physicians and patients to predict the 5-year probability of LE after AD for breast cancer. Free online versions of the nomograms will be available.
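The concordance indexes reported for the models measure how often the model ranks pairs of patients correctly. A bare-bones sketch of Harrell's c on toy data (not the study's cohort) is:

```python
# Bare-bones Harrell's concordance index on toy data: among comparable
# pairs, the fraction where the patient who developed lymphedema earlier
# received the higher predicted risk. Ties in risk count as half.
def concordance_index(times, events, risk):
    conc = ties = total = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # comparable pair
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total

# a risk ranking that matches the event order is perfectly concordant
print(concordance_index([1, 2, 3, 4], [1, 1, 0, 1], [4, 3, 2, 1]))  # 1.0
```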


2012 ◽  
pp. 2016-2026
Author(s):  
Hong Lin ◽  
Jeremy Kemp ◽  
Padraic Gilbert

Gamma Calculus is an inherently parallel, high-level programming model that allows simple programming “molecules” to interact, creating a complex system with a minimum of coding. Programs modeled in Gamma calculus were written on top of IBM’s TSpaces middleware, which is Java-based and uses a “Tuple Space” model for communication similar to that in Gamma. A parser was written in C++ to translate the Gamma syntax. This was implemented on UHD’s grid cluster (grid.uhd.edu), and in an effort to increase performance and scalability, existing Gamma programs are being transferred to Nvidia’s CUDA architecture. General-purpose GPU computing is well suited to running Gamma programs, as GPUs excel at running the same operation on a large data set, potentially offering a large speedup.
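Gamma's "chemical reaction" style can be conveyed with a short sketch: a multiset of values reacts pairwise until no pair satisfies the reaction condition. The sketch below is sequential and illustrative only; the model's point is that reactions can fire in parallel, as in the TSpaces and CUDA implementations described above.

```python
# Gamma-style "chemical" execution sketch: elements of a multiset react
# pairwise until no pair satisfies the reaction condition.
def gamma(pool, condition, action):
    pool = list(pool)
    reacted = True
    while reacted:
        reacted = False
        for i in range(len(pool)):
            for j in range(len(pool)):
                if i != j and condition(pool[i], pool[j]):
                    x, y = pool[i], pool[j]
                    # consume the reactants, add the products
                    pool = [e for k, e in enumerate(pool) if k not in (i, j)]
                    pool.extend(action(x, y))
                    reacted = True
                    break
            if reacted:
                break
    return pool

# maximum: any two molecules react and only the larger survives
print(gamma([3, 1, 4, 1, 5], lambda x, y: True, lambda x, y: [max(x, y)]))  # [5]
```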


2020 ◽  
Vol 52 ◽  
pp. 93-98.e2 ◽  
Author(s):  
Gaspar Manuel Parra-Bracamonte ◽  
Nicolas Lopez-Villalobos ◽  
Francisco E. Parra-Bracamonte

2021 ◽  
Vol 11 (11) ◽  
pp. 1213
Author(s):  
Morteza Esmaeili ◽  
Riyas Vettukattil ◽  
Hasan Banitalebi ◽  
Nina R. Krogh ◽  
Jonn Terje Geitung

Primary malignancies in adult brains are globally fatal. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as black boxes, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms examined in this study classify some tumor-containing brains based on other, non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human–machine interactions and assist in the selection of optimal training methods.
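One simple post-hoc explainable-AI technique of the kind discussed above is occlusion sensitivity: mask part of the input and measure how much the classifier's score drops. The sketch below is illustrative, not necessarily the study's method, and its "classifier" is a stand-in that scores mean image intensity:

```python
# Occlusion-sensitivity map, a simple post-hoc explanation technique.
# The "classifier" is a hypothetical stand-in scoring mean intensity,
# so the map simply highlights the pixels driving that score.
def score(img):
    flat = [v for row in img for v in row]
    return sum(flat) / len(flat)

def occlusion_map(img, baseline=0):
    """Score drop when each pixel is replaced by a baseline value.
    High values mark regions the classifier relies on."""
    h, w = len(img), len(img[0])
    base = score(img)
    heat = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            occluded = [row[:] for row in img]
            occluded[r][c] = baseline
            heat[r][c] = base - score(occluded)
    return heat

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
heat = occlusion_map(img)
# the bright centre pixel accounts for the entire score drop
```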


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Vyacheslav I. Zavalin ◽  
Shawne D. Miksa

Purpose: This paper aims to discuss the challenges encountered in collecting, cleaning and analyzing the large data set of bibliographic metadata records in machine-readable cataloging [MARC 21] format. Possible solutions are presented. Design/methodology/approach: This mixed method study relied on content analysis and social network analysis. The study examined subject representation in MARC 21 metadata records created in 2020 in WorldCat – the largest international database of “big smart data.” The methodological challenges that were encountered and their solutions are examined. Findings: In this general review paper with a focus on methodological issues, the discussion of challenges is followed by a discussion of solutions developed and tested as part of this study. Data collection, processing, analysis and visualization are addressed separately. Lessons learned and conclusions related to challenges and solutions for the design of a large-scale study evaluating MARC 21 bibliographic metadata from WorldCat are given. Overall recommendations for the design and implementation of future research are suggested. Originality/value: There are no previous publications that address the challenges and solutions of data collection and analysis of WorldCat’s “big smart data” in the form of MARC 21 data. This is the first study to use a large data set to systematically examine MARC 21 library metadata records created after the most recent addition of new fields and subfields to the MARC 21 Bibliographic Format standard in 2019 based on resource description and access rules. It is also the first to focus its analyses on the networks formed by subject terms shared by MARC 21 bibliographic records in a data set extracted from the heterogeneous centralized database WorldCat.
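The subject-term networks described above can be built by counting, for each record, the pairs of subject terms it contains. A minimal sketch with invented records follows (real input would be subject headings parsed from the 6XX fields of MARC 21 records):

```python
# Building a co-occurrence network of subject terms shared by records.
# Records and terms are invented for illustration; real input would be
# subject headings parsed from MARC 21 6XX fields of WorldCat records.
from collections import Counter
from itertools import combinations

records = [
    {"Climate change", "Oceanography"},
    {"Climate change", "Public policy"},
    {"Climate change", "Marine biology", "Oceanography"},
]

edges = Counter()  # weighted edges: (term_a, term_b) -> shared-record count
for subjects in records:
    for a, b in combinations(sorted(subjects), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: {weight}")
```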


2020 ◽  
Vol 07 (01) ◽  
pp. 15-24
Author(s):  
Paul Bello ◽  
Will Bridewell

If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action, and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency, and indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, and one that hasn’t been well-explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as first steps toward an architecture to support the type of agency that rich human–machine interaction will undoubtedly demand.


2020 ◽  
pp. 0887302X2093119 ◽  
Author(s):  
Rachel Rose Getman ◽  
Denise Nicole Green ◽  
Kavita Bala ◽  
Utkarsh Mall ◽  
Nehal Rawat ◽  
...  

With the proliferation of digital photographs and the increasing digitization of historical imagery, fashion studies scholars must consider new methods for interpreting large data sets. Computational methods to analyze visual forms of big data have been underway in the field of computer science through computer vision, where computers are trained to “read” images through a process called machine learning. In this study, fashion historians and computer scientists collaborated to explore the practical potential of this emergent method by examining a trend related to one particular fashion item—the baseball cap—across two big data sets—the Vogue Runway database (2000–2018) and the Matzen et al. Streetstyle-27K data set (2013–2016). We illustrate one implementation of high-level concept recognition to map a fashion trend. Tracking trend frequency helps visualize larger patterns and cultural shifts while creating sociohistorical records of aesthetics, which benefits fashion scholars and industry alike.
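Tracking trend frequency of the kind described above reduces to counting, per year, the share of images in which the item was detected. The detections below are invented; real labels would come from a trained detector run over the Vogue Runway and Streetstyle-27K images:

```python
# Trend frequency: share of images per year in which the item of
# interest (a baseball cap) was detected. Detections are hypothetical.
from collections import Counter

detections = [  # (year, cap_detected)
    (2013, True), (2013, False),
    (2014, True), (2014, True), (2014, False),
    (2015, False), (2015, True), (2015, True),
]

totals, hits = Counter(), Counter()
for year, cap in detections:
    totals[year] += 1
    hits[year] += cap

freq = {year: hits[year] / totals[year] for year in sorted(totals)}
print(freq)
```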


Author(s):  
Shan Chen ◽  
Bin Yao ◽  
Zheng Chen ◽  
Xiaocong Zhu ◽  
Shiqiang Zhu

The control objective of an exoskeleton for human performance augmentation is to minimize the human-machine interaction force while carrying external loads and following human motion. This paper addresses the dynamics and the interaction force control of a 1-DOF hydraulically actuated joint exoskeleton. A spring with unknown stiffness is used to model the human-machine interface. A cascade force control method is adopted, with the high-level controller generating the reference position command and the low-level controller performing motion tracking. Adaptive robust control (ARC) algorithms are developed for both controllers to deal with the effect of parametric uncertainties and uncertain nonlinearities of the system. The proposed adaptive robust cascade force controller can achieve a small human-machine interaction force and good robust performance against model uncertainty, as validated by experiments.
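The cascade structure can be sketched with a toy simulation: an outer loop drives the spring-model interaction force to zero by adjusting a position reference, and an inner loop tracks that reference. All gains, the stiffness, and the simple PD plant below are illustrative stand-ins, not the paper's adaptive robust (ARC) design:

```python
# Cascade force-control sketch for a 1-DOF joint with a spring
# human-machine interface (hypothetical parameters throughout).
k_s = 100.0           # assumed interface spring stiffness [N/m]
m = 1.0               # joint mass [kg]
dt = 0.001            # integration step [s]
kp_f = 5e-5           # outer (force) loop gain
kp, kd = 400.0, 40.0  # inner (position) PD gains

x_h = 0.05            # human position step input [m]
x, v, x_ref = 0.0, 0.0, 0.0
for _ in range(20000):                    # simulate 20 s
    f_int = k_s * (x_h - x)               # interaction force (spring model)
    x_ref += kp_f * f_int                 # outer loop: integrate force error
    a = (kp * (x_ref - x) - kd * v) / m   # inner loop: PD position tracking
    v += a * dt
    x += v * dt
# the joint converges to the human position, driving f_int toward zero
```

The outer loop is deliberately much slower than the inner position loop, the usual separation assumption that makes a cascade design tractable.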


10.29007/ztmt ◽  
2020 ◽  
Author(s):  
Alexander Iliev ◽  
Peter Stanchev

In this article we summarize, at a high level, some of the popular smart technologies that may comprise many smart city ecosystems. More specifically, we emphasize the automation of various processes based on the extraction and analysis of digital media, through speech signals and images. Currently, there are many productized systems for personalization and recommendation of digital media content, as well as various services in different areas. Most of them are developed with human-machine interaction in mind. Usually, this is done through conventional use of a mouse and a keyboard: the user types their response manually, which is then recorded by the system for further analysis.

