Evaluating human-automation interaction using task analytic behavior models, strategic knowledge-based erroneous human behavior generation, and model checking

Author(s):  
Matthew L. Bolton ◽  
Ellen J. Bass

Author(s):  
Meng Li ◽  
Sara Behdad ◽  
Matthew L. Bolton

Model checking is increasingly being used with task analytic behavior models to prove whether models of human-interactive systems are safe and reliable. Such methods could be used to predict how different types of users will choose to use system features. However, existing methods focus on modeling the full space of possible human behaviors without considering how users will choose to navigate this space. In this work, we present a new approach that enables model checking to predict how different types of users will use features of an interactive system by employing a novel combination of task analytic modeling and utility theory. This paper presents this method and illustrates its power with a smart thermostat application. The results of the application analysis and its implications for future research are discussed.
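The combination of task analytic modeling and utility theory described in this abstract can be illustrated with a minimal sketch. Everything below is hypothetical (the feature names, attributes, and weights are invented for illustration, not taken from the paper's model): each user type weights the attributes of a feature differently, and the predicted choice is the feature with maximal utility.

```python
# Minimal utility-theoretic sketch of predicting which interface feature a
# given user type will choose. All attribute values and weights are
# hypothetical, chosen only to illustrate the idea.

FEATURES = {
    # attribute scores in [0, 1] for each hypothetical thermostat feature
    "manual_setpoint": {"effort": 0.8, "learning_cost": 0.0, "comfort": 0.5},
    "weekly_schedule": {"effort": 0.3, "learning_cost": 0.4, "comfort": 0.8},
    "auto_learning":   {"effort": 0.1, "learning_cost": 0.7, "comfort": 0.9},
}

USER_TYPES = {
    # each user type weights the attributes differently; costs are negative
    "novice": {"effort": -0.5, "learning_cost": -1.2, "comfort": 0.6},
    "expert": {"effort": -1.0, "learning_cost": -0.1, "comfort": 0.8},
}

def utility(user_type: str, feature: str) -> float:
    """Linear utility: weighted sum of a feature's attributes for a user type."""
    weights = USER_TYPES[user_type]
    attrs = FEATURES[feature]
    return sum(weights[a] * attrs[a] for a in attrs)

def predicted_choice(user_type: str) -> str:
    """The feature a user of this type is predicted to select (max utility)."""
    return max(FEATURES, key=lambda f: utility(user_type, f))
```

With these invented weights, the novice's learning-cost aversion steers them to the manual setpoint, while the expert's effort aversion steers them to the automated feature; in the paper's actual method, the choice is resolved within the task model during model checking rather than by a standalone argmax.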


2008 ◽  
Author(s):  
Steven Solomon ◽  
Michael van Lent ◽  
Mark Core ◽  
Paul Carpenter ◽  
Milton Rosenberg

2017 ◽  
Vol 46 (6) ◽  
pp. 985-1002 ◽  
Author(s):  
Gian Paolo Cimellaro ◽  
Fabrizio Ozzello ◽  
Alessio Vallero ◽  
Stephen Mahin ◽  
Benshun Shao

Author(s):  
Yì N Wáng ◽  
Xu Li

Abstract We introduce a logic of knowledge in a framework in which knowledge is treated as a kind of belief. The framework is based on a standard KD45 characterization of belief, and knowledge is characterized by the classical tripartite analysis as justified true belief, which links naturally to logics of evidence and justification. This interpretation of knowledge avoids the unwanted properties of logical omniscience, independent of the choice of the base logic of belief. We axiomatize the logic, prove its soundness and completeness, and establish the computational complexity of its model checking and satisfiability problems. We extend the logic to a multi-agent setting and introduce a variant in which belief is characterized in a weaker system to avoid the problem of logical omniscience.
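The tripartite reading of knowledge in this abstract can be sketched as a truth condition over a KD45 belief model extended with a justification condition. This is an illustrative schema only, not the paper's exact semantic clause; in particular the justification operator $J$ below is an assumed placeholder for whatever evidence-based condition the framework uses.

```latex
% Knowledge as justified true belief (illustrative schema, not the
% paper's exact clause):
M, w \models K\varphi
  \quad\text{iff}\quad
  M, w \models \varphi        % truth
  \;\text{ and }\;
  M, w \models B\varphi       % belief, interpreted over a KD45 frame
  \;\text{ and }\;
  M, w \models J\varphi       % justification/evidence for \varphi
```

Because $K\varphi$ requires the extra justification conjunct rather than closure under the belief modality alone, the knowledge operator need not inherit the closure properties that produce logical omniscience in the base belief logic.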


Author(s):  
Matthew L Bolton ◽  
Ellen J. Bass

Predicting failures in complex, human-interactive systems is difficult because failures may occur under rare operational conditions and may be influenced by many factors, including the system mission, the human operator's behavior, device automation, human-device interfaces, and the operational environment. This paper presents a method that integrates task analytic models of human behavior with formal models and model checking to formally verify properties of human-interactive systems. The method is illustrated with a case study: the programming of a patient-controlled analgesia pump. Two specifications, one of which produces a counterexample, illustrate the analysis and visualization capabilities of the method.
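The core mechanism this abstract relies on, exhaustively checking a safety property and returning a counterexample trace when it fails, can be sketched at toy scale. The state names below are a hypothetical fragment of pump programming invented for illustration; real analyses use a symbolic model checker over the full composed model, not an explicit-state search like this.

```python
from collections import deque

# Toy explicit-state safety check with counterexample extraction.
# States and transitions are a hypothetical fragment of pump programming.
TRANSITIONS = {
    "idle":              ["enter_dose"],
    "enter_dose":        ["confirm", "start_unconfirmed"],
    "confirm":           ["start_confirmed"],
    "start_confirmed":   [],
    "start_unconfirmed": [],  # unsafe: infusion starts without confirmation
}

UNSAFE = {"start_unconfirmed"}

def check_safety(initial="idle"):
    """Breadth-first search of the state graph.

    Returns None if no unsafe state is reachable; otherwise returns a
    counterexample: the shortest path from the initial state to an
    unsafe state.
    """
    queue = deque([[initial]])
    visited = {initial}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state in UNSAFE:
            return path  # counterexample trace
        for nxt in TRANSITIONS[state]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # property holds: no unsafe state reachable
```

Here the returned trace plays the role of the counterexample the abstract mentions: it is exactly the step-by-step scenario an analyst would visualize to understand how the violation arises.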


2010 ◽  
Vol 20 (1) ◽  
pp. 681-693 ◽  
Author(s):  
Charlotte Seidner ◽  
Jean-Philippe Lerat ◽  
Olivier H. Roux
