The MIDAS Human Performance Model

Author(s):  
Sherman W. Tyler ◽  
Christian Neukom ◽  
Michael Logan ◽  
Jay Shively

A unique software tool for conducting human factors analyses of complex human-machine systems has been developed at NASA Ames Research Center. Called the Man-Machine Integration Design and Analysis System (MIDAS), this simulation system contains models of human performance that can be used to evaluate candidate procedures, controls, and displays before resorting to more expensive and time-consuming hardware simulators and human-subject experiments. While this tool has been successfully applied to research issues in several domains, particularly in aeronautics, a desire to expand its functionality and its ease of use has led to the construction of a new object-oriented system. This new version of MIDAS contains a substantially modified human performance model, one that aims to be more consistent with empirical data on human behavior and more natural for designers to apply to the analysis of complex new designs. This paper offers a summary of this new human performance model, together with justifications for some of its main components, and indicates plans for its subsequent verification and validation.

Author(s):  
Richard Steinberg ◽  
Raytheon Company ◽  
Alice Diggs ◽  
Raytheon Company ◽  
Jade Driggs

Verification and validation (V&V) for human performance models (HPMs) can be likened to building a house with no bricks, since it is difficult to obtain metrics to validate a model when the system is still in development. HPMs are effective for performing trade-offs between human-system design factors, including the number of operators needed, the allocation of tasks between automation and operators, and the task responsibilities each crew member must assume to operate the system. On a recent government contract, our team used a human performance model to provide analysis beyond traditional trade studies: verifying the contractually mandated staff size for operating the system. This task demanded that the model have sufficient fidelity to support high-confidence staffing decisions, which in turn required a method for verifying and validating the model and its results to ensure that it accurately reflected the real world. The situation posed a dilemma because there was no actual system from which to gather real data for validating the model. Validating human performance models is challenging precisely because they support design decisions before the system exists; crew models, for example, typically inform the design, staffing needs, and the requirements for each operator's user interface prior to development. This paper discusses a successful case study of how our team met these V&V challenges with the US Air Force model accreditation authority and accredited our human performance model with enough fidelity for requirements testing on an Air Force Command and Control program.
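
To give a concrete flavor of the kind of staffing trade-off such a model supports, the sketch below runs a minimal Monte Carlo task-service simulation: tasks arrive at an assumed rate, are handled by a pool of operators, and mean operator utilization is estimated for each candidate crew size. This is not the accredited model described in the paper; every rate, duration, and name is a hypothetical placeholder.

```java
import java.util.PriorityQueue;
import java.util.Random;

/**
 * Minimal staffing trade-off sketch, NOT the accredited model from the paper:
 * exponentially distributed task arrivals are served by a pool of operators,
 * and mean operator utilization is estimated for each candidate crew size.
 * All rates and durations below are hypothetical placeholders.
 */
public class StaffingTradeoffSketch {

    static double simulate(int crewSize, double meanArrivalSec,
                           double meanServiceSec, double shiftSec, long seed) {
        Random rng = new Random(seed);
        // Each entry is the time at which one operator next becomes free.
        PriorityQueue<Double> freeAt = new PriorityQueue<>();
        for (int i = 0; i < crewSize; i++) freeAt.add(0.0);

        double clock = 0.0, busySec = 0.0;
        while (clock < shiftSec) {
            clock += -meanArrivalSec * Math.log(1 - rng.nextDouble()); // exponential inter-arrival
            double service = -meanServiceSec * Math.log(1 - rng.nextDouble());
            double operatorFree = freeAt.poll();              // earliest available operator
            double start = Math.max(clock, operatorFree);     // task may queue for an operator
            freeAt.add(start + service);
            busySec += service;
        }
        // Mean utilization over the shift; values near or above 1 signal overload.
        return busySec / (crewSize * shiftSec);
    }

    public static void main(String[] args) {
        for (int crew = 2; crew <= 5; crew++) {
            double util = simulate(crew, 45.0, 120.0, 8 * 3600.0, 42L);
            System.out.printf("crew=%d  mean utilization=%.2f%n", crew, util);
        }
    }
}
```

In practice, a utilization ceiling or a task-timeliness requirement would then determine the smallest crew size judged acceptable, which is the kind of staffing claim the accredited model was used to verify.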


1980 ◽  
Vol 24 (1) ◽  
pp. 606-607
Author(s):  
Ben B. Morgan

Vigilance is one of the most thoroughly researched areas of human performance. Volumes have been written concerning vigilance performance in both laboratory and real-world settings, and there is a clear trend in the literature toward an increasing emphasis on the study of operational task behavior under environmental conditions that are common to real-world jobs. Although a great deal of this research has been designed to test various aspects of the many theories of vigilance, there is a general belief that vigilance research is relevant and applicable to the performances required in real-world monitoring and inspection tasks. Indeed, many of the reported studies are justified on the basis of their apparent relevance to vigilance requirements in modern man-machine systems, industrial inspection tasks, and military jobs. There is a growing body of literature, however, which suggests that many vigilance studies are of limited applicability to operational task performance. For example, Kibler (1965) has argued that technological changes have altered job performance requirements to the extent that laboratory vigilance studies are no longer applicable to real-world jobs. Many others have simply been unable to reproduce the typical “vigilance decrement” in field situations. This has led Teichner (1974) to conclude that “the decremental function itself is more presumed than established.”


JOURNAL ASRO ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 35
Author(s):  
Didit Herdiawan ◽  
Joni Widjayanto ◽  
Benny Sukandari ◽  
Made Suwandiyana

KRI (Indonesian Navy warships) are among the main components possessed by the Indonesian Navy, whose main tasks are security and national defense at sea. The decommissioning of KRI over 40 years of age leaves fewer ships in service, so research is needed to evaluate and analyze the work system so that the main tasks can still be carried out. The SWOT and CIPP analysis in this study aims to identify several factors that influence the achievement of those main tasks. The results fall in Quadrant I (+; +), which indicates an "on track" status.
Keywords: KRI, SWOT, CIPP.
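
As a hedged illustration of how a SWOT analysis lands in Quadrant I (+; +), the sketch below applies a common weighted-scoring scheme: the internal score (strengths minus weaknesses) gives the x-coordinate and the external score (opportunities minus threats) gives the y-coordinate. The factors, weights, and ratings are invented and are not taken from the study.

```java
/**
 * Illustrative SWOT quadrant scoring sketch (weights and ratings are invented,
 * not taken from the study). The x-axis is the internal score (strengths minus
 * weaknesses), the y-axis is the external score (opportunities minus threats);
 * a (+, +) result corresponds to Quadrant I.
 */
public class SwotQuadrantSketch {

    static double weightedScore(double[][] factors) {
        double score = 0.0;
        for (double[] f : factors) score += f[0] * f[1];   // weight * rating
        return score;
    }

    public static void main(String[] args) {
        // {weight, rating}; ratings for weaknesses and threats are negative.
        double[][] internal = { {0.30, 4}, {0.20, 3}, {0.25, -2}, {0.25, -1} }; // S, S, W, W
        double[][] external = { {0.35, 4}, {0.25, 3}, {0.20, -2}, {0.20, -2} }; // O, O, T, T

        double x = weightedScore(internal);
        double y = weightedScore(external);
        String quadrant = (x >= 0 && y >= 0) ? "I (growth / on track)"
                        : (x < 0 && y >= 0) ? "II"
                        : (x < 0) ? "III" : "IV";
        System.out.printf("internal=%.2f external=%.2f -> Quadrant %s%n", x, y, quadrant);
    }
}
```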


1975 ◽  
Vol 88 (4) ◽  
pp. 703
Author(s):  
James K. Arima ◽  
Thomas B. Sheridan ◽  
William R. Ferrell

Author(s):  
Paul G. Lee ◽  
Daeyong Lee ◽  
Gary A. Gabriele

The proper use of integral attachment features in mechanical assemblies has been more of an art than an engineering science. An organized set of design steps for generating conceptual integral attachment designs has recently been developed based on work begun by Bonenberger. These steps outline a formal design methodology for exploring the design space of possible alternative attachment concepts. This paper describes the development of a software tool that attempts to implement the integral attachment design methodology to assist a designer in developing attachment concepts. The tool is implemented using the Java programming language. A graphical interface is used to present the methodology as a series of options that approximate the design situation. This hides many of the details of the methodology in favor of ease of use. The end result is a set of suggestions for integral fasteners that are matched to the design situation. A discussion of how the hundreds of images are handled using Java is provided. A sample case study illustrates the approach of the program. The tool represents one of the few examples of a design tool aimed specifically at generating design concepts.
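
The abstract does not enumerate the tool's rules, but the flavor of an options-to-suggestions mapping can be sketched as follows; the option categories and suggested feature types below are hypothetical stand-ins for the much richer catalog in the actual tool.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of mapping a few design-situation options to
 * integral-attachment suggestions. The options and rules are invented for
 * illustration; the actual tool's methodology and catalog are far richer.
 */
public class AttachmentSuggestionSketch {

    enum Assembly { PERMANENT, SERVICEABLE }
    enum Direction { PUSH, SLIDE, TWIST }

    static List<String> suggest(Assembly assembly, Direction direction, boolean thinWall) {
        List<String> suggestions = new ArrayList<>();
        if (assembly == Assembly.PERMANENT && direction == Direction.PUSH)
            suggestions.add("cantilever snap hook (non-releasing)");
        if (assembly == Assembly.SERVICEABLE && direction == Direction.PUSH)
            suggestions.add("cantilever snap hook with release lever");
        if (direction == Direction.SLIDE)
            suggestions.add("dovetail lug with catch");
        if (direction == Direction.TWIST)
            suggestions.add("bayonet-style annular snap");
        if (thinWall)
            suggestions.add("add a local rib to stiffen the attachment wall");
        return suggestions;
    }

    public static void main(String[] args) {
        // Example design situation: serviceable assembly, push-to-engage, thin wall.
        System.out.println(suggest(Assembly.SERVICEABLE, Direction.PUSH, true));
    }
}
```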


2017 ◽  
Vol 3 (2) ◽  
pp. 195-198
Author(s):  
Philip Westphal ◽  
Sebastian Hilbert ◽  
Michael Unger ◽  
Claire Chalopin

Planning of interventions to treat cardiac arrhythmia requires a 3D patient-specific model of the heart. Currently available commercial and free software dedicated to this task has important limitations for routine use: automatic algorithms are not robust enough, while manual methods are time-consuming. This project therefore attempts to develop an optimal software tool. The heart model is generated from preoperative MR datasets acquired with contrast agent and allows visualisation of damaged cardiac tissue. A requirement in the development of the software tool was the use of semi-automatic functions to improve robustness. Once the patient image dataset has been loaded, the user selects a region of interest. Thresholding functions allow selection of the high-intensity areas that correspond to anatomical structures filled with contrast agent, namely the cardiac cavities and blood vessels. Thereafter, the target structure, for example the left ventricle, is coarsely selected by interactively outlining its gross shape. An active contour function automatically adjusts the initial contour to the image content. The result can still be manually improved using fast interaction tools. Finally, possible scar tissue located in the cavity muscle is automatically detected and visualized on the 3D heart model. The model is exported in a format compatible with the interventional devices at the hospital. The evaluation of the software tool included two steps. Firstly, a comparison with two free software tools was performed on two image datasets of variable quality. Secondly, six scientists and physicians tested our tool and filled out a questionnaire. The performance of our software tool was judged visually more satisfactory than that of the free software, especially on the dataset of lower quality. The professionals evaluated our functionality positively regarding time taken, ease of use, and quality of results. Future improvements would consist of performing the planning based on different MR modalities.
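
As a rough sketch of the thresholding step described above (not the authors' implementation, which also includes interactive outlining, active-contour refinement, and scar detection), the code below builds a binary mask of high-intensity pixels inside a rectangular region of interest on a single slice; the intensity values and the threshold are placeholders.

```java
/**
 * Simplified sketch of the intensity-thresholding step on a single 2D slice
 * (interactive outlining and active-contour refinement are omitted here).
 * Image values and the threshold are placeholders, not clinical data.
 */
public class ThresholdSketch {

    /** Marks pixels above 'threshold' inside the rectangular region of interest. */
    static boolean[][] thresholdRoi(double[][] slice, int r0, int r1, int c0, int c1,
                                    double threshold) {
        boolean[][] mask = new boolean[slice.length][slice[0].length];
        for (int r = r0; r <= r1; r++)
            for (int c = c0; c <= c1; c++)
                mask[r][c] = slice[r][c] > threshold;  // contrast-filled structures are bright
        return mask;
    }

    public static void main(String[] args) {
        double[][] slice = {
            { 10,  20,  30,  25 },
            { 15, 220, 240,  30 },   // bright values mimic the contrast-enhanced blood pool
            { 12, 210, 230,  28 },
            { 11,  18,  22,  19 }
        };
        boolean[][] mask = thresholdRoi(slice, 0, 3, 0, 3, 200.0);
        for (boolean[] row : mask) {
            StringBuilder sb = new StringBuilder();
            for (boolean b : row) sb.append(b ? '#' : '.');
            System.out.println(sb);
        }
    }
}
```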


Author(s):  
Timothy P. Hanratty ◽  
E. Allison Newcomb ◽  
Robert J. Hammell II ◽  
John T. Richardson ◽  
Mark R. Mittrick

Data for military intelligence operations are increasing at astronomical rates. As a result, significant cognitive and temporal resources are required to determine which information is relevant to a particular situation. Soft computing techniques, such as fuzzy logic, have recently been applied to decision support systems that help military intelligence analysts select relevant and reliable data within the military decision-making process. This article examines the development of one such system and its evaluation using a constructive simulation and a human performance model to provide critical understanding of how this conceptual information system might interact with personnel, organizational, and system architectures. In addition, similarities between military intelligence analysts and cyber intelligence analysts are detailed, along with a plan for transitioning the current fuzzy-based system to the cyber security domain.
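
The fielded system's rule base is not reproduced in this abstract, but the general shape of such a fuzzy scoring step can be sketched as below: triangular membership functions over two assumed inputs (source reliability and content applicability) feed a small rule set whose weighted output is a single value-of-information score. The names, membership breakpoints, and rules are illustrative only.

```java
/**
 * Illustrative fuzzy-scoring sketch (not the fielded system's rule base):
 * two assumed inputs, source reliability and content applicability, both on
 * [0, 1], are fuzzified with triangular membership functions and combined by
 * a tiny rule set into a single value-of-information (VOI) score.
 */
public class FuzzyRelevanceSketch {

    /** Triangular membership function with feet at a and c and peak at b. */
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    static double valueOfInformation(double reliability, double applicability) {
        // Fuzzify each input into low / high (breakpoints are invented).
        double relLow  = tri(reliability,  -0.5, 0.0, 0.6);
        double relHigh = tri(reliability,   0.4, 1.0, 1.5);
        double appLow  = tri(applicability, -0.5, 0.0, 0.6);
        double appHigh = tri(applicability, 0.4, 1.0, 1.5);

        // Rule firing strengths, using min as the AND operator.
        double high   = Math.min(relHigh, appHigh);                 // -> VOI about 0.9
        double medium = Math.max(Math.min(relHigh, appLow),
                                 Math.min(relLow, appHigh));        // -> VOI about 0.5
        double low    = Math.min(relLow, appLow);                   // -> VOI about 0.1

        // Weighted-average defuzzification over the rule consequents.
        double num = 0.9 * high + 0.5 * medium + 0.1 * low;
        double den = high + medium + low;
        return den == 0 ? 0.0 : num / den;
    }

    public static void main(String[] args) {
        System.out.printf("VOI = %.2f%n", valueOfInformation(0.8, 0.3));
    }
}
```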


1975 ◽  
Vol 97 (1) ◽  
pp. 105-105
Author(s):  
Thomas B. Sheridan ◽  
William R. Ferrell ◽  
Masayoshi Tomizuka
