Live–virtual–constructive simulation for testing and evaluation of air combat tactics, techniques, and procedures, Part 2: demonstration of the framework

Author(s):  
Heikki Mansikka ◽  
Kai Virtanen ◽  
Don Harris ◽  
Jaakko Salomäki

In this paper, the use of the live (L), virtual (V), and constructive (C) simulation framework introduced in Part 1 of this two-part study is demonstrated in the testing and evaluation of air combat tactics, techniques, and procedures (TTP). Each TTP consists of rules that describe how aircraft pilots coordinate their actions to achieve goals in air combat. In the demonstration, the initial rules are defined by subject matter experts (SMEs). These rules are refined iteratively in separate C-, V-, and L-simulation stages. In the C-stage, an operationally used C-simulation model provides rules that are optimal with respect to the probabilities of survival (Ps) and kill (Pk) of aircraft, without considering human–machine interaction (HMI). In the V-stage, fighter squadrons' V-simulators and SME assessments are used to modify these rules by evaluating their applicability with Pk and Ps, as well as with HMI measures regarding pilots' situation awareness, mental workload, and TTP rule adherence. In the L-stage, qualified fighter pilots fly F/A-18C aircraft in a real-life environment. Based on the SMEs' assessment, the TTP rules refined in the C- and V-stages result in acceptable Pk, Ps, and HMI measures in the L-stage. As such, the demonstration highlights the utility of the LVC framework.
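The abstract does not describe how Pk and Ps are computed; a common approach in constructive simulation is to estimate them as outcome frequencies over repeated stochastic engagements. Below is a minimal Python sketch of that idea, in which the engagement model, the launch-range rule parameter, and all probability values are hypothetical stand-ins for the operational C-simulation model.

```python
import random

def estimate_pk_ps(engagement, n_runs=10_000, seed=1):
    """Monte Carlo estimate of Pk and Ps from repeated constructive runs.

    `engagement` is any callable returning (killed_target, survived)
    booleans for one simulated engagement -- a stand-in for the
    operational C-simulation model, which the abstract does not describe.
    """
    rng = random.Random(seed)
    kills = survivals = 0
    for _ in range(n_runs):
        killed_target, survived = engagement(rng)
        kills += killed_target
        survivals += survived
    return kills / n_runs, survivals / n_runs

# Toy engagement model: outcomes depend on one TTP rule parameter
# (a hypothetical missile launch range); all values are illustrative.
def make_engagement(launch_range_nm):
    def engagement(rng):
        p_kill = min(0.9, 0.02 * launch_range_nm)
        p_survive = min(0.95, 1.0 - 0.015 * launch_range_nm)
        return rng.random() < p_kill, rng.random() < p_survive
    return engagement

# Crude C-stage "optimization": sweep the rule parameter and keep the
# candidate with the best Pk-Ps trade-off (here, their product).
def score(r):
    pk, ps = estimate_pk_ps(make_engagement(r))
    return pk * ps

best = max(range(10, 40, 5), key=score)
print("best launch range (nm):", best)
```

Sweeping a rule parameter and scoring candidates on the Pk–Ps trade-off mirrors, in toy form, the C-stage's role of proposing optimal rules before HMI considerations enter in the V- and L-stages.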

Author(s):  
Heikki Mansikka ◽  
Kai Virtanen ◽  
Don Harris ◽  
Jaakko Salomäki

This paper advances live (L), virtual (V), and constructive (C) simulation methodologies by introducing a new LVC simulation framework for the development of air combat tactics, techniques, and procedures (TTP). In the framework, TTP is developed iteratively in separate C-, V-, and L-simulation stages. This allows the strengths of each simulation class to be utilized while avoiding the challenges of pure LVC simulations. The C-stage provides the optimal TTP with respect to the probabilities of survival (Ps) and kill (Pk) of aircraft, without considering human–machine interaction (HMI). In the V-stage, the optimal TTP is modified by assessing its applicability with Pk and Ps, as well as with HMI measures regarding pilots' situation awareness, mental workload, and TTP adherence. In the L-stage, real aircraft are used to evaluate whether the developed TTP leads to acceptable Pk, Ps, and HMI measures in a real-life environment. The iterative nature of the framework means that the V- or L-stage can reveal flaws in the TTP, and an inadequate TTP can be returned to the C- or V-stage for revision. This paper is Part 1 of a two-part study. Part 2 demonstrates the use of the framework with operationally used C- and V-simulators as well as real F/A-18C aircraft and pilots.
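As a minimal sketch of the control flow just described, the following Python fragment iterates the C-, V-, and L-stages and returns an inadequate TTP to an earlier stage. The stage functions and acceptance checks are placeholders; the paper, not this abstract, defines the actual measures and decision criteria.

```python
# Hypothetical stage functions: each stands in for a full simulation
# campaign and its SME assessment.

def c_stage(ttp):
    """Optimize rules on Pk/Ps only, with no HMI considerations."""
    ttp["optimized"] = True
    return ttp

def v_stage(ttp):
    """Refine with HMI measures (SA, workload, adherence) in simulators."""
    hmi_ok = ttp.get("optimized", False)
    return ttp, hmi_ok

def l_stage(ttp):
    """Validate with real aircraft and pilots in a real-life environment."""
    return ttp, ttp.get("optimized", False)

def develop_ttp(initial_ttp, max_iterations=5):
    ttp = initial_ttp
    for _ in range(max_iterations):
        ttp = c_stage(ttp)
        ttp, v_ok = v_stage(ttp)
        if not v_ok:
            continue                 # flaw found: return to the C-stage
        ttp, l_ok = l_stage(ttp)
        if l_ok:
            return ttp               # acceptable Pk, Ps, and HMI measures
    raise RuntimeError("TTP not accepted within the iteration budget")

print(develop_ttp({"rules": ["initial SME rules"]}))
```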


Author(s):  
Fabio Grandi ◽  
Margherita Peruzzini ◽  
Roberto Raffaeli ◽  
Marcello Pellicciari

Successful interaction with complex systems depends on the system's ability to satisfy user needs during interaction tasks, mainly related to performance, physical comfort, usability, accessibility, visibility, and mental workload. However, the "real" user experience (UX) is hidden and usually difficult to detect. This paper proposes a Transdisciplinary Assessment Matrix (TAS) based on the collection of physiological, postural, and visibility data during interaction analysis, and on the calculation of a consolidated User eXperience Index (UXI). Physiological data are based on heart rate and eye pupil dilation parameters; postural data consist of an analysis of the main anthropometrical parameters; and interaction data are taken from the system CAN-bus. The method can be adopted to assess interaction in the field, during real task execution, or within simulated environments. It has been applied to a simulated case study focusing on agricultural machinery control systems, involving users with different levels of expertise. Results showed that TAS can validly objectify UX and can be used in industrial cases.
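The abstract does not specify how the heterogeneous measures are aggregated into the UXI; one plausible reading is a weighted sum of normalized indicators. The Python sketch below illustrates that reading. All ranges, weights, and the error measure are assumptions for illustration, not the paper's actual TAS definitions.

```python
from dataclasses import dataclass

@dataclass
class InteractionSample:
    heart_rate_delta: float   # bpm above resting baseline
    pupil_dilation: float     # relative change vs. baseline
    posture_score: float      # 0 (poor) .. 1 (good), e.g. ergonomics-derived
    task_errors: int          # from the system CAN-bus interaction log

def normalize(value, lo, hi):
    """Clamp-and-scale a raw measure to [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def uxi(sample, weights=(0.3, 0.2, 0.3, 0.2)):
    """Consolidated User eXperience Index as a weighted sum of normalized
    measures; ranges and weights are illustrative assumptions."""
    strain = normalize(sample.heart_rate_delta, 0, 40)     # higher = worse
    workload = normalize(sample.pupil_dilation, 0.0, 0.3)  # higher = worse
    comfort = sample.posture_score                         # higher = better
    accuracy = normalize(5 - sample.task_errors, 0, 5)     # higher = better
    w1, w2, w3, w4 = weights
    return (w1 * (1 - strain) + w2 * (1 - workload)
            + w3 * comfort + w4 * accuracy)

print(uxi(InteractionSample(heart_rate_delta=12, pupil_dilation=0.1,
                            posture_score=0.8, task_errors=1)))
```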


Author(s):  
Brian T. Schreiber ◽  
Herbert H. Bell ◽  
William B. Raspotnik

In an exploratory study, we examined whether communication could distinguish between high- and low-situation awareness (SA) F-15 lead pilots. With the aid of an assigned wingman and an air weapons controller, the lead pilots flew 36 simulated combat engagements. Two measures of SA were used. First, ratings of SA were obtained from the operational squadrons. Second, subject matter experts based SA ratings of 40 lead pilots on (a) 28 critical behaviors identified in a task analysis and (b) behaviors such as communication. Subsequent rankings from both SA measures revealed that, during the simulated engagements, high-SA pilots directed team members more frequently and requested more information. Despite the varied and complex simulated engagements, communication patterns were stable: lead pilots' communications were similar for identical engagements flown both early and late in the study. Larger studies using a correlational approach with communication categorization are suggested.
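A minimal illustration of the suggested correlational approach: tally categorized communications (for example, directive calls) per pilot and rank-correlate the counts with SA ratings. The Spearman implementation below is standard; the per-pilot counts and ratings are invented for the example.

```python
def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1           # mean rank across the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-pilot counts of directive calls and SME SA ratings.
directive_calls = [14, 9, 21, 6, 17]
sa_ratings      = [4.1, 3.2, 4.8, 2.9, 4.4]
print(spearman(directive_calls, sa_ratings))   # strong positive rho
```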


Author(s):  
Lucas Paletta

Human attention processes play a major role in the optimization of human-machine interaction (HMI) systems. This work describes a suite of innovative components within a novel framework for assessing the human factors state of the human operator, primarily by gaze and in real time. The objective is to derive parameters that capture the situation awareness of the human collaborator, a central concept in the evaluation of interaction strategies in collaboration. Human control of attention provides measures of executive functions that characterize key features in the domain of human-machine collaboration. This work presents a suite of human factors analysis components (the Human Factors Toolbox) and its application in the assembly processes of a future production line. Comprehensive experiments on HMI are described, which were conducted with typical tasks, including collaborative pick-and-place, in a lab-based prototypical manufacturing environment.
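The toolbox components themselves are not detailed in the abstract; one representative gaze-derived measure is the share of viewing time spent on task-relevant areas of interest (AOIs). The Python sketch below computes such an attention distribution from raw gaze samples, with the AOI layout and sample data invented for illustration.

```python
from collections import Counter

# Hypothetical AOI layout for an assembly workstation:
# name -> (x0, y0, x1, y1) in screen pixels.
AOIS = {
    "part_bin":  (0, 0, 300, 400),
    "workpiece": (300, 0, 900, 600),
    "robot_arm": (900, 0, 1280, 600),
}

def aoi_of(x, y):
    """Return the AOI containing a gaze point, or 'off_target'."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "off_target"

def attention_distribution(gaze_samples):
    """Fraction of gaze samples per AOI; with a fixed sampling rate,
    sample counts are proportional to dwell time."""
    counts = Counter(aoi_of(x, y) for x, y in gaze_samples)
    total = sum(counts.values())
    return {name: counts[name] / total for name in counts}

samples = [(120, 200), (450, 300), (470, 310), (1000, 100), (460, 305)]
print(attention_distribution(samples))
```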


Author(s):  
Harshadkumar B. Prajapati ◽  
Ankit S. Vyas ◽  
Vipul K. Dabhi

Face expression recognition (FER) has attracted considerable attention from researchers in the field of computer vision because of its usefulness in security, robotics, and HMI (Human-Machine Interaction) systems. We propose a CNN (Convolutional Neural Network) architecture to address FER. To show the effectiveness of the proposed model, we evaluate its performance on the JAFFE dataset. We derive a concise CNN architecture to address expression classification, with the objective of achieving convincing performance while reducing computational overhead. The proposed CNN model is very compact compared to other state-of-the-art models. We achieved a highest accuracy of 97.10% and an average accuracy of 90.43% over the 10 best runs without applying any pre-processing, which demonstrates the effectiveness of our model. Furthermore, we include visualizations of the CNN layers to observe what the network learns.
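The abstract does not give the layer configuration; the sketch below shows what a compact CNN for seven-class expression classification might look like in Keras. The input size (48x48 grayscale), filter counts, and dropout rate are assumptions rather than the authors' architecture, and JAFFE images would need to be resized accordingly.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), num_classes=7):
    """Compact illustrative CNN for facial expression recognition."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),            # regularize on a small dataset
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fer_cnn()
model.summary()
```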

