Effects of Touch, Voice, and Multimodal Input on Multiple-UAV Monitoring During Simulated Manned-Unmanned Teaming in a Military Helicopter

Author(s):  
Samuel J. Levulis
So Young Kim
Patricia R. DeLucia

A key component of the U.S. Army’s vision for future unmanned aerial vehicle (UAV) operations is to integrate UAVs into manned missions, an effort called manned-unmanned teaming (MUM-T; Department of Defense, 2010). One candidate application of MUM-T is to provide the Air Mission Commander (AMC) of a team of Black Hawk helicopters control of multiple UAVs, offering advanced reconnaissance and real-time intelligence of the upcoming flight route and landing zones. One important design decision in the development of a system to support multi-UAV control by an AMC is the selection of the interface used to control the system, for example, through a touchscreen or voice commands. A variety of input methods are feasible from an engineering standpoint, but little is known about the effect of the input interface on AMC performance. The current study evaluated three interface input methods for a MUM-T supervisory control system used by an AMC located in a Black Hawk helicopter. The evaluation was conducted with simulation software developed by General Electric. Eighteen participants supervised a team of two helicopters and three UAVs as they traveled toward a landing zone to deploy ground troops. A primary monitor, located in front of the participant, presented displays used to monitor flight instruments and to supervise the manned and unmanned vehicles under the AMC’s control. A secondary monitor, located adjacent to the participant, presented displays used to inspect and classify aerial photographs taken by the UAVs. Participants were responsible for monitoring and responding to instrument warnings, classifying the aerial photographs as either neutral or hostile, and responding to radio communications. We manipulated interface input modality (touch, voice, multimodal) and workload (rate of photographs to classify). Participants completed three blocks of 8.5-minute experimental trials, one for each input modality. Results indicated that touch and multimodal input methods were superior to voice input. Participants were more efficient with touch and multimodal control than with voice control, as evidenced by shorter photograph classification times, a greater percentage of classified photographs, and shorter instrument warning response times. Touch and multimodal input also resulted in a greater percentage of correct responses to communication task queries, lower subjective workload, greater subjective situation awareness, and higher usability ratings. Multimodal input did not result in significant performance advantages compared to touch alone. Designers should carefully consider the performance tradeoffs when selecting from candidate input methods during system development.
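
The design described above is a within-subjects factorial (input modality × workload): every participant experienced each condition. As a purely illustrative sketch, not the authors' analysis, the following shows how such a design could be examined with a repeated-measures ANOVA; the sample size, condition means, and noise model below are fabricated placeholders.

```python
# Illustrative repeated-measures ANOVA for a within-subjects
# modality x workload design. All numbers are fabricated placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# Hypothetical mean photograph-classification times (s) per modality.
means = {"touch": 3.0, "multimodal": 3.1, "voice": 4.5}

rows = []
for participant in range(1, 13):                     # 12 hypothetical subjects
    for modality, mean in means.items():
        for workload, penalty in [("low", 0.0), ("high", 0.8)]:
            rows.append({"participant": participant,
                         "modality": modality,
                         "workload": workload,
                         "class_time_s": mean + penalty + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# Exactly one observation per subject per cell, as AnovaRM requires.
result = AnovaRM(data, depvar="class_time_s", subject="participant",
                 within=["modality", "workload"]).fit()
print(result)
```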

Author(s):  
Samuel J. Levulis
Patricia R. DeLucia
So Young Kim

Objective: We evaluated three interface input methods for a simulated manned-unmanned teaming (MUM-T) supervisory control system designed for Air Mission Commanders (AMCs) in Black Hawk helicopters. Background: A key component of the U.S. Army’s vision for unmanned aerial vehicles (UAVs) is to integrate UAVs into manned missions, called MUM-T (Department of Defense, 2010). One application of MUM-T is to provide the AMC of a team of Black Hawk helicopters control of multiple UAVs, offering advanced reconnaissance and real-time intelligence of flight routes and landing zones. Method: Participants supervised a (simulated) team of two helicopters and three UAVs while traveling toward a landing zone to deploy ground troops. Participants classified aerial photographs collected by UAVs, monitored instrument warnings, and responded to radio communications. We manipulated interface input modality (touch, voice, multimodal) and task load (number of photographs). Results: Compared with voice, touch and multimodal control resulted in better performance on all tasks, lower subjective workload, and greater subjective situation awareness, ps < .05. Participants with higher spatial ability classified more aerial photographs (r = .75) and exhibited shorter response times to instrument warnings (r = −.58) than participants with lower spatial ability. Conclusion: Touchscreen and multimodal control were superior to voice control in a supervisory control task that involved monitoring visual displays and communicating on radio channels. Application: Although voice control is often considered a more natural and less physically demanding input method, caution is needed when considering voice control for systems in which users monitor visual displays and share common communication channels.
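
The individual-differences results above are standard Pearson correlations. Below is a minimal sketch of how such values are computed; the per-participant numbers and variable names are fabricated assumptions, not the study's data.

```python
# Pearson correlations analogous to the reported r = .75 and r = -.58,
# computed on hypothetical per-participant measures.
from scipy.stats import pearsonr

spatial_ability   = [42, 55, 38, 61, 47, 70, 33, 58]          # spatial-test score
photos_classified = [18, 24, 15, 27, 20, 31, 12, 25]          # photos classified
warning_rt_s      = [4.2, 3.1, 4.8, 2.7, 3.9, 2.2, 5.3, 3.0]  # warning RT (s)

r_photos, p_photos = pearsonr(spatial_ability, photos_classified)
r_rt, p_rt = pearsonr(spatial_ability, warning_rt_s)

print(f"spatial ability vs. photos classified: r = {r_photos:.2f} (p = {p_photos:.3f})")
print(f"spatial ability vs. warning RT:        r = {r_rt:.2f} (p = {p_rt:.3f})")
```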


Author(s):  
E. de Visser
R. Parasuraman
A. Freedy
E. Freedy
G. Weltman

New methodologies and quantitative measurements for evaluating human-robot team performance must be developed to achieve effective coordination between teams of humans and unmanned vehicles. The Mixed Initiative Team Performance Assessment System (MITPAS) provides such a comprehensive measurement methodology. MITPAS consists of a methodology, tools, and procedures to measure the performance of mixed manned and unmanned teams in both training and real-world operational environments. This paper presents results of an initial experiment conducted to validate the Situation Awareness Global Assessment Technique (SAGAT) as part of the MITPAS tool and to gain insight into the effect of robot competence on operator situation awareness and on overall human-robot team performance.
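
SAGAT is a freeze-probe technique: the simulation is paused at unpredictable moments, displays are blanked, and the operator answers queries about the current situation, which are then scored against ground truth. Below is a minimal scoring sketch with an assumed data layout; the field names and example queries are hypothetical and not part of MITPAS.

```python
# Freeze-probe SAGAT scoring: proportion of operator answers that match
# the simulation's ground truth across all freezes.
from dataclasses import dataclass

@dataclass
class Freeze:
    answers: dict[str, str]       # operator's response per query ID
    ground_truth: dict[str, str]  # simulation state per query ID at the freeze

def sagat_score(freezes: list[Freeze]) -> float:
    """Return the proportion of correct query responses across all freezes."""
    correct = total = 0
    for f in freezes:
        for query, truth in f.ground_truth.items():
            total += 1
            correct += (f.answers.get(query) == truth)
    return correct / total if total else 0.0

# Example: two freezes with three hypothetical queries each.
freezes = [
    Freeze({"robot_pos": "NE", "threat": "yes", "fuel": "low"},
           {"robot_pos": "NE", "threat": "no",  "fuel": "low"}),
    Freeze({"robot_pos": "SW", "threat": "yes", "fuel": "ok"},
           {"robot_pos": "SW", "threat": "yes", "fuel": "ok"}),
]
print(f"SAGAT score: {sagat_score(freezes):.2f}")  # 5 of 6 correct -> 0.83
```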


Author(s):  
Patricia L. McDermott
Jason Luck
Laurel Allender
Alia Fisher

Much of the research on unmanned vehicles (UVs) focuses on technology or interface design. This study, however, investigated how to best support effective communication between the operator monitoring a UV and the Soldier in the field who uses that information to complete a mission. Several questions arise: Does the operator need to be co-located with Soldiers in the field, or can he or she be in a more secure rearward location? Does the team need the capability to transmit visual images, or is radio communication adequate? Is information from one type of UV better than others? Do real-time mapping and tracking technologies increase situation awareness (SA)? To begin to answer these questions, military teams conducted rescue missions using the video game Raven Shield as a simulated battlefield. The analysis of performance data, self-reports, and observations provides valuable insight into these questions.


1973
Vol. 37(2)
pp. 471-476
Author(s):
Richard L. Cahoon

Two experiments were conducted to determine the effects of high-altitude atmospheres on the performance of a simulated Army radio-communication task. Subjects monitored 2-hr tapes of simulated radio traffic at four different altitudes (sea level, 13,000 ft, 15,000 ft, and 17,000 ft). The results of Exp. I indicated a significant drop in performance above 13,000 ft. However, Exp. II, using highly motivated, radio-trained subjects, showed no performance decrements up to 17,000 ft. The data suggest that high motivation and training can compensate for altitude stress on monitoring tasks of relatively short duration.


2006
Author(s):
Mark A. Livingston
Simon J. Julier
Dennis G. Brown

2012
Vol. 7(1)
pp. 26-48
Author(s):
Ronny Ophir-Arbelle
Tal Oron-Gilad
Avinoam Borowsky
Yisrael Parmet

Operational tactics in urban areas are often aided by information from unmanned aerial vehicles (UAVs). A major challenge for dismounted soldiers, particularly in urban environments, is to understand the conflict area from the UAV feed. The UAV feed is usually used to enhance soldiers’ situation awareness but less often to identify specific elements. A possible way to further enhance soldiers’ abilities is to provide them with multiple sources of information (e.g., aerial and ground views). This study examined the benefits of presenting video feed from UAVs and unmanned ground vehicles (UGVs) in a combined interface, relative to presenting the aerial feed alone. Thirty former infantry soldiers with no experience in operating unmanned vehicles participated. Objective performance, subjective evaluations, and eye-tracking patterns were examined in two scenarios. In Scenario 1, performance scores in both the identification and orientation tasks were superior in the combined configuration. In Scenario 2, performance scores in the identification tasks improved, and the addition of the UGV feed did not harm performance in the orientation task. Eye-movement scanning patterns confirmed that both the UAV and UGV feeds were used for the mission. The combined configuration generated consistent benefits in the identification tasks, perceived mental demand, and reduction of false reports, without any apparent cost to participants. Ground views may provide additional support to dismounted soldiers.
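
One common way to quantify scanning patterns like those described above is the proportion of fixation time spent on each video feed (area of interest, AOI). The sketch below uses a fabricated fixation log; the AOI labels and durations are illustrative assumptions, not the study's data.

```python
# Summarize dwell time per AOI from a (hypothetical) fixation log.
from collections import defaultdict

# Each record: (AOI label, fixation duration in ms).
fixations = [
    ("uav_feed", 320), ("ugv_feed", 250), ("uav_feed", 410),
    ("map", 180), ("ugv_feed", 300), ("uav_feed", 290),
]

dwell = defaultdict(int)
for aoi, dur_ms in fixations:
    dwell[aoi] += dur_ms

total = sum(dwell.values())
for aoi, ms in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{aoi}: {ms} ms ({ms / total:.0%} of fixation time)")
```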


2021
Vol. 18(1)
Article 172988142097854
Author(s):
Eduardo Jose Fabris
Vicenzo Abichequer Sangalli
Leonardo Pavanatto Soares
Márcio Sarroglia Pinho

Unmanned ground vehicles are usually deployed in situations where it is too dangerous or not feasible to have an operator onboard. One challenge faced when such vehicles are teleoperated is maintaining high situation awareness, due to factors such as camera limitations, network transmission characteristics, and the lack of other sensory information such as sounds and vibrations. Situation awareness refers to the understanding of the information, events, and actions that will affect the execution and objectives of the vehicle’s tasks, both at present and in the near future. This work investigates how the simultaneous use of immersive telepresence and mixed reality affects the operator’s situation awareness and navigation performance. A user study was performed to compare our proposed approach with a traditional unmanned-vehicle control station. Quantitative data obtained from the vehicle’s behavior and from the Situation Awareness Global Assessment Technique (SAGAT) were used to analyze these impacts. Results provide evidence that our approach is relevant when the task requires detailed observation of the surroundings, leading to higher situation awareness and navigation performance.

