Effective Human to Human Communication of Information Provided by an Unmanned Vehicle

Author(s):  
Patricia L. McDermott
Jason Luck
Laurel Allender
Alia Fisher

Much of the research on unmanned vehicles (UVs) focuses on technology or interface design. This study, however, investigated how best to support effective communication between the operator monitoring a UV and the Soldier in the field using that information to complete a mission. Several questions arise: Does the operator need to be co-located with Soldiers in the field, or can he or she be in a more secure rearward location? Does the team need the capability to transmit visual images, or is radio communication adequate? Is information from one type of UV better than another? Do real-time mapping and tracking technologies increase situation awareness (SA)? To begin to answer these questions, military teams conducted rescue missions using the video game Raven Shield as a simulated battlefield. The analysis of performance data, self-reports, and observations provides valuable insight into these questions.

Author(s):  
Elizabeth M. Mersch
Kyle J. Behymer
Gloria L. Calhoun
Heath A. Ruff
Jared S. Dewey

Video game interfaces featuring multiple distinct icons that enable a player to quickly select specific actions from a larger set have the potential to inform the development of interfaces that enable a single operator to control multiple unmanned vehicles (UVs). The goal of this research was to examine the design of a video game-inspired interface for delegating actions (called “plays”) to highly autonomous UVs. Specifically, the impact of color coding (by Play Type, by Vehicle Type, and No Color) and icon row assignment (by Play Type, by Vehicle Type, and Random) in a delegation play-calling interface was evaluated in terms of participants’ performance in identifying and manually selecting a commanded play icon in an interface depicting a large set of UV plays. Both the objective performance data and subjective ratings indicated that icon row assignment affected icon selection, whereas color coding did not. Mean icon selection time and subjective ratings were more favorable when the icons were assigned to rows in the Play Calling interface by vehicle type. Suggestions are made for follow-on research.


2021
Vol 13 (8)
pp. 188
Author(s):  
Marianna Di Gregorio
Marco Romano
Monica Sebillo
Giuliana Vitiello
Angela Vozella

The use of Unmanned Aerial Systems, commonly called drones, is growing enormously today. Applications that can benefit from fleets of drones and a related human–machine interface are emerging, promising better performance and reliability. In particular, a fleet of drones can become a valuable tool for monitoring a wide area and transmitting relevant information to the ground control station. We present a human–machine interface for a Ground Control Station used by a team of multiple operators to remotely operate a fleet of drones in a collaborative setting. In such a collaborative setting, a major interface design challenge has been to maximize Team Situation Awareness, shifting the focus from the individual operator to the entire group of decision-makers. We were especially interested in testing the hypothesis that shared displays may improve team situation awareness and hence overall performance. The experimental study we present shows no overall difference in performance between shared and non-shared displays. However, in trials in which unexpected events occurred, teams using shared displays maintained good performance, whereas the performance of teams using non-shared displays declined. In particular, in unexpected situations, operators with shared displays were able to safely bring more drones home, maintaining a higher level of team situational awareness.


2017
Vol 12 (1)
pp. 29-34
Author(s):  
Mica R. Endsley

The concept of different levels of automation (LOAs) has been pervasive in the automation literature since its introduction by Sheridan and Verplank. LOA taxonomies have been very useful in guiding understanding of how automation affects human cognition and performance, with several practical and theoretical benefits. Over the past several decades a wide body of research has been conducted on the impact of various LOAs on human performance, workload, and situation awareness (SA). LOA has a significant effect on operator SA and level of engagement, which helps to ameliorate out-of-the-loop performance problems. Together with other aspects of system design, including adaptive automation, granularity of control, and automation interface design, LOA is a fundamental design characteristic that determines the ability of operators to provide effective oversight of and interaction with system autonomy. LOA research provides a solid foundation for guiding the creation of effective human–automation interaction, which is critical for the wide range of autonomous and semiautonomous systems currently being developed across many industries.


Author(s):  
Cyril Onwubiko

This chapter describes work on modelling the situational awareness information and system requirements for a mission. The model is built on a Goal-Oriented Task Analysis representation of the mission, using an Agent-Oriented Software Engineering methodology. It advances current information-requirement models by providing valuable insight into how to effectively satisfy the mission’s requirements (information, systems, networks, and IT infrastructure), and it offers enhanced situational awareness within the Computer Network Defence environment. Further, the modelling approach using Secure Tropos is described, and model validation using a security test scenario is discussed.


2014
Vol 281 (1785)
pp. 20133201
Author(s):  
Federico Rossano
Marie Nitzschner
Michael Tomasello

Domestic dogs are particularly skilled at using human visual signals to locate hidden food. This is, to our knowledge, the first series of studies that investigates the ability of dogs to use only auditory communicative acts to locate hidden food. In a first study, from behind a barrier, a human expressed excitement towards a baited box on either the right or left side, while sitting closer to the unbaited box. Dogs were successful in following the human's voice direction and locating the food. In the two following control studies, we excluded the possibility that dogs could locate the box containing food just by relying on smell, and we showed that they would interpret a human's voice direction in a referential manner only when they could locate a possible referent (i.e. one of the boxes) in the environment. Finally, in a fourth study, we tested 8–14-week-old puppies in the main experimental test and found that those with a reasonable amount of human experience performed overall even better than the adult dogs. These results suggest that domestic dogs’ skills in comprehending human communication are not based on visual cues alone, but are instead multi-modal and highly flexible. Moreover, the similarity between young and adult dogs’ performances has important implications for the domestication hypothesis.


2019
Vol 953
pp. 53-58
Author(s):  
Elsayed Fathallah

The excellent mechanical behavior and low density of composite materials make them candidates to replace metals in many underwater applications. This paper presents a comprehensive multi-objective optimization study of a composite pressure hull subjected to hydrostatic pressure, minimizing the weight of the hull and maximizing its buckling load capacity according to the design requirements. Two models were constructed: one from Carbon/Epoxy composite (USN-150), the other a metallic pressure hull of HY100 steel. The analysis and the optimization process were performed entirely in the ANSYS Parametric Design Language (APDL). The Tsai-Wu failure criterion was incorporated in the optimization process. The results emphasize that the hull constructed from Carbon/Epoxy composite (USN-150) outperforms the hull constructed from HY100. Finally, an optimized model with an optimum pattern of fiber orientations is presented. The results may provide valuable insight for the future design of composite underwater vehicles.
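The abstract does not give the hull's load cases or material allowables, but the Tsai-Wu criterion it references can be sketched for the standard plane-stress lamina case. The sketch below is illustrative only: the function name and the strength values in the test are assumptions, not USN-150 data or the paper's APDL implementation, and the interaction term F12 uses a common default rather than a measured value.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index for a unidirectional lamina.

    s1, s2, t12 : in-plane stresses in the material axes
    Xt, Xc      : longitudinal tensile / compressive strengths (magnitudes)
    Yt, Yc      : transverse tensile / compressive strengths (magnitudes)
    S           : in-plane shear strength
    Failure is predicted when the returned index reaches 1.0.
    """
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S**2
    # Common default for the interaction coefficient when no biaxial data exist
    F12 = -0.5 * math.sqrt(F11 * F22)
    return (F1 * s1 + F2 * s2
            + F11 * s1**2 + F22 * s2**2 + F66 * t12**2
            + 2.0 * F12 * s1 * s2)
```

By construction the index equals exactly 1.0 under pure uniaxial stress at either strength limit (s1 = Xt or s1 = -Xc), which is a quick sanity check for any implementation of the criterion.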


Author(s):  
Samuel J. Levulis
So Young Kim
Patricia R. DeLucia

A key component of the U.S. Army’s vision for future unmanned aerial vehicle (UAV) operations is to integrate UAVs into manned missions, an effort called manned-unmanned teaming (MUM-T; Department of Defense, 2010). One candidate application of MUM-T is to provide the Air Mission Commander (AMC) of a team of Black Hawk helicopters control of multiple UAVs, offering advanced reconnaissance and real-time intelligence of the upcoming flight route and landing zones. One important design decision in the development of a system to support multi-UAV control by an AMC is the selection of the interface used to control the system, for example, through a touchscreen or voice commands. A variety of input methods is feasible from an engineering standpoint, but little is known about the effect of the input interface on AMC performance. The current study evaluated three interface input methods for a MUM-T supervisory control system used by an AMC located in a Black Hawk helicopter. The evaluation was conducted with simulation software developed by General Electric. Eighteen participants supervised a team of two helicopters and three UAVs as they traveled towards a landing zone to deploy ground troops. A primary monitor, located in front of the participant, presented displays used to monitor flight instruments and to supervise the manned and unmanned vehicles that were under the AMC’s control. A secondary monitor, located adjacent to the participant, presented displays used to inspect and classify aerial photographs taken by the UAVs. Participants were responsible for monitoring and responding to instrument warnings, classifying the aerial photographs as either neutral or hostile, and responding to radio communications. We manipulated interface input modality (touch, voice, multimodal) and workload (rate of photographs to classify). Participants completed three blocks of 8.5-minute experimental trials, one for each input modality. 
Results indicated that touch and multimodal input methods were superior to voice input. Participants were more efficient with touch and multimodal control (compared to voice), evidenced by relatively shorter photograph classification times, a greater percentage of classified photographs, and shorter instrument warning response times. Touch and multimodal input also resulted in a greater percentage of correct responses to communication task queries, lower subjective workload, greater subjective situation awareness, and higher usability ratings. Multimodal input did not result in significant performance advantages compared to touch alone. Designers should carefully consider the performance tradeoffs when selecting from candidate input methods during system development.

