A Smart Visual Information Tool for Situational Awareness

Author(s):  
Marco Vernier ◽  
Manuela Farinosi ◽  
Gian Luca Foresti


2021 ◽  
Vol 11 (24) ◽  
pp. 11611
Author(s):  
Dmitry M. Igonin ◽  
Pavel A. Kolganov ◽  
Yury V. Tiumentsev

Situational awareness formation is one of the most critical elements in solving the problem of UAV behavior control. It aims to provide information support for UAV behavior control according to its objectives and the tasks to be completed. We consider the UAV to be a type of controlled dynamic system and show its place in the hierarchy of such systems. We introduce the concepts of UAV behavior and activity and formulate requirements for algorithms that control UAV behavior. We propose the concept of situational awareness as applied to the problem of behavior control of highly autonomous UAVs (HA-UAVs) and analyze the levels and types of this situational awareness. We show the specifics of situational awareness formation for HA-UAVs and analyze how it differs from situational awareness in manned aviation and remotely piloted UAVs. We then highlight and discuss in more detail two crucial elements of situational awareness for HA-UAVs. The first is related to the analysis and prediction of the behavior of objects in the vicinity of the HA-UAV. The general considerations involved in solving this problem, including the analysis of the group behavior of such objects, are discussed. As an illustrative example, we give the solution to the problem of tracking an aircraft maneuvering in the vicinity of an HA-UAV. The second element is related to the processing of visual information, one of the primary sources of the situational awareness required for the operation of the HA-UAV control system. As an example here, we consider the problem of semantic segmentation of images processed when selecting a landing site for the HA-UAV in unfamiliar terrain. Both of these problems are solved using machine learning methods and tools.
In the field of situational awareness for HA-UAVs, there are several problems that need to be solved. We formulate some of these problems and briefly describe them.
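To make the tracking example above concrete: the abstract does not specify the authors' (machine-learning-based) tracker, so the following is only a conventional baseline sketch, assuming a 2-D constant-velocity Kalman filter with noisy position measurements of an aircraft flying a gentle turn. All function names, noise parameters, and the simulated trajectory are illustrative assumptions.

```python
import numpy as np

def make_cv_model(dt: float, q: float, r: float):
    """Build transition (F), observation (H), and noise (Q, R) matrices
    for a constant-velocity model with state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)   # process noise: absorbs unmodeled maneuvers
    R = r * np.eye(2)   # measurement noise of the position sensor
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle given a position measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Simulate a target flying a slow circular turn, observed with noisy positions.
rng = np.random.default_rng(0)
F, H, Q, R = make_cv_model(dt=1.0, q=0.05, r=1.0)
x, P = np.zeros(4), np.eye(4) * 10.0

truth = np.array([[50 * np.cos(0.02 * t), 50 * np.sin(0.02 * t)]
                  for t in range(60)])
errors = []
for z_true in truth:
    z = z_true + rng.normal(0.0, 1.0, size=2)
    x, P = kalman_step(x, P, z, F, H, Q, R)
    errors.append(np.linalg.norm(x[:2] - z_true))

print(f"mean position error over track: {np.mean(errors):.2f}")
```

The constant-velocity model treats the turn as process noise; a tracker for aggressively maneuvering aircraft (as in the HA-UAV setting) would need a maneuver-aware motion model or a learned predictor in place of this fixed one.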


2020 ◽  
Vol 10 (24) ◽  
pp. 8783
Author(s):  
Laura Lopez-Fuentes ◽  
Alessandro Farasin ◽  
Mirko Zaffaroni ◽  
Harald Skinnemoen ◽  
Paolo Garza

During natural disasters, situational awareness is needed to understand the situation and respond accordingly. A key need is assessing open roads for transporting emergency support to victims. This can be done by analyzing photos from affected areas with known locations. This paper studies the problem of detecting blocked/open roads from photos during floods by applying a two-step approach based on classifiers: does the image show evidence of a road? If so, is the road passable or not? We propose a single double-ended neural network (NN) architecture that addresses both tasks simultaneously. Both problems are treated as single-class classification problems with the use of a compactness loss. The study was performed on a set of tweets, posted during flooding events, that contain (i) metadata and (ii) visual information. We studied the usefulness of each data source and the combination of both. Finally, we conducted a study of the performance gain from ensembling different networks. Through the experimental results, we show that the proposed double-ended NN makes the model almost twice as fast and lighter on memory while improving the results with respect to training two separate networks to solve each problem independently.
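The efficiency claim follows from the architecture: the shared trunk is computed once and reused by both heads, instead of running two full networks. A minimal NumPy sketch of that data flow, with random (untrained) weights and invented layer sizes purely for illustration, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(a):
    return np.maximum(a, 0.0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Shared trunk: a 512-d image embedding mapped to a 64-d shared representation.
W_shared = rng.normal(0, 0.05, (512, 64))
# Two "ends" (heads), each producing one binary probability.
W_road = rng.normal(0, 0.05, (64, 1))      # head 1: is a road visible?
W_passable = rng.normal(0, 0.05, (64, 1))  # head 2: is the road passable?

def forward(batch):
    """Run both heads in a single pass over the shared trunk."""
    h = relu(batch @ W_shared)       # computed once, reused by both heads
    p_road = sigmoid(h @ W_road)
    p_passable = sigmoid(h @ W_passable)
    return p_road.ravel(), p_passable.ravel()

images = rng.normal(size=(4, 512))   # four dummy image embeddings
p_road, p_passable = forward(images)
print(p_road.shape, p_passable.shape)
```

Since the trunk dominates the compute and memory cost in a real CNN, sharing it is what yields the near-2x speedup and lighter memory footprint reported in the abstract; the per-head cost is marginal.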


2016 ◽  
Author(s):  
F. Michael Williams-Bell ◽  
Tom M McLellan ◽  
Bernadette A Murphy

Background. Firefighting places tremendous cognitive demands on personnel, including assessing emergency scenes, executing critical decisions, and maintaining situational awareness of their surroundings. The aim of this study was to determine the effects of differing rates of increasing core temperature on cognitive function during exercise-induced heat stress. Methods. Nineteen male firefighters underwent repeated cognitive assessments, randomized and counter-balanced, at 30°C and 35°C with 50% relative humidity. Participants performed treadmill walking (4.5 km·h⁻¹, 2.5% grade) with cognitive function assessed before exercise (PRE), after mounting the treadmill (Cog 1), at core temperatures of 37.8°C (Cog 2), 38.5°C (Cog 3), and 39.0°C (Cog 4), after dismounting the treadmill (POST), and following an active cooling recovery to a core temperature of 37.8°C (REC). The cognitive tests administered at PRE and POST were spatial working memory (SWM), rapid visual information processing (RVP), and reaction time (RTI), while paired associates learning (PAL) and spatial span (SSP) were assessed at Cog 1, Cog 2, Cog 3, and Cog 4. All five cognitive tests were assessed at REC. Results. Planned contrasts revealed that SSP and PAL were impaired at Cog 3, with SSP also impaired at Cog 4, compared to Cog 1. REC revealed no difference compared to Cog 1, but increased errors compared to Cog 2 for PAL. Conclusions. The decrements in cognitive function observed at a core temperature of 38.5°C are likely attributable to overloading of the cognitive resources required to maintain performance, due to increasing task complexity and external stimuli from exercise-induced heat stress. The addition of an active cooling recovery restored cognitive function to initial levels.


Author(s):  
Sylvain Daronnat ◽  
Leif Azzopardi ◽  
Martin Halvey

Uncertainty in Human-Agent interactions is often studied in terms of the transparency and understandability of agent actions. Less work, however, has focused on how Visual Environmental Uncertainty (VEU) that restricts or occludes visual information affects Human-Agent Teaming (HAT) in terms of trust, reliance, performance, cognitive load, and situational awareness. We conducted a mixed-design experiment (n=96) in which participants interacted with an agent during a collaborative aiming task under four different types of VEUs involving global and dynamic occlusions. Our results show that while environmental uncertainties led to increases in perceived trust, they induced differences in reliance and performance. Counter to intuition, when participants trusted the agent the most, they relied on the agent more but performed the worst. These findings highlight how trust in agents is also influenced by external environmental conditions and suggest that reported trust in HAT scenarios may not always generalize beyond the environmental factors under which it was studied.


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, "meaning-related" processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found. The multimodal-channel condition elicited the smallest amplitude in the P200 and N300 components, followed by an increased amplitude in each component for the bimodal-channel condition. The largest amplitude was observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception as reflected in the P200 and N300 components may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.

