Close Target Reconnaissance

2016 ◽  
Vol 11 (1) ◽  
pp. 63-80 ◽  
Author(s):  
Tal Oron-Gilad ◽  
Yisrael Parmet

The focus of the current study was on how the dismounted soldiers’ decision cycle is affected by the use of a display device for utilizing intelligence from an unmanned ground vehicle during a patrol mission. Via a handheld monocular display, participants received a route map and sensor imagery from the vehicle, which traveled ~20–50 m ahead. Twenty-two male participants were divided into two groups, one with and one without the sensor imagery. Each participant navigated 2 km through a military urban terrain training facility while encountering civilians, moving and stationary suspects, and improvised explosive devices. The OODA loop (observe–orient–decide–act) framework was used to examine soldiers’ decisions. The experimental group was slower to respond to threats and to orient. They also reported higher workload, more difficulty in allocating their attention to the environment, and more frustration. These effects can be attributed partly to the novelty of the technological capability, but also to how it was implemented in the study. Breaking the performance metrics down into OODA loop components enabled analysis of the major difficulties in the decision-making process. This evaluation highlights the need for new roles in combat-team setups and for additional training when unmanned vehicle sensor imagery is introduced.
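
To make the metric breakdown concrete, the sketch below shows one way per-phase latencies could be derived from logged event timestamps. This is a minimal illustration under assumed instrumentation: the field names (t_stimulus, t_fixation, t_decision, t_action) are placeholders, not the study's actual logging scheme.

```python
# Hypothetical sketch: splitting one response event into OODA-phase latencies.
# All timestamp fields are illustrative assumptions, not the study's actual
# instrumentation.
from dataclasses import dataclass


@dataclass
class ResponseEvent:
    t_stimulus: float  # moment the threat/civilian appears in the scene
    t_fixation: float  # first fixation on the stimulus (observe)
    t_decision: float  # reported classification of the stimulus (orient/decide)
    t_action: float    # executed response, e.g. halt or report (act)


def ooda_latencies(event: ResponseEvent) -> dict[str, float]:
    """Break one response into per-phase latencies, in seconds."""
    return {
        "observe": event.t_fixation - event.t_stimulus,
        "orient_decide": event.t_decision - event.t_fixation,
        "act": event.t_action - event.t_decision,
        "total": event.t_action - event.t_stimulus,
    }
```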

The art of modern warfare has shifted significantly with the onset of the 21st century, as military operations have become more diverse and intense in nature. The rise of insurgency and terrorism under the “War on Terror” led by the USA and its allies has created a need for soldiers specifically trained in Military Operations in Urban Terrain (MOUT), counter-insurgency, and counter-terrorism duties. This has prompted the adoption of recent advances in computing, such as Augmented Reality (AR), into military applications to improve a soldier’s battle-space knowledge: virtual objects can be overlaid onto the real-world environment to create better training facilities, help soldiers adapt to complex conditions, and increase collaborative situational awareness among soldiers engaged in high-risk tasks. AR can limit collateral damage, improve hazard marking, such as indicating the likelihood of Improvised Explosive Devices (IEDs), and enhance battlefield surveillance. In time, AR may be viewed as a stepping stone to future military applications, as a networked communication framework would mean minimal loss of life and maximum impact during military operations.


2019 ◽  
Vol 63 (6) ◽  
pp. 60402-1-60402-16
Author(s):  
Sander R. Klomp ◽  
Dennis W. J. M. van de Wouw ◽  
Peter H. N. de With

Detecting changes in an uncontrolled environment using cameras mounted on a ground vehicle is critical for the detection of roadside Improvised Explosive Devices (IEDs). Hidden IEDs are often accompanied by visible markers whose appearances are a priori unknown. Little work has been published on detecting unknown objects using deep learning. This article shows the feasibility of applying convolutional neural networks (CNNs) to predict the locations of markers in real time, compared to an earlier reference recording. The authors investigate novel encoder–decoder Siamese CNN architectures and introduce a modified double-margin contrastive loss function to achieve pixel-level change detection results. Their dataset consists of seven pairs of challenging real-world recordings, and they investigate augmentation with artificial object data. The proposed network architecture can compare two images of 1920 × 1440 pixels in 27 ms on an RTX Titan GPU and significantly outperforms state-of-the-art networks and algorithms on their dataset by 0.28 in F1 score.
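
As a rough illustration of the loss family the abstract names, here is a minimal PyTorch sketch of a double-margin contrastive loss applied per pixel. The margin values, tensor shapes, and choice of Euclidean distance are assumptions for the sketch; the paper's modified formulation may differ in its details.

```python
# Minimal sketch of a per-pixel double-margin contrastive loss (assumed
# formulation; the paper's modified variant may differ).
import torch


def double_margin_contrastive_loss(
    feat_ref: torch.Tensor,     # (B, C, H, W) embedding of the reference recording
    feat_live: torch.Tensor,    # (B, C, H, W) embedding of the live recording
    change_mask: torch.Tensor,  # (B, H, W), 1 where a change (marker) is present
    m_pos: float = 0.3,         # unchanged pixels may differ up to this margin
    m_neg: float = 2.0,         # changed pixels must differ by at least this margin
) -> torch.Tensor:
    change = change_mask.float()
    # Per-pixel Euclidean distance between the two embeddings.
    dist = torch.norm(feat_ref - feat_live, p=2, dim=1)  # (B, H, W)
    # Unchanged pixels: penalize only distances beyond the positive margin.
    loss_same = (1.0 - change) * torch.clamp(dist - m_pos, min=0).pow(2)
    # Changed pixels: penalize distances that fall short of the negative margin.
    loss_diff = change * torch.clamp(m_neg - dist, min=0).pow(2)
    return (loss_same + loss_diff).mean()
```

Compared to a single-margin contrastive loss, the positive margin m_pos keeps the network from being penalized for small, irrelevant appearance differences (e.g., lighting or viewpoint) between the reference and live recordings.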


Author(s):  
Nuphar Katzman ◽  
Tal Oron-Gilad

Vibro-tactile interfaces can support users in a variety of tasks and contexts. Despite their inherent advantages, it is important to realize that they are limited in the type and capacity of information they can convey. This study is part of a series of experiments that aim to develop and evaluate a “tactile taxonomy” for dismounted operational environments. The current experiment includes a simulation of an operational mission with a remote Unmanned Ground Vehicle (UGV). During the mission, 20 participants were required to interpret notifications that they received in one or more of the following modalities: auditory, visual, and/or tactile. Three specific notification types were chosen based on previous studies to provide an intuitive connection between each notification and its semantic meaning. Response times to notifications, the ability to distinguish between the information types they conveyed, and operational mission performance metrics were collected. Results indicate that it is possible to use a limited “tactile taxonomy” in a visually loaded and auditorily noisy scene while performing a demanding operational task. Combining the tactile modality with other sensory modalities enhanced the participants’ ability to perceive and identify the notifications.
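
Purely as an illustration of what a small “tactile taxonomy” might look like in code, the sketch below maps notification types to modality combinations. The three notification names and their modality assignments are hypothetical placeholders; the abstract does not enumerate them.

```python
# Illustrative sketch only: notification names and modality assignments are
# hypothetical; the study's actual taxonomy is not specified in the abstract.
from enum import Enum, auto


class Modality(Enum):
    AUDITORY = auto()
    VISUAL = auto()
    TACTILE = auto()


# Each notification type may be delivered over one or more modalities,
# e.g. a redundant tactile + auditory cue for a high-priority alert.
NOTIFICATION_MODALITIES: dict[str, set[Modality]] = {
    "halt": {Modality.TACTILE},
    "attention_required": {Modality.TACTILE, Modality.AUDITORY},
    "ugv_status": {Modality.VISUAL},
}
```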


ROBOT ◽  
2013 ◽  
Vol 35 (6) ◽  
pp. 657 ◽  
Author(s):  
Taoyi ZHANG ◽  
Tianmiao WANG ◽  
Yao WU ◽  
Qiteng ZHAO
