Diagnosing the Environment Bias in Vision-and-Language Navigation

Author(s):  
Yubo Zhang ◽  
Hao Tan ◽  
Mohit Bansal

Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are crucial when the agent is navigating new environments about which it has no prior knowledge. Most recent works that study VLN observe a significant performance drop when agents are tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that it is neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features, that directly affects the agent model and contributes to this environment bias. Based on this observation, we explore several kinds of semantic representations that contain less low-level visual information, so that an agent trained with these features generalizes better to unseen test environments. Without modifying the baseline agent model or its training method, our explored semantic features significantly decrease the performance gap between seen and unseen environments on multiple datasets (i.e., R2R, R4R, and CVDN) and achieve unseen results competitive with previous state-of-the-art models.
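The feature-replacement idea can be illustrated with a toy sketch. Everything below is an assumption for illustration (the 40-class area histogram, the function name, and the toy segmentation map are not the paper's actual pipeline); it only shows how a semantic feature can discard low-level appearance, such as texture and color, while keeping object-level content:

```python
import numpy as np

def semantic_features(segmentation, num_classes=40):
    """Area histogram over semantic classes in a view's segmentation map.
    Low-level appearance is discarded; only 'which object classes are
    visible, and how much of the view they cover' remains."""
    hist = np.bincount(segmentation.ravel(), minlength=num_classes)
    return hist / hist.sum()

# A toy segmentation: one class id per pixel for a 4x4 view.
seg = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feat = semantic_features(seg)
print(feat[:4])  # classes 0-3 each cover a quarter of the view
```

Two views of the same room with different lighting or wall texture would map to the same histogram, which is exactly the property that reduces the environment bias described above.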

Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robot interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation: Low-Level Automation (LLA) and High-Level Automation (HLA). Results indicated a significant difference in hit rate between the low and high levels of automation when a permanent error occurred. In the LLA group, the type of error had a significant effect on the hit rate. In general, the high level of automation outperformed the low level, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automation to perform the task more effectively and more accurately.


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

Abstract While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and temporally later dynamics of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting a new insight into the algorithmic function and the information carried by the feedback processes in the ventral visual pathway.

Author summary It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges/corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g., faces/objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas.
In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with the human brain's temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
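The model-to-brain comparison rests on representational similarity analysis (RSA), which the abstract invokes but does not spell out. A minimal self-contained sketch follows; the simulated "model" and "brain" patterns are assumptions for illustration, and only the RDM-comparison logic reflects the standard RSA procedure:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns evoked by every pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(patterns_a, patterns_b):
    """Rank-correlate the upper triangles of two RDMs -- the standard
    summary statistic in representational similarity analysis."""
    iu = np.triu_indices(len(patterns_a), k=1)
    return spearman(rdm(patterns_a)[iu], rdm(patterns_b)[iu])

rng = np.random.default_rng(0)
model = rng.standard_normal((10, 50))                # 10 stimuli x 50 model units
brain = model + 0.1 * rng.standard_normal((10, 50))  # noisy copy of the same geometry
print(rsa_score(model, brain))  # high: the two representations share geometry
```

Because RDMs abstract away the units (model activations vs. MEG sensors vs. fMRI voxels), the same score can be computed at each time point or brain region, which is how encoder and decoder layers are matched to feedforward and feedback dynamics.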


2019 ◽  
Vol 32 (2) ◽  
pp. 87-109 ◽  
Author(s):  
Galit Buchs ◽  
Benedetta Heimler ◽  
Amir Amedi

Abstract Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although proven effective in lab environments, the use of SSDs has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via the SSD (i.e., shape, color, and the conjunction of the two features). Their performance was compared in two separate conditions: a silent baseline, and with irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (noisy vs. silent) for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend towards a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be successfully used in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations while potentially impacting rehabilitation of sensory-deprived individuals.


2007 ◽  
pp. 128-150
Author(s):  
Andreas Savakis ◽  
Jiebo Luo ◽  
Michael Kane

Image understanding deals with extracting and interpreting scene content for use in various applications. In this chapter, we illustrate that Bayesian networks are particularly well suited for image understanding problems, and present case studies in indoor-outdoor scene classification and parts-based object detection. First, improved scene classification is accomplished using both low-level features, such as color and texture, and semantic features, such as the presence of sky and grass. Integration of the low-level and semantic features is achieved using a Bayesian network framework. The network structure can be determined by expert opinion or by automated structure-learning methods. Second, object detection from multiple views relies on a parts-based approach, in which specialized detectors locate object parts and a Bayesian network acts as the arbitrator to determine object presence. In general, Bayesian networks are found to be powerful integrators of different features and help improve the performance of image understanding systems.
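The integration pattern the chapter describes can be reduced to its simplest Bayesian-network form: a scene node with conditionally independent cue nodes (i.e., naive Bayes). The probabilities below are invented for illustration, not taken from the chapter; the point is how semantic cues (sky, grass) and a low-level cue combine into one posterior:

```python
priors = {"indoor": 0.5, "outdoor": 0.5}
# P(cue present | scene); values are illustrative assumptions only.
likelihood = {
    "sky_detected":   {"indoor": 0.05, "outdoor": 0.70},  # semantic cue
    "grass_detected": {"indoor": 0.02, "outdoor": 0.40},  # semantic cue
    "warm_colors":    {"indoor": 0.60, "outdoor": 0.30},  # low-level cue
}

def posterior(evidence):
    """evidence: dict cue -> bool. Returns P(scene | evidence) by Bayes' rule,
    assuming cues are conditionally independent given the scene."""
    scores = {}
    for scene, prior in priors.items():
        p = prior
        for cue, present in evidence.items():
            p_cue = likelihood[cue][scene]
            p *= p_cue if present else (1.0 - p_cue)
        scores[scene] = p
    z = sum(scores.values())
    return {scene: p / z for scene, p in scores.items()}

post = posterior({"sky_detected": True, "grass_detected": True, "warm_colors": False})
print(post)  # "outdoor" dominates once sky and grass are both detected
```

A full Bayesian network relaxes the independence assumption with learned or expert-specified structure, but the arbitration step, multiplying evidence into a normalized posterior, is the same.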


2018 ◽  
Vol 30 (11) ◽  
pp. 1590-1605 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision and semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed that visual information was represented in low-frequency activity throughout the ventral visual pathway, and semantic information was represented in theta activity. Furthermore, directed connectivity showed that visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.


Author(s):  
Weichun Liu ◽  
Xiaoan Tang ◽  
Chenglin Zhao

Recently, deep trackers based on siamese networks have been enjoying increasing popularity in the tracking community. Generally, those trackers learn a high-level semantic embedding space for feature representation but lose low-level fine-grained details. Meanwhile, the learned high-level semantic features are not updated during online tracking, which results in tracking drift in the presence of target appearance variation and similar distractors. In this paper, we present a novel end-to-end trainable Convolutional Neural Network (CNN) based on the siamese network for distractor-aware tracking. It enhances target appearance representation in both the offline training stage and the online tracking stage. In the offline training stage, the network learns both low-level fine-grained details and high-level coarse-grained semantics simultaneously in a multi-task learning framework. The low-level features, with better resolution, are complementary to the semantic features and able to distinguish the foreground target from background distractors. In the online stage, the learned low-level features are fed into a correlation filter layer and updated in an interpolated manner to encode target appearance variation adaptively. The learned high-level features are fed into a cross-correlation layer without online updates. Therefore, the proposed tracker benefits from both the adaptability of the fine-grained correlation filter and the generalization capability of the semantic embedding. Extensive experiments are conducted on the public OTB100 and UAV123 benchmark datasets. Our tracker achieves state-of-the-art performance while running at a real-time frame rate.
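The cross-correlation layer at the heart of siamese trackers can be sketched in a few lines of NumPy. This is a naive single-channel sliding-window version for illustration only; real trackers run it on learned multi-channel CNN feature maps, and the toy target/search arrays below are assumptions, not the paper's data:

```python
import numpy as np

def cross_correlation(template, search):
    """Slide the template feature map over the search feature map and record
    the inner product at each offset -- the response map whose peak a
    siamese tracker takes as the target location."""
    th, tw = template.shape
    sh, sw = search.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return out

# Toy single-channel "features": the target patch is planted at offset (2, 3).
rng = np.random.default_rng(1)
target = rng.standard_normal((4, 4))
search = 0.1 * rng.standard_normal((10, 10))
search[2:6, 3:7] += target
response = cross_correlation(target, search)
peak = np.unravel_index(response.argmax(), response.shape)
print(peak)  # peak offset, recovering where the target was planted
```

The correlation filter branch described in the abstract plays the same matching role but re-solves its filter online as the target's appearance drifts, which is what the fixed cross-correlation template cannot do.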


2019 ◽  
Author(s):  
Michael B. Bone ◽  
Fahad Ahmad ◽  
Bradley R. Buchsbaum

Abstract When recalling an experience of the past, many of the component features of the original episode may be, to a greater or lesser extent, reconstructed in the mind's eye. There is strong evidence that the pattern of neural activity that occurred during an initial perceptual experience is recreated during episodic recall (neural reactivation), and that the degree of reactivation is correlated with the subjective vividness of the memory. However, while we know that reactivation occurs during episodic recall, we have lacked a way of precisely characterizing the contents of a reactivated memory in terms of its featural constituents. Here we present a novel approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations of image stimuli derived from a deep convolutional neural network to decode neural reactivation in fMRI data collected while participants performed an episodic recall task. We show that neural reactivation associated with low-level visual features (e.g., edges), high-level visual features (e.g., facial features), and semantic features (e.g., "terrier") occurs throughout the dorsal and ventral visual streams and extends into the frontal cortex. Moreover, we show that reactivation of both low- and high-level visual features correlates with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the relative contributions of low- and high-level features to the vividness of visual memories, clarify the role of the frontal cortex during episodic recall, and challenge a strict interpretation of the posterior-to-anterior visual hierarchy.
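The core measurement behind such feature-specific reactivation analyses, training a decoder from voxel patterns to model-derived feature values during perception and applying it to recall, can be simulated end to end. Everything below is a hypothetical simulation (the ridge decoder, the dimensions, and the noise levels are assumptions, not the FSIC method itself), intended only to show what "reactivation of a feature" means operationally:

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression mapping voxel patterns X to feature values Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(2)
n_trials, n_voxels, n_feats = 200, 60, 5
W_true = rng.standard_normal((n_voxels, n_feats))        # voxel encoding of features
feats = rng.standard_normal((n_trials, n_feats))         # CNN-layer feature values
# Perception: features expressed strongly; recall: same code, weaker and noisier.
perception = feats @ W_true.T + 0.5 * rng.standard_normal((n_trials, n_voxels))
recall = 0.6 * (feats @ W_true.T) + 1.0 * rng.standard_normal((n_trials, n_voxels))

W = ridge_fit(perception, feats)   # train the decoder on perception trials
pred = recall @ W                  # apply it to recall trials
reactivation = np.mean([np.corrcoef(pred[:, k], feats[:, k])[0, 1]
                        for k in range(n_feats)])
print(round(reactivation, 2))  # well above 0: the feature code re-emerges at recall
```

A feature level (edges, facial features, semantics) counts as reactivated when a decoder trained at perception still reads that feature out of the recall-period patterns, as the positive correlation here demonstrates.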


2019 ◽  
Author(s):  
Ali Pournaghdali ◽  
Bennett L Schwartz

Studies utilizing continuous flash suppression (CFS) provide valuable information regarding conscious and nonconscious perception. There are, however, crucial unanswered questions regarding the mechanisms of suppression and the level of visual processing in the absence of consciousness with CFS. Research suggests that the answers to these questions depend on the experimental configuration and on how we assess consciousness in these studies. The aim of this review is to evaluate the impact of different experimental configurations and of the assessment of consciousness on the results of previous CFS studies. We review studies that evaluated the influence of different experimental configurations on the depth of suppression with CFS and discuss how different assessments of consciousness may impact the results of CFS studies. Finally, we review behavioral and brain-recording studies of CFS. In conclusion, previous studies provide evidence for the survival of low-level visual information and the complete impairment of high-level visual information under the influence of CFS. That is, studies suggest that nonconscious perception of lower-level visual information happens with CFS, but there is no evidence for nonconscious high-level recognition with CFS.

