localization task
Recently Published Documents

TOTAL DOCUMENTS: 105 (five years: 36)
H-INDEX: 15 (five years: 1)

2021
Author(s): Jing Zou, Simon Trinh, Andrew Erskine, Miao Jing, Jennifer Yao, et al.

Numerous cognitive functions, including attention, learning, and plasticity, are influenced by the dynamic patterns of acetylcholine release across the brain. How acetylcholine mediates these functions in cortex remains unclear, as the spatiotemporal relationship between cortical acetylcholine and behavioral events has not been precisely measured across task learning. To dissect this relationship, we quantified motor behavior and sub-second acetylcholine dynamics in primary somatosensory cortex during acquisition and performance of a tactile-guided object localization task. We found that acetylcholine dynamics were spatially homogeneous and directly attributable to whisker motion and licking, rather than sensory cues or reward delivery. As task performance improved across training, acetylcholine release to the first lick in a trial became dramatically and specifically potentiated, paralleling the emergence of a choice-signalling basis for this motor action. These results show that acetylcholine dynamics in sensory cortex are driven by directed motor actions to gather information and act upon it.


2021
Author(s): Pavel Voinov, Günther Knoblich

We investigated whether prescribing an agreement can result in optimal inter-individual integration of perceptual judgments in the absence of verbal communication. Participants in pairs performed a localization task in a virtual 3D environment, in which the goal was to project from an upper plane onto a target on the bottom plane. Partners were given complementary viewpoints and could perform optimally if each took charge of one orthogonal dimension. In the Revision condition, partners saw each other's individual judgments and could rely on them; in the Agreement condition, they provided a joint response. Communication was not allowed in either condition. We found that participants could distribute the dimensions optimally, but only when agreement was mandated. Without the agreement requirement, participants failed to rely appropriately on their partner for the dimension on which the partner was more accurate. We also found that prescribing agreement exerted a general positive effect on individual performance. Our results demonstrate that even in the absence of verbal communication, interacting in a shared environment can result in optimal integration of perceptual information, provided that an agreement is reached.


2021
Author(s): Bettina Voelcker, Simon P. Peron

Sensory input arrives from thalamus in cortical layer (L) 4, from which it flows predominantly to superficial layers, so that L4 to L2 constitutes one of the earliest cortical feedforward networks. Despite extensive study, the transformation performed by this network remains poorly understood. We use two-photon calcium imaging in L2-4 of primary vibrissal somatosensory cortex (vS1) to record neural activity as mice perform an object localization task with two whiskers. We find that touch responses sparsen but become more reliable from L4 to L2, with superficial neurons responding to a broader range of touches. Decoding of sensory features either improves from L4 to L2 or remains unchanged. Pairwise correlations increase superficially, with L2/3 containing ensembles of mostly broadly tuned neurons responding robustly to touch. Thus, from L4 to L2, cortex transitions from a dense probabilistic code to a sparse and robust ensemble-based code that improves stimulus decoding, facilitating perception.


Author(s): Øystein Volden, Annette Stahl, Thor I. Fossen

This paper presents an independent stereo-vision-based positioning system for docking operations. The low-cost system consists of an object detector and different 3D reconstruction techniques. To address the challenge of robust detection in an unstructured and complex outdoor environment, a learning-based object detection model is proposed. The system employs a complementary modular approach, using data-driven methods wherever required and traditional computer vision methods where the scope and complexity of the environment are reduced. Both monocular and stereo-vision-based methods are investigated for comparison. Furthermore, easily identifiable markers are utilized to obtain reference points, simplifying the localization task. A small unmanned surface vehicle (USV) with a LiDAR-based positioning system was used to verify that the proposed vision-based positioning system produces accurate measurements under various docking scenarios. Field experiments show that the developed system performs well and can supplement the traditional navigation system for safety-critical docking operations.
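
The reconstruction step described above amounts to stereo triangulation of a detected marker. As a rough, illustrative sketch only (not the authors' published code), the snippet below triangulates a marker's 3D position from a calibrated, rectified stereo pair using OpenCV; the function name, calibration variables, and pixel coordinates are hypothetical placeholders, and the learning-based marker detector is abstracted away.

```python
# Illustrative sketch, not the paper's implementation: given the pixel position of
# the same marker in the left and right images of a calibrated stereo pair,
# recover its 3D position by triangulation.
import numpy as np
import cv2


def triangulate_marker(P_left, P_right, u_left, u_right):
    """P_left, P_right: 3x4 projection matrices from stereo calibration (assumed known).
    u_left, u_right: (x, y) pixel coordinates of the marker in each image."""
    pts_l = np.asarray(u_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(u_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()  # 3D point in the left-camera frame


# Hypothetical usage with placeholder calibration data:
# P_l = K_l @ np.hstack([np.eye(3), np.zeros((3, 1))])  # left camera at the origin
# P_r = K_r @ np.hstack([R, t])                         # right camera pose from calibration
# xyz = triangulate_marker(P_l, P_r, (812.4, 377.1), (768.9, 376.8))
```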


2021
Author(s): Kanji Tanaka

Landmark-based robot self-localization has attracted recent research interest as an efficient, maintenance-free approach to visual place recognition (VPR) across domains (e.g., times of day, weather conditions, seasons). However, landmark-based self-localization can be an ill-posed problem for a passive observer (e.g., under manual robot control), as many viewpoints may not provide an effective landmark view. Here, we consider the active self-localization task performed by an active observer and present a novel reinforcement-learning (RL)-based next-best-view (NBV) planner. Our contributions are summarized as follows. (1) SIMBAD-based VPR: we present a landmark-ranking-based compact scene descriptor by introducing a deep-learning extension of similarity-based pattern recognition (SIMBAD). (2) VPR-to-NBV knowledge transfer: we tackle the challenge of RL under uncertainty (i.e., active self-localization) by transferring the VPR's state recognition ability to NBV. (3) NNQL-based NBV: we view the available VPR as the experience database by adapting a nearest-neighbor-based approximation of Q-learning (NNQL). The result is an extremely compact data structure that compresses both the VPR and NBV modules into a single incremental inverted index. Experiments using the public NCLT dataset validate the effectiveness of the proposed approach.
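
Contribution (3) reuses stored VPR experiences as a nonparametric value function. The sketch below is a minimal, generic illustration of nearest-neighbor Q-value estimation for next-best-view selection, assuming a vector scene descriptor and a discrete action set; the class and method names are invented for illustration, and the paper's incremental inverted-index structure is not reproduced.

```python
# Generic sketch of nearest-neighbor Q-learning (NNQL)-style value estimation for
# next-best-view (NBV) selection; not the paper's implementation.
import numpy as np


class NNQPlanner:
    def __init__(self, k=5):
        self.k = k
        self.memory = []  # stored experiences: (scene descriptor, action, observed return)

    def add_experience(self, descriptor, action, ret):
        self.memory.append((np.asarray(descriptor, dtype=float), action, float(ret)))

    def q_value(self, descriptor, action):
        # Average the returns of the k nearest stored descriptors taken with this action.
        dists = [(np.linalg.norm(d - descriptor), r)
                 for d, a, r in self.memory if a == action]
        if not dists:
            return 0.0  # no experience for this action yet
        dists.sort(key=lambda pair: pair[0])
        return float(np.mean([r for _, r in dists[: self.k]]))

    def next_best_view(self, descriptor, actions):
        # Pick the viewpoint/action with the highest estimated Q-value.
        descriptor = np.asarray(descriptor, dtype=float)
        return max(actions, key=lambda a: self.q_value(descriptor, a))
```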


2021
Author(s): Farzam Hejazi, Mohsen Joneidi, Nazanin Rahnavard

This paper investigates the problem of localizing co-channel transmitters, or primary users (PUs), using an array mounted on a moving aerial platform. As a practical alternative to a sensor network for the localization task, the proposed Phase Interferometric Source Localization (PISL) technique utilizes a moving sensor that measures the phase difference between two antennas mounted on the platform. Owing to the sparse spatial distribution of PUs in the region, we cast the localization task as a basis-pursuit denoising problem and introduce a reconstruction method based on a sparse recovery algorithm that discovers the locations of unknown PUs from the phase-difference measurements. We show that the ratio of the distance between the two antennas to the carrier-frequency wavelength is the critical parameter that makes localization feasible. We also propose a scheme for sensor motion design that maximizes the number of detectable PUs based on the mutual coherence property. Since the motion optimization problem is very hard to solve directly, we develop a simple geometric relaxation. Simulation results show that PISL can precisely recover the map of PUs with only a few measurements and reveal that the sensor motion path has a determining effect on localization accuracy. PISL is compared with a state-of-the-art technique based on adaptive beamforming, and the results show the superiority of PISL in localization accuracy.
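
Since the abstract casts localization as basis-pursuit denoising over a sparse spatial distribution of PUs, the generic form of that optimization (our notation, which may differ from the paper's) is:

```latex
% Generic basis-pursuit denoising (BPDN) formulation; notation assumed, not taken from the paper.
% y   -- stacked phase-difference measurements along the sensor trajectory
% A   -- dictionary whose j-th column is the predicted measurement sequence for a
%        source placed at grid cell j of the discretized region
% x   -- sparse activation vector; its support gives the estimated PU locations
% eps -- bound on the measurement noise
\begin{equation}
  \hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \; \|\mathbf{x}\|_{1}
  \quad \text{subject to} \quad \|\mathbf{A}\mathbf{x} - \mathbf{y}\|_{2} \le \varepsilon .
\end{equation}
```

In this reading, the sensor-motion design the authors propose corresponds to shaping the dictionary A so that its mutual coherence stays low, which is what governs how many PUs are recoverable.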


2021
Author(s): Samuel Cheyette, Shengyi Wu, Steven T. Piantadosi

People can identify the number of objects in small sets rapidly and without error but become increasingly noisy for larger sets. However, the cognitive mechanisms underlying this ubiquitous psychophysical pattern are poorly understood. We present a model of a limited-capacity visual system, optimized to individuate and remember the locations of objects in a scene, which gives rise to all key aspects of number psychophysics, including error-free small-number perception and scalar variability for larger numbers. We therefore propose that number psychophysics can be understood as an emergent property of primitive perceptual mechanisms, namely the process of identifying and representing individual objects in a scene. To test our theory, we ran two experiments: a change-localization task to measure participants' memory for the locations of objects (Experiment 1) and a numerical estimation task (Experiment 2). Our model accounts well for participants' performance in both experiments, despite only being optimized to efficiently encode where objects are present in a scene. Our results demonstrate that the key psychophysical features of numerical cognition do not arise from separate modules or capacities specific to number, but rather from lower-level constraints on perception that are manifested even in non-numerical tasks.

