Dynamic control of hippocampal spatial coding resolution by local visual cues

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Romain Bourboulou ◽  
Geoffrey Marti ◽  
François-Xavier Michon ◽  
Elissa El Feghaly ◽  
Morgane Nouguier ◽  
...  

The ability to flexibly navigate an environment relies on a hippocampal-dependent cognitive map. External space can be internally mapped at different spatial resolutions. However, whether hippocampal spatial coding resolution can rapidly adapt to local features of an environment remains unclear. To explore this possibility, we recorded the firing of hippocampal neurons in mice navigating virtual reality environments with or without local visual cues (virtual 3D objects) embedded in specific locations. Virtual objects enhanced spatial coding resolution in their vicinity, with a higher proportion of place cells, smaller place fields, and increased spatial selectivity and stability. This effect was highly dynamic upon object manipulation. Objects also improved temporal coding resolution through enhanced theta phase precession and theta-timescale spike coordination. We propose that this fast adaptation of hippocampal spatial coding resolution to local features of an environment could be relevant for large-scale navigation.
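
The abstract quantifies spatial coding resolution through place-field metrics such as spatial selectivity. As a minimal sketch of one standard metric from this literature, the snippet below computes the Skaggs spatial information score (bits per spike) from a firing-rate map and an occupancy map; the track length, bin size, and Gaussian toy fields are illustrative assumptions, not data or parameters from the paper.

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Skaggs spatial information (bits per spike) of a firing-rate map,
    given the animal's occupancy over the same spatial bins."""
    p = occupancy / occupancy.sum()            # occupancy probability per bin
    mean_rate = np.sum(p * rate_map)           # overall mean firing rate
    valid = (rate_map > 0) & (p > 0)           # log2 is undefined at zero rate
    r, q = rate_map[valid], p[valid]
    return np.sum(q * r * np.log2(r / mean_rate)) / mean_rate

# Toy comparison (assumed 2 m track, 2 cm bins, uniform occupancy):
# a narrower field carries more information per spike.
x = np.linspace(0.0, 2.0, 100)
occupancy = np.ones_like(x)
broad  = 5.0 * np.exp(-(x - 1.0) ** 2 / (2 * 0.30 ** 2))
narrow = 5.0 * np.exp(-(x - 1.0) ** 2 / (2 * 0.08 ** 2))
print(spatial_information(broad, occupancy))   # lower bits/spike
print(spatial_information(narrow, occupancy))  # higher bits/spike
```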

2018 ◽  
Author(s):  
Romain Bourboulou ◽  
Geoffrey Marti ◽  
François-Xavier Michon ◽ 
Morgane Nouguier ◽  
David Robbe ◽  
...  

The ability to flexibly navigate an environment relies on a hippocampal-dependent internal cognitive map. Explored space can be internally mapped at different spatial resolutions. However, whether hippocampal spatial coding resolution can be dynamically controlled within and between environments is unknown. In this work we recorded the firing of hippocampal principal cells in mice navigating virtual reality environments that differed in the presence of local visual cues (virtual 3D objects). Objects improved spatial coding resolution globally, with a higher proportion of place cells, smaller place fields, and increased spatial selectivity and stability. Spatial coding resolution was notably enhanced locally near objects and could be rapidly tuned by manipulating them. In the presence of objects, place cells also displayed improved theta phase precession and theta-timescale spike coordination. These results suggest that local visual cues can rapidly tune the resolution of the hippocampal mapping system within and between environments.
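
Smaller place fields are one of the resolution improvements reported here. A common way to measure field size is to threshold the rate map at a fraction of its peak and take the longest contiguous run of supra-threshold bins; the sketch below illustrates that criterion. The 20% threshold and 2 cm bin size are assumed conventions for illustration, not the paper's analysis parameters.

```python
import numpy as np

def place_field_width(rate_map, bin_size_cm=2.0, thresh_frac=0.2):
    """Field size as the longest contiguous run of bins whose rate
    exceeds thresh_frac * peak rate (an assumed, common criterion)."""
    above = rate_map >= thresh_frac * rate_map.max()
    longest = run = 0
    for flag in above:
        run = run + 1 if flag else 0
        longest = max(longest, run)
    return longest * bin_size_cm

# Toy example: same peak rate, but a sharper field near a virtual object.
x = np.linspace(0.0, 2.0, 100)
far_from_object = 5.0 * np.exp(-(x - 1.0) ** 2 / (2 * 0.30 ** 2))
near_object     = 5.0 * np.exp(-(x - 1.0) ** 2 / (2 * 0.08 ** 2))
print(place_field_width(far_from_object))  # wider field (cm)
print(place_field_width(near_object))      # narrower field (cm)
```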


2013 ◽  
Author(s):  
Zahra Aghajan ◽  
Lavanya Acharya ◽  
Jesse Cushman ◽  
Cliff Vuong ◽  
Jason Moore ◽  
...  

Dorsal hippocampal neurons provide an allocentric map of space, characterized by three key properties. First, their firing is spatially selective, termed a rate code. Second, as animals traverse place fields, neurons sustain elevated firing rates for long periods; this property has received little attention. Third, the theta phase of spikes within this sustained activity varies with the animal's location, termed phase precession or a temporal code. The precise relationship between these properties and the mechanisms governing them are not understood, although distal visual cues (DVC) are thought to be sufficient to reliably elicit them. Hence, we measured rat CA1 neurons' activity during random foraging in two-dimensional virtual reality (VR), where only DVC provide consistent allocentric location information, and compared it with their activity in the real world (RW). Surprisingly, we found little spatial selectivity in VR. This is in sharp contrast to the robust spatial selectivity commonly seen in one-dimensional RW and VR, or in two-dimensional RW. Despite this, neurons in VR generated approximately two-second-long phase-precessing spike sequences, termed "hippocampal motifs". Motifs, and "motif fields", the aggregation of all motifs of a neuron, had qualitatively similar properties in RW and VR, including theta-scale temporal coding, but motifs were far less spatially localized in VR. These results suggest that intrinsic network mechanisms generate temporally coded hippocampal motifs, which can be dissociated from their spatial selectivity. Further, DVC alone are insufficient to localize motifs spatially and generate a robust rate code.
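
Phase precession, the temporal code discussed here, is conventionally quantified with a circular-linear fit of spike theta phase against position within the field. The sketch below follows the widely used approach of Kempter et al. (2012), choosing the slope that maximizes the mean resultant length of the residual phases; the slope search range and the synthetic spikes are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def phase_precession_fit(positions, phases, n_slopes=200):
    """Circular-linear fit of spike theta phase vs. normalized position
    (after Kempter et al., 2012): pick the slope that maximizes the
    mean resultant length of the residual phases."""
    slopes = np.linspace(-2 * np.pi, 0.0, n_slopes)  # precession: negative slopes
    R = np.abs([np.mean(np.exp(1j * (phases - s * positions))) for s in slopes])
    best = np.argmax(R)
    return slopes[best], R[best]

# Synthetic precessing spikes: phase advances by ~1.5*pi across the field.
rng = np.random.default_rng(0)
pos = rng.random(300)                                 # position in field, 0..1
ph = (np.pi - 1.5 * np.pi * pos
      + 0.4 * rng.standard_normal(300)) % (2 * np.pi)
slope, strength = phase_precession_fit(pos, ph)
print(slope / np.pi, strength)                        # slope near -1.5*pi
```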



2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Peiran Zhang ◽  
Joseph Rufo ◽  
Chuyi Chen ◽  
Jianping Xia ◽  
Zhenhua Tian ◽  
...  

The ability to precisely manipulate nano-objects on a large scale can enable the fabrication of materials and devices with tunable optical, electromagnetic, and mechanical properties. However, the dynamic, parallel manipulation of nanoscale colloids and materials remains a significant challenge. Here, we demonstrate acoustoelectronic nanotweezers, which combine the precision and robustness afforded by electronic tweezers with the versatility and large-field dynamic control granted by acoustic tweezing techniques, to enable the massively parallel manipulation of sub-100 nm objects with excellent versatility and controllability. Using this approach, we demonstrated the complex patterning of various nanoparticles (e.g., DNAs, exosomes, ~3 nm graphene flakes, ~6 nm quantum dots, ~3.5 nm proteins, and ~1.4 nm dextran), fabricated macroscopic materials with nano-textures, and performed high-resolution, single-nanoparticle manipulation. Various nanomanipulation functions, including transportation, concentration, orientation, pattern-overlaying, and sorting, have also been achieved using a simple device configuration. Altogether, acoustoelectronic nanotweezers overcome existing limitations in nanomanipulation and hold great potential for a variety of applications in the fields of electronics, optics, condensed matter physics, metamaterials, and biomedicine.


2017 ◽  
Vol 14 (4) ◽  
pp. 172988141770907 ◽  
Author(s):  
Hanbo Wu ◽  
Xin Ma ◽  
Zhimeng Zhang ◽  
Haibo Wang ◽  
Yibin Li

Human daily activity recognition has been an active topic in the field of computer vision for decades. Despite best efforts, activity recognition in naturally uncontrolled settings remains a challenging problem. Recently, by being able to perceive depth and visual cues simultaneously, RGB-D cameras have greatly boosted the performance of activity recognition. However, due to practical difficulties, the publicly available RGB-D data sets are not sufficiently large for benchmarking when considering the diversity of their activities, subjects, and backgrounds. This severely limits the applicability of sophisticated learning-based recognition approaches. To address the issue, this article provides a large-scale RGB-D activity data set created by merging five public RGB-D data sets that differ from each other in many aspects, such as the length of actions, the nationality of subjects, or the camera angles. The merged data set comprises 4528 samples depicting 7 action categories (up to 46 subcategories) performed by 74 subjects. To verify how challenging the data set is, three feature representation methods are evaluated: depth motion maps, the spatiotemporal depth cuboid similarity feature, and curvature scale space. Results show that the merged large-scale data set is more realistic and challenging and therefore more suitable for benchmarking.
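
Of the three evaluated descriptors, depth motion maps are the simplest to illustrate: each map accumulates absolute inter-frame differences of the depth video projected onto a plane. The sketch below computes the front-view map only; the side and top views are built analogously from projected frames, and the noise threshold is an illustrative assumption.

```python
import numpy as np

def dmm_front(depth_video, noise_thresh=0.0):
    """Front-view Depth Motion Map: per-pixel sum of absolute differences
    between consecutive depth frames. depth_video has shape (T, H, W)."""
    diffs = np.abs(np.diff(depth_video.astype(float), axis=0))  # (T-1, H, W)
    if noise_thresh > 0:
        diffs = np.where(diffs > noise_thresh, diffs, 0.0)      # suppress sensor noise
    return diffs.sum(axis=0)

# Toy clip: a depth blob drifting across the frame leaves a motion trace.
clip = np.zeros((10, 64, 64))
for t in range(10):
    clip[t, 20:30, 5 + 4 * t : 15 + 4 * t] = 1.0
print(dmm_front(clip).shape)   # (64, 64) motion-energy image
```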


1998 ◽  
Vol 32 (2) ◽  
pp. 90-109 ◽  
Author(s):  
Warren B. Powell ◽  
Tassio A. Carvalho

The history of additive manufacturing started in the 1980s in Japan. Stereolithography was invented first, in 1983. After that, tens of other techniques were invented under the common name of 3D printing. When stereolithography was invented, rapid prototyping did not yet exist. Three years later a new technique was invented: selective laser sintering (SLS). The first commercial SLS system arrived in 1990. At the end of the 20th century the first bio-printer was developed, and using biomaterials a kidney was first 3D printed. Ten years later, the first 3D printer kit was launched on the market. Today we have large-scale printers that can print large 3D objects such as cars. 3D printing will be used for printing everything, everywhere, and the list of questions about its pros and cons grows every day.


Sensor Review ◽  
2020 ◽  
Vol 40 (3) ◽  
pp. 311-328
Author(s):  
Farid Esmaeili ◽  
Hamid Ebadi ◽  
Mohammad Saadatseresht ◽  
Farzin Kalantary

Purpose
Displacement measurement in large-scale structures (such as excavation walls) is one of the most important applications of close-range photogrammetry, in which achieving high precision requires extracting and accurately matching local features from convergent images. The purpose of this study is to introduce a new multi-image pointing (MIP) algorithm based on the characteristics of the geometric model generated from the initial matching. This self-adaptive algorithm is used to correct and improve the accuracy of the positions extracted from local features in the convergent images.

Design/methodology/approach
The new MIP algorithm, based on the geometric characteristics of the model generated from the initial matching, corrects the extracted image coordinates in a self-adaptive way. The unique characteristics of the proposed algorithm are that the position correction is accomplished through continuous interaction between the 3D model coordinates and the image coordinates, and that it has minimal dependency on the geometric and radiometric nature of the images. After the initial feature extraction and application of the MIP algorithm, the image coordinates are ready for use in the displacement measurement process. The combined photogrammetry displacement adjustment (CPDA) algorithm is used for displacement measurement between two epochs. Micro-geodesy, target-based photogrammetry, and the proposed MIP method were applied in a displacement measurement project for an excavation wall in the Velenjak area of Tehran, Iran, to evaluate the proposed algorithm's performance. According to the results, a point geo-coordinate measurement accuracy of 8 mm and a displacement accuracy of 13 mm could be achieved using the MIP algorithm. Beyond the comparison with micro-geodesy, the results were corroborated by the cracks that appeared behind the project's wall. Given the maximum allowable displacement of 4 cm in this project, the MIP algorithm provided the accuracy required to determine critical displacement.

Findings
Evaluation of the results demonstrated that an accuracy of 8 mm in determining the position of points on a feature and an accuracy of 13 mm in measuring the displacement of the excavation walls could be achieved through precise positioning of local features on images using the MIP algorithm. The proposed algorithm can be used in any application that requires high accuracy in determining the 3D coordinates of local features in close-range photogrammetry.

Originality/value
Advantages of the proposed MIP photogrammetry algorithm, including the ease of obtaining observations and the use of local features on the structure in the images rather than installed artificial targets, make it an effective replacement for micro-geodesy and instrumentation methods. In addition, the proposed MIP method is superior to target-based photogrammetry because it does not require artificial target installation and protection. Moreover, in any photogrammetric application that needs exact point coordinates on a feature, the proposed algorithm can be very effective in achieving the required accuracy.
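
The abstract describes the MIP algorithm only at a high level: image coordinates are corrected through continuous interaction between the 3D model coordinates and the image coordinates. As a toy sketch of that kind of interaction, assuming known camera projection matrices, the code below triangulates a point by linear DLT, reprojects it into every image, and applies a damped pull of each measured coordinate toward its reprojection. The damping step and iteration count are illustrative assumptions; this is not the published algorithm.

```python
import numpy as np

def triangulate_dlt(cameras, observations):
    """Linear (DLT) triangulation of one 3D point from its image
    observations. cameras: list of 3x4 projection matrices;
    observations: list of (u, v) pixel coordinates."""
    rows = []
    for P, (u, v) in zip(cameras, observations):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]

def refine_image_points(cameras, observations, n_iter=5, step=0.5):
    """Pull each measured image coordinate toward the reprojection of
    the triangulated point; step and n_iter are illustrative choices."""
    uv = [np.asarray(o, dtype=float) for o in observations]
    for _ in range(n_iter):
        X = np.append(triangulate_dlt(cameras, uv), 1.0)  # homogeneous point
        for i, P in enumerate(cameras):
            proj = P @ X
            uv[i] += step * (proj[:2] / proj[2] - uv[i])  # damped correction
    return uv
```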


2020 ◽  
Vol 12 (23) ◽  
pp. 3978
Author(s):  
Tianyou Chu ◽  
Yumin Chen ◽  
Liheng Huang ◽  
Zhiqiang Xu ◽  
Huangyuan Tan

Street view image retrieval aims to estimate an image's location by querying the nearest-neighbor images depicting the same scene from a large-scale reference dataset. Query images usually carry no location information and are represented by features used to search for similar results. The deep local features (DELF) method shows great performance in the landmark retrieval task, but it extracts so many features that the feature file becomes too large to load into memory when building the feature index. Memory is limited, and simply removing a portion of the features causes a large loss of retrieval precision. Therefore, this paper proposes a grid feature-point selection method (GFS) to reduce the number of feature points per image while minimizing the precision loss. Convolutional Neural Networks (CNNs) are constructed to extract dense features, and an attention module is embedded into the network to score them. GFS divides the image into a grid and selects the features with the highest scores in each local region. Product quantization and an inverted index are used to index the image features and improve retrieval efficiency. The retrieval performance of the method is tested on a large-scale Hong Kong street view dataset, and the results show that GFS reduces feature points by 32.27–77.09% compared with the raw features. In addition, GFS achieves 5.27–23.59% higher precision than other methods.
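
The core of GFS is easy to sketch: partition the image into a grid and keep only the top-scoring DELF-style features per cell, so the index shrinks while spatial coverage is preserved. In the snippet below, the 8x8 grid and top-4-per-cell budget are illustrative assumptions; the paper's exact selection rule may differ.

```python
import numpy as np

def grid_feature_selection(keypoints, scores, image_hw, grid=(8, 8), top_k=4):
    """Keep the top_k highest-scoring features in each grid cell.
    keypoints: (N, 2) array of (x, y); scores: (N,) attention scores."""
    h, w = image_hw
    rows = np.minimum((keypoints[:, 1] * grid[0] // h).astype(int), grid[0] - 1)
    cols = np.minimum((keypoints[:, 0] * grid[1] // w).astype(int), grid[1] - 1)
    cells = rows * grid[1] + cols
    keep = []
    for c in np.unique(cells):
        idx = np.flatnonzero(cells == c)
        keep.extend(idx[np.argsort(scores[idx])[::-1][:top_k]])
    return np.sort(np.asarray(keep))

# Toy usage: 1000 random keypoints reduced to at most 8 * 8 * 4 = 256.
rng = np.random.default_rng(0)
kp = rng.random((1000, 2)) * [640, 480]        # (x, y) in a 640x480 image
sc = rng.random(1000)                          # attention scores
kept = grid_feature_selection(kp, sc, image_hw=(480, 640))
print(len(kept))                               # <= 256 features retained
```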

