Stimulus features that elicit activity in object-vector cells

2021 ◽  
Author(s):  
Sebastian O Andersson ◽  
Edvard I Moser ◽  
May-Britt Moser

Object-vector (OV) cells are cells in the medial entorhinal cortex (MEC) that track an animal's distance and direction to objects in the environment. Their firing fields are defined by vectorial relationships to free-standing 3-dimensional (3D) objects of a variety of identities and shapes. However, the natural world contains a panorama of objects, ranging from discrete 3D items to flat two-dimensional (2D) surfaces, and it remains unclear what are the most fundamental features of objects that drive vectorial responses. Here we address this question by systematically changing features of experimental objects. Using an algorithm that robustly identifies OV firing fields, we show that the cells respond to a variety of 2D surfaces, with visual contrast as the most basic visual feature to elicit neural responses. The findings suggest that OV cells use plain visual features as vectorial anchoring points, allowing vector-guided navigation to proceed in environments with few free-standing landmarks.

2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses for images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster - xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models for the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016) in contrast to the behavioural measures used in the study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt over the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.


2004 ◽  
Vol 05 (03) ◽  
pp. 313-327 ◽  
Author(s):  
Akihiro Miyakawa ◽  
Kaoru Sugita ◽  
Tomoyuki Ishida ◽  
Yoshitaka Shibata

In this paper, we propose a Kansei retrieval method based on the design patterns of traditional Japanese crafting objects, to provide users with their desired presentation space in a digital traditional Japanese crafting system. Quantitative visual feature values are extracted using Visual Pattern Image Coding (VPIC); these include the total number, frequency, dispersion rate, and deviation rate of the different edge types. The quantitative feature values for traditional Japanese crafting objects are registered in a multimedia database, and the relation between Kansei words and the visual features of the objects is analyzed through a questionnaire. The visual features are then compared with the quantitative feature values. Through this process, we can identify the relation between design pattern components and the VPIC edge types, and on that basis the Kansei retrieval method can be realized.
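The abstract names four per-edge-type statistics (total number, frequency, dispersion rate, deviation rate) computed from a VPIC-coded edge map. A minimal sketch of how such statistics could be tallied from a grid of edge-type labels is below; the function name and the exact definitions of "dispersion" are assumptions, since the paper's precise VPIC formulas are not reproduced in this abstract.

```python
from collections import Counter

def edge_type_features(edge_map):
    """Tally simple per-edge-type statistics from a 2D grid of
    edge-type labels (0 = no edge). Feature names follow the
    abstract's terminology loosely, not the exact VPIC definitions."""
    cells = [label for row in edge_map for label in row]
    counts = Counter(label for label in cells if label != 0)
    total = sum(counts.values())  # total number of edge cells
    features = {}
    for etype, n in counts.items():
        # frequency: share of this edge type among all edge cells
        freq = n / total
        # dispersion (assumed): fraction of grid rows containing this type
        rows_with = sum(1 for row in edge_map if etype in row)
        dispersion = rows_with / len(edge_map)
        features[etype] = {"count": n, "frequency": freq, "dispersion": dispersion}
    return total, features
```

Feature dictionaries like these could then be stored per object in the multimedia database and matched against Kansei-word profiles.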


2018 ◽  
Author(s):  
Tao He ◽  
Matthias Fritsche ◽  
Floris P. de Lange

Visual stability is thought to be mediated by predictive remapping of the relevant object information from its current, pre-saccadic location to its future, post-saccadic location on the retina. However, it is heavily debated whether and what feature information is predictively remapped during the pre-saccadic interval. Using an orientation adaptation paradigm, we investigated whether predictive remapping occurs for stimulus features and whether adaptation itself is remapped. We found strong evidence for predictive remapping of a stimulus presented shortly before saccade onset, but no remapping of adaptation. Furthermore, we establish that predictive remapping also occurs for stimuli that are not saccade targets, pointing toward a ‘forward remapping’ process operating across the whole visual field. Together, our findings suggest that predictive feature remapping of object information plays an important role in mediating visual stability.


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3555-3557

Displaying a genuine three-dimensional (3D) object with convincing depth information is a difficult and costly process, and representing a 3D scene without noise in the raw image is another challenge. With a refined viewing technique, depth measurements can be obtained easily, without requiring any special instrument. In this paper, we propose an edge-detection process for depth images based on image smoothing and morphological operations. The method uses median filtering, which is well known for its edge-preserving properties, and performs edge detection with the Canny edge detection algorithm. This approach detects edges robustly in depth images and can contribute to further applications of depth imagery.
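The pipeline described (edge-preserving median smoothing followed by Canny edge detection) can be sketched roughly as follows. This is a simplified stand-in, not the paper's implementation: the median filter is a plain 3×3 window, and a single gradient threshold replaces the full Canny stages (Gaussian smoothing, non-maximum suppression, hysteresis). In practice one would use OpenCV's `cv2.medianBlur` and `cv2.Canny` on the depth image.

```python
import statistics

def median_filter_3x3(img):
    """Edge-preserving smoothing: replace each interior pixel with the
    median of its 3x3 neighbourhood (border pixels are copied unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

def gradient_edges(img, threshold):
    """Heavily reduced stand-in for Canny: mark pixels whose horizontal
    plus vertical intensity difference exceeds a threshold."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 1
    return edges
```

The median step removes isolated depth-noise pixels without blurring the step between foreground and background, which is why the subsequent edge pass still finds a clean boundary.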


2020 ◽  
Vol 1 (1) ◽  
pp. 1-14
Author(s):  
Adhe Pandhu Dwi Prayogha ◽  
Mudafiq Riyan Pratama

The purpose of virtual reality is to enable the motor and cognitive sensory activity of a person in a digitally created artificial world, whether imaginary, symbolic, or simulating certain aspects of the real world [1]. This technology is applied to a medium for introducing the solar system, developed using the Luther method. The Luther method consists of six stages: Concept, Design, Material Collecting, Assembly, Testing, and Distribution. It has advantages over other methods because it includes a material-collecting stage, which is important in multimedia development, and its stages can be carried out in parallel or revisited [2]. The Assembly stage was implemented using the Unity engine and the Google VR SDK for Unity; the result is a virtual reality application that displays the solar system as 3-dimensional objects, with an explanation available for each object. Black-box testing was then performed on a variety of Android devices with different specifications. The results show that the Luther method is well structured and works well for developing multimedia applications. Testing also showed that this Android-based virtual reality application cannot run on devices without a gyroscope sensor; it can run on devices with as little as 1 GB of RAM, but the rendering of 3D objects is slow on such devices.


Author(s):  
Tessa Maria Guazon

Junyee, or Luis Yee, Jr., is a Filipino artist known for his large-scale and site-specific art installations, which reflect a deep awareness of ecology and environmental issues. He was born in the Philippine island of Agusan del Norte. Trained as a sculptor, Junyee has pioneered the use of materials readily available from nature for expansive, site-specific works that incorporate ephemeral material within specific locations, redefining site and space in the process. His inventive use of indigenous material—which he assembles into sprawling constellations of forms, swarms of objects, or networks of points which function like maps—conveys a concentrated appreciation of nature. His works Wood Things (1981) and Spaces and Objects (1986), for example, are sprawling assemblies of natural forms. Junyee’s installations bring the precarious state of our natural world to the fore; by incorporating natural objects into his art, he exhibits both resourcefulness and acute awareness of the finite state of natural resources. Junyee’s approach to art is characterized by a keen sense of the environment and astute knowledge of materials. Whether paintings composed with soot; free-standing and outdoor sculptures in wood or cast concrete; or sprawling site installations, Junyee’s work exhibits a feeling for form and inherent awareness of the ways art carves new spaces of experience.


2013 ◽  
Vol 683 ◽  
pp. 801-804 ◽  
Author(s):  
Ying Hou ◽  
Gui Cai Wang

Visual feature extraction is the basis of Mars surface topography reconstruction. We investigated in depth the extraction of visual features from Mars surface images in unstructured Mars surface environments and, on this basis, present a visual feature extraction algorithm for Mars surface images. Experimental results show that the algorithm adapts well to illumination changes and rotational transformations of Mars surface images, while extracting abundant visual features.


2019 ◽  
Vol 5 (7) ◽  
pp. eaaw4358 ◽  
Author(s):  
Philip A. Kragel ◽  
Marianne C. Reddan ◽  
Kevin S. LaBar ◽  
Tor D. Wager

Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
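The decoding step described in the abstract (mapping an image to one of 11 emotion categories) ends, in any such classifier, with a probability readout over the category scores. A generic sketch of that final stage is below; the category labels and score values are placeholders, since the paper's network architecture and label set are not given in this abstract.

```python
import math

def softmax(scores):
    """Convert raw category scores into probabilities (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decode_emotion(scores, labels):
    """Return the top-probability emotion label and its probability."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]
```

The same readout applies whether the input scores come from the network's final layer (image decoding) or from a model fit to visual-cortex activity patterns (fMRI decoding).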


2020 ◽  
Vol 20 (10) ◽  
pp. 1685-1691
Author(s):  
Guy Guenthner ◽  
Alexander Eddy ◽  
Jonathan Sembrano ◽  
David W. Polly ◽  
Christopher T. Martin

2020 ◽  
Vol 34 (07) ◽  
pp. 11547-11554
Author(s):  
Bo Liu ◽  
Qiulei Dong ◽  
Zhanyi Hu

Recently, many zero-shot learning (ZSL) methods have focused on learning discriminative object features in an embedding feature space; however, the distributions of the unseen-class features learned by these methods are prone to partial overlap, resulting in inaccurate object recognition. Addressing this problem, we propose a novel adversarial network to synthesize compact semantic visual features for ZSL, consisting of a residual generator, a prototype predictor, and a discriminator. The residual generator generates the visual feature residual, which is integrated with a visual prototype predicted via the prototype predictor for synthesizing the visual feature. The discriminator distinguishes the synthetic visual features from the real ones extracted from an existing categorization CNN. Since the generated residuals are generally numerically much smaller than the distances among all the prototypes, the distributions of the unseen-class features synthesized by the proposed network are less overlapped. In addition, considering that the visual features from categorization CNNs are generally inconsistent with their semantic features, a simple feature selection strategy is introduced for extracting more compact semantic visual features. Extensive experimental results on six benchmark datasets demonstrate that our method achieves significantly better performance than existing state-of-the-art methods, by ∼1.2-13.2% in most cases.
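The core geometric argument (residuals numerically much smaller than inter-prototype distances imply non-overlapping class distributions) can be illustrated with a toy sketch. The real residual generator is a trained adversarial network; here we simply sample small Gaussian residuals around fixed prototypes, which is an assumption made purely for illustration.

```python
import random

def synthesize_features(prototypes, n_per_class, residual_scale=0.1, seed=0):
    """Toy version of the residual idea: each synthetic feature is a class
    prototype plus a small generated residual. When residuals are much
    smaller than inter-prototype distances, class distributions stay
    well separated."""
    rng = random.Random(seed)
    samples = []
    for label, proto in enumerate(prototypes):
        for _ in range(n_per_class):
            feat = [p + rng.gauss(0, residual_scale) for p in proto]
            samples.append((label, feat))
    return samples

def nearest_prototype(feat, prototypes):
    """Classify a feature by its nearest prototype (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(prototypes)), key=lambda i: dist2(feat, prototypes[i]))
```

With residuals at scale 0.1 and prototypes several units apart, nearest-prototype classification of the synthetic features is essentially error-free, which is the separation property the paper's network is trained to achieve adversarially.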

