Effect of visual image features on neural activities: An fMRI study

Author(s): K. Kato, O. Miura, A. Shikoda, K. Sugawara, T. Kuroki, et al.
eNeuro, 2017, Vol 4 (5), pp. ENEURO.0183-17.2017
Author(s): Honami Sakata, Kosuke Itoh, Yuji Suzuki, Katsuki Nakamura, Masaki Watanabe, et al.
2021, Vol 168, pp. S206
Author(s): Shuyue Xu, Gan Huang, Linling Li, Li Zhang, Zhiguo Zhang, et al.
2008, Vol 433 (3), pp. 194-198
Author(s): Kwok-Keung Leung, Tatia M.C. Lee, Zhuangwei Xiao, Zhaoxin Wang, John X.X. Zhang, et al.
2016, Vol 36 (1), pp. 24-43
Author(s): Dushyant Rao, Mark De Deuge, Navid Nourani-Vatani, Stefan B. Williams, Oscar Pizarro

Autonomous vehicles are often tasked to explore unseen environments, aiming to acquire and understand large amounts of visual image data and other sensory information. In such scenarios, remote sensing data may be available a priori, and can help to build a semantic model of the environment and plan future autonomous missions. In this paper, we introduce two multimodal learning algorithms to model the relationship between visual images taken by an autonomous underwater vehicle during a survey and remotely sensed acoustic bathymetry (ocean depth) data that is available prior to the survey. We present a multi-layer architecture to capture the joint distribution between the bathymetry and visual modalities. We then propose an extension based on gated feature learning models, which allows the model to cluster the input data in an unsupervised fashion and predict visual image features using just the ocean depth information. Our experiments demonstrate that multimodal learning improves semantic classification accuracy regardless of which modalities are available at classification time, allows for unsupervised clustering of either or both modalities, and can facilitate mission planning by enabling class-based or image-based queries.
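The paper's gated feature learning models are not spelled out in the abstract. As a rough illustration of the cross-modal prediction idea — inferring visual image features from bathymetry alone — here is a minimal sketch using synthetic data and plain ridge regression as a linear stand-in for the learned model; all names and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: bathymetry (depth) feature vectors and co-located visual
# feature vectors. In the paper these come from AUV surveys and remote
# sensing; here they are synthetic stand-ins with a known linear link.
n, d_depth, d_vis = 200, 8, 16
W_true = rng.normal(size=(d_depth, d_vis))
X_depth = rng.normal(size=(n, d_depth))
Y_vis = X_depth @ W_true + 0.1 * rng.normal(size=(n, d_vis))

# Ridge regression: predict visual features from depth alone
# (a linear stand-in for the gated multimodal model described above).
lam = 1.0
W = np.linalg.solve(X_depth.T @ X_depth + lam * np.eye(d_depth),
                    X_depth.T @ Y_vis)
Y_pred = X_depth @ W

# Fraction of visual-feature variance explained by depth alone.
r2 = 1 - np.sum((Y_vis - Y_pred) ** 2) / np.sum((Y_vis - Y_vis.mean(0)) ** 2)
print(round(r2, 2))
```

With a genuine shared structure between the modalities, even this linear predictor recovers most of the visual-feature variance; the gated models in the paper additionally cluster the inputs and capture non-linear structure.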


Author(s): E. Sanchez Castillo, D. Griffiths, J. Boehm

Abstract. This paper proposes a semantic segmentation pipeline for terrestrial laser scanning data. We achieve this by combining co-registered RGB and 3D point cloud information. Semantic segmentation is performed by applying a pre-trained, off-the-shelf 2D convolutional neural network to a set of projected images extracted from a panoramic photograph. This allows the network to exploit the visual image features learnt by state-of-the-art segmentation models trained on very large datasets. The study focuses on adopting the spherical information from the laser capture and assessing the results using image classification metrics. The results demonstrate that the approach is a promising alternative for asset identification in laser scanning data. We demonstrate performance comparable to spherical machine learning frameworks while avoiding the labelling and training effort such approaches require.
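The projection step — extracting pinhole-style images from a panoramic capture so a standard 2D network can consume them — can be sketched roughly as follows. This assumes an equirectangular panorama and nearest-neighbour sampling, which may differ from the authors' actual pipeline; `perspective_from_equirect` and its parameters are illustrative names, not the paper's API.

```python
import numpy as np

def perspective_from_equirect(pano, yaw, pitch, fov_deg, out_w, out_h):
    """Sample a pinhole-camera view from an equirectangular panorama
    (nearest-neighbour lookup; a sketch, not the paper's exact setup)."""
    H, W = pano.shape[:2]
    # Focal length from the horizontal field of view.
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    x, y = np.meshgrid(xs, ys)
    # Ray directions in the camera frame, then rotate by pitch and yaw.
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ Rx.T @ Ry.T
    # Back to spherical coordinates, then to panorama pixel indices.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])    # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))   # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[v, u]

# A flat-colour panorama maps to a flat-colour perspective view.
pano = np.full((64, 128, 3), 7, dtype=np.uint8)
view = perspective_from_equirect(pano, yaw=0.3, pitch=0.1,
                                 fov_deg=90, out_w=32, out_h=24)
print(view.shape)  # (24, 32, 3)
```

Each such view can then be fed to the pre-trained 2D segmentation network, and the per-pixel labels mapped back to the co-registered point cloud.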


2020
Author(s): Alexandra C. Schmid, Pascal Barla, Katja Doerschner

Abstract. There is a growing body of work investigating the visual perception of material properties like gloss, yet practically nothing is known about how the brain recognises different material classes like plastic, pearl, satin, and steel, nor the precise relationship between material properties like gloss and perceived material class. We report a series of experiments that show that parametrically changing reflectance parameters leads to qualitative changes in material appearance beyond those expected by the reflectance function used. We measure visual (image) features that predict these changes in appearance, and causally manipulate these features to confirm their role in perceptual categorisation. Furthermore, our results suggest that the same visual features underlie both material recognition and surface gloss perception. However, the predictiveness of each feature to perceived gloss changes with material category, suggesting that the pockets of feature space occupied by different material classes affect the processing of those very features when estimating surface glossiness. Our results do not support a traditional feedforward view that assumes that material perception proceeds from low-level image measurements, to mid-level estimates of surface properties, to high-level material classes, nor the idea that material properties like gloss and material class are simultaneously “read out” from visual gloss features. Instead, we suggest that the perception and neural processing of material properties like surface gloss should be considered in the context of material recognition.


2021
Author(s): Rohit Raja, Sandeep Kumar, Shilpa Choudhary, Hemlata Dalmia

Abstract. The number of images on digital platforms and in digital image databases is increasing rapidly. Users require image retrieval, and searching such enormous databases effectively is a challenging task. Content-based image retrieval (CBIR) algorithms mainly consider visual image features such as colour, texture, and shape. Non-visual features also play a significant role in image retrieval, mainly for security concerns, and the selection of image features is an essential issue in CBIR. According to current CBIR studies, performance remains one of the main challenges in image retrieval. To address this gap, a new CBIR method is proposed that uses histogram of gradient (HOG), dominant color descriptor (DCD), and hue moment (HM) features. This work makes in-depth use of colour and shape-texture features for CBIR. HOG is used to extract texture features. DCD on RGB and HSV is used to improve efficiency and computation. A neural network (NN) is used to extract the image features, which improves computation on the Corel dataset. The experimental results are evaluated on the standard Corel-1k and Corel-5k benchmark datasets, and the outcomes illustrate that the proposed CBIR is efficient compared with other state-of-the-art image retrieval methods. Intensive analysis showed that the proposed work achieves better precision, recall, and accuracy.
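As a rough sketch of the kinds of descriptors named above — a global gradient-orientation histogram standing in for full HOG, and a joint colour histogram standing in for DCD — the following is illustrative only: real HOG uses cells, blocks, and block normalisation, and the paper's exact formulation may differ.

```python
import numpy as np

def hog_like(gray, n_bins=9):
    """Coarse histogram-of-gradients descriptor over the whole image
    (a sketch; real HOG uses cells, blocks, and normalisation)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-8)

def color_hist(rgb, bins=4):
    """Joint RGB colour histogram, a simple stand-in for DCD features."""
    h, _ = np.histogramdd(rgb.reshape(-1, 3), bins=(bins,) * 3,
                          range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

# Retrieval then reduces to nearest-neighbour search on the
# concatenated descriptor vectors of query and database images.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3))
query = np.concatenate([hog_like(img.mean(-1)), color_hist(img)])
print(query.shape)  # (73,) — 9 orientation bins + 4*4*4 colour bins
```

Both histograms are normalised so that descriptors from images of different sizes remain comparable under a simple distance metric.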


2012, Vol 24 (2), pp. 496-506
Author(s): Hiroyuki Tsubomi, Takashi Ikeda, Takashi Hanakawa, Nobuyuki Hirose, Hidenao Fukuyama, et al.

Recent neuroimaging evidence indicates that visual consciousness of objects is reflected in activation of the lateral occipital cortex as well as the frontal and parietal cortex. However, most previous studies used behavioral paradigms in which attention raised or enhanced visual consciousness (visibility or recognition performance). This co-occurrence made it difficult to determine whether an observed cortical activation is related to visual consciousness or to attention. The present fMRI study investigated the dissociability of the neural activations underlying these two cognitive phenomena. To this end, we used a visual backward masking paradigm in which directing attention could either enhance or reduce object visibility. The participants' task was to report the level of subjective visibility of a briefly presented target object. The target was presented at the center with four flankers, which were followed by the same number of masks. Behavioral results showed that attention to the flankers enhanced target visibility, whereas attention to the masks attenuated it. The fMRI results showed that the occipito-temporal sulcus increased activation in the attend-flankers condition compared with the attend-masks condition, and occipito-temporal sulcus activation levels positively correlated with target visibility in both attentional conditions. On the other hand, the inferior frontal gyrus and the intraparietal sulcus increased activation in both the attend-flankers and attend-masks conditions compared with an attend-neither condition, and these activation levels were independent of target visibility. Taken together, the present results show a clear dissociation in neural activities between conscious visibility and attention.


2021, Vol 11 (1)
Author(s): Taeyang Yang, Ji-Hyun Kim, Junsuk Kim, Sung-Phil Kim

Abstract. The present study aims to investigate the functional involvement of brain areas in consumers’ evaluation of brand extension, which refers to the use of a well-established brand to launch new offerings. During functional magnetic resonance imaging (fMRI) scanning, participants viewed a beverage brand name followed by an extension goods name selected from the beverage or household appliance categories. They then responded whether the given brand extension was acceptable. Both acceptability responses and reaction times revealed a noticeable pattern: participants responded to acceptable stimuli more carefully. General linear model (GLM) analyses revealed the involvement of insular activity in brand extension evaluation. In particular, insular activity was lateralized according to valence. Furthermore, its activity could explain behavioral responses in a parametric modulation model. Based on these results, we speculate that insula activity is relevant to emotional processing. Finally, we divided neural activities during brand extension evaluation into separate clusters using a hierarchical clustering-based connectivity analysis. Excluding two clusters related to sensorimotor functions for behavioral responses, the remaining cluster, including the bilateral insula, likely reflected brand extension assessment. Hence, we speculate that consumers’ brand extension evaluation may involve emotional processes, manifested as insular activity.

