Perceptual Similarity: Recently Published Documents

TOTAL DOCUMENTS: 277 (FIVE YEARS: 68)
H-INDEX: 25 (FIVE YEARS: 3)

2021
Author(s): Emily Weichart, Daniel Evans, Matthew Galdo, Giwon Bahg, Brandon Turner

In order to accurately categorize novel items, humans learn to selectively attend to stimulus dimensions that are most relevant to the task. Models of category learning describe the interconnected cognitive processes that contribute to selective attention as observations of stimuli and category feedback are progressively acquired. The Adaptive Attention Representation Model (AARM), for example, provides an account whereby categorization decisions are based on the perceptual similarity of a new stimulus to stored exemplars, and dimension-wise attention is updated on every trial in the direction of a feedback-based error gradient. As such, attention modulation as described by AARM requires interactions among orienting, visual perception, memory retrieval, error monitoring, and goal maintenance in order to facilitate learning across trials. The current study explored the neural bases of attention mechanisms using quantitative predictions from AARM to analyze behavioral and fMRI data collected while participants learned novel categories. GLM analyses revealed patterns of BOLD activation in the parietal cortex (orienting), visual cortex (perception), medial temporal lobe (memory retrieval), basal ganglia (error monitoring), and prefrontal cortex (goal maintenance) that covaried with the magnitude of model-predicted attentional tuning. Results are consistent with AARM’s specification of attention modulation as a dynamic property of distributed cognitive systems.
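To make the attention-updating idea concrete, below is a minimal numerical sketch of an exemplar-based categorization step with dimension-wise attention nudged along a feedback-based error gradient, in the spirit of AARM. The similarity kernel, learning rate, and finite-difference gradient are illustrative assumptions, not the model's exact equations.

import numpy as np

def exemplar_choice_probs(stimulus, exemplars, labels, attention, c=1.0):
    # Attention-weighted city-block distance from the stimulus to each stored exemplar
    dists = np.sum(attention * np.abs(exemplars - stimulus), axis=1)
    sims = np.exp(-c * dists)  # exponential similarity kernel (assumed form)
    cats = np.unique(labels)
    evidence = np.array([sims[labels == k].sum() for k in cats])
    return cats, evidence / evidence.sum()

def update_attention(stimulus, exemplars, labels, attention, true_cat, lr=0.1, eps=1e-4):
    # Nudge each attention weight along a finite-difference error gradient,
    # where error = 1 - P(correct category), then renormalize the weights.
    grad = np.zeros_like(attention)
    for d in range(attention.size):
        for sign in (+1, -1):
            a = attention.copy()
            a[d] += sign * eps
            cats, probs = exemplar_choice_probs(stimulus, exemplars, labels, a)
            error = 1.0 - probs[list(cats).index(true_cat)]
            grad[d] += sign * error / (2 * eps)
    new_attention = np.clip(attention - lr * grad, 1e-6, None)
    return new_attention / new_attention.sum()

# Example trial: two-dimensional stimuli, two categories (toy values)
exemplars = np.array([[0.2, 0.8], [0.3, 0.9], [0.8, 0.1], [0.9, 0.2]])
labels = np.array([0, 0, 1, 1])
attention = np.array([0.5, 0.5])
attention = update_attention(np.array([0.25, 0.85]), exemplars, labels, attention, true_cat=0)
print(attention)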


2021
Author(s): Duncan Wilson, Masaki Tomonaga

Many primate studies have investigated discrimination of individual faces within the same species. However, few studies have examined discrimination between primate species' faces at the categorical level. This study systematically examined the factors important for visual discrimination between primate species' faces in chimpanzees, including colour, orientation, familiarity and perceptual similarity. Five adult female chimpanzees were tested on their ability to discriminate identical and categorical (non-identical) images of different primate species' faces in a series of touchscreen matching-to-sample experiments. Discrimination performance for chimpanzee, gorilla and orangutan faces was better in colour than in greyscale. An inversion effect was also found, with higher accuracy for upright than for inverted faces. Discrimination performance for unfamiliar (baboon and capuchin monkey) and highly familiar (chimpanzee and human) but perceptually different species was equally high. After excluding the effects of colour and familiarity, difficulty in discriminating between different species' faces is best explained by their perceptual similarity to each other. Categorical discrimination performance for unfamiliar, perceptually similar faces (gorilla and orangutan) was significantly worse than for unfamiliar, perceptually different faces (baboon and capuchin monkey). Moreover, multidimensional scaling analysis of the image similarity data, based on local feature matching, revealed greater similarity among chimpanzee, gorilla and orangutan faces than among human, baboon and capuchin monkey faces. We conclude that our chimpanzees appear to perceive similarity in primate faces in a way similar to humans. Information about perceptual similarity is likely prioritised over the potential influence of previous experience or a conceptual representation of species when discriminating categorically between species' faces.
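As an illustration of the multidimensional scaling step, the sketch below embeds a pairwise face-similarity matrix into two dimensions with scikit-learn. The matrix values are hypothetical placeholders chosen only to mirror the qualitative pattern described above; they are not the study's local-feature-matching data.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise similarity matrix for six face categories
# (values are illustrative, not the study's measurements).
species = ["chimpanzee", "gorilla", "orangutan", "human", "baboon", "capuchin"]
sim = np.array([
    [1.00, 0.80, 0.75, 0.40, 0.35, 0.30],
    [0.80, 1.00, 0.78, 0.38, 0.33, 0.28],
    [0.75, 0.78, 1.00, 0.36, 0.30, 0.27],
    [0.40, 0.38, 0.36, 1.00, 0.45, 0.40],
    [0.35, 0.33, 0.30, 0.45, 1.00, 0.50],
    [0.30, 0.28, 0.27, 0.40, 0.50, 1.00],
])

# Convert similarities to dissimilarities and embed them in 2D
dissim = 1.0 - sim
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
for name, (x, y) in zip(species, coords):
    print(f"{name:>11s}: ({x:+.2f}, {y:+.2f})")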


2021
Author(s): Adam N. Sanborn, Katherine Heller, Joseph L. Austerweil, Nick Chater

Biology, 2021, Vol 10 (9), pp. 886
Author(s): David T. Liu, Gerold Besser, Karina Bayer, Bernhard Prem, Christian A. Mueller, ...

This study aimed to investigate the perceptual similarity between piperine-induced burning sensations and bitter taste using piperine-impregnated taste strips (PTS). This pilot study included 42 healthy participants. PTS of six ascending concentrations (1 mg, 5 mg, 10 mg, 15 mg, 20 mg, and 25 mg piperine/dL 96% ethanol) were presented at the anterior tongue, and participants rated perceived intensity and duration. Participants then performed a spatial discrimination task in which they had to report which of two strips presented to the anterior tongue contained the irritating stimulus; one strip was always a PTS, while the other was impregnated with a single taste quality (sweet or bitter) or was a blank strip. A repeated-measures one-way ANOVA revealed that burning sensations from higher-concentration PTS were perceived as more intense and longer-lasting than those from lower-concentration PTS. McNemar’s test showed that PTS were identified correctly significantly less often when presented with bitter strips than when presented with blank (p = 0.002) or sweet strips (p = 0.017). Our results show that bitter taste disrupts the spatial discrimination of piperine-evoked burning sensations. PTS might serve as a basis for further studies on disease-specific patterns in chemosensory disorders.
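For readers unfamiliar with the paired analysis, the following sketch shows how a McNemar comparison of discrimination outcomes could be run with statsmodels; the contingency counts are placeholders, not the study's data.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired correct/incorrect identification outcomes for the same participants
# under two pairings (PTS vs. bitter strip, PTS vs. blank strip).
# Counts below are placeholders, not the study's data.
#                    blank correct   blank incorrect
# bitter correct           a               b
# bitter incorrect         c               d
table = np.array([[20, 3],
                  [12, 7]])

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.3f}")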


Author(s): Loris Naspi, Paul Hoffman, Barry Devereux, Tobias Thejll-Madsen, Leonidas A. A. Doumas, ...

2021, Vol 13 (15), pp. 3053
Author(s): Jialang Xu, Chunbo Luo, Xinyue Chen, Shicai Wei, Yang Luo

Remote sensing change detection (RSCD) is an important yet challenging task in Earth observation. The rapid development of convolutional neural networks (CNNs) in computer vision raises new possibilities for RSCD, and many recent RSCD methods have introduced CNNs to achieve promising improvements in performance. In this paper, we propose a novel multidirectional fusion and perception network for change detection in bi-temporal very-high-resolution remote sensing images. First, we propose an elaborate feature fusion module, consisting of a multidirectional fusion pathway (MFP) and an adaptive weighted fusion (AWF) strategy, to improve how information propagates through the network. The MFP enhances the flexibility and diversity of information paths by creating extra top-down and shortcut-connection paths. The AWF strategy recalibrates the weight of every fusion node to highlight salient feature maps and bridge semantic gaps between different features. Second, a novel perceptual similarity module is designed to introduce a perceptual loss into the RSCD task, adding perceptual information, such as structure and semantics, for high-quality change map generation. Extensive experiments on four challenging benchmark datasets demonstrate the superiority of the proposed network over eight state-of-the-art methods in terms of F1, Kappa, and visual quality.
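Below is a minimal PyTorch sketch of a perceptual loss computed from fixed VGG-16 feature maps, in the spirit of the perceptual similarity module described above; the chosen layers, equal weighting, and L1 feature distance are assumptions, not the authors' exact design.

import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Compare predicted and reference change maps in VGG-16 feature space.
    Layer choice and equal weighting are assumptions, not the paper's design."""
    def __init__(self, layer_ids=(3, 8, 15)):  # relu1_2, relu2_2, relu3_3
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, pred, target):
        # Expand single-channel change maps to 3 channels for VGG
        if pred.shape[1] == 1:
            pred, target = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        loss, x, y = 0.0, pred, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                loss = loss + nn.functional.l1_loss(x, y)
            if i == max(self.layer_ids):
                break  # no need to run deeper layers
        return loss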


Author(s): Yajie Wang, Shangbo Wu, Wenyi Jiang, Shengang Hao, Yu-an Tan, ...

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples. Adversarial examples are malicious images with visually imperceptible perturbations. Although these carefully crafted perturbations are restricted by tight Lp-norm bounds and are therefore small, they can still be perceived by humans. Such perturbations also have limited success rates when attacking black-box models or models with defenses such as noise-reduction filters. To address these problems, we propose the Demiguise Attack, which crafts "unrestricted" perturbations guided by Perceptual Similarity. Specifically, we create powerful and photorealistic adversarial examples by manipulating semantic information based on Perceptual Similarity. The adversarial examples we generate are friendly to the human visual system (HVS), even though the perturbations are of large magnitude. We extend widely used attacks with our approach, markedly enhancing adversarial effectiveness while improving imperceptibility. Extensive experiments show that the proposed method not only outperforms various state-of-the-art attacks in terms of fooling rate, transferability, and robustness against defenses, but also effectively strengthens existing attacks. In addition, our implementation can simulate illumination and contrast changes that occur in real-world scenarios, which helps expose the blind spots of DNNs.
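The sketch below illustrates the general idea of optimizing a perturbation under a perceptual-similarity budget rather than an Lp bound, using the LPIPS metric as a stand-in for the perceptual measure; the actual Demiguise Attack manipulates semantic information and is considerably more involved, so the budget, penalty weight, and optimizer here are only illustrative assumptions.

import torch
import lpips  # pip install lpips; LPIPS used here as a stand-in perceptual metric

def perceptual_attack(model, x, label, steps=100, lr=0.01, budget=0.05):
    """Maximize classification loss while keeping LPIPS(x_adv, x) under a budget.
    A simplified illustration, not the Demiguise Attack itself."""
    percep = lpips.LPIPS(net="vgg").to(x.device).eval()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        ce = torch.nn.functional.cross_entropy(model(x_adv), label)
        sim = percep(x_adv * 2 - 1, x * 2 - 1).mean()  # LPIPS expects inputs in [-1, 1]
        # Maximize classification error; penalize perceptual distance beyond the budget
        loss = -ce + 10.0 * torch.clamp(sim - budget, min=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)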


2021, Vol 15
Author(s): Jan Krepl, Francesco Casalegno, Emilie Delattre, Csaba Erö, Huanxiang Lu, ...

The acquisition of high-quality maps of gene expression in the rodent brain is of fundamental importance to the neuroscience community. Generating such datasets requires registering individual gene expression images to a reference volume, a task complicated by the diversity of staining techniques employed and by deformations and artifacts in the soft tissue. Recently, deep learning models have garnered particular interest as a viable alternative to traditional intensity-based algorithms for image registration. In this work, we propose a supervised learning model for general multimodal 2D registration tasks, trained with a perceptual similarity loss on a dataset labeled by a human expert and augmented with synthetic local deformations. We demonstrate our approach on the Allen Mouse Brain Atlas (AMBA), comprising whole-brain Nissl and gene expression stains. We show that our framework and the design of the loss function yield accurate and smooth predictions. Our model generalizes to unseen gene expressions and coronal sections, outperforming traditional intensity-based approaches in aligning complex brain structures.
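To illustrate the registration setup, the sketch below warps a moving section with a predicted dense displacement field via grid_sample in PyTorch; the registration network, expert-labeled data, and perceptual loss referenced in the comments are placeholders, not the authors' implementation.

import torch
import torch.nn.functional as F

def warp(moving, displacement):
    """Warp a moving image (N,C,H,W) with a dense displacement field (N,2,H,W)
    given in normalized [-1, 1] coordinates."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base.to(moving.device) + displacement.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

# Sketch of one training step (registration_net and perceptual_loss are
# hypothetical placeholders for the model and loss described in the abstract):
# displacement = registration_net(torch.cat([moving, reference], dim=1))
# loss = perceptual_loss(warp(moving, displacement), reference)
# loss.backward(); optimizer.step()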

