Similarity structure
Recently Published Documents

Total documents: 92 (five years: 31)
H-index: 12 (five years: 3)

2021
Author(s): Emilie Louise Josephs, Martin N. Hebart, Talia Konkle

Near-scale, reach-relevant environments, like work desks, restaurant place settings or lab benches, are the interface of our hand-based interactions with the world. How are our conceptual representations of these environments organized? For navigable-scale scenes, global properties such as openness, depth or naturalness have been identified, but the analogous organizing principles for reach-scale environments are not known. To uncover such principles, we obtained 1.25 million odd-one-out behavioral judgments on image triplets assembled from 990 reachspace images. Images were selected to comprehensively sample the variation both between and within reachspace categories. Using data-driven modeling, we generated a 30-dimensional embedding which predicts human similarity judgments among the images. First, examination of the embedding dimensions revealed key properties that distinguish among reachspaces, relating to their structural layout, affordances, visual appearance and functional roles. Second, clustering analyses performed over the embedding revealed four distinct, interpretable classes of reachspaces, with separate clusters for spaces related to food, electronics, analog activities, and storage or display. Finally, we found that the similarity structure among reachspace images was better predicted by the function of the spaces than by their locations, suggesting that reachspaces are largely conceptualized in terms of the actions they are designed to support. Altogether, these results reveal the behaviorally relevant principles that structure our internal representations of reach-relevant environments.
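As an aside on the method: in an odd-one-out design, the two items not chosen are implicitly judged the most similar pair in the triplet, so pairwise similarity can be estimated by aggregating over trials. A minimal sketch of that aggregation step (toy data and invented function names; not the authors' code):

```python
import numpy as np

def similarity_from_triplets(triplets, odd_choices, n_items):
    """Estimate a pairwise similarity matrix from odd-one-out judgments.

    triplets:    (T, 3) array of item indices shown on each trial
    odd_choices: (T,) array with the within-triplet position (0-2) of the
                 item chosen as the odd one out
    Returns the rate at which each pair was kept together when shown.
    """
    together = np.zeros((n_items, n_items))   # times a pair was NOT split
    shown = np.zeros((n_items, n_items))      # times a pair appeared at all

    for (a, b, c), odd in zip(triplets, odd_choices):
        trio = [a, b, c]
        kept = [x for i, x in enumerate(trio) if i != odd]
        for i in range(3):
            for j in range(i + 1, 3):
                shown[trio[i], trio[j]] += 1
                shown[trio[j], trio[i]] += 1
        together[kept[0], kept[1]] += 1
        together[kept[1], kept[0]] += 1

    with np.errstate(invalid="ignore"):
        return np.where(shown > 0, together / shown, np.nan)

# Toy usage with 5 hypothetical images
rng = np.random.default_rng(0)
triplets = rng.choice(5, size=(200, 3), replace=True)
triplets = triplets[np.array([len(set(t)) == 3 for t in triplets])]
odd = rng.integers(0, 3, size=len(triplets))
print(similarity_from_triplets(triplets, odd, n_items=5))
```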


Author(s): Xiao Luo, Daqing Wu, Zeyu Ma, Chong Chen, Minghua Deng, ...

Recently, hashing has been widely used in approximate nearest neighbor search because of its storage and computational efficiency. Most unsupervised hashing methods learn to map images into semantic similarity-preserving hash codes by constructing a local semantic similarity structure from a pre-trained model as the guiding information, i.e., treating a pair of points as similar if their distance in feature space is small. However, because of the limited representational ability of the pre-trained model, many false positives and negatives are introduced into the local semantic similarity structure, leading to error propagation during hash code learning. Moreover, few of these methods consider model robustness, which leaves the learned hash codes unstable under perturbations of the input. In this paper, we propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON). First, we use global refinement and the statistical distribution of similarities to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive hash codes that are both perturbation-invariant and discriminative. Extensive experiments on several benchmark datasets show that the proposed method outperforms a wide range of state-of-the-art methods in both retrieval performance and robustness.
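For readers unfamiliar with this setup, the noisy guidance such methods start from can be pictured as a thresholded cosine-similarity graph over pre-trained features, with retrieval done by Hamming distance over the learned binary codes. The sketch below uses invented names and random data; it is not the CIMON implementation:

```python
import numpy as np

def local_similarity_graph(features, pos_thresh=0.8, neg_thresh=0.2):
    """Build a pseudo similarity matrix from pre-trained image features.

    Pairs with cosine similarity above pos_thresh are marked +1 (similar),
    below neg_thresh are marked -1 (dissimilar), and 0 otherwise (ignored).
    This is the noisy guidance that unsupervised hashing methods refine
    before learning binary codes.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos = f @ f.T
    sim = np.zeros_like(cos)
    sim[cos >= pos_thresh] = 1.0
    sim[cos <= neg_thresh] = -1.0
    np.fill_diagonal(sim, 1.0)
    return sim

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query hash code."""
    dist = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dist)

# Toy usage: 100 items with 512-d "pre-trained" features and 32-bit codes
rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 512))
S = local_similarity_graph(feats)
codes = rng.integers(0, 2, size=(100, 32))
print(hamming_rank(codes[0], codes)[:5])
```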


2021
Author(s): Haojie Chen, Shiqi Tu, Chongze Yuan, Feng Tian, Yijing Zhang, ...

With the reduction in sequencing costs, studies that profile the chromatin landscape of tens or even hundreds of human individuals using ChIP/ATAC-seq techniques have become prevalent. Identifying genomic regions with hypervariable ChIP/ATAC-seq signals across given samples is essential for such studies. In particular, hypervariable regions (HVRs) across tumors from different patients indicate their heterogeneity and can contribute to revealing potential cancer subtypes and the associated epigenetic markers. We present HyperChIP as the first complete statistical tool for this task. HyperChIP uses scaled variances that account for the mean-variance dependence to rank genomic regions, and it increases statistical power by diminishing the influence of true HVRs on model fitting. Applying it to a large pan-cancer ATAC-seq data set, we found that the identified HVRs not only provided a solid basis for uncovering the underlying similarity structure among the tumor samples, but also, when coupled with a motif-scanning analysis, led to the identification of transcription factors pertaining to that similarity structure.
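The central ranking idea, dividing each region's observed variance by the variance expected at its mean signal level so that the mean-variance trend does not dominate, can be sketched as follows (a simplified, binned-trend illustration on simulated data; not the HyperChIP model itself):

```python
import numpy as np

def scaled_variances(signal):
    """Rank genomic regions by variance scaled for the mean-variance trend.

    signal: (n_regions, n_samples) matrix of normalized ChIP/ATAC-seq signal.
    Returns, for each region, its observed variance divided by the variance
    expected at its mean level, estimated from a simple quantile-binned trend.
    """
    mean = signal.mean(axis=1)
    var = signal.var(axis=1, ddof=1)

    # Estimate the expected variance at each mean level by binning regions
    # on their mean signal and taking the median variance within each bin.
    edges = np.quantile(mean, np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(mean, edges[1:-1]), 0, 19)
    expected = np.array([np.median(var[idx == b]) for b in range(20)])[idx]

    return var / np.maximum(expected, 1e-8)

# Toy usage: 1000 hypothetical regions measured in 50 samples
rng = np.random.default_rng(2)
X = rng.gamma(shape=2.0, scale=1.0, size=(1000, 50))
scores = scaled_variances(X)
top_hvrs = np.argsort(scores)[::-1][:100]   # candidate hypervariable regions
```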


Author(s): Bradley S. Gibson, M. Karl Healey, Daniel Schor, Dawn M. Gondoli

2021
Author(s): Jia-Qing Tong, Jeffrey R. Binder, Colin J. Humphries, Lisa L. Conant, Leonardo Fernandino

The architecture of the cortical system underlying concept representation is a topic of intense debate. Much evidence supports the claim that concept retrieval selectively engages sensory, motor, and other neural systems involved in the acquisition of the retrieved concept, yet there is also strong evidence for involvement of high-level, supramodal cortical regions. A fundamental question about the organization of this system is whether modality-specific information originating from sensory and motor areas is integrated across multiple "convergence zones" or in a single centralized "hub". We used representational similarity analysis (RSA) of fMRI data to map brain regions where the similarity structure of neural patterns elicited by large sets of concepts matched the similarity structure predicted by a high-dimensional model of concept representation based on sensory, motor, affective, and other modal aspects of experience. Across two studies involving different sets of concepts, different participants, and different tasks, searchlight RSA revealed a distributed, bihemispheric network engaged in multimodal experiential representation, composed of high-level association cortex in anterior, lateral, and ventral temporal lobe; inferior parietal lobule; posterior cingulate gyrus and precuneus; and medial, dorsal, ventrolateral, and orbital prefrontal cortex. These regions closely resemble networks previously implicated in general semantic and "default mode" processing and are known to be high-level hubs for convergence of multimodal processing streams. Supplemented by an exploratory cluster analysis, these results indicate that the concept representation system consists of multiple, hierarchically organized convergence zones supporting multimodal integration of experiential information.
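At its core, searchlight RSA compares the dissimilarity structure of neural patterns within a small voxel neighborhood to the dissimilarity structure predicted by a model. A stripped-down version of that comparison (random data and illustrative names; not the authors' pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(patterns, model_features):
    """Correlate a neural RDM with a model-predicted RDM.

    patterns:       (n_concepts, n_voxels) activation patterns in one searchlight
    model_features: (n_concepts, n_dims) feature vectors (e.g., an experiential model)
    Returns the Spearman correlation between the two condensed dissimilarity
    matrices, the usual RSA statistic.
    """
    neural_rdm = pdist(patterns, metric="correlation")
    model_rdm = pdist(model_features, metric="correlation")
    rho, _ = spearmanr(neural_rdm, model_rdm)
    return rho

# Toy usage: 300 concepts, a 50-voxel searchlight, a 65-dimensional model
rng = np.random.default_rng(3)
patterns = rng.standard_normal((300, 50))
model = rng.standard_normal((300, 65))
print(rsa_score(patterns, model))
```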


2021
Author(s): Angus F. Chapman, Viola S. Störmer

While many theories of attention highlight the importance of similarity between target and distractor items for selection, few studies have directly quantified the function underlying this relationship. Across two commonly used tasks, visual search and sustained attention, we investigated how target-distractor similarity impacts feature-based attentional selection, in particular asking whether stimulus-based or psychological similarity better explains performance. We found that both similarity measures were non-linearly related to task performance, although psychological similarity explained a large portion of the non-linearities observed in the data, suggesting that measures of psychological similarity are more appropriate when studying the effects of target-distractor similarity. Importantly, we found comparable patterns of performance in the visual search and sustained feature-based attention tasks, with performance (RTs and d’, respectively) plateauing at medium target-distractor distances, and with exponential functions capturing well the relationship between similarity (stimulus-based or psychological) and performance. In contrast, visual search efficiency, as measured by search slopes, was affected only within a narrow range of similarity levels (10-20°). These findings place novel constraints on models of selective attention and emphasize the importance of considering the similarity structure of the feature space. Broadly, the non-linear effects of similarity on attention are consistent with accounts proposing that attention exaggerates the distance between competing representations, possibly through enhancement of off-tuned neurons.
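The saturating relationship between target-distractor distance and performance can be captured with a simple exponential fit; below is a minimal curve-fitting sketch on simulated d-prime values (parameter values and names are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exponential(distance, a, b, tau):
    """Performance rises with target-distractor distance, then plateaus."""
    return a + b * (1.0 - np.exp(-distance / tau))

# Simulated d-prime values for target-distractor distances in degrees of color space
rng = np.random.default_rng(4)
distance = np.array([10, 20, 30, 45, 60, 90, 120, 150, 180], dtype=float)
d_prime = saturating_exponential(distance, 0.3, 2.5, 25.0)
d_prime += rng.normal(0, 0.1, size=distance.size)

params, _ = curve_fit(saturating_exponential, distance, d_prime, p0=[0.0, 2.0, 30.0])
a_hat, b_hat, tau_hat = params
print(f"plateau ~ {a_hat + b_hat:.2f} d-prime, rate constant tau ~ {tau_hat:.1f} deg")
```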


Author(s): Daniel J. Trosten, Robert Jenssen, Michael C. Kampffmeyer

Preservation of local similarity structure is a key challenge in deep clustering. Many recent deep clustering methods therefore use autoencoders to help guide the model's neural network towards an embedding that is more reflective of the input space geometry. However, recent work has shown that autoencoder-based deep clustering models can suffer from objective function mismatch (OFM). In order to improve the preservation of local similarity structure while keeping OFM low, we develop a new auxiliary objective function for deep clustering. Our Unsupervised Companion Objective (UCO) encourages a consistent clustering structure at intermediate layers in the network, helping the network learn an embedding that better reflects the similarity structure of the input space. Since a clustering-based auxiliary objective has the same goal as the main clustering objective, it is less prone to introducing objective function mismatch between itself and the main objective. Our experiments show that attaching the UCO to a deep clustering model improves the model's performance and results in lower OFM, compared to an analogous autoencoder-based model.
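One way to picture a clustering-based auxiliary objective is as a consistency term between soft cluster assignments computed from an intermediate layer and from the output embedding. The sketch below is a schematic PyTorch-style loss with assumed names and centroids; it is not the published UCO formulation:

```python
import torch
import torch.nn.functional as F

def soft_assignments(embeddings, centroids, temperature=1.0, log=False):
    """Soft cluster assignments from negative squared distances to centroids."""
    logits = -torch.cdist(embeddings, centroids) ** 2 / temperature  # (batch, k)
    return F.log_softmax(logits, dim=1) if log else F.softmax(logits, dim=1)

def companion_consistency_loss(intermediate, final, centroids_mid, centroids_out):
    """Encourage the cluster structure at an intermediate layer to agree with
    the cluster structure at the output embedding (treated here as the target)."""
    log_p_mid = soft_assignments(intermediate, centroids_mid, log=True)
    p_out = soft_assignments(final, centroids_out).detach()
    # KL divergence between the two assignment distributions, averaged over the batch
    return F.kl_div(log_p_mid, p_out, reduction="batchmean")

# Toy usage: batch of 32, 64-d intermediate features, 16-d output embedding, 5 clusters
mid = torch.randn(32, 64)
out = torch.randn(32, 16)
centroids_mid = torch.randn(5, 64, requires_grad=True)
centroids_out = torch.randn(5, 16)
loss = companion_consistency_loss(mid, out, centroids_mid, centroids_out)
loss.backward()   # gradients flow into the intermediate-layer centroids
```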


2021
Author(s): Leonardo Fernandino, Lisa L. Conant, Colin J. Humphries, Jeffrey R. Binder

The nature of the neural code underlying conceptual knowledge remains a major unsolved problem in cognitive neuroscience. Three main types of information have been proposed as candidates for the neural representations of lexical concepts: taxonomic (i.e., information about category membership and inter-category relations), distributional (i.e., information about patterns of word co-occurrence in natural language use), and experiential (i.e., information about sensory-motor, affective, and other features of phenomenal experience engaged during concept acquisition). In two experiments, we investigated the extent to which these three types of information are encoded in the neural activation patterns associated with hundreds of English nouns from a wide variety of conceptual categories. Participants made familiarity judgments on the meaning of written nouns while undergoing functional MRI. A high-resolution, whole-brain activation map was generated for each noun in each participant's native space. These word-specific activation maps were used to evaluate different representational spaces corresponding to the three types of information described above. In both studies, we found a striking advantage for experience-based models in most brain areas previously associated with concept representation. Partial correlation analyses revealed that only experiential information successfully predicted concept similarity structure when inter-model correlations were taken into account. This pattern of results was found independently for object concepts and event concepts. Our findings indicate that the neural representation of conceptual knowledge primarily encodes information about features of experience, and that, to the extent that it is represented in the brain, taxonomic and distributional information may rely on such an experience-based code.
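The partial correlation analysis amounts to asking whether a model's representational dissimilarity matrix (RDM) still predicts the neural RDM after the competing models are regressed out. A compact sketch with simulated condensed RDMs (all names and data are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr, rankdata

def partial_rank_correlation(neural, model, confounds):
    """Partial Spearman-style correlation between condensed RDMs:
    rank-transform, regress the confound RDMs out of both the neural and the
    model vectors, then correlate the residuals."""
    rank = lambda v: rankdata(v)
    X = np.column_stack([np.ones(len(neural))] + [rank(c) for c in confounds])
    resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    r, _ = pearsonr(resid(rank(neural)), resid(rank(model)))
    return r

# Toy usage: condensed RDMs over 200 concepts (200 * 199 / 2 pairwise entries)
rng = np.random.default_rng(5)
n_pairs = 200 * 199 // 2
experiential = rng.standard_normal(n_pairs)
taxonomic = rng.standard_normal(n_pairs)
distributional = rng.standard_normal(n_pairs)
neural = 0.5 * experiential + 0.1 * taxonomic + rng.standard_normal(n_pairs)

print(partial_rank_correlation(neural, experiential, [taxonomic, distributional]))
print(partial_rank_correlation(neural, taxonomic, [experiential, distributional]))
```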


2021
Author(s): Kayla M. Ferko, Anna Blumenthal, Chris B. Martin, Daria Proklova, Lisa M. Saksida, ...

Observers perceive their visual environment in unique ways. How ventral visual stream (VVS) regions represent subjectively perceived object characteristics remains poorly understood. We hypothesized that the visual similarity between objects that observers perceive is reflected with highest fidelity in neural activity patterns in perirhinal and anterolateral entorhinal cortex, at the apex of the VVS object-processing hierarchy. To address this issue with fMRI, we administered a task that required discrimination between images of exemplars from real-world categories. Further, we obtained ratings of perceived visual similarities. We found that perceived visual similarities predicted discrimination performance in an observer-specific manner. As anticipated, activity patterns in perirhinal and anterolateral entorhinal cortex predicted perceived similarity structure, including those aspects that are observer-specific, with higher fidelity than activity patterns in any other region examined. Our findings provide new evidence that representations of the visual world at the apex of the VVS differ across observers in ways that influence behaviour.
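The fidelity comparison across regions reduces to correlating each region's neural pattern dissimilarity structure with an observer's own perceived-similarity ratings. A schematic version with random data and assumed names (not the authors' analysis code):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def region_fidelity(region_patterns, perceived_similarity):
    """Correlate a region's neural similarity structure with one observer's
    perceived similarity ratings over the same set of object images.

    region_patterns:      (n_images, n_voxels) activity patterns in one ROI
    perceived_similarity: (n_images, n_images) symmetric rating matrix
    """
    neural_dissim = pdist(region_patterns, metric="correlation")
    # Convert the perceived-similarity matrix to a condensed dissimilarity vector
    iu = np.triu_indices(perceived_similarity.shape[0], k=1)
    perceived_dissim = 1.0 - perceived_similarity[iu]
    rho, _ = spearmanr(neural_dissim, perceived_dissim)
    return rho

# Toy usage: compare two hypothetical ROIs for one observer (40 images)
rng = np.random.default_rng(6)
ratings = rng.uniform(0, 1, size=(40, 40))
ratings = (ratings + ratings.T) / 2
perirhinal_like = rng.standard_normal((40, 120))
early_visual_like = rng.standard_normal((40, 500))
print(region_fidelity(perirhinal_like, ratings), region_fidelity(early_visual_like, ratings))
```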

