invariant representation
Recently Published Documents

TOTAL DOCUMENTS: 214 (five years: 52)
H-INDEX: 24 (five years: 4)

2022
Author(s): Changjian Shui, Boyu Wang, Christian Gagné

Abstract: A crucial aspect of reliable machine learning is designing a deployable system that generalizes to new, related but unobserved environments. Domain generalization aims to close this prediction gap between observed and unseen environments. Previous approaches commonly learn an invariant representation to achieve good empirical performance. In this paper, we reveal that merely learning the invariant representation is vulnerable to related unseen environments. To this end, we derive a novel theoretical analysis that controls the unseen test-environment error in representation learning, highlighting the importance of controlling the smoothness of the representation. In practice, our analysis further inspires an efficient regularization method to improve robustness in domain generalization. The proposed regularization is orthogonal to, and can be straightforwardly adopted in, existing domain generalization algorithms that ensure invariant representation learning. Empirical results show that our algorithm outperforms the base versions across various datasets and invariance criteria.
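The smoothness control described in this abstract can be illustrated with a minimal sketch (an assumption for illustration, not the authors' exact regularizer): penalize how much the learned representation changes under small input perturbations, a finite-difference proxy for the norm of the encoder's Jacobian.

```python
import numpy as np

def smoothness_penalty(encoder, x, eps=1e-2, n_dirs=8, seed=0):
    """Finite-difference proxy for representation smoothness: the average
    squared change of the representation per unit of input perturbation,
    estimated over random perturbation directions of length eps."""
    rng = np.random.default_rng(seed)
    z = encoder(x)
    total = 0.0
    for _ in range(n_dirs):
        d = rng.standard_normal(x.shape)
        d *= eps / np.linalg.norm(d)          # perturbation of norm eps
        total += np.sum((encoder(x + d) - z) ** 2) / eps**2
    return total / n_dirs

# Toy linear encoder z = W x: the penalty lies between the smallest and
# largest squared singular value of W (here 1 and 4).
W = np.array([[1.0, 0.0], [0.0, 2.0]])
pen = smoothness_penalty(lambda x: W @ x, np.array([0.5, -0.3]))
```

In a training loop such a term would simply be added, with a weight, to the task loss of the invariant-representation method being regularized.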


2021
Vol 2021, pp. 1-9
Author(s): Lina Zhang, Yu Sang, Donghai Dai

Polar harmonic transforms (PHTs) have been applied in pattern recognition and image analysis, but the current computational framework of PHTs has two main drawbacks. First, conventional methods may lose significant color information during color image processing because they rely on RGB decomposition or graying. Second, PHTs suffer from geometric errors and numerical integration errors, which are visible as image reconstruction errors. This paper presents a novel computational framework of quaternion polar harmonic transforms (QPHTs), namely accurate QPHTs (AQPHTs). First, to handle color images holistically, quaternion-based PHTs are introduced using the algebra of quaternions. Second, Gaussian numerical integration is adopted to reduce geometric and numerical errors. When compared with CNN (convolutional neural network)-based methods (i.e., VGG16) on the Oxford5K dataset, our AQPHT achieves a better scaling-invariant representation. Moreover, when evaluated on standard image retrieval benchmarks, our AQPHT achieves results comparable to CNN-based methods with a smaller feature-vector dimension, and outperforms handcrafted-feature methods by 9.6% mAP on the Holidays dataset.
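The quaternion treatment of color can be sketched as follows (a generic illustration of the underlying algebra, not the paper's AQPHT kernels): each RGB pixel is encoded as a pure quaternion 0 + R·i + G·j + B·k, so the Hamilton product acts on all three channels jointly rather than channel by channel.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

# An RGB pixel as a pure quaternion: scalar part 0, vector part (R, G, B).
pixel = np.array([0.0, 0.8, 0.4, 0.1])
```

Multiplying by a unit quaternion preserves the quaternion norm, which is one reason quaternion-valued transforms can treat color holistically without discarding channel correlations.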


2021
Vol 118 (46), pp. e2104779118
Author(s): T. Hannagan, A. Agrawal, L. Cohen, S. Dehaene

The visual word form area (VWFA) is a region of human inferotemporal cortex that emerges at a fixed location in the occipitotemporal cortex during reading acquisition and systematically responds to written words in literate individuals. According to the neuronal recycling hypothesis, this region arises through the repurposing, for letter recognition, of a subpart of the ventral visual pathway initially involved in face and object recognition. Furthermore, according to the biased connectivity hypothesis, its reproducible localization is due to preexisting connections from this subregion to areas involved in spoken-language processing. Here, we evaluate those hypotheses in an explicit computational model. We trained a deep convolutional neural network of the ventral visual pathway, first to categorize pictures and then to recognize written words invariantly for case, font, and size. We show that the model can account for many properties of the VWFA, particularly when a subset of units possesses a biased connectivity to word output units. The network develops a sparse, invariant representation of written words, based on a restricted set of reading-selective units. Their activation mimics several properties of the VWFA, and their lesioning causes a reading-specific deficit. The model predicts that, in literate brains, written words are encoded by a compositional neural code with neurons tuned either to individual letters and their ordinal position relative to word start or word ending or to pairs of letters (bigrams).
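The predicted compositional code can be sketched as a toy feature set (a hypothetical illustration, not the trained network's actual units): letters tagged with their ordinal position counted from word start or word end, plus letter pairs (bigrams), with case invariance obtained here simply by lowercasing.

```python
def letter_position_code(word):
    """Toy compositional word code: units for each letter at its position
    from word start and from word end, plus units for adjacent letter
    pairs (bigrams). Lowercasing stands in for case invariance."""
    w = word.lower()
    units = set()
    for i, ch in enumerate(w):
        units.add((ch, "start", i))               # position from word start
        units.add((ch, "end", len(w) - 1 - i))    # position from word end
    units |= {("bigram", a + b) for a, b in zip(w, w[1:])}
    return units
```

Under this scheme, "Word" and "WORD" map to the same unit set, mimicking the case-invariant responses the model attributes to reading-selective units.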


2021
Author(s): Ziwei Xie, Jinbo Xu

Motivation: Inter-protein (interfacial) contact prediction is very useful for in silico structural characterization of protein-protein interactions. Although deep learning has been applied to this problem, its accuracy is not as good as that of intra-protein contact prediction. Results: We propose a new deep learning method, GLINTER (Graph Learning of INTER-protein contacts), for interfacial contact prediction of dimers, leveraging a rotationally invariant representation of protein tertiary structures and a pretrained language model of multiple sequence alignments (MSAs). Tested on the 13th and 14th CASP-CAPRI datasets, GLINTER achieves an average top-L/10 precision of 54.35% on homodimers and 51.56% on all dimers, much higher than the 30.43% obtained by the latest deep learning method DeepHomo on homodimers and the 14.69% obtained by BIPSPI on all dimers. Our experiments show that GLINTER-predicted contacts help improve the selection of docking decoys.
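A standard way to obtain a rotationally invariant representation of a tertiary structure (a generic sketch; GLINTER's actual graph features are richer than this) is to describe residue coordinates by their pairwise distance matrix, which is unchanged by any rotation or translation of the structure.

```python
import numpy as np

def distance_features(coords):
    """Pairwise residue distance matrix from an (n, 3) coordinate array.
    Distances depend only on relative geometry, so the matrix is
    invariant to rigid motions (rotations and translations)."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```

A graph learner can then consume these distances as edge features without ever seeing an absolute coordinate frame.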


Author(s): Jaeguk Hyun, ChanYong Lee, Hoseong Kim, Hyunjung Yoo, Eunjin Koh

Unsupervised domain adaptation often provides impressive solutions for handling domain shift in data. Most current approaches assume that abundant unlabeled target data is available for training, which is not always true in practice. To tackle this issue, we propose a general solution to the domain gap minimization problem that requires no target data. Our method consists of two regularization steps. The first is pixel regularization by arbitrary style transfer. Recently, some methods have brought style transfer algorithms into the domain adaptation and domain generalization process, using them to remove texture bias in source domain data. We also use style transfer to remove texture bias, but our method depends on neither the domain adaptation nor the domain generalization paradigm. The second is feature regularization by feature alignment: adding a feature alignment loss term to the model loss lets the model learn a domain-invariant representation more efficiently. We evaluate our regularization methods in several experiments on both small and large datasets, and show that our model learns a domain-invariant representation as well as unsupervised domain adaptation methods do.
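The feature alignment step can be sketched with a minimal loss (an assumption about its form; the paper's exact alignment term may differ): penalize the distance between the batch-mean features of the source images and those of their style-transferred counterparts, pushing the encoder toward features that ignore texture and style.

```python
import numpy as np

def feature_alignment_loss(f_src, f_aug):
    """Squared distance between batch-mean features of the original and
    style-transferred views of the same source batch. Added to the task
    loss, it encourages a style-agnostic (domain-invariant) encoder."""
    return float(np.sum((f_src.mean(axis=0) - f_aug.mean(axis=0)) ** 2))

# Identical feature batches incur zero penalty; a constant style-induced
# shift in the features incurs a positive one.
f = np.ones((4, 3))
zero_loss = feature_alignment_loss(f, f)
shift_loss = feature_alignment_loss(f, f + 1.0)
```

Mean matching is the simplest choice; richer alignment criteria (e.g. matching covariances as well) fit the same slot in the total loss.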


Author(s): Vincentius Ewald, Ramanan Sridaran Venkat, Aadhik Asokkumar, Rinze Benedictus, Christian Boller, ...
