embedding problem
Recently Published Documents


TOTAL DOCUMENTS: 265 (five years: 33)
H-INDEX: 19 (five years: 2)

2021
Author(s): Massinissa Ait Aba, Maxime Elkael, Badii Jouaber, Hind Castel-Taleb, Andrea Araldo, ...

2021, Vol 83 (3)
Author(s): Muhammad Ardiyansyah, Dimitra Kosta, Kaie Kubjas

Abstract: We study model embeddability, a variation of the famous embedding problem in probability theory in which, besides requiring that the Markov matrix be the matrix exponential of a rate matrix, we additionally ask that the rate matrix follow the model structure. We provide a characterisation of model-embeddable Markov matrices corresponding to symmetric group-based phylogenetic models. In particular, we give necessary and sufficient conditions in terms of the eigenvalues of symmetric group-based matrices. To showcase our main result on model embeddability, we apply it to hachimoji models, which are eight-state models for synthetic DNA. Moreover, the main result on model embeddability enables us to compute the volume of the set of model-embeddable Markov matrices relative to the volume of other relevant sets of Markov matrices within the model.
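As an illustrative aside (not the paper's characterisation), the classical embedding condition — that a Markov matrix be the exponential of a rate matrix — can be probed numerically. The sketch below, which assumes the matrix is diagonalizable and checks only the principal branch of the matrix logarithm (a genuine embeddability test would have to consider all real logarithms), tests whether the principal log is a valid rate matrix: real, with nonnegative off-diagonal entries and zero row sums.

```python
import numpy as np

def principal_log(M):
    """Principal matrix logarithm via eigendecomposition
    (assumes M is diagonalizable)."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.log(w.astype(complex))) @ np.linalg.inv(V)

def is_embeddable(M, tol=1e-9):
    """Heuristic check: is the principal log of M a real rate matrix,
    i.e. nonnegative off-diagonals and zero row sums?"""
    L = principal_log(M)
    if not np.allclose(L.imag, 0.0, atol=tol):
        return False  # the principal logarithm is not even real
    Q = L.real
    n = len(M)
    off_diag_ok = all(Q[i, j] >= -tol
                      for i in range(n) for j in range(n) if i != j)
    return off_diag_ok and np.allclose(Q.sum(axis=1), 0.0, atol=tol)

# A symmetric 2-state Markov matrix near the identity is embeddable:
M = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(is_embeddable(M))  # True
```

For the 2-state example the eigenvalues are 1 and 0.8, both positive, so the principal log exists and is the rate matrix Q = (ln 0.8 / 2) · [[1, -1], [-1, 1]]; swapping the rows gives a negative eigenvalue and the check fails.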


2021, Vol 1187 (1), pp. 012035
Author(s): Preksha Satish, Deeksha Lingraj, S Anjan Kumar, T G Keerthan Kumar

Author(s): Chengji Wang, Zhiming Luo, Yaojin Lin, Shaozi Li

Most existing text-based person search methods depend heavily on exploring the correspondence between image regions and words in the sentence. However, these methods correlate image regions and words at the same semantic granularity, which 1) produces irrelevant correspondences between image and text and 2) causes an ambiguous embedding problem. In this study, we propose a novel multi-granularity embedding learning model for text-based person search. It generates multi-granularity embeddings of partial person bodies in a coarse-to-fine manner by revisiting the person image at different spatial scales. Specifically, we distill partial knowledge from image strips to guide the model in selecting semantically relevant words from the text description, allowing it to learn discriminative and modality-invariant visual-textual embeddings. In addition, we integrate the partial embeddings at each granularity and perform multi-granularity image-text matching. Extensive experiments validate the effectiveness of our method, which achieves new state-of-the-art performance through the learned discriminative partial embeddings.
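To make the coarse-to-fine idea concrete, here is a minimal sketch (not the authors' model; the function names, the average-pooling scheme, and the granularity set (1, 2, 4) are all illustrative assumptions): per-strip features are pooled into partial embeddings at several granularities, and image and text embeddings are matched by summing cosine similarities across granularities.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale rows to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def multi_granularity_embeddings(strip_feats, granularities=(1, 2, 4)):
    """Pool per-strip features into coarse-to-fine partial embeddings.
    strip_feats: (num_strips, dim) features of horizontal body strips.
    Returns one (k, dim) array per granularity k
    (assumes num_strips is divisible by each k)."""
    n, d = strip_feats.shape
    out = []
    for k in granularities:
        pooled = strip_feats.reshape(k, n // k, d).mean(axis=1)
        out.append(l2_normalize(pooled))
    return out

def match_score(img_embs, txt_embs):
    """Sum over granularities of the mean cosine similarity
    between corresponding partial embeddings."""
    return sum((a * b).sum(axis=1).mean()
               for a, b in zip(img_embs, txt_embs))

rng = np.random.default_rng(0)
strips = rng.normal(size=(4, 8))            # 4 horizontal strips, 8-dim features
img = multi_granularity_embeddings(strips)
txt = multi_granularity_embeddings(strips)  # identical features for the demo
print(round(float(match_score(img, txt)), 4))  # 3.0 (one per granularity)
```

Matching identical features yields cosine 1 at every granularity, so the score equals the number of granularities; in a real model the text side would come from a separate encoder over the selected words.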

