continuous vector
Recently Published Documents

Total documents: 145 (last five years: 28)
H-index: 11 (last five years: 2)

Author(s): Claudianor O. Alves, Vincenzo Ambrosio, César E. Torres Ledesma

Abstract: In this paper we deal with the existence of solutions for the following class of magnetic semilinear Schrödinger equations:
$$
(P)\qquad \left\{\begin{aligned} &(-i\nabla + A(x))^2 u + u = |u|^{p-2}u &&\text{in } \Omega,\\ &u = 0 &&\text{on } \partial\Omega, \end{aligned}\right.
$$
where $N \ge 3$, $\Omega \subset \mathbb{R}^N$ is an exterior domain, $p \in (2, 2^*)$ with $2^* = \frac{2N}{N-2}$, and $A: \mathbb{R}^N \rightarrow \mathbb{R}^N$ is a continuous vector potential satisfying $A(x) \rightarrow 0$ as $|x| \rightarrow \infty$.


2021, Vol 78 (1), pp. 139-156
Author(s): Antonio Boccuto

Abstract: We give versions of Hahn-Banach, sandwich, duality, and Moreau-Rockafellar-type theorems, optimality conditions, and a formula for the subdifferential of composite functions for order continuous vector lattice-valued operators that are invariant or equivariant with respect to a fixed group G of homomorphisms. As applications to optimization problems with both convex and linear constraints, we present some Farkas- and Kuhn-Tucker-type results.


2021, Vol 15 (4), pp. 1-27
Author(s): Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

Traditional network embedding primarily focuses on learning a continuous vector representation for each node, preserving network structure and/or node content information, so that off-the-shelf machine learning algorithms can be applied directly to the vector-format node representations for network analysis. However, the learned continuous vector representations are inefficient for large-scale similarity search, which often involves finding nearest neighbors measured by distance or similarity in a continuous vector space. In this article, we propose a search-efficient binary network embedding algorithm called BinaryNE that learns a binary code for each node by simultaneously modeling node-context relations and node-attribute relations through a three-layer neural network. BinaryNE learns binary node representations using a stochastic gradient descent-based online learning algorithm. The learned binary encoding not only reduces the memory needed to represent each node, but also allows fast bit-wise comparisons that support faster node similarity search than Euclidean or other distance measures. Extensive experiments and comparisons demonstrate that BinaryNE not only delivers more than 25 times faster search speed, but also provides comparable or better search quality than traditional continuous-vector-based network embedding methods. The binary codes learned by BinaryNE also yield competitive performance on node classification and node clustering tasks. The source code of the BinaryNE algorithm is available at https://github.com/daokunzhang/BinaryNE.
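As a minimal illustration of why the bit-wise comparisons pay off, the sketch below (not the BinaryNE implementation; sizes and names are illustrative assumptions) ranks nodes by Hamming distance using XOR plus a popcount lookup table over packed binary codes:

```python
# A minimal sketch (not the BinaryNE implementation) of binary-code search:
# Hamming distance reduces to XOR + popcount on packed bit arrays,
# avoiding floating-point distance computations entirely.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_bits = 100_000, 128          # illustrative sizes

# Pretend these are learned binary node representations (0/1 per bit).
codes = rng.integers(0, 2, size=(n_nodes, n_bits), dtype=np.uint8)
packed = np.packbits(codes, axis=1)     # 128 bits -> 16 bytes per node

# Lookup table of popcounts for all 256 byte values.
popcount = np.array([bin(b).count("1") for b in range(256)], dtype=np.uint8)

def hamming_top_k(query_packed: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k nearest nodes by Hamming distance."""
    xor = np.bitwise_xor(packed, query_packed)   # differing bits
    dists = popcount[xor].sum(axis=1)            # per-node bit-difference count
    return np.argpartition(dists, k)[:k]

neighbors = hamming_top_k(packed[0])
```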


2021, Vol 7 (17), eabb9004
Author(s): Hao Peng, Qing Ke, Ceren Budak, Daniel M. Romero, Yong-Yeol Ahn

Understanding the structure of knowledge domains is one of the foundational challenges in the science of science. Here, we propose a neural embedding technique that leverages the information contained in the citation network to obtain continuous vector representations of scientific periodicals. We demonstrate that our periodical embeddings encode nuanced relationships between periodicals and the complex disciplinary and interdisciplinary structure of science, allowing us to make cross-disciplinary analogies between periodicals. Furthermore, we show that the embeddings capture meaningful “axes” that encompass knowledge domains, such as an axis from “soft” to “hard” sciences or from “social” to “biological” sciences, which allow us to quantitatively ground periodicals on a given dimension. By offering novel quantification in the science of science, our framework may, in turn, facilitate the study of how knowledge is created and organized.
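The analogy queries can be pictured in the style of word-vector arithmetic. The sketch below is a hedged illustration: the journal names and the embedding matrix are hypothetical stand-ins, not the paper's learned periodical embeddings:

```python
# A hedged sketch of cross-disciplinary analogies over periodical embeddings,
# in the style of word-vector analogies: "a is to b as c is to ?".
import numpy as np

journals = ["Physical Review Letters", "Journal of Applied Physics",
            "Cell", "Applied Microbiology"]            # illustrative names
vectors = np.random.default_rng(1).normal(size=(4, 100))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
index = {name: i for i, name in enumerate(journals)}

def analogy(a: str, b: str, c: str) -> str:
    """Solve 'a is to b as c is to ?' by nearest cosine neighbor."""
    target = vectors[index[b]] - vectors[index[a]] + vectors[index[c]]
    target /= np.linalg.norm(target)
    scores = vectors @ target                          # cosine similarities
    for j in np.argsort(-scores):                      # best first, skip inputs
        if journals[j] not in (a, b, c):
            return journals[j]

# e.g. the applied counterpart, in biology, of a flagship physics journal:
print(analogy("Physical Review Letters", "Journal of Applied Physics", "Cell"))
```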


2021
Author(s): Djork-Arné Clevert, Tuan Le, Robin Winter, Floriane Montanari

Automatic recognition of the molecular content of a molecule's graphical depiction is an extremely challenging problem that remains largely unsolved despite decades of research. Recent advances in neural machine translation enable the auto-encoding of molecular structures in a continuous vector space of fixed size (latent representation) with low reconstruction error. In this paper, we present a fast and accurate model combining a deep convolutional neural network that learns from molecule depictions with a pre-trained decoder that translates the latent representation into the SMILES representation of the molecule. This combination allows us to precisely infer a molecular structure from an image. Our rigorous evaluation shows that Img2Mol correctly translates up to 88% of molecular depictions into their SMILES representation. A pretrained version of Img2Mol is publicly available on GitHub for non-commercial users.
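A structural sketch of the two-stage pipeline described above might look as follows; the layer sizes, names, and decoder interface are assumptions for illustration, not the released Img2Mol code:

```python
# A hedged structural sketch: a CNN encodes a depiction into a fixed-size
# latent vector, and a frozen pre-trained decoder maps latents to SMILES.
import torch
import torch.nn as nn

class DepictionEncoder(nn.Module):
    """CNN mapping a molecule depiction to a fixed-size latent vector."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, latent_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.proj(self.conv(img).flatten(1))

encoder = DepictionEncoder()
latent = encoder(torch.randn(1, 1, 224, 224))   # one grayscale depiction
# smiles = pretrained_decoder(latent)  # frozen decoder: latent -> SMILES string
```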




Author(s): Konrawut Khammahawong, Poom Kumam, Parin Chaipunya, Somyot Plubtieng

Abstract: We propose Tseng's extragradient methods for finding a solution of variational inequality problems associated with pseudomonotone vector fields in Hadamard manifolds. Under standard assumptions, such as pseudomonotonicity and Lipschitz continuity of the vector field, we prove that any sequence generated by the proposed methods converges to a solution of the variational inequality problem, whenever one exists. Moreover, we give some numerical experiments to illustrate our main results.
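For intuition, a minimal Euclidean sketch of Tseng's forward-backward-forward iteration follows ($\mathbb{R}^n$ being the flat special case of a Hadamard manifold); the vector field, feasible set, and step size are illustrative assumptions, not the paper's manifold setting:

```python
# A minimal Euclidean sketch of Tseng's extragradient method for VI(F, C):
#   y = P_C(x - lam*F(x));  x+ = y + lam*(F(x) - F(y)),
# with lam below 1/L for an L-Lipschitz, pseudomonotone F.
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto the feasible set C (a box here, for simplicity)."""
    return np.clip(x, lo, hi)

def tseng_extragradient(F, x0, lam=0.1, tol=1e-8, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        y = project_box(x - lam * F(x))        # forward-backward step
        if np.linalg.norm(x - y) < tol:        # fixed point: x solves VI(F, C)
            break
        x = y + lam * (F(x) - F(y))            # second forward (correction) step
    return x

# Example: F(x) = A x + b with A monotone (hence pseudomonotone) and Lipschitz.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
sol = tseng_extragradient(lambda x: A @ x + b, x0=np.zeros(2))
```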


Author(s): Stanisław Purgał, Julian Parsert, Cezary Kaliszyk

Abstract: Applying machine learning to mathematical terms and formulas requires a representation of formulas that is adequate for AI methods. In this paper, we develop an encoding that preserves logical properties and is additionally reversible: the tree shape of a formula, including all symbols, can be reconstructed from the dense vector representation. We do this by training two decoders: one that extracts the top symbol of the tree and one that extracts the embedding vectors of its subtrees. The syntactic and semantic logical properties we aim to preserve include structural formula properties, the applicability of natural deduction steps, and more complex operations such as unifiability. We propose datasets that can be used to train for these syntactic and semantic properties, and we evaluate the viability of the developed encoding across the proposed datasets as well as on the practical theorem-proving problem of premise selection in the Mizar corpus.
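A hedged sketch of the reversible-encoding idea follows; the dimensions, vocabulary size, and decoder heads are illustrative assumptions, not the paper's architecture. An encoder folds a formula tree into one vector, and two decoder heads recover the top symbol and the child embeddings, so the tree can be rebuilt recursively:

```python
# A hedged sketch of a reversible tree encoding with two decoder heads.
import torch
import torch.nn as nn

DIM, N_SYMBOLS, MAX_CHILDREN = 64, 128, 2   # illustrative sizes

encoder = nn.Sequential(nn.Linear(N_SYMBOLS + MAX_CHILDREN * DIM, DIM), nn.Tanh())
symbol_head = nn.Linear(DIM, N_SYMBOLS)             # decoder 1: top symbol
children_head = nn.Linear(DIM, MAX_CHILDREN * DIM)  # decoder 2: subtree vectors

def encode(symbol_onehot: torch.Tensor, children: list) -> torch.Tensor:
    """Fold a node's symbol and child embeddings into one vector (bottom-up)."""
    kids = children + [torch.zeros(DIM)] * (MAX_CHILDREN - len(children))
    return encoder(torch.cat([symbol_onehot, *kids]))

def decode_step(vec: torch.Tensor):
    """Recover the top symbol and child embeddings; recurse to rebuild the tree."""
    symbol = symbol_head(vec).argmax()
    kids = children_head(vec).split(DIM)
    return symbol, kids
```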


2021
Author(s): Alfredo Silva, Marcelo Mendoza

Word embeddings are vital descriptors of words in unigram representations of documents for many tasks in natural language processing and information retrieval. The representation of queries has been one of the most critical challenges in this area, because a query consists of only a few terms and has little descriptive capacity. Strategies such as averaging word embeddings can enrich a query's descriptive capacity, since they favor the identification of related terms in the continuous vector representations that characterize these approaches. We propose a data-driven strategy to combine word embeddings: we use IDF-weighted combinations of embeddings to represent queries, and show that these representations outperform the average word embeddings recently proposed in the literature. Experimental results on benchmark data show that our proposal performs well, suggesting that data-driven combinations of word embeddings are a promising line of research in ad hoc information retrieval.
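As a minimal sketch of the IDF-weighted combination (with a toy vocabulary, random vectors, and illustrative document frequencies, not the paper's setup):

```python
# A minimal sketch: represent a query as an IDF-weighted average of word
# embeddings, so frequent terms like "the" contribute little.
import numpy as np

rng = np.random.default_rng(2)
vocab = {"neural": 0, "information": 1, "retrieval": 2, "the": 3}
E = rng.normal(size=(len(vocab), 50))          # toy word embedding matrix
df = np.array([120, 300, 90, 9800])            # illustrative document frequencies
N = 10_000                                     # collection size
idf = np.log(N / df)

def embed_query(terms, weighted=True):
    ids = [vocab[t] for t in terms if t in vocab]
    w = idf[ids] if weighted else np.ones(len(ids))
    return (w[:, None] * E[ids]).sum(axis=0) / w.sum()   # (weighted) average

q_avg = embed_query(["the", "neural", "retrieval"], weighted=False)
q_idf = embed_query(["the", "neural", "retrieval"])  # downweights "the"
```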

