Playing the Large Margin Preference Game

Author(s):  
Mirko Polato ◽  
Guglielmo Faggioli ◽  
Ivano Lauriola ◽  
Fabio Aiolli
Keyword(s):  

1977 ◽
Vol 16 (04) ◽  
pp. 230-233 ◽  
Author(s):  
R. H. Greenfield
Keyword(s):  

The results of an experiment measuring the performance of the Davidson and Soundex phonetic key compression schemes in finding sets of records representing the same individual in a moderately large radiology patient file, compared with exact surname matching, are presented. This is similar to the problem of retrieving a record by name when neither the search key nor the recorded key is known accurately. Both phonetic schemes perform similarly in obtaining extra matches, and both outperform exact name matching by a large margin. The results also indicate that the Davidson scheme is superior to Soundex because it produces significantly fewer mismatches.
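For context, Soundex groups surnames under a short phonetic key (first letter plus three digits), so spelling variants of the same name collide on the same key instead of requiring an exact match. Below is a minimal Python sketch of standard American Soundex; the Davidson scheme studied in the paper is a different key compression and is not reproduced here.

```python
def soundex(name: str) -> str:
    """Standard American Soundex: first letter plus up to three digits."""
    codes = {
        **dict.fromkeys("bfpv", "1"),
        **dict.fromkeys("cgjkqsxz", "2"),
        **dict.fromkeys("dt", "3"),
        "l": "4",
        **dict.fromkeys("mn", "5"),
        "r": "6",
    }
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    digits = []
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":                 # h and w do not reset the previous code
            continue
        code = codes.get(ch, "")       # vowels map to "" and reset prev
        if code and code != prev:
            digits.append(code)
        prev = code
    return (name[0].upper() + "".join(digits) + "000")[:4]

# Spelling variants map to the same key, while exact comparison fails:
print(soundex("Smith"), soundex("Smyth"))   # S530 S530
```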


2019 ◽  
Author(s):  
Wengong Jin ◽  
Regina Barzilay ◽  
Tommi S Jaakkola

Accelerating drug discovery relies heavily on automatic tools that optimize precursor molecules to give them better biochemical properties. This paper substantially extends prior state-of-the-art graph-to-graph translation methods for molecular optimization. In particular, we obtain coherent multi-resolution representations by interweaving trees over substructures with the atom-level encoding of the original molecular graph. Moreover, our graph decoder is fully autoregressive and interleaves each step of adding a new substructure with the process of resolving its connectivity to the emerging molecule. We evaluate the model on multiple molecular optimization tasks and show that it outperforms previous state-of-the-art baselines by a large margin.
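As a rough illustration of the interleaved, autoregressive decoding described above, the toy Python sketch below adds one substructure per step and immediately resolves where it attaches to the partial molecule. The substructure vocabulary and the random choices are placeholders for the learned neural predictors; this is not the authors' implementation.

```python
import random

# Hypothetical substructure vocabulary; the real model learns substructures
# from data and scores them with neural networks over the molecular graph.
SUBSTRUCTURE_VOCAB = ["benzene", "carbonyl", "amine", "hydroxyl"]

def decode_molecule(num_steps: int, seed: int = 0) -> list[tuple[str, int]]:
    """Interleaved decoding loop: each step (1) picks the next substructure,
    then (2) resolves its attachment point before decoding anything else."""
    rng = random.Random(seed)
    molecule: list[tuple[str, int]] = []  # (substructure, index it attaches to)
    for step in range(num_steps):
        fragment = rng.choice(SUBSTRUCTURE_VOCAB)            # stand-in for the substructure predictor
        attach_to = rng.randrange(step) if step > 0 else -1  # stand-in for the attachment predictor
        molecule.append((fragment, attach_to))
    return molecule

print(decode_molecule(num_steps=5))
```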


2019 ◽  
Author(s):  
Peidong Wang ◽  
Jia Cui ◽  
Chao Weng ◽  
Dong Yu

2020 ◽  
pp. 1-11
Author(s):  
Dawei Yu ◽  
Jie Yang ◽  
Yun Zhang ◽  
Shujuan Yu

The Densely Connected Network (DenseNet) is widely recognized as a highly competitive deep neural network architecture. Its most distinctive property is its dense connections, which form each layer’s input by concatenating the outputs of all preceding layers and thereby improve performance by pushing feature reuse to the extreme. However, these same dense connections make the feature dimension grow with depth, leaving DenseNet resource-intensive and inefficient. In light of this, and inspired by the Residual Network (ResNet), we propose an improved DenseNet named Additive DenseNet, which replaces the concatenation operations used in dense connections with the addition operations used in ResNet; for feature reuse, it further upgrades addition to accumulation (namely ∑(·)), so that each layer’s input is the summation of all preceding layers’ outputs. Consequently, Additive DenseNet keeps the input dimension from growing while retaining the effect of dense connections. In this paper, Additive DenseNet is applied to text classification. The experimental results show that, compared to DenseNet, Additive DenseNet reduces model complexity by a large margin in terms of GPU memory usage and number of parameters, and despite this economy it still outperforms DenseNet in accuracy on six text classification datasets while remaining competitive in model training.
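A minimal PyTorch-style sketch of the accumulating connection described above, assuming plain fully connected layers (the paper's text classification architecture uses its own layer types; this only shows how summing, instead of concatenating, the preceding outputs keeps the input width fixed):

```python
import torch
import torch.nn as nn

class AdditiveDenseBlock(nn.Module):
    """Each layer receives the running SUM of the block input and all
    preceding layers' outputs, so the feature dimension never grows
    (unlike concatenation-based DenseNet blocks)."""

    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        running_sum = x                      # accumulates x0 + x1 + ... + x_{i-1}
        for layer in self.layers:
            out = layer(running_sum)         # layer input is the accumulated sum
            running_sum = running_sum + out  # fold this layer's output into the sum
        return running_sum

# The input width stays fixed regardless of depth:
block = AdditiveDenseBlock(dim=128, num_layers=4)
print(block(torch.randn(8, 128)).shape)      # torch.Size([8, 128])
```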

