Structured Output Prediction
Recently Published Documents


TOTAL DOCUMENTS: 21 (five years: 5)
H-INDEX: 4 (five years: 1)

Author(s): Waleed Mustafa, Yunwen Lei, Antoine Ledent, Marius Kloft

In machine learning, we often encounter structured output prediction problems (SOPPs), i.e., problems where the output space admits a rich internal structure. Application domains where SOPPs naturally occur include natural language processing, speech recognition, and computer vision. Typical SOPPs have an extremely large label set, which grows exponentially with the size of the output. Existing generalization analyses imply bounds with at least a square-root dependency on the cardinality d of the label set, which can be vacuous in practice. In this paper, we significantly improve the state of the art by developing novel high-probability bounds with a logarithmic dependency on d. Furthermore, we leverage the lens of algorithmic stability to develop generalization bounds in expectation without any dependency on d. Our results therefore build a solid theoretical foundation for learning in large-scale SOPPs. Finally, we extend our results to learning with weakly dependent data.
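Schematically, writing n for the number of training samples (a symbol assumed here; the abstract does not fix notation), the claimed progression of how the bounds depend on the label-set cardinality d can be summarized as follows. This is an illustrative scaling only; constants and complexity terms are omitted.

```latex
\underbrace{O\!\Big(\sqrt{d}/\sqrt{n}\Big)}_{\text{prior analyses}}
\;\longrightarrow\;
\underbrace{O\!\Big(\log d/\sqrt{n}\Big)}_{\text{high-probability bounds here}}
\;\longrightarrow\;
\underbrace{O\!\Big(1/\sqrt{n}\Big)}_{\text{stability-based, in expectation}}
```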


2020, Vol. 34 (04), pp. 5347-5354
Author(s): Pingbo Pan, Ping Liu, Yan Yan, Tianbao Yang, Yi Yang

This paper focuses on energy-model-based structured output prediction. While they inherit the capacity of energy-based models to handle sophisticated cases, previous deep energy-based methods suffer from the substantial computational cost introduced by the large number of gradient steps in the inference process. To boost the efficiency and accuracy of energy-based models on structured output prediction, we propose a novel method analogous to the adversarial learning framework. Specifically, in our proposed framework, the generator consists of an inference network while the discriminator comprises an energy network. The two sub-modules, i.e., the inference network and the energy network, benefit each other during the whole computation process. On the one hand, our modified inference network boosts efficiency by predicting good initializations and reducing the search space for the inference process. On the other hand, inheriting the benefits of the energy network, the energy module evaluates the quality of the output generated by the inference network and correspondingly provides a resourceful guide to the training of the inference network. In the ideal case, the adversarial learning strategy ensures that the two sub-modules reach an equilibrium after sufficiently many steps. We conduct extensive experiments to verify the effectiveness and efficiency of our proposed method.
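The interplay the abstract describes, an inference network warm-starting gradient-based inference under an energy network, can be sketched as below. The module names, layer sizes, and step counts are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class InferenceNet(nn.Module):
    """Predicts an initial structured output y0 from input x."""
    def __init__(self, dx, dy):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dx, 32), nn.ReLU(), nn.Linear(32, dy))

    def forward(self, x):
        return self.net(x)

class EnergyNet(nn.Module):
    """Scores an (x, y) pair; lower energy means a better output."""
    def __init__(self, dx, dy):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dx + dy, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def infer(x, e_net, i_net, steps=5, lr=0.1):
    """Gradient-based inference, warm-started by the inference network."""
    # Start from the inference net's prediction instead of a random point,
    # which is the efficiency gain the abstract refers to.
    y = i_net(x).detach().requires_grad_(True)
    for _ in range(steps):
        energy = e_net(x, y).sum()
        grad, = torch.autograd.grad(energy, y)
        # Descend the energy surface to refine the output.
        y = (y - lr * grad).detach().requires_grad_(True)
    return y.detach()
```

The adversarial training loop itself (the energy network learning to score ground-truth outputs below generated ones, and the inference network learning from that signal) is omitted here for brevity.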


2019, Vol. 503, pp. 551-573
Author(s): Gjorgji Madjarov, Vedrana Vidulin, Ivica Dimitrovski, Dragi Kocev

IEEE Access, 2019, Vol. 7, pp. 106065-106074
Author(s): Chunhua Zhang, Shiding Sun, Yingjie Tian, Zeyuan Wang

2018, Vol. 281, pp. 169-177
Author(s): Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam
