Adversarial Localized Energy Network for Structured Prediction

2020 ◽  
Vol 34 (04) ◽  
pp. 5347-5354
Author(s):  
Pingbo Pan ◽  
Ping Liu ◽  
Yan Yan ◽  
Tianbao Yang ◽  
Yi Yang

This paper focuses on energy-model-based structured output prediction. Though they inherit the benefits of energy-based models in handling sophisticated cases, previous deep energy-based methods suffer from the substantial computation cost introduced by the large number of gradient steps in the inference process. To boost the efficiency and accuracy of energy-based models on structured output prediction, we propose a novel method analogous to the adversarial learning framework. Specifically, in our proposed framework, the generator consists of an inference network while the discriminator is comprised of an energy network. The two sub-modules, i.e., the inference network and the energy network, benefit each other mutually during the whole computation process. On the one hand, our modified inference network boosts efficiency by predicting good initializations and reducing the search space for the inference process; on the other hand, inheriting the benefits of the energy network, the energy module evaluates the quality of the output generated by the inference network and correspondingly provides a resourceful guide for training the inference network. In the ideal case, the adversarial learning strategy ensures that the two sub-modules reach an equilibrium state after sufficient training steps. We conduct extensive experiments to verify the effectiveness and efficiency of our proposed method.
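As a rough illustration of the warm-start idea described above (not the authors' architecture), the following sketch pairs a toy quadratic energy with a linear "inference network" trained to predict good initializations for gradient-based inference. The matrix `A`, the distillation-style update for `B`, and all step sizes are hypothetical choices for this toy problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structured task: energy E(x, y) = ||y - A x||^2, with A a hypothetical
# ground-truth mapping from inputs x to structured outputs y.
A = rng.normal(size=(4, 3))

def energy_grad(x, y):
    return 2.0 * (y - A @ x)            # dE/dy

# Inference network: a linear map y0 = B x that predicts an initialization.
B = np.zeros((4, 3))

def infer(x, steps, lr=0.1):
    y = B @ x                           # warm start from the inference network
    for _ in range(steps):
        y = y - lr * energy_grad(x, y)  # gradient-based energy minimization
    return y

# Train the inference network to mimic the refined output (a distillation-style
# stand-in for the paper's adversarial objective).
for _ in range(200):
    x = rng.normal(size=3)
    y_star = infer(x, steps=30)
    B += 0.05 * np.outer(y_star - B @ x, x)

# Compare initialization quality: warm start vs. a cold (zero) start.
x = rng.normal(size=3)
cold = np.linalg.norm(A @ x)            # error of the zero initialization
warm = np.linalg.norm(B @ x - A @ x)    # error of the learned initialization
```

After training, the learned initialization lands much closer to the energy minimum than a cold start, so far fewer refinement steps are needed, which is the efficiency gain the abstract describes.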

2018 ◽  
Vol 26 (6) ◽  
pp. 323-333
Author(s):  
Matt Grove

There is a growing interest in the relative benefits of the different social learning strategies used to transmit information between conspecifics and in the extent to which they require input from asocial learning. Two strategies in particular, conformist and payoff-based social learning, have been subject to considerable theoretical analysis, yet previous models have tended to examine their efficacy in relation to specific parameters or circumstances. This study employs individual-based simulations to derive the optimal proportion of individual learning that coexists with conformist and payoff-based strategies in populations experiencing wide-ranging variation in levels of environmental change, reproductive turnover, learning error and individual learning costs. Results demonstrate that conformity coexists with a greater proportion of asocial learning under all parameter combinations, and that payoff-based social learning is more adaptive in 97.43% of such combinations. These results are discussed in relation to the conjecture that the most successful social learning strategy will be the one that can persist with the lowest frequency of asocial learning, and the possibility that punishment of non-conformists may be required for conformity to confer adaptive benefits over payoff-based strategies in temporally heterogeneous environments.
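The flavour of such individual-based simulations can be sketched as follows. The parameter values, the modal-copy rule for conformity, and the copy-a-successful-model stand-in for payoff-based learning are all illustrative assumptions, not the paper's actual model.

```python
import random

random.seed(1)

N, GENS, K = 100, 300, 5                   # agents, generations, behaviour variants
P_CHANGE, IND_FRAC, ERR = 0.05, 0.2, 0.1   # hypothetical parameter choices

def run(strategy):
    optimum = 0
    pop = [random.randrange(K) for _ in range(N)]
    fitness_sum = 0.0
    for _ in range(GENS):
        if random.random() < P_CHANGE:       # environment shifts
            optimum = random.randrange(K)
        new = []
        for _ in range(N):
            if random.random() < IND_FRAC:   # asocial (individual) learning, with error
                b = optimum if random.random() > ERR else random.randrange(K)
            elif strategy == "conformist":   # copy the most common behaviour
                b = max(set(pop), key=pop.count)
            else:                            # payoff-based: copy a high-payoff model if any
                correct = [x for x in pop if x == optimum]
                b = random.choice(correct) if correct else random.choice(pop)
            new.append(b)
        pop = new
        fitness_sum += sum(b == optimum for b in pop) / N
    return fitness_sum / GENS

f_payoff = run("payoff")
f_conformist = run("conformist")
```

Even in this caricature, payoff-biased copying tracks environmental change quickly, while conformity lags behind the shifted optimum until individual learners rebuild a new majority, consistent with the pattern the abstract reports.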


2019 ◽  
Author(s):  
Oluwatobi Olabiyi ◽  
Alan O Salimov ◽  
Anish Khazane ◽  
Erik Mueller

Axioms ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 84 ◽  
Author(s):  
Sopo Pkhakadze ◽  
Hans Tompits

Default logic is one of the basic formalisms for nonmonotonic reasoning, a well-established area of logic-based artificial intelligence dealing with the representation of rational conclusions, which are characterised by the feature that the inference process may require retracting prior conclusions given additional premisses. This nonmonotonic aspect is in contrast to valid inference relations, which are monotonic. Although nonmonotonic reasoning has been extensively studied in the literature, only a few works exist dealing with a proper proof theory for specific logics. In this paper, we introduce sequent-type calculi for two variants of default logic, viz., on the one hand, for three-valued default logic due to Radzikowska, and on the other hand, for disjunctive default logic due to Gelfond, Lifschitz, Przymusinska, and Truszczyński. The first variant of default logic employs Łukasiewicz's three-valued logic as the underlying base logic and the second variant generalises defaults by allowing a selection of consequents in defaults. Both versions have been introduced to address certain representational shortcomings of standard default logic. The calculi we introduce axiomatise brave reasoning for these versions of default logic, which is the task of determining whether a given formula is contained in some extension of a given default theory. Our approach follows the sequent method first introduced in the context of nonmonotonic reasoning by Bonatti, which employs a rejection calculus for axiomatising invalid formulas, taking care of expressing the consistency condition of defaults.
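For intuition about brave reasoning, the query can be sketched for a heavily simplified, literal-only fragment of *standard* default logic (no full propositional deduction, and neither of the two variants treated in the paper). The guess-and-check extension finder below is a hypothetical illustration, using the classic Nixon-diamond theory.

```python
from itertools import combinations

def consistent(s):
    """A literal set is consistent if it contains no atom together with its negation."""
    return not any(("-" + a) in s for a in s if not a.startswith("-"))

def closure(facts, defaults, guess):
    """Apply all defaults whose prerequisites hold and whose justifications
    are consistent with the candidate extension `guess`."""
    s = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, just, cons in defaults:
            if pre <= s and all(consistent(guess | {j}) for j in just) and cons not in s:
                s.add(cons)
                changed = True
    return s

def extensions(facts, defaults):
    """Guess-and-check: an extension is a consistent fixpoint of the closure."""
    atoms = {c for _, _, c in defaults} | set(facts)
    out = []
    for r in range(len(atoms) + 1):
        for guess in map(set, combinations(sorted(atoms), r)):
            g = guess | set(facts)
            if consistent(g) and closure(facts, defaults, g) == g and g not in out:
                out.append(g)
    return out

def brave(literal, facts, defaults):
    """Brave reasoning: is the literal in *some* extension?"""
    return any(literal in e for e in extensions(facts, defaults))

# Nixon diamond: quakers are typically pacifists, republicans typically not.
facts = {"quaker", "republican"}
defaults = [({"quaker"}, {"pacifist"}, "pacifist"),
            ({"republican"}, {"-pacifist"}, "-pacifist")]
```

The theory has two extensions, one containing `pacifist` and one containing `-pacifist`, so both literals are brave consequences while neither is a skeptical one, which is exactly the distinction brave axiomatisations must capture.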


2022 ◽  
pp. 65-82
Author(s):  
Emily Art ◽  
Tasia A. Chatman ◽  
Lauren LeBental

Structural conditions in schools limit diverse exceptional learners' academic and social-emotional development and inhibit the professional growth of their teachers. Teachers and students are restricted by the current instructional model, which suggests that effective teachers lead all students through a uniform set of instructional experiences in service of objective mastery. This model assumes that diverse exceptional learners' success depends on access to the teacher-designed, one-right-way approach to the learning objective. This inflexible model prevents both the teacher and the student from co-constructing learning experiences that leverage their mutual strengths and support their mutual development. In this chapter, the authors argue that the Universal Design for Learning framework challenges the one-right-way approach, empowering teachers and students to leverage their strengths in the learning process. The authors recommend training teachers to use the Universal Design for Learning framework to design flexible instruction for diverse exceptional learners.


2020 ◽  
Vol 9 (9) ◽  
pp. 527
Author(s):  
Jiantao Liu ◽  
Quanlong Feng ◽  
Ying Wang ◽  
Bayartungalag Batsaikhan ◽  
Jianhua Gong ◽  
...  

With the rapid progress of both urban sprawl and urban renewal, large numbers of old buildings have been demolished in China, leading to widespread construction sites, which can cause severe dust contamination. To alleviate the accompanying dust pollution, green plastic mulch has been widely used by local governments in China. Therefore, timely and accurate mapping of urban green plastic-covered regions is of great significance to both urban environmental management and the understanding of urban growth status. However, the complex spatial patterns of the urban landscape make it challenging to accurately identify these areas of green plastic cover. To tackle this issue, we propose a deep semi-supervised learning framework for green plastic cover mapping using very high resolution (VHR) remote sensing imagery. Specifically, a multi-scale deformable convolutional neural network (CNN) was exploited to learn representative and discriminative features under complex urban landscapes. Afterwards, a semi-supervised learning strategy was proposed to integrate the limited labeled data and massive unlabeled data for model co-training. Experimental results indicate that the proposed method can accurately identify green plastic-covered regions in Jinan with an overall accuracy (OA) of 91.63%. An ablation study indicated that, compared with supervised learning, the semi-supervised learning strategy in this study increases the OA by 6.38%. Moreover, the multi-scale deformable CNN outperforms several classic CNN models from the computer vision field. The proposed method is the first attempt to map urban green plastic-covered regions based on deep learning, and it could serve as a baseline and useful reference for future research.
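The pseudo-labeling idea behind such semi-supervised training can be sketched with a toy nearest-centroid classifier standing in for the multi-scale deformable CNN. The synthetic data, confidence threshold, and self-training loop below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class toy features standing in for "green plastic" vs. "background" pixels:
# a few labeled samples and many unlabeled ones.
n_lab, n_unlab = 10, 400
X_lab = np.vstack([rng.normal(-2, 1, (n_lab // 2, 2)),
                   rng.normal(+2, 1, (n_lab // 2, 2))])
y_lab = np.array([0] * (n_lab // 2) + [1] * (n_lab // 2))
X_unlab = np.vstack([rng.normal(-2, 1, (n_unlab // 2, 2)),
                     rng.normal(+2, 1, (n_unlab // 2, 2))])

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(cents, X):
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1), d

# Self-training: pseudo-label confident unlabeled points, refit, repeat.
cents = fit_centroids(X_lab, y_lab)
for _ in range(5):
    pseudo, d = predict(cents, X_unlab)
    margin = np.abs(d[:, 0] - d[:, 1])
    keep = margin > 1.0                  # keep only confident pseudo-labels
    X_all = np.vstack([X_lab, X_unlab[keep]])
    y_all = np.concatenate([y_lab, pseudo[keep]])
    cents = fit_centroids(X_all, y_all)

y_true = np.array([0] * (n_unlab // 2) + [1] * (n_unlab // 2))
acc = (predict(cents, X_unlab)[0] == y_true).mean()
```

The confidence threshold matters: folding in low-margin pseudo-labels can reinforce early mistakes, which is why methods like the one above typically filter or re-weight the unlabeled data.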


Author(s):  
Yang Zhao ◽  
Jianyi Zhang ◽  
Changyou Chen

Scalable Bayesian sampling plays an important role in modern machine learning, especially in fast-developing unsupervised (deep) learning models. While tremendous progress has been made via scalable Bayesian sampling methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the generated samples are typically highly correlated. Moreover, their sample-generation processes are often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov kernel (transition kernel). High-quality samples can be efficiently generated by direct forward passes through a learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information on existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, our framework learns to use current samples, either from the generator or pre-provided training data, to update the generator such that the generated samples progressively approach a target distribution; thus it is called self-learning. Experiments on both synthetic and real datasets verify the advantages of our framework, which outperforms related methods in terms of both sampling efficiency and sample quality.
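The self-learning loop can be caricatured in one dimension: push the generator's own samples through one step of a known Metropolis kernel, then refit the generator to the improved samples. Here a Gaussian generator and moment matching stand in for the paper's neural generator and adversarial objective, which is a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):                        # unnormalised target: N(3, 1)
    return -0.5 * (x - 3.0) ** 2

def kernel_step(x):                  # one Metropolis transition per sample
    prop = x + rng.normal(size=x.shape)
    accept_prob = np.exp(np.minimum(0.0, log_p(prop) - log_p(x)))
    accept = rng.uniform(size=x.shape) < accept_prob
    return np.where(accept, prop, x)

# Generator: G(z) = mu + sigma * z with z ~ N(0, 1).  Self-learning loop:
# improve the generator's own samples with one kernel step, then refit the
# generator to the improved samples.
mu, sigma = 0.0, 1.0
for _ in range(200):
    z = rng.normal(size=1000)
    x = mu + sigma * z               # samples from the current generator
    x_next = kernel_step(x)          # one step of the Markov kernel
    mu, sigma = x_next.mean(), x_next.std()

# After training, fresh samples come from direct forward passes, no chain.
samples = mu + sigma * rng.normal(size=5000)
```

Since the kernel leaves the target invariant, the generator's fixed point under this loop matches the target distribution, and the learned generator then produces independent draws in a single forward pass, which is the efficiency argument in the abstract.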

