Learning by Ignoring, with Application to Domain Adaptation

Author(s):  
Xingchen Zhao ◽  
Xuehai He ◽  
Pengtao Xie

Learning by ignoring, which identifies less important things and excludes them from the learning process, is broadly practiced in human learning and has proven widely effective. Psychological studies have shown that learning to ignore certain things is a powerful tool for helping people focus. In this paper, we explore whether this useful human learning methodology can be borrowed to improve machine learning. We propose a novel machine learning framework referred to as learning by ignoring (LBI). Our framework automatically identifies pretraining data examples that have a large domain shift from the target distribution by learning an ignoring variable for each example, and it excludes them from the pretraining process. We formulate LBI as a three-level optimization framework involving three learning stages: pretraining by minimizing the losses weighted by the ignoring variables; finetuning; and updating the ignoring variables by minimizing the validation loss. A gradient-based algorithm is developed to efficiently solve the three-level optimization problem in LBI. Experiments on various datasets demonstrate the effectiveness of our framework.
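As a rough illustration of the three-level scheme described in the abstract, the toy sketch below learns per-example ignoring variables on a one-parameter regression problem with one domain-shifted pretraining example. All data, names, and the finite-difference hypergradient are invented for illustration; the paper itself develops an efficient gradient-based algorithm and a genuine finetuning stage.

```python
import numpy as np

# toy pretraining set: the last example comes from a shifted domain
X_pre = np.array([1.0, 2.0, 3.0, 10.0])
y_pre = np.array([1.0, 2.0, 3.0, -5.0])          # outlier w.r.t. y = x
X_val = np.array([1.5, 2.5])
y_val = np.array([1.5, 2.5])                     # target domain obeys y = x

a = np.ones(len(X_pre))                          # ignoring variables in [0, 1]

def pretrain(a, steps=200, lr=0.01):
    """Stage 1: fit a scalar weight w by minimizing the a-weighted squared loss."""
    w = 0.0
    for _ in range(steps):
        resid = X_pre * w - y_pre
        w -= lr * np.mean(a * resid * X_pre)
    return w

def val_loss(a):
    # Stages 1+2 collapsed: "finetuning" is the identity here for brevity
    w = pretrain(a)
    return np.mean((X_val * w - y_val) ** 2)

# Stage 3: update the ignoring variables with a finite-difference hypergradient
eps, lr_a = 1e-4, 5.0
for _ in range(20):
    base = val_loss(a)
    grad = np.array([(val_loss(a + eps * np.eye(len(a))[i]) - base) / eps
                     for i in range(len(a))])
    a = np.clip(a - lr_a * grad, 0.0, 1.0)

print(np.round(a, 2))    # the shifted example's ignoring variable collapses toward 0
```

The domain-shifted example's weight is driven to zero because upweighting it pulls the pretrained model away from the target distribution and so increases the validation loss.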

Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Qi Zhu ◽  
Ning Yuan ◽  
Donghai Guan

In recent years, self-paced learning (SPL) has attracted much attention due to the improvement it brings to nonconvex-optimization-based machine learning algorithms. As a methodology introduced from human learning, SPL dynamically evaluates the learning difficulty of each sample and provides a weighted learning model that guards against the negative effects of hard-to-learn samples. In this study, we propose a cognition-driven SPL method, retrospective robust self-paced learning (R2SPL), inspired by two observations about the human learning process: misclassified samples are more memorable in subsequent learning, and a model from a later learning phase, trained on a large number of samples, can be used to reduce the risk of poor generalization in the initial learning phase. We simultaneously estimate the degree of learning difficulty and of misclassification at each step of SPL, and we propose a framework for constructing multilevel SPL that improves the robustness of the initial learning phase. The proposed method can be viewed as a multilayer model in which the output of the previous layer guides the construction of a robust initialization model for the next layer. The experimental results show that R2SPL outperforms conventional self-paced learning models on classification tasks.
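A minimal sketch of the vanilla SPL loop that R2SPL builds on (hard sample weighting with a growing age parameter): the toy data and thresholds are invented, and R2SPL's retrospective, multilevel machinery is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 1-D regression data with two corrupted, hard-to-learn samples
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(scale=0.05, size=20)
y[3] += 3.0
y[15] -= 3.0

w = 0.0          # model parameter
lam = 0.5        # SPL "age" parameter: larger lam admits harder samples

for epoch in range(30):
    losses = (w * x - y) ** 2
    v = (losses < lam).astype(float)   # hard-weighting SPL rule: select easy samples
    if v.sum() > 0:
        # weighted least-squares update for w given the current selection v
        w = np.sum(v * x * y) / np.sum(v * x * x)
    lam = min(lam * 1.2, 5.0)          # grow lam gradually, capped below the outlier loss

print(round(w, 2), int(v.sum()))       # w near 2.0; the 2 corrupted samples stay excluded
```

The model is first fitted only on easy (low-loss) samples; as `lam` grows, harder samples are admitted, while the two corrupted samples remain excluded and cannot distort the fit.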


2021 ◽  
Vol 248 ◽  
pp. 01012
Author(s):  
Anton Starodub ◽  
Natalia Eliseeva ◽  
Milen Georgiev

The research conducted in this paper is in the field of machine learning. The main object of the research is the learning process of an artificial neural network, with the aim of increasing its efficiency. The algorithm is based on the analysis of retrospective learning data. The dynamics of the changes in the weights of an artificial neural network during training is an important indicator of training efficiency. The algorithm proposed in this work is based on the changing values of the weight gradients. Tracking how the gradients of the weights change makes it possible to understand how actively the network weights are updated during training. This knowledge helps to diagnose the training process and to adjust the training parameters. The results of the algorithm can be used when training an artificial neural network: they help to determine the set of measures (actions) needed to optimize the learning process.
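The abstract describes the algorithm only at a high level; the following invented sketch illustrates the basic idea of recording weight-gradient magnitudes during training and using the recorded dynamics retrospectively to diagnose the run (the model, thresholds, and diagnostics are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -2.0, 0.5])     # linear target, no noise

w = np.zeros(3)
history = []                           # retrospective record of gradient magnitudes

for epoch in range(200):
    grad = 2.0 * X.T @ (X @ w - y) / len(X)
    history.append(float(np.abs(grad).mean()))
    w -= 0.1 * grad

# retrospective diagnostics on the recorded gradient dynamics
stalled = history[-1] < 1e-3 * history[0]   # gradients vanished: training converged
exploded = max(history) > 10 * history[0]   # gradients grew: learning rate too high
print(stalled, exploded)
```

Such a record lets one adjust training parameters after the fact, e.g. lowering the learning rate when the gradient magnitudes grow rather than shrink.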


2020 ◽  
Author(s):  
Pengtao Xie ◽  
Xingchen Zhao

Learning by ignoring, which identifies less important things and excludes them from the learning process, is an effective technique in human learning. Psychological studies have shown that learning to ignore certain things is a powerful tool for helping people focus. We are interested in investigating whether this powerful learning technique can be borrowed from humans to improve the learning abilities of machines. We propose a novel learning approach called learning by ignoring (LBI). Our approach automatically identifies pretraining data examples that have a large domain shift from the target distribution by learning an ignoring variable for each example, and it excludes them from the pretraining process. We propose a three-level optimization framework to formulate LBI, involving three stages of learning: pretraining by minimizing the losses weighted by the ignoring variables; finetuning; and updating the ignoring variables by minimizing the validation loss. We develop an efficient algorithm to solve the LBI problem. Experiments on various datasets demonstrate the effectiveness of our method.


Author(s):  
Tomohiro Yamaguchi ◽  
Yuki Tamai ◽  
Keiki Takadama

This chapter reports the authors' experimental results on analyzing the human goal-finding process in continuous learning. The objective of this research is to clarify the mechanism of continuous learning. To fill in a missing piece of the reinforcement learning framework for a learning robot, the authors focus on two human mental learning processes: awareness as a pre-learning process and reflection as a post-learning process. To observe a human's mental learning processes, the authors propose a new method for visualizing them through a reflection subtask that makes the human aware of the goal-finding process during continuous learning with invisible mazes. A two-layered task is introduced. The first layer is the main continuous-learning task, designed as an environmental-mastery task whose goal must be accomplished in any environment. The second layer is the reflection subtask, which makes the goal-finding process in continuous learning explicit. The reflection cost is evaluated to analyze this process.



2021 ◽  
pp. 4978-4987
Author(s):  
Nada Hussain Ali ◽  
Matheel Emaduldeen Abdulmunem ◽  
Akbas Ezaldeen Ali

Learning is the process of gaining knowledge and applying that knowledge to behavior. The concept of learning is not restricted to human beings; it has expanded to include machines as well. Machines can now behave based on knowledge learned from the environment. The learning process is evolving in both humans and machines: to keep up with technology, human learning has evolved into micro-learning, and machine learning has evolved into deep learning. In this paper, the evolution of learning is discussed in a formal survey covering the foundations of machine learning, its evolved form of deep learning, and micro-learning as a new learning technology that can be applied to both human and machine learning. A procedural comparison clarifies the purpose of this survey, and a related discussion supports the aim of the study. Finally, concluding points summarize the practical evolution of the various machine learning concepts.


2012 ◽  
Vol 24 (5) ◽  
pp. 1297-1328 ◽  
Author(s):  
R. Savitha ◽  
S. Suresh ◽  
N. Sundararajan

Recent studies on human learning reveal that self-regulated learning in a metacognitive framework is the best strategy for efficient learning. As machine learning algorithms are inspired by the principles of human learning, one needs to incorporate the concept of metacognition to develop efficient machine learning algorithms. In this letter, we present a metacognitive learning framework that controls the learning process of a fully complex-valued radial basis function network, referred to as a metacognitive fully complex-valued radial basis function (Mc-FCRBF) network. Mc-FCRBF has two components: a cognitive component containing the FC-RBF network and a metacognitive component that regulates the learning process of FC-RBF. In every epoch, when a sample is presented to Mc-FCRBF, the metacognitive component decides what to learn, when to learn, and how to learn, based on the knowledge already acquired by the FC-RBF network and the new information contained in the sample. The Mc-FCRBF learning algorithm is described in detail, and both its approximation and classification abilities are evaluated on a set of benchmark and practical problems. The results indicate the superior approximation and classification performance of Mc-FCRBF compared to existing methods in the literature.
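A drastically simplified, real-valued stand-in for the what/when/how-to-learn decisions might look like the sketch below. The actual Mc-FCRBF is a complex-valued RBF network with its own learning criteria; the model, thresholds, and update rule here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = np.sign(X @ np.array([1.0, 1.0]))      # labels in {-1, +1}

w = np.zeros(2)
delete_thr, learn_thr = 0.2, 0.5           # hypothetical self-regulation thresholds
reserve = []

for xi, yi in zip(X, y):
    err = abs(yi - np.tanh(w @ xi))        # novelty of the new sample
    if err < delete_thr:
        continue                           # "what to learn": sample adds nothing, drop it
    elif err > learn_thr:
        # "when to learn": informative sample, update immediately
        w += 0.5 * (yi - np.tanh(w @ xi)) * xi
    else:
        reserve.append((xi, yi))           # "how to learn": set aside, revisit later

for xi, yi in reserve:                     # second pass over reserved samples
    w += 0.5 * (yi - np.tanh(w @ xi)) * xi

acc = float(np.mean(np.sign(X @ w) == y))
print(round(acc, 2))
```

The metacognitive layer thus acts as a gatekeeper over the cognitive learner: redundant samples are discarded, surprising samples trigger immediate learning, and marginal samples are deferred.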


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE. For this, we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks, under constraints on the neural network and its derivative. We then derive an algorithm that converges to the discretely constrained BSDE as the number of neurons goes to infinity. We end with numerical experiments.
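For orientation, a BSDE with a constraint on the gains process is typically written as follows (standard notation assumed; the paper's precise setting, including the definition of the facelift operator, may differ):

```latex
% Minimal solution (Y, Z, K) with the gains process Z constrained to a set C:
\begin{aligned}
Y_t &= \xi + \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
       - \int_t^T Z_s\,\mathrm{d}W_s + K_T - K_t, \\
Z_t &\in \mathcal{C} \quad \text{for a.e. } t \in [0, T],
\end{aligned}
```

where $K$ is a nondecreasing process that enforces the constraint and the solution is taken to be minimal.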

