Effective Use of Learning Knowledge by FEERL

Author(s):  
Yukinobu Hoshino ◽  
Katsuari Kamei

Machine learning has been proposed as a way to acquire the techniques of specialists. A machine has to learn techniques by trial and error when there are no training examples. Reinforcement learning is a powerful machine learning approach that can learn without training examples being given to the learning unit. However, reinforcement learning cannot support large environments, because the number of if-then rules explodes combinatorially when each rule relates one environment state to one action. We have proposed a new reinforcement learning system for large environments, Fuzzy Environment Evaluation Reinforcement Learning (FEERL). In this paper, we propose reusing the rules acquired by FEERL.
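The core idea behind fuzzy environment evaluation, describing a continuous state with a handful of overlapping fuzzy labels instead of one rule per raw state, can be illustrated with a minimal sketch. The membership functions, label names, and ranges below are hypothetical illustrations, not the paper's actual design:

```python
def triangular(x, a, b, c):
    """Membership rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def left_shoulder(x, b, c):
    """Full membership up to b, falling to zero at c."""
    if x <= b:
        return 1.0
    return 0.0 if x >= c else (c - x) / (c - b)

def right_shoulder(x, a, b):
    """Zero membership up to a, rising to full at b."""
    if x <= a:
        return 0.0
    return 1.0 if x >= b else (x - a) / (b - a)

# Hypothetical fuzzy labels for a continuous sensor reading in [0, 10]:
# three labels replace an arbitrarily fine state discretisation, so the
# if-then rule table holds |labels| x |actions| entries instead of one
# entry per raw state value.
def fuzzify(x):
    return {
        "near": left_shoulder(x, 2.0, 5.0),
        "mid":  triangular(x, 2.0, 5.0, 8.0),
        "far":  right_shoulder(x, 5.0, 8.0),
    }

degrees = fuzzify(2.5)
```

With overlapping labels, a single reading activates a few rules to partial degrees, which is what keeps the rule base compact in large environments.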

Author(s):  
Ali Fakhry

Applications of Deep Q-Networks are seen throughout the field of reinforcement learning, a large subset of machine learning. Using a classic environment from OpenAI, CarRacing-v0, a 2D car racing environment, alongside a custom modification of that environment, a Deep Q-Network (DQN) was created to solve both the classic and custom environments. The environments are tested using custom-made CNN architectures and by applying transfer learning from ResNet18. While DQNs were state of the art years ago, using one for CarRacing-v0 appears somewhat unappealing and less effective than other reinforcement learning techniques. Overall, while the model did train and the agent learned various parts of the environment, reaching the reward threshold for the environment with this reinforcement learning technique proved problematic and difficult; other techniques would be more useful.
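The learning signal a DQN regresses its network on is the bootstrapped Bellman target, paired with epsilon-greedy action selection. A minimal framework-free sketch with toy Q-values (not the CarRacing network or its hyperparameters):

```python
import random

def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target y = r + gamma * max_a Q(s', a), truncated at episode end."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

def epsilon_greedy(q_values, epsilon, rng=random):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# Toy numbers: with Q(s') = [0.5, 1.0, 0.25] and reward 1.0,
# the target is 1.0 + 0.99 * 1.0 = 1.99.
y = dqn_target(1.0, [0.5, 1.0, 0.25])
```

In a full DQN, `y` would be the regression target for the CNN's Q-output on the taken action, computed from a frozen target network over minibatches drawn from a replay buffer.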


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Samar Ali Shilbayeh ◽  
Sunil Vadera

Purpose This paper describes the use of a meta-learning framework for recommending cost-sensitive classification methods, with the aim of answering an important question that arises in machine learning, namely, “Among all the available classification algorithms, and in considering a specific type of data and cost, which is the best algorithm for my problem?”
Design/methodology/approach The framework is based on the idea of applying machine learning techniques to discover knowledge about the performance of different machine learning algorithms. It includes components that repeatedly apply different classification methods to data sets and measure their performance. The characteristics of the data sets, combined with the algorithms and their performance, provide the training examples. A decision tree algorithm is applied to the training examples to induce the knowledge, which can then be used to recommend algorithms for new data sets. The paper contributes to both meta-learning and cost-sensitive machine learning; neither field is new, but building a recommender that recommends the optimal cost-sensitive approach for a given data problem is the contribution.
Findings The proposed solution is implemented in WEKA and evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that the developed meta-learning solution produces better results than METAL, a well-known meta-learning system. The developed solution takes the misclassification cost into consideration during the learning process, which is not available in the compared project.
Originality/value Meta-learning work has been done before, but this paper presents a new meta-learning framework that is cost-sensitive.
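The meta-learning loop described above, characterise each data set, record which algorithm performed best on it, then generalise to new data sets, can be sketched in a few lines. Everything here is illustrative: the meta-features, the example table, and the nearest-neighbour lookup (a simple stand-in for the decision-tree learner the paper actually induces in WEKA):

```python
import math

# Hypothetical meta-examples: (meta-features of a data set, best algorithm
# found by repeatedly running candidate classifiers on it).
META_EXAMPLES = [
    ({"n_rows": 150,   "n_features": 4,   "cost_ratio": 1.0},  "decision_tree"),
    ({"n_rows": 60000, "n_features": 784, "cost_ratio": 1.0},  "neural_net"),
    ({"n_rows": 1000,  "n_features": 30,  "cost_ratio": 10.0}, "cost_sensitive_tree"),
]

def distance(a, b):
    """Euclidean distance over log-scaled meta-features."""
    return math.sqrt(sum((math.log1p(a[k]) - math.log1p(b[k])) ** 2 for k in a))

def recommend(meta_features):
    """Recommend the best-known algorithm of the most similar data set
    (1-NN stand-in for the induced decision tree)."""
    return min(META_EXAMPLES, key=lambda ex: distance(meta_features, ex[0]))[1]

choice = recommend({"n_rows": 1200, "n_features": 25, "cost_ratio": 8.0})
```

Note how the `cost_ratio` meta-feature lets the recommender separate cost-sensitive situations from plain accuracy-driven ones, which is the distinguishing idea of the framework.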


Author(s):  
Jonathan Becker ◽  
Aveek Purohit ◽  
Zheng Sun

The USARSim group at NIST developed a simulated robot that operated in the Unreal Tournament 3 (UT3) gaming environment. They used a software PID controller to control the robot in UT3 worlds. Unfortunately, the PID controller did not work well, so NIST asked us to develop a better controller using machine learning techniques. In the process, we characterized the software PID controller and the robot’s behavior in UT3 worlds. Using data collected from our simulations, we compared different machine learning techniques, including linear regression and reinforcement learning (RL). Finally, we implemented an RL-based controller in Matlab and ran it in the UT3 environment via a TCP/IP link between Matlab and UT3.
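The software PID controller that served as the baseline follows the standard discrete form: a control output proportional to the error, its accumulated integral, and its rate of change. A minimal sketch (the gains, time step, and first-order plant are illustrative, not NIST's values):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*(de/dt)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant toward a setpoint of 1.0 (illustrative gains).
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.1)
state = 0.0
for _ in range(200):
    state += pid.step(1.0, state) * 0.1  # crude Euler integration of the plant
```

An RL controller replaces the fixed gain computation with a learned policy, which is why it can outperform a poorly tuned PID in the irregular dynamics of UT3 worlds.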


Author(s):  
Chang-Shing Lee ◽  
Mei-Hui Wang ◽  
Yi-Lin Tsai ◽  
Wei-Shan Chang ◽  
Marek Reformat ◽  
...  

The currently observed developments in Artificial Intelligence (AI) and its influence on different types of industries mean that human-robot cooperation is of special importance. Various types of robots have been applied to the so-called field of Edutainment, i.e., the field that combines education with entertainment. This paper introduces a novel fuzzy-based system for human-robot cooperative Edutainment. This co-learning system includes a brain-computer interface (BCI) ontology model and a Fuzzy Markup Language (FML)-based Reinforcement Learning Agent (FRL-Agent). The proposed FRL-Agent is composed of (1) a human learning agent, (2) a robotic teaching agent, (3) a Bayesian estimation agent, (4) a robotic BCI agent, (5) a fuzzy machine learning agent, and (6) a fuzzy BCI ontology. In order to verify the effectiveness of the proposed system, the FRL-Agent is used as a robot teacher in a number of elementary schools and junior high schools, and at a university, to allow robot teachers and students to learn together in the classroom. The participating students use handheld devices to indirectly or directly interact with the robot teachers to learn English. Additionally, a number of university students wear a commercial EEG device with eight electrode channels while learning English and listening to music. In the experiments, the robotic BCI agent analyzes the signals collected from the EEG device and transforms them into five physiological indices while the students are learning or listening. The Bayesian estimation agent and fuzzy machine learning agent optimize the parameters of the FRL agent and store them in the fuzzy BCI ontology. The experimental results show that the robot teachers motivate students to learn and stimulate their progress. The fuzzy machine learning agent is able to predict the five physiological indices based on the eight-channel EEG data and the trained model.
In addition, we train the model to predict other students’ feelings based on the analyzed physiological indices and labeled feelings. The FRL agent is able to provide personalized learning content based on the developed human-robot cooperative edutainment approaches. To our knowledge, the FRL agent has not been applied to teaching settings such as elementary schools before, and it opens up a promising new line of research in human-robot co-learning. In the future, we hope the FRL agent will address an existing classroom problem: high-performing students find the learning content too simple to motivate their learning, while low-performing students are unable to keep up with the learning progress and choose to give up.


2020 ◽  
Vol 6 (1) ◽  
pp. 72-103 ◽  
Author(s):  
Nicolas Ballier ◽  
Stéphane Canu ◽  
Caroline Petitjean ◽  
Gilles Gasso ◽  
Carlos Balhana ◽  
...  

Abstract This paper discusses machine learning techniques for the prediction of Common European Framework of Reference (CEFR) levels in a learner corpus. We summarise the CAp 2018 Machine Learning (ML) competition, a classification task of the six CEFR levels, which map linguistic competence in a foreign language onto six reference levels. The goal of this competition was to produce a machine learning system to predict learners’ competence levels from written productions comprising between 20 and 300 words and a set of characteristics computed for each text extracted from the French component of the EFCAMDAT data (Geertzen et al., 2013). Together with the description of the competition, we provide an analysis of the results and methods proposed by the participants and discuss the benefits of this kind of competition for the learner corpus research (LCR) community. The main findings address the methods used and lexical bias introduced by the task.
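The competition supplied each written production together with a set of characteristics computed for the text. A few such surface features can be sketched as follows (the actual EFCAMDAT-derived feature set is far richer; these are merely illustrative):

```python
def text_features(text):
    """Compute a few surface characteristics of a learner production."""
    words = text.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / n_words
    # Type-token ratio: crude proxy for lexical diversity.
    type_token_ratio = len({w.lower() for w in words}) / n_words
    return {"n_words": n_words,
            "avg_word_len": round(avg_word_len, 3),
            "type_token_ratio": round(type_token_ratio, 3)}

feats = text_features("I like to go to the park and I like to read")
```

Features of this kind are also the source of the lexical bias the analysis discusses: classifiers can latch onto vocabulary statistics rather than genuine proficiency signals.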


Author(s):  
Sergio A. Serrano

Reinforcement learning (RL) is a learning paradigm in which an agent interacts with the environment it inhabits to learn in a trial-and-error way. By letting the agent acquire knowledge from its own experience, RL has been successfully applied to complex domains such as robotics. However, for non-trivial problems, training an RL agent can take very long periods of time. Lifelong machine learning (LML) is a learning setting in which the agent learns to solve tasks sequentially, leveraging knowledge accumulated from previously solved tasks to learn better and faster on a new one. Most LML works rely heavily on the assumption that tasks are similar to each other. However, this may not be true for some domains with a high degree of task diversity that could benefit from adopting a lifelong learning approach, e.g., service robotics. Therefore, in this research we will address the problem of learning to solve a sequence of heterogeneous RL tasks (i.e., tasks that differ in their state-action spaces).
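The trial-and-error loop described above is exactly what tabular Q-learning implements: act, observe reward, and update a value table. A minimal sketch on a toy 1-D corridor task (the environment and hyperparameters are illustrative, not from this research):

```python
import random

def train_corridor(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a corridor: start at state 0, actions
    {0: left, 1: right}, reward 1 only for reaching the rightmost state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy trial-and-error action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_corridor()
greedy = [0 if q[s][0] >= q[s][1] else 1 for s in range(4)]  # learned policy
```

The slow early episodes, before any reward signal has propagated back through the table, are a small-scale version of the long training times that motivate reusing knowledge across tasks.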


2018 ◽  
Vol 16 (06) ◽  
pp. 1840027 ◽  
Author(s):  
Wen Juan Hou ◽  
Bamfa Ceesay

Information on changes in a drug’s effect when taken in combination with a second drug, known as drug–drug interaction (DDI), is relevant in the pharmaceutical industry. DDIs can delay, decrease, or enhance absorption of either drug and thus decrease or increase their action or cause adverse effects. Information Extraction (IE) can be of great benefit in allowing identification and extraction of relevant information on DDIs. We here propose an approach for the extraction of DDI from text using neural word embedding to train a machine learning system. Results show that our system is competitive against other systems for the task of extracting DDIs, and that significant improvements can be achieved by learning from word features and using a deep-learning approach. Our study demonstrates that machine learning techniques such as neural networks and deep learning methods can efficiently aid in IE from text. Our proposed approach is well suited to play a significant role in future research.
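Turning word embeddings into features a machine learning system can train on typically means composing token vectors, for example by averaging them over a sentence. A sketch with toy 3-dimensional vectors (real DDI systems learn these embeddings from large biomedical corpora):

```python
# Toy 3-d word vectors; values are purely illustrative.
EMBEDDINGS = {
    "aspirin":   [0.9, 0.1, 0.0],
    "warfarin":  [0.8, 0.2, 0.1],
    "increases": [0.0, 0.9, 0.3],
    "bleeding":  [0.1, 0.8, 0.5],
}
DIM = 3

def sentence_vector(tokens):
    """Average the embeddings of known tokens (zero vector if none known)."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not vecs:
        return [0.0] * DIM
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

features = sentence_vector("aspirin increases bleeding risk".split())
```

The resulting dense vector, unlike sparse bag-of-words counts, places related drug mentions near each other in feature space, which is what lets a downstream classifier generalize across surface variation.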


2021 ◽  
Author(s):  
U. Savitha ◽  
Kodali Lahari Chandana ◽  
A. Cathrin Sagayam ◽  
S. Bhuvaneswari

Classification of different eye diseases has clinical use in determining the actual status of the eye, the outcome of medication, and other alternatives in the curative phase. Simplicity and clinical relevance are the most important requirements for any classification system. Existing work has used different machine learning techniques to detect only a single disease, whereas a deep learning system such as a convolutional neural network (CNN) can learn hierarchical representations of images that distinguish diseased eye patterns from normal ones.
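The hierarchical representations a CNN builds start from simple learned filters convolved over the image; stacking layers of such filters yields progressively more abstract features. A minimal pure-Python sketch of one convolution with a hand-picked edge filter (real CNNs learn their kernels from data):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge kernel responds where intensity changes left-to-right;
# layers of such responses are the "hierarchical representation" a CNN learns.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1]]
fmap = conv2d(image, kernel)
```

The feature map is nonzero only along the intensity edge, illustrating how early CNN layers localize simple structures before later layers compose them into disease-relevant patterns.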


Author(s):  
Naaima Suroor ◽  
Imran Hussain ◽  
Aqeel Khalique ◽  
Tabrej Ahamad Khan

Reinforcement learning is a flourishing machine learning concept that has greatly influenced how robots are designed and taught to solve problems without human intervention. Robotics is not an alien discipline anymore, and we have several great innovations in this field that promise to impact lives for the better. However, humanoid robots are still a baffling concept for scientists, although we have managed to develop a few great inventions that look, talk, work, and behave very similarly to humans. But can these machines actually exhibit the cognitive abilities of judgment, problem-solving, and perception as well as humans do? In this article, the authors analyze the probable impact and aspects of robots and their potential to behave like humans in every possible way through reinforcement learning techniques. The paper also discusses the gap between 'natural' and 'artificial' knowledge.


2021 ◽  
Vol 11 (18) ◽  
pp. 8589
Author(s):  
José D. Martín-Guerrero ◽  
Lucas Lamata

Machine learning techniques provide a remarkable tool for advancing scientific research, and this area has grown significantly in the past few years. In particular, reinforcement learning, an approach that maximizes a (long-term) reward by means of the actions taken by an agent in a given environment, allows one to optimize scientific discovery in a variety of fields such as physics, chemistry, and biology. Moreover, physical systems, in particular quantum systems, may allow for more efficient reinforcement learning protocols. In this review, we describe recent results in the field of reinforcement learning and physics. We include standard reinforcement learning techniques used in the computer science community for enhancing physics research, as well as the more recent and emerging area of quantum reinforcement learning, within quantum machine learning, for improving reinforcement learning computations.

