Aprender como una máquina: introduciendo la Inteligencia Artificial en la enseñanza secundaria [Learning like a machine: introducing Artificial Intelligence into secondary education]

2021 ◽  
Vol 14 (1) ◽  
pp. 5
Author(s):  
J.M. Calabuig ◽  
L.M. Garcia-Raffi ◽  
E.A. Sánchez-Pérez

<p class="p1">Artificial intelligence is present in the everyday environment of every secondary school student. However, the general population, and students in particular, do not know how these algorithmic techniques work, even though they often rely on very simple mechanisms that can be explained at an elementary level in mathematics or technology classes in secondary schools (Institutos de Enseñanza Secundaria, IES). These contents will probably take many years to become part of the official curricula of these subjects, but they can already be introduced as part of the algebra content taught in mathematics, or alongside the material on algorithms in computer science classes. This is especially true if they are presented as a game in which different groups of students can compete, as we propose in this article. We thus present a very simple example of a reinforcement learning algorithm (Machine Learning-Reinforcement Learning) that condenses, in a playful activity, the fundamental elements that constitute an artificial intelligence algorithm.</p>
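The abstract describes the classroom game only at a high level, but the basic ingredients of such a reinforcement learning activity (states, actions, rewards, and a learned value table) can be sketched with tabular Q-learning. The five-cell corridor environment and all numbers below are illustrative assumptions, not taken from the article:

```python
import random

# Minimal tabular Q-learning on a hypothetical 5-cell corridor: the agent
# starts in cell 0 and earns reward +1 for reaching cell 4.
N_STATES, ACTIONS = 5, [-1, +1]        # actions: move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1      # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current Q-table, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The greedy policy learned for the interior cells: move right toward the goal.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The epsilon-greedy choice is what makes the activity game-like: student teams could compete on how quickly their agents discover the reward while balancing exploration against exploitation.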

2021 ◽  
Vol 3 (2) ◽  
Author(s):  
A. Hamann ◽  
V. Dunjko ◽  
S. Wölk

Abstract: In recent years, quantum-enhanced machine learning has emerged as a particularly fruitful application of quantum algorithms, covering aspects of supervised, unsupervised and reinforcement learning. Reinforcement learning offers numerous options for how quantum theory can be applied, and is arguably the least explored from a quantum perspective. Here, an agent explores an environment and tries to find a behavior optimizing some figure of merit. Some of the first approaches investigated settings where this exploration can be sped up by considering quantum analogs of classical environments, which can then be queried in superposition. If the environments have a strict periodic structure in time (i.e., are strictly episodic), such environments can be effectively converted to the conventional oracles encountered in quantum information. In general environments, however, we obtain scenarios that generalize standard oracle tasks. In this work, we consider one such generalization, where the environment is not strictly episodic and is mapped to an oracle-identification setting with a changing oracle. We analyze this case and show that standard amplitude-amplification techniques can, with minor modifications, still be applied to achieve quadratic speed-ups. In addition, we prove that an algorithm based on Grover iterations is optimal for oracle identification even if the oracle changes over time in such a way that the “rewarded space” is monotonically increasing. This result constitutes one of the first generalizations of quantum-accessible reinforcement learning.
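As a point of reference for the amplitude-amplification primitive the paper builds on, here is a small NumPy state-vector sketch of textbook Grover search with a single fixed marked state. The changing-oracle setting of the paper is not reproduced, and the problem size (8 basis states) is an arbitrary choice:

```python
import numpy as np

# State-vector simulation of standard amplitude amplification (Grover search)
# over N = 8 basis states with one marked ("rewarded") state.
N, marked = 8, 5
psi = np.full(N, 1 / np.sqrt(N))             # uniform superposition

def oracle(v):
    w = v.copy()
    w[marked] *= -1                          # phase-flip the marked state
    return w

def diffusion(v):
    return 2 * v.mean() - v                  # reflect about the mean amplitude

# About (pi/4) * sqrt(N) iterations maximize the success probability;
# for N = 8 that is 2 iterations.
for _ in range(2):
    psi = diffusion(oracle(psi))

print(round(abs(psi[marked]) ** 2, 3))       # success probability ~ 0.945
```

Classically, one query finds the marked item with probability 1/8; two Grover iterations (two oracle queries) raise it above 0.94, which is the quadratic-speed-up behavior the paper shows survives, with modifications, when the oracle changes between queries.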


2014 ◽  
Vol 11 (92) ◽  
pp. 20131091 ◽  
Author(s):  
Oren Kolodny ◽  
Shimon Edelman ◽  
Arnon Lotem

Continuous, ‘always on’, learning of structure from a stream of data is studied mainly in the fields of machine learning and language acquisition, but its evolutionary roots may go back to the first organisms that were internally motivated to learn and represent their environment. Here, we study under what conditions such continuous learning (CL) may be more adaptive than simple reinforcement learning and examine how it could have evolved from the same basic associative elements. We use agent-based computer simulations to compare three learning strategies: simple reinforcement learning; reinforcement learning with chaining (RL-chain); and CL, which applies the same associative mechanisms used by the other strategies but also seeks statistical regularities in the relations among all items in the environment, regardless of their initial association with food. We show that a sufficiently structured environment favours the evolution of both RL-chain and CL and that CL outperforms the other strategies when food is relatively rare and the time for learning is limited. This advantage of internally motivated CL stems from its ability to capture statistical patterns in the environment even before they are associated with food, at which point they immediately become useful for planning.


Author(s):  
Ahmad Roihan ◽  
Po Abas Sunarya ◽  
Ageng Setiani Rafika

Abstract: Machine learning is a branch of artificial intelligence that is widely used to solve a variety of problems. This article reviews problem solving in recent studies by classifying machine learning into three categories: supervised learning, unsupervised learning, and reinforcement learning. The review shows that all three categories remain applicable to current cases and can be improved to reduce computational cost and accelerate performance while achieving high accuracy and precision. This review is intended to identify research gaps and to serve as a guideline for future research.
Keywords: machine learning, reinforcement learning, supervised learning, unsupervised learning
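The three categories can be made concrete with toy, entirely made-up data: labelled pairs for supervised learning, unlabelled points for unsupervised learning, and reward feedback for reinforcement learning:

```python
import random

# 1. Supervised learning: fit y ~ w*x from labelled pairs (least squares
#    through the origin). The labels are given in advance.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# 2. Unsupervised learning: assign unlabelled points to the nearer of two
#    fixed centres (a single k-means assignment step). No labels are used.
points, centres = [0.1, 0.2, 0.9, 1.1], (0.0, 1.0)
clusters = [min(centres, key=lambda c: abs(p - c)) for p in points]

# 3. Reinforcement learning: an epsilon-greedy agent on a two-armed bandit,
#    estimating action values purely from reward feedback.
random.seed(1)
values, counts = [0.0, 0.0], [0, 0]
true_means = [0.2, 0.8]                            # hidden from the agent
for t in range(500):
    a = random.randrange(2) if random.random() < 0.1 \
        else max((0, 1), key=lambda i: values[i])
    r = 1.0 if random.random() < true_means[a] else 0.0
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]       # incremental mean update

print(round(w, 2), clusters, values[1] > values[0])
```

The contrast in supervision is the point: the first block is told the right answers, the second finds structure without answers, and the third discovers which arm pays better only through trial, error, and reward.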


2020 ◽  
Vol 4 (1) ◽  
Author(s):  
Maria Hügle ◽  
Patrick Omoumi ◽  
Jacob M van Laar ◽  
Joschka Boedecker ◽  
Thomas Hügle

Abstract: Machine learning, as a field of artificial intelligence, is increasingly applied in medicine to assist patients and physicians. Growing datasets provide a sound basis with which to apply machine learning methods that learn from previous experience. This review explains the basics of machine learning and its subfields of supervised learning, unsupervised learning, reinforcement learning and deep learning. We provide an overview of current machine learning applications in rheumatology, mainly supervised learning methods for e-diagnosis, disease detection and medical image analysis. In the future, machine learning is likely to assist rheumatologists in predicting the course of the disease and identifying important disease factors. Even more interestingly, machine learning will probably be able to make treatment propositions and estimate their expected benefit (e.g. by reinforcement learning). Thus, in the future, shared decision-making will not only include the patient’s opinion and the rheumatologist’s empirical and evidence-based experience, but will also be influenced by machine-learned evidence.


Author(s):  
Ivan Herreros

This chapter discusses basic concepts from control theory and machine learning to facilitate a formal understanding of animal learning and motor control. It first distinguishes between feedback and feed-forward control strategies, and then introduces the classification of machine learning applications into supervised, unsupervised, and reinforcement learning problems. Next, it links these concepts with their counterparts in the psychology of animal learning, highlighting the analogies between supervised learning and classical conditioning, between reinforcement learning and operant conditioning, and between unsupervised learning and perceptual learning. Additionally, it interprets innate and acquired actions from the standpoint of feedback versus anticipatory and adaptive control. Finally, it argues that this framework for translating knowledge between formal and biological disciplines can serve not only to structure and advance our understanding of brain function, but also to enrich engineering solutions at the level of robot learning and control with insights from biology.
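The chapter's opening distinction can be illustrated on a hypothetical one-dimensional plant with a constant disturbance that the controller's internal model does not know about; the plant, gain, and disturbance values below are all illustrative assumptions:

```python
# Feedback vs feed-forward control on a toy plant x[t+1] = x[t] + u[t] + d,
# where d is a constant disturbance absent from the controller's model.
TARGET, DIST, STEPS = 1.0, -0.05, 50

def simulate(controller):
    x = 0.0
    for t in range(STEPS):
        x += controller(t, x) + DIST          # plant update with disturbance
    return x

# Feedback: correct the measured error at every step (proportional gain 0.5).
feedback = lambda t, x: 0.5 * (TARGET - x)

# Feed-forward: a single open-loop command computed once from the
# disturbance-free model (start at 0, so u = TARGET would reach the target).
feedforward = lambda t, x: TARGET if t == 0 else 0.0

print(round(simulate(feedback), 3), round(simulate(feedforward), 3))
```

The feedback run settles near 0.9 (with the steady-state offset typical of pure proportional control), while the open-loop run drifts to roughly -1.5 as the unmodelled disturbance accumulates: feedback is robust but reactive, feed-forward is fast but fragile, which is exactly the trade-off the chapter maps onto innate versus anticipatory, adaptive behavior.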


Photonics ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. 33
Author(s):  
Lucas Lamata

Quantum machine learning has emerged as a promising paradigm that could accelerate machine learning calculations. Within this field, quantum reinforcement learning aims to design and build quantum agents that can exchange information with their environment and adapt to it in pursuit of some goal. Different quantum platforms have been considered for quantum machine learning and specifically for quantum reinforcement learning. Here, we review the field of quantum reinforcement learning and its implementation with quantum photonics. This quantum technology may enhance quantum computation and communication, as well as machine learning, via the fruitful marriage between these previously unrelated fields.


Minerals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 587
Author(s):  
Joao Pedro de Carvalho ◽  
Roussos Dimitrakopoulos

This paper presents a new truck dispatching policy approach that adapts to different mining complex configurations in order to deliver the supply material extracted by the shovels to the processors. The method aims to improve adherence to the operational plan and fleet utilization in a mining complex context. Several sources of operational uncertainty arising from the loading, hauling and dumping activities can influence the dispatching strategy. Given a fixed sequence of extraction of the mining blocks provided by the short-term plan, a discrete event simulator emulates the interactions arising from these mining operations. Repeated runs of this simulator, together with a reward function that assigns a score to each dispatching decision, generate sample experiences with which to train a deep Q-learning reinforcement learning model. The model learns from past dispatching experience, so that when a new task is required, a well-informed decision can be taken quickly. The approach is tested at a copper–gold mining complex characterized by uncertainties in equipment performance and geological attributes, and the results show improvements in production targets, metal production, and fleet management.
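A drastically simplified, hypothetical version of the simulator-plus-reward setup can show why adaptive dispatching pays off. Two shovels with made-up load times and tonnages serve trucks arriving on a fixed headway, and each dispatch decision is scored by delivered tonnes per minute; the paper's deep Q-network is replaced here by a comparison of two hand-written policies under that same reward:

```python
import random

# Toy dispatch simulator: trucks arrive every 3 minutes and are sent to one
# of two shovels with stochastic (exponential) load times. All parameters
# are invented for illustration.
random.seed(3)
LOAD_MEAN = {"A": 4.0, "B": 6.0}           # mean minutes per load
TONNES = {"A": 100, "B": 150}              # tonnes delivered per load

def simulate(policy, n_trucks=2000, headway=3.0):
    free_at = {"A": 0.0, "B": 0.0}         # time at which each shovel frees up
    total = 0.0
    for i in range(n_trucks):
        t = i * headway                    # truck arrival time
        s = policy(t, free_at)
        wait = max(0.0, free_at[s] - t)
        service = random.expovariate(1 / LOAD_MEAN[s])
        free_at[s] = t + wait + service
        total += TONNES[s] / (wait + service)   # reward: tonnes per minute
    return total / n_trucks                # average reward per dispatch

fixed = lambda t, f: "A"                   # always the faster shovel
least_busy = lambda t, f: min(f, key=f.get)  # adapt to current congestion

print(simulate(fixed) < simulate(least_busy))  # adaptive policy scores higher
```

The fixed policy overloads shovel A and its queue grows without bound, so the per-dispatch reward collapses; the congestion-aware policy keeps both queues stable. A learned dispatcher, as in the paper, goes further by anticipating such congestion from simulated experience rather than reacting to it.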


2021 ◽  
pp. 027836492098785
Author(s):  
Julian Ibarz ◽  
Jie Tan ◽  
Chelsea Finn ◽  
Mrinal Kalakrishnan ◽  
Peter Pastor ◽  
...  

Deep reinforcement learning (RL) has emerged as a promising approach for autonomously acquiring complex behaviors from low-level sensor observations. Although a large portion of deep RL research has focused on applications in video games and simulated control, which do not connect with the constraints of learning in real environments, deep RL has also demonstrated promise in enabling physical robots to learn complex skills in the real world. At the same time, real-world robotics provides an appealing domain for evaluating such algorithms, as it connects directly to how humans learn: as an embodied agent in the real world. Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains. In this review article, we present a number of case studies involving robotic deep RL. Building on these case studies, we discuss commonly perceived challenges in deep RL and how they have been addressed in these works. We also provide an overview of other outstanding challenges, many of which are unique to the real-world robotics setting and are not often the focus of mainstream RL research. Our goal is to provide a resource both for roboticists and machine learning researchers who are interested in furthering the progress of deep RL in the real world.

