Teaching robots social autonomy from in situ human guidance

2019, Vol 4 (35), pp. eaat1186
Author(s): Emmanuel Senft, Séverin Lemaignan, Paul E. Baxter, Madeleine Bartlett, Tony Belpaeme

Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.
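The supervision loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a nearest-neighbour policy over stored demonstrations, where the supervisor's choice always overrides the robot's proposal and every decision becomes new training data.

```python
import math

class SparcLikePolicy:
    """Toy sketch of a SPARC-style supervised progressive autonomy loop:
    the robot proposes the action of the nearest stored demonstration,
    the supervisor may override, and each decision is stored as data."""

    def __init__(self):
        self.demos = []  # list of (state_vector, action) pairs

    def propose(self, state):
        if not self.demos:
            return None  # no autonomy yet: defer fully to the supervisor
        # nearest-neighbour lookup over previously seen states
        return min(self.demos, key=lambda d: math.dist(d[0], state))[1]

    def step(self, state, supervisor_action=None):
        proposal = self.propose(state)
        # the supervisor's choice (if any) always wins; otherwise act autonomously
        action = supervisor_action if supervisor_action is not None else proposal
        self.demos.append((state, action))
        return action

policy = SparcLikePolicy()
policy.step((0.0, 1.0), supervisor_action="greet")  # first step: pure demonstration
print(policy.step((0.1, 0.9)))  # prints "greet": proposed from the nearest demonstration
```

In the real system the state space is high-dimensional and the learner is an online machine learning model, but the control flow (propose, optionally override, record) is the essential pattern.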

2021, Vol 5 (1), pp. 38
Author(s): Chiara Giola, Piero Danti, Sandro Magnani

In the age of AI, companies strive to extract benefits from data. In the first steps of data analysis, an arduous dilemma scientists have to cope with is the definition of the 'right' quantity of data needed for a certain task. In energy management in particular, one of the most thriving applications of AI is the optimization of the energy consumption of plant generators. When designing a strategy to improve the generators' schedule, an essential piece of information is the future energy load requested by the plant. This topic, referred to in the literature as load forecasting, has lately gained great popularity; in this paper the authors highlight the problem of estimating the correct size of the dataset used to train prediction algorithms and propose a suitable methodology. The central tool of this methodology is the learning curve, a powerful means of tracking algorithm performance as the training-set size varies. At first, a brief review of the state of the art and a shallow analysis of eligible machine learning techniques are offered. Furthermore, the hypotheses and constraints of the work are explained, presenting the dataset and the goal of the analysis. Finally, the methodology is elucidated and the results are discussed.
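A learning curve of the kind the paper relies on can be computed by fitting the same forecaster on progressively larger slices of history and scoring it on a fixed held-out period. The sketch below uses synthetic hourly load and a simple sinusoidal regression as stand-ins; the data, model, and sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic hourly load: daily sinusoid plus noise (hypothetical stand-in data)
hours = np.arange(24 * 90, dtype=float)
load = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1.5, hours.size)

# design matrix for a simple sinusoidal regression forecaster
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24),
                     np.ones_like(hours)])
test = slice(-24 * 7, None)  # fixed held-out final week

def forecast_error(n_train):
    """Fit on the first n_train hours, return MAE on the held-out week."""
    coef, *_ = np.linalg.lstsq(X[:n_train], load[:n_train], rcond=None)
    return np.mean(np.abs(X[test] @ coef - load[test]))

# the learning curve: held-out error as a function of training-set size
sizes = [24, 24 * 7, 24 * 30, 24 * 60]
curve = {n: forecast_error(n) for n in sizes}
for n, err in curve.items():
    print(f"{n:5d} training hours -> MAE {err:.2f}")
```

The point where the curve flattens out is the kind of 'right quantity of data' estimate the methodology is after.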


Author(s): Melika Sajadian, Ana Teixeira, Faraz S. Tehrani, Mathias Lemmens

Abstract. Built environments developed on compressible soils are susceptible to land deformation. The spatio-temporal monitoring and analysis of these deformations are necessary for the sustainable development of cities. Techniques such as Interferometric Synthetic Aperture Radar (InSAR), or predictions based on soil mechanics using in situ characterization such as Cone Penetration Testing (CPT), can be used for assessing such land deformations. Despite the combined advantages of these two methods, the relationship between them has not yet been investigated. Therefore, the major objective of this study is to reconcile InSAR measurements and CPT measurements using machine learning techniques in an attempt to better predict land deformation.
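The reconciliation task amounts to regressing an InSAR-derived deformation signal on CPT-derived soil features. The abstract does not name a model, so the sketch below uses ridge regression on synthetic data purely to illustrate the shape of the problem; the three features and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical CPT-derived features (e.g. cone resistance, friction ratio,
# soft-layer thickness) and a synthetic InSAR subsidence-rate target
n = 200
cpt = rng.normal(size=(n, 3))
subsidence = cpt @ np.array([-0.8, 0.3, 1.2]) + rng.normal(0, 0.1, n)

# ridge regression as a stand-in for the (unspecified) learning model
X = np.column_stack([cpt, np.ones(n)])  # append an intercept column
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ subsidence)
pred = X @ w
rmse = np.sqrt(np.mean((pred - subsidence) ** 2))
print(f"RMSE between predicted and observed subsidence: {rmse:.3f}")
```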


Author(s): Todor D. Ganchev

In this chapter we review various computational models of locally recurrent neurons and deliberate the architecture of some archetypal locally recurrent neural networks (LRNNs) that are based on them. Generalizations of these structures are discussed as well. Furthermore, we point to a number of real-world applications of LRNNs that have been reported in past and recent publications. These applications involve classification or prediction of temporal sequences, discovering and modeling of spatial and temporal correlations, process identification and control, etc. Validation experiments reported in these developments provide evidence that locally recurrent architectures are capable of identifying and exploiting temporal and spatial correlations (i.e., the context in which events occur), which is the main reason for their advantageous performance when compared with that of their non-recurrent counterparts or other reasonable machine learning techniques.
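The defining trait of a locally recurrent neuron is that the feedback loop is confined to the neuron itself rather than spanning the network. A minimal sketch, with illustrative weights chosen here rather than taken from the chapter:

```python
import math

class LocallyRecurrentNeuron:
    """Single neuron with local output feedback (an IIR-style synapse):
    y[t] = tanh(w_in * x[t] + w_fb * y[t-1] + b).
    The recurrence stays inside the neuron, the defining trait of LRNNs."""

    def __init__(self, w_in=1.0, w_fb=0.5, b=0.0):
        self.w_in, self.w_fb, self.b = w_in, w_fb, b
        self.y = 0.0  # local state carried between time steps

    def step(self, x):
        self.y = math.tanh(self.w_in * x + self.w_fb * self.y + self.b)
        return self.y

neuron = LocallyRecurrentNeuron()
# the same input yields different outputs as the local state accumulates context
outputs = [neuron.step(1.0) for _ in range(3)]
print(outputs)
```

That a constant input produces a changing output is exactly the context-sensitivity the chapter credits for LRNNs' advantage over non-recurrent models.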


Author(s): Stijn Hoppenbrouwers, Bart Schotten, Peter Lucas

Many model-based methods in AI require formal representation of knowledge as input. For the acquisition of highly structured, domain-specific knowledge, machine learning techniques still fall short, and knowledge elicitation and modelling is then the standard. However, obtaining formal models from informants who have few or no formal skills is a non-trivial aspect of knowledge acquisition, which can be viewed as an instance of the well-known “knowledge acquisition bottleneck”. Based on the authors’ work in conceptual modelling and method engineering, this paper casts methods for knowledge modelling in the framework of games. The resulting games-for-modelling approach is illustrated by a first prototype of such a game. The authors’ long-term goal is to lower the threshold for formal knowledge acquisition and modelling.


2019, Vol 624, pp. A45
Author(s): Y. Alibert

Context. Planet formation models now often consider the formation of planetary systems with more than one planet per system. This raises the question of how to represent planetary systems in a convenient way (e.g. for visualisation purposes) and how to define the similarity between two planetary systems, for example to compare models and observations. Aims. We define a new metric to infer the similarity between two planetary systems, based on the properties of planets that belong to these systems. We then compare the similarity of planetary systems with the similarity of the protoplanetary discs in which they form. Methods. We first define a new metric based on a mixture of Gaussians, and then use this metric to apply a dimensionality reduction technique in order to represent planetary systems (which would otherwise be represented in a high-dimensional space) in a two-dimensional space. This allows us to study the structure of a population of planetary systems and its relation with the characteristics of the protoplanetary discs in which planetary systems form. Results. We show that the new metric can help to find the underlying structure of populations of planetary systems. In addition, the similarity between planetary systems, as defined in this paper, is correlated with the similarity between the protoplanetary discs in which these systems form. We finally compare the distribution of inter-system distances for a set of observed exoplanets with the distributions obtained from two models: a population synthesis model and a model where planetary systems are constructed by randomly picking synthetic planets. The observed distribution is shown to be closer to the one derived from the population synthesis model than to that of the random systems. Conclusions. The new metric can be used in a variety of unsupervised machine learning techniques, such as dimensionality reduction and clustering, to understand the results of simulations and compare them with the properties of observed planetary systems.
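The idea of a Gaussian-mixture metric between systems can be sketched crudely: represent each system as an equal-weight mixture of Gaussians centred on its planets in a (log mass, log period) plane, and take an L2 distance between the resulting densities on a grid. The smoothing width, the example systems, and the grid are all assumptions of this sketch, not values from the paper.

```python
import numpy as np

def system_density(planets, grid, sigma=0.3):
    """Equal-weight mixture of isotropic Gaussians over (log10 mass,
    log10 period), evaluated on a grid. sigma is an assumed width."""
    d = np.zeros(len(grid))
    for p in planets:
        d += np.exp(-np.sum((grid - p) ** 2, axis=1) / (2 * sigma ** 2))
    return d / len(planets)

def system_distance(sys_a, sys_b, grid):
    """L2 distance between the two mixture densities: a crude stand-in
    for the Gaussian-mixture metric defined in the paper."""
    diff = system_density(sys_a, grid) - system_density(sys_b, grid)
    return np.sqrt(np.mean(diff ** 2))

# hypothetical systems: rows of (log10 mass [Earth masses], log10 period [days])
earth_like = np.array([[0.0, 2.56]])
hot_jupiter = np.array([[2.5, 0.5]])
two_planet = np.array([[0.0, 2.56], [2.5, 0.5]])

xs, ys = np.meshgrid(np.linspace(-1, 4, 60), np.linspace(-1, 4, 60))
grid = np.column_stack([xs.ravel(), ys.ravel()])

print(system_distance(earth_like, hot_jupiter, grid))  # dissimilar systems
print(system_distance(earth_like, earth_like, grid))   # identical -> 0.0
```

A pairwise distance matrix built this way is what a dimensionality-reduction or clustering method would then consume.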


2020, Vol 12 (4), pp. 1606
Author(s): Vincenzo Barrile, Antonino Fotia, Giovanni Leonardi, Raffaele Pucinotti

Structural Health Monitoring (SHM) provides information about the structure under investigation, allowing analytical models to be created for the assessment of its state or structural behavior. Once a predetermined danger threshold is exceeded, an early warning would allow us, on the one hand, to suspend risky activities and, on the other, to reduce maintenance costs. The system proposed in this paper integrates multiple traditional systems that combine data of a different nature (used in the preventive phase to define the various behavior scenarios on the structural model), and then reprocesses them through machine learning techniques in order to obtain values to compare with limit thresholds. The risk level depends on several variables; specifically, the paper evaluates the possibility of predicting the structure's behavior by monitoring displacement data alone, transmitted through an experimental transmission control unit. In order to monitor our cities and make them more "sustainable", the paper describes some tests on road infrastructure, in this context through the combination of geomatics techniques and soft computing.
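The threshold-comparison step can be illustrated with a toy early-warning rule: extrapolate a linear trend from the last few displacement readings and flag a warning when the extrapolation crosses a limit. The window, horizon, and 5 mm threshold below are illustrative assumptions, and a linear trend is a deliberate simplification of the machine learning models the paper describes.

```python
def early_warning(displacements_mm, window=5, horizon=3, threshold_mm=5.0):
    """Fit a least-squares line through the last `window` displacement
    readings and warn if the value extrapolated `horizon` steps ahead
    crosses the limit threshold (all parameters are illustrative)."""
    recent = displacements_mm[-window:]
    n = len(recent)
    xs = list(range(n))
    mean_x, mean_y = sum(xs) / n, sum(recent) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, recent))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    predicted = intercept + slope * (n - 1 + horizon)
    return predicted >= threshold_mm

steady = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0]     # stable structure
drifting = [1.0, 1.8, 2.6, 3.3, 4.1, 4.9]   # accelerating displacement
print(early_warning(steady))    # False
print(early_warning(drifting))  # True
```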


Computers, 2019, Vol 8 (4), pp. 73
Author(s): Rossi, Rubattino, Viscusi

Big data and analytics have received great attention from practitioners and academics, nowadays representing a key resource for the renewed interest in artificial intelligence, especially for machine learning techniques. In this article we explore the use of big data and analytics by different types of organizations, from various countries and industries, including those with limited size and capabilities compared to corporations or new ventures. In particular, we are interested in organizations where the exploitation of big data and analytics may have social value in terms of, e.g., public and personal safety. Hence, this article discusses the results of two multi-industry and multi-country surveys carried out on a sample of public and private organizations. The results show a low rate of utilization of the data collected due to, among other issues, privacy and security concerns, as well as the lack of staff trained in data analysis. Also, the two surveys reveal the challenge of reaching an appropriate level of effectiveness in the use of big data and analytics, due to the shortage of the right tools and, again, of capabilities, often related to a low rate of digital transformation.


2018, Vol 2018, pp. 1-11
Author(s): José Carlos Castillo, Diego Álvarez-Fernández, Fernando Alonso-Martín, Sara Marques-Villarroya, Miguel A. Salichs

Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia of speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot autonomously performs the different steps of the therapy using multimodal interaction.
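The detect-then-respond loop can be sketched with a toy nearest-prototype classifier over mouth-shape features. The prototypes, feature names, and feedback strings below are all hypothetical stand-ins for the vision pipeline and therapy steps described in the paper.

```python
import math

# hypothetical mouth-pose prototypes: (lip aperture, lip width), normalised 0-1
PROTOTYPES = {
    "a": (0.9, 0.5),       # wide-open mouth
    "o": (0.6, 0.3),       # rounded lips
    "closed": (0.1, 0.5),  # mouth at rest
}

def classify_mouth_pose(aperture, width):
    """Nearest-prototype classification of the detected mouth shape."""
    return min(PROTOTYPES, key=lambda k: math.dist(PROTOTYPES[k], (aperture, width)))

def therapy_feedback(target, aperture, width):
    """One autonomous exercise step: compare the detected pose to the target."""
    detected = classify_mouth_pose(aperture, width)
    return "well done" if detected == target else f"try again (saw '{detected}')"

print(therapy_feedback("a", 0.85, 0.55))  # target pose reached
print(therapy_feedback("o", 0.15, 0.45))  # mouth nearly closed instead
```

In the actual system the features would come from a computer-vision front end and the feedback would be delivered through the robot's multimodal channels (speech, gaze, gestures).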


Predicting the academic performance of students has been an important research topic in the educational field. The main aim of a higher education institution is to provide quality education for students. One way to accomplish a higher level of quality of education is by predicting students' academic performance and thereby taking early remedial actions to improve it. This paper presents a system which utilizes machine learning techniques to classify and predict the academic performance of students at the right time, before drop-out occurs. The system first accepts the performance parameters of the basic-level courses which the student has already passed, as these parameters also influence further study. To predict performance in the current program, the system continuously accepts the academic performance parameters after each academic evaluation process. The system employs machine learning techniques to study the academic performance of the students after each evaluation process. The system also learns the basic rules followed by the University for assessing students. Based on the present performance of the students, the system classifies them into different levels and identifies the students at high risk. Earlier prediction can help students to adopt suitable measures in advance to improve their performance. The system can also identify the factors affecting the performance of these students, which helps them to take remedial measures in advance.
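The classification into risk levels can be illustrated with a toy rule-based stand-in for the learned classifier: score each student from the evaluations seen so far and bucket them by average and trend. The thresholds and level names are illustrative assumptions, not taken from the paper.

```python
def risk_level(scores, pass_mark=50.0):
    """Classify a student from the evaluation scores seen so far.
    A rule-based stand-in for the learned classifier; thresholds
    are illustrative, not from the paper."""
    avg = sum(scores) / len(scores)
    trend = scores[-1] - scores[0] if len(scores) > 1 else 0.0
    if avg < pass_mark or (avg < pass_mark + 10 and trend < 0):
        return "high risk"   # failing, or near the pass mark and declining
    if avg < pass_mark + 20:
        return "medium risk"
    return "low risk"

print(risk_level([42.0, 48.0]))        # failing average
print(risk_level([65.0, 58.0, 55.0]))  # sliding toward the pass mark
print(risk_level([80.0, 85.0]))        # comfortably passing
```

Re-running such a classifier after every evaluation process is what makes the early, per-evaluation intervention described above possible.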

