Making High-Performance Robots Safe and Easy to Use For an Introduction to Computing

2020 ◽  
Vol 34 (09) ◽  
pp. 13412-13419
Author(s):  
Joseph Spitzer ◽  
Joydeep Biswas ◽  
Arjun Guha

Robots are a popular platform for introducing computing and artificial intelligence to novice programmers. However, programming state-of-the-art robots is very challenging: it requires knowledge of concurrency and operational safety, as well as software engineering skills that can take years to teach. In this paper, we present an approach to introducing computing that allows students to safely and easily program high-performance robots. We develop a platform for students to program RoboCup Small Size League robots using JavaScript. The platform 1) ensures physical safety at several levels of abstraction, 2) allows students to program robots using JavaScript in the browser, without the need to install software, and 3) presents a simplified JavaScript semantics that shields students from confusing language features. We discuss our experience running a week-long workshop using this platform, and analyze over 3,000 student-written program revisions to provide empirical evidence that our approach helps students.
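One of the safety levels such a platform can enforce is clamping out-of-range commands before they ever reach the robot. The sketch below is a generic illustration of that idea, not the paper's actual implementation; the function name and the speed limit are assumptions for the example.

```python
def safe_velocity_command(vx, vy, max_speed=2.0):
    """Scale a 2-D velocity command so its magnitude never exceeds
    max_speed (here assumed to be in m/s), preserving direction.
    Commands already within the limit pass through unchanged."""
    speed = (vx * vx + vy * vy) ** 0.5
    if speed <= max_speed:
        return vx, vy
    scale = max_speed / speed
    return vx * scale, vy * scale
```

A student program that requests an unsafe command, say `safe_velocity_command(3.0, 4.0)` with magnitude 5, would have it rescaled to magnitude 2 before execution, so buggy student code cannot drive the hardware beyond its limits.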

2021 ◽  
Vol 46 (2) ◽  
pp. 28-29
Author(s):  
Benoît Vanderose ◽  
Julie Henry ◽  
Benoît Frénay ◽  
Xavier Devroey

In the past years, with the development and widespread adoption of digital technologies, everyday life has been profoundly transformed. The general public, as well as specialized audiences, have to face an ever-increasing amount of knowledge and learn new abilities. The EASEAI workshop series addresses that challenge by looking at the software engineering, education, and artificial intelligence research fields to explore how they can be combined. Specifically, this workshop brings together researchers, teachers, and practitioners who use advanced software engineering tools and artificial intelligence techniques in the education field and through a transgenerational and transdisciplinary range of students to discuss the current state of the art and practices, and establish new future directions. More information at https://easeai.github.io.


2021 ◽  
Vol 46 (1) ◽  
pp. 23-24
Author(s):  
Shin Yoo ◽  
Aldeida Aleti ◽  
Burak Turhan ◽  
Leandro L. Minku ◽  
Andriy Miranskyy ◽  
...  

The International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE) aims to present the state of the art in the crossover between Software Engineering and Artificial Intelligence. This workshop explored not only the application of AI techniques to SE problems but also the application of SE techniques to AI problems. Software has become critical for realizing functions central to our society. For example, software is essential for financial and transport systems, energy generation and distribution systems, and safety-critical medical applications. Software development costs trillions of dollars each year, yet many of our software engineering methods remain mostly manual. If we can improve software production by smarter AI-based methods, even by small margins, then this would improve a critical component of the international infrastructure, while freeing up tens of billions of dollars for other tasks.


Author(s):  
Sourav Chakraborty ◽  
Kuldeep S. Meel

Recent years have seen an unprecedented adoption of artificial intelligence in a wide variety of applications, ranging from medical diagnosis and the automobile industry to security and aircraft collision avoidance. Probabilistic reasoning is a key component of such modern artificial intelligence systems, and sampling techniques form the core of state-of-the-art probabilistic reasoning systems. The divide between sampling techniques that have strong theoretical guarantees but fail to scale and scalable techniques with weak or no theoretical guarantees mirrors the gap in software engineering between the poor scalability of classical program synthesis techniques and the billions of programs that are routinely used by practitioners. One bridge connecting the two extremes in the context of software engineering has been program testing. In contrast to testing of deterministic programs, where one trace is sufficient to prove the existence of a bug, in the case of samplers one sample is typically not sufficient to prove non-conformity of the sampler to the desired distribution. This makes one wonder whether it is possible to design a testing methodology to check whether a sampler under test generates samples close to a given distribution. The primary contribution of this paper is an affirmative answer to the above question when the given distribution is the uniform distribution: we design, to the best of our knowledge, the first algorithmic framework, Barbarik, to test whether the generated distribution is ε-close to or η-far from the uniform distribution. In contrast to sampling techniques that require an exponential or sub-exponential number of samples for a sampler whose support can be represented by n bits, Barbarik requires only O(1/(η−ε)^4) samples. We present a prototype implementation of Barbarik and use it to test three state-of-the-art uniform samplers over supports defined by combinatorial constraints. Barbarik can provide a certificate of uniformity for one sampler and demonstrate non-uniformity for the other two samplers.
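The core idea of distribution testing can be illustrated with a much simpler (and far weaker) sanity check than Barbarik: draw samples and measure how far the empirical distribution is from uniform. The sketch below is a hypothetical illustration, not the Barbarik algorithm; the sampler definitions, sample counts, and distance thresholds are assumptions for the example.

```python
import random
from collections import Counter

def empirical_l1_distance(sampler, support_size, n_samples, seed=0):
    """Estimate the L1 distance between a sampler's output
    distribution and the uniform distribution over
    {0, ..., support_size - 1}, using n_samples draws."""
    random.seed(seed)
    counts = Counter(sampler() for _ in range(n_samples))
    uniform_p = 1.0 / support_size
    # Sum |empirical frequency - uniform probability| over the support.
    return sum(abs(counts.get(x, 0) / n_samples - uniform_p)
               for x in range(support_size))

# A genuinely uniform sampler over {0, ..., 7}, and a biased one
# that returns 0 half the time.
uniform_sampler = lambda: random.randrange(8)
biased_sampler = lambda: 0 if random.random() < 0.5 else random.randrange(8)

d_uniform = empirical_l1_distance(uniform_sampler, 8, 20000)
d_biased = empirical_l1_distance(biased_sampler, 8, 20000)
```

Here `d_uniform` comes out small while `d_biased` is large. The catch, which motivates frameworks like Barbarik, is that this naive estimator needs to enumerate the support, which is infeasible when the support is defined implicitly by combinatorial constraints over n bits.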


2021 ◽  
Author(s):  
Abubakar Siddique

<p><b>Artificial intelligence systems have become proficient at linking environmental features to targets to describe simple patterns in data. However, these systems can struggle with many real-world problems that entail hierarchical patterns within patterns, for example, in recognizing object ontologies where one object is made up of other objects. Although it is possible to capture such complex structures by utilizing state-of-the-art deep networks, the knowledge is often stored in layers that do not take advantage of the potential benefits provided by reusing patterns within a layer of the system.</b></p> <p>Biological nervous systems can learn knowledge from simple and small-scale problems and then apply it to resolve more complex and large-scale problems in similar and related domains. However, rudimentary attempts to apply this transfer learning in artificial intelligence systems have struggled. This may be due to the homogeneous nature of their knowledge representation. The current understanding of the learning mechanisms in the brains of human and non-human animals can be used as inspiration to improve learning in artificial agents. Research into lateral asymmetry of the brain shows that it enables modular learning at different levels of abstraction that facilitates transfer between tasks.</p> <p>The proposed thesis is that an artificial intelligence system that enables lateralization and modular learning at different levels of abstraction has the ability to solve complex hierarchical problems that a similar homogeneous system cannot. The comprehensive goal of this thesis is to accomplish lateralized learning, inspired by the principles of biological intelligence, in artificial intelligence systems. The objectives are to show that lateralization and modular learning assist the novel systems in encapsulating the underlying knowledge patterns in the form of building blocks of knowledge. 
These building blocks of knowledge are to be tested on analyzable Boolean tasks as well as practical computer vision and navigation tasks. Academic contributions relate to novel methods for the linking, transfer, and sharing of learned knowledge, based on analogous strategies in the brain.</p> <p>This thesis proposes a general framework for lateralized artificial intelligence systems. The novel lateralized framework spans key aspects of knowledge perception, knowledge representation and utilization, and patterns of connectivity. It determines the essential functionality, critical methods, and associated parameters that must be incorporated into an artificial intelligence system for it to behave as a lateralized artificial intelligence system.</p> <p>This thesis creates a novel evolutionary machine learning system, by adapting the lateralized framework, to obtain a proof of concept of the lateralized approach. Considering the same problem at different levels of abstraction enables the novel system to reframe a complex problem as a simple problem and efficiently resolve it. The results on analyzable Boolean tasks show that problems containing a natural hierarchy of patterns are solved at a scale that exceeds previous work (i.e., the 18-bit hierarchical multiplexer problem), and that reusing learned general patterns as constituents for future problems advances transfer learning (e.g., the n-bit parity problem effectively becomes a sequence of 2-bit parity problems).</p> <p>This thesis creates a novel lateralized artificial intelligence system, by adapting the lateralized framework, that shows robustness in a real-world domain that includes uncertainty, noise, and irrelevant and redundant data. 
The results of image classification tasks show that the lateralized system efficiently learns hierarchical distributions of knowledge, demonstrating performance that is similar to (or better than) other state-of-the-art deep systems as it reasons using multiple representations. Crucially, the novel system outperformed all the state-of-the-art deep models for the classification (binary classes) of normal and adversarial images by 0.43%-2.56% and 2.15%-25.84%, respectively. This thesis creates another novel multi-class lateralized system for computer vision problems to show that the lateralized approach can be scaled and is not limited to learning classifier systems.</p> <p>Both the Boolean and computer vision problems are single-step problems in the spatial domain. However, most biological tasks, which exhibit heterogeneity, are temporal in nature. This thesis creates a novel frame-of-reference-based artificial intelligence system, by adapting the lateralized framework, to address perceptual aliasing in multi-step decision-making tasks. Considering aliased states at a constituent level enables the novel system to place them appropriately in holistic-level policies. Consequently, the novel system transforms a non-Markov environment into a deterministic environment and efficiently resolves it. Experimental results show that the novel system effectively solves complex aliasing patterns in non-Markov environments that have been challenging for artificial agents. For example, the novel system utilizes only 6.5, 3.71, and 3.22 steps to resolve Maze10, Littman57, and Woods102, respectively.</p> <p>A final contribution of this work is to obtain evidence of the benefits/costs of lateralization from artificial intelligence in order to inform cognitive neuroscience. Given that lateralization is ubiquitous in brains, evolutionary benefits can be assumed, at least in some domains. But that does not mean those benefits extend to all domains. 
The cognitive neuroscience research community has been struggling to determine the trade-off between the benefits and costs of lateralization. It has been hypothesized that lateralization has benefits that may counterbalance its costs. Lateralization has been associated with both poor and good performance. This thesis demonstrates the value of viable artificial systems for testing the costs and benefits of lateralization in biological systems.</p>
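The abstract's claim that n-bit parity effectively becomes a sequence of 2-bit parity problems can be made concrete in a few lines. This is an illustrative sketch of the decomposition itself, not of the thesis's lateralized learning system; the function names are invented for the example.

```python
from functools import reduce

def parity2(a, b):
    """The 2-bit parity building block: XOR of two bits."""
    return a ^ b

def parity_n(bits):
    """n-bit parity computed as a left-to-right chain of 2-bit
    parity steps, reusing the same building block at every step."""
    return reduce(parity2, bits, 0)
```

For instance, `parity_n([1, 0, 1, 1, 0, 1])` folds five applications of `parity2` and returns 0, since the input contains an even number of ones. Once the 2-bit block is learned, parity of any width is solved by composition rather than relearned from scratch.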




2020 ◽  
Vol 96 (3s) ◽  
pp. 585-588
Author(s):  
S. E. Frolova ◽ 
E. S. Yanakova

Methods are proposed for building prototyping platforms for high-performance systems-on-chip (SoC) for artificial intelligence tasks. The requirements for platforms of this class and the principles for modifying an SoC design for implementation in a prototype are described, along with methods for debugging designs on the prototyping platform. Results are presented from running computer vision algorithms that use neural network technologies on an FPGA prototype of the ELcore semantic cores.

