RobotNEST: Towards a Viable Testbed for IoT-enabled Environments and Connected and Autonomous Robots

2022 ◽  
pp. 1-1
Author(s):  
Shunsuke Aoki ◽  
Takuro Yonezawa ◽  
Nobuo Kawaguchi
Author(s):  
Paul A. Boxer

Autonomous robots remain unsuccessful at operating in complex, unconstrained environments, in part because they lack the ability to learn the physical behavior of different objects through vision. We combine Bayesian networks and qualitative spatial representation to learn general physical behavior from visual observation. We input training scenarios that allow the system to observe and learn normal physical behavior. The positions and velocities of the visible objects are represented as qualitative states. Transitions between these states over time are entered as evidence into a Bayesian network. The network provides probabilities of future transitions, producing predictions of future physical behavior. We use test scenarios to determine how well the approach discriminates between normal and abnormal physical behavior and actively predicts future behavior. We examine the ability of the system to learn three naive physical concepts: "no action at a distance", "solidity", and "movement on continuous paths". We conclude that the combination of qualitative spatial representations and Bayesian network techniques is capable of learning these three rules of naive physics.
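
The pipeline the abstract describes (qualitative states, transitions entered as evidence, probabilities of future transitions) can be illustrated with a small sketch. This is not Boxer's implementation: the state names are invented, and a frequency-based transition table stands in for the conditional probabilities a Bayesian network node would learn from the same evidence.

```python
from collections import Counter, defaultdict

class TransitionModel:
    """Frequency-based stand-in for a Bayesian network node modelling
    P(next qualitative state | current qualitative state)."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, trajectory):
        # Enter each observed state transition as evidence.
        for prev, curr in zip(trajectory, trajectory[1:]):
            self.counts[prev][curr] += 1

    def predict(self, state):
        # Probabilities of future transitions out of `state`.
        total = sum(self.counts[state].values())
        if total == 0:
            return {}
        return {nxt: n / total for nxt, n in self.counts[state].items()}

    def is_abnormal(self, prev, curr, threshold=0.01):
        # A transition rarely or never seen in training, e.g. one object
        # passing through another (violating "solidity") or teleporting
        # (violating "movement on continuous paths").
        return self.predict(prev).get(curr, 0.0) < threshold

# Qualitative states as (spatial relation, relative velocity) tuples
model = TransitionModel()
model.observe([("apart", "approaching"), ("touching", "still"),
               ("touching", "still")])
print(model.predict(("apart", "approaching")))  # {('touching', 'still'): 1.0}
print(model.is_abnormal(("apart", "approaching"), ("overlapping", "still")))  # True
```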


Author(s):  
Stamatis Karnouskos

Abstract: The rapid advances in Artificial Intelligence and Robotics will have a profound impact on society, as these technologies will intertwine with people and their interactions. Intelligent autonomous robots, whether humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in society in as yet unforeseen ways. The symbiosis with such sophisticated robots may lead to a fundamental civilizational shift with far-reaching effects, as philosophical, legal, and societal questions on consciousness, citizenship, rights, and the legal status of robots are raised. The aim of this work is to understand the broad scope of potential issues pertaining to law and society by investigating the interplay of law, robots, and society from different angles, including legal, social, economic, gender, and ethical perspectives. The results make it evident that, in an era of symbiosis with intelligent autonomous robots, neither legal systems nor society is prepared for their prevalence. Therefore, now is the time to start a multi-disciplinary stakeholder discussion and derive the necessary policies, frameworks, and roadmaps for the most pressing issues.


2021 ◽  
Vol 10 (3) ◽  
pp. 1-31
Author(s):  
Zhao Han ◽  
Daniel Giger ◽  
Jordan Allspaw ◽  
Michael S. Lee ◽  
Henny Admoni ◽  
...  

As autonomous robots continue to be deployed near people, robots need to be able to explain their actions. In this article, we focus on organizing and representing complex tasks in a way that makes them readily explainable. Many actions consist of sub-actions, each of which may have several sub-actions of its own, and the robot must be able to represent these complex actions before it can explain them. To generate explanations for robot behavior, we propose using Behavior Trees (BTs), which are a powerful and rich tool for robot task specification and execution. However, for BTs to be used for robot explanations, their free-form, static structure must be adapted. In this work, we add structure to previously free-form BTs by framing them as a set of semantic sets {goal, subgoals, steps, actions} and subsequently build explanation generation algorithms that answer questions seeking causal information about robot behavior. We make BTs less static with an algorithm that inserts a subgoal that satisfies all dependencies. We evaluate our BTs for robot explanation generation in two domains: a kitting task to assemble a gearbox, and a taxi simulation. Code for the behavior trees (in XML) and all the algorithms is available at github.com/uml-robotics/robot-explanation-BTs.
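
A minimal sketch of the idea follows; the authors' actual code (XML behavior trees plus algorithms) is at the repository above, and the node names here are invented. Each BT node is annotated with one of the semantic labels {goal, subgoal, step, action}, and a toy "Why did you do X?" query walks up the tree to the nearest enclosing subgoal or goal, which is the causal context an explanation would cite.

```python
class BTNode:
    """Behavior Tree node tagged with a semantic label."""

    def __init__(self, name, kind, children=None):
        assert kind in {"goal", "subgoal", "step", "action"}
        self.name, self.kind = name, kind
        self.parent = None
        self.children = children or []
        for c in self.children:
            c.parent = self

    def why(self):
        # Answer a causal "why" question by citing the nearest
        # enclosing subgoal (or, failing that, the goal).
        node = self.parent
        while node and node.kind not in {"goal", "subgoal"}:
            node = node.parent
        return f"I {self.name} in order to {node.name}." if node else self.name

tree = BTNode("assemble gearbox", "goal", [
    BTNode("kit gears", "subgoal", [
        BTNode("pick gear", "step", [BTNode("close gripper", "action")]),
    ]),
])
action = tree.children[0].children[0].children[0]
print(action.why())  # I close gripper in order to kit gears.
```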


2021 ◽  
Vol 101 (3) ◽  
Author(s):  
Korbinian Nottensteiner ◽  
Arne Sachtler ◽  
Alin Albu-Schäffer

Abstract: Robotic assembly tasks are typically implemented in static settings in which parts are kept at fixed locations by making use of part holders. Very few works deal with the problem of moving parts in industrial assembly applications. However, having autonomous robots that are able to execute assembly tasks in dynamic environments could lead to more flexible facilities with reduced implementation efforts for individual products. In this paper, we present a general approach towards autonomous robotic assembly that combines visual and intrinsic tactile sensing to continuously track parts within a single Bayesian framework. Based on this, it is possible to implement object-centric assembly skills that are guided by the estimated poses of the parts, including cases where occlusions block the vision system. In particular, we investigate the application of this approach for peg-in-hole assembly. A tilt-and-align strategy is implemented using a Cartesian impedance controller, and combined with an adaptive path executor. Experimental results with multiple part combinations are provided and analyzed in detail.
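
The core of the fusion idea can be sketched in one dimension (this is not the authors' full framework; the numbers and variances below are invented). A visual pose estimate and an intrinsic tactile estimate of a part's position are both treated as Gaussian measurements in a Bayesian update, so tracking continues on tactile evidence alone when occlusion removes the visual measurement.

```python
def fuse(mean, var, z, z_var):
    """Standard Gaussian (Kalman) measurement update."""
    k = var / (var + z_var)                  # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# Prior belief over the part's position along one axis (metres)
mean, var = 0.10, 0.05**2

# Visual measurement, available only while the part is not occluded
vision_z, vision_var, occluded = 0.12, 0.01**2, False
if not occluded:
    mean, var = fuse(mean, var, vision_z, vision_var)

# Tactile measurement inferred from contact (e.g. from joint torques)
tactile_z, tactile_var = 0.118, 0.005**2
mean, var = fuse(mean, var, tactile_z, tactile_var)

print(f"estimated part position: {mean:.4f} m (std {var**0.5:.4f} m)")
```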


Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 675
Author(s):  
Józef Lisowski

This paper describes and illustrates the optimization of safe mobile robot control in collision situations, using a multistep matrix game of many participants formulated as a dual linear programming problem. Non-cooperative and cooperative game control algorithms were synthesized in Matlab/Simulink to determine a safe path for the robot when passing many other robots and obstacles. The operation of the game-based motion control algorithm is illustrated by Matlab/Simulink computer simulations of two real, previously recorded navigation situations in which the robot passes dozens of other autonomous mobile robots.
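
As background for the formulation named above, here is a hedged sketch of the textbook reduction of a single-stage zero-sum matrix game to a linear program. The paper's multistep, many-participant game and its Matlab/Simulink synthesis go well beyond this toy; the payoff matrix is invented, and SciPy's linprog stands in for whatever solver the authors use.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix A[i, j]: row player's gain when row plays i, column plays j
A = np.array([[1.0, -1.0,  0.5],
              [0.0,  2.0, -1.0]])
m, n = A.shape

# Variables: x (row player's mixed strategy, length m) and v (game value).
# maximize v  subject to  A^T x >= v,  sum(x) = 1,  x >= 0
c = np.zeros(m + 1); c[-1] = -1.0           # linprog minimizes, so min -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0 for each j
b_ub = np.zeros(n)
A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("row strategy:", x.round(3), "game value:", round(v, 3))
```

The column player's optimal mixed strategy is recovered from the dual of this LP, which is where the "dual linear programming problem" of the abstract enters.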


Author(s):  
Vyacheslav G. Rybin ◽  
Nikolai P. Kobyzev ◽  
Petr S. Fedoseev ◽  
Valerii M. Vatnik ◽  
Georgii Yu. Kolev

Author(s):  
Riichi Kudo ◽  
Kahoko Takahashi ◽  
Takeru Inoue ◽  
Kohei Mizuno

Abstract: Various smart connected devices, such as automated driving cars, autonomous robots, and remote-controlled construction vehicles, are emerging. These devices carry vision systems so that they can conduct their operations without collision. Machine vision technology for perceiving self-position and/or the surrounding environment is becoming more accessible thanks to the great advances in deep learning. The accurate perception information of these smart connected devices makes it possible to predict wireless link quality (LQ). This paper proposes an LQ prediction scheme that applies machine learning to HD camera output to forecast the influence of surrounding mobile objects on LQ. The proposed scheme utilizes object detection based on deep learning and learns the relationship between the detected object position information and the LQ. Outdoor experiments show that the proposed scheme can accurately predict throughput about 1 s into the future on a 5.6-GHz wireless LAN channel.
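
The learning step described above can be sketched as a small regression problem (assumed interfaces, not the authors' pipeline): per-frame features of a detected object are regressed onto the throughput measured about a second later. The feature layout and the toy "bigger obstruction means lower throughput" data model below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: per frame, features of the nearest detected
# object (normalized x, y, bounding-box area) and throughput ~1 s later.
X = rng.uniform(0, 1, size=(500, 3))                 # [x_center, y_center, area]
y = 100.0 - 60.0 * X[:, 2] + rng.normal(0, 5, 500)   # Mbit/s (toy model)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# Predict near-future throughput from the current frame's detection
frame_features = np.array([[0.5, 0.6, 0.3]])
print("predicted throughput in ~1 s: %.1f Mbit/s" % model.predict(frame_features)[0])
```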

