Parallel control model for navigation tasks on service robots

2021 ◽  
Vol 2135 (1) ◽  
pp. 012002
Author(s):  
Holman Montiel ◽  
Fernando Martínez ◽  
Fredy Martínez

Autonomous mobility remains an open research problem in robotics. It is a complex problem whose characteristics depend on the type of task and the environment in which the robot is intended to operate. In this sense, service robotics poses problems that have not yet been solved satisfactorily. Service robots must interact with human beings in environments designed for human beings, which implies that among the basic sensors for structuring motion control and navigation schemes are those that replicate the human sense of sight. In normal activity, robots are expected to interpret visual information from the environment while following a motion policy that allows them to move from one point to another, consistent with their tasks. A good optical sensing system can be structured around digital cameras, which allow visual identification routines to be applied to both the trajectory and the environment. This research proposes a parallel control scheme (with two loops) that defines the movements of a service robot from images. The first control loop is based on a visual memory strategy using a convolutional neural network: a deep learning model trained on images of the environment containing its characteristic elements (various types of obstacles and different cases of free trajectories, with and without a navigation path). Connected in parallel to this first loop is a second loop in charge of determining the specific distances to obstacles using a stereo vision system. The objective of this parallel loop is to quickly identify the obstacle points in front of the robot from the images using a bacterial interaction model. Together, these two loops form an information-feedback motion control framework that rapidly analyzes the environment and defines motion strategies from digital images, achieving real-time control driven by visual information. Among the advantages of the scheme are low processing and memory costs on the robot, and the fact that the environment need not be modified to facilitate navigation. The performance of the system is validated by simulation and laboratory experiments.
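
The two-loop structure described in the abstract lends itself to a straightforward concurrent implementation. The following is a minimal sketch (not the authors' code) of that idea, assuming a slow visual-memory loop that proposes a motion command and a fast stereo-based obstacle loop that can override it; the CNN inference and stereo sensing are stubbed with placeholders so the sketch runs without hardware.

```python
# Sketch of a parallel (two-loop) motion controller: a slow learned-policy
# loop and a fast obstacle-distance loop share state; the controller fuses
# them, letting obstacle proximity override the learned command.
import threading, time, random

class SharedState:
    def __init__(self):
        self.lock = threading.Lock()
        self.policy_cmd = (0.0, 0.0)          # (linear, angular) from CNN loop
        self.min_obstacle_dist = float("inf")

def visual_memory_loop(state, period=0.2):
    """Slow loop: stand-in for the CNN mapping camera frames to commands."""
    while True:
        cmd = (0.3, random.uniform(-0.1, 0.1))  # placeholder inference result
        with state.lock:
            state.policy_cmd = cmd
        time.sleep(period)

def stereo_obstacle_loop(state, period=0.05):
    """Fast loop: stand-in for stereo-based nearest-obstacle estimation."""
    while True:
        dist = random.uniform(0.2, 3.0)         # placeholder stereo output
        with state.lock:
            state.min_obstacle_dist = dist
        time.sleep(period)

def motion_controller(state, safe_dist=0.5):
    """Fuses both loops: obstacle proximity overrides the learned policy."""
    for _ in range(20):
        with state.lock:
            cmd, dist = state.policy_cmd, state.min_obstacle_dist
        if dist < safe_dist:
            cmd = (0.0, 0.5)                    # stop forward motion, turn away
        print(f"cmd={cmd}, nearest obstacle={dist:.2f} m")
        time.sleep(0.1)

state = SharedState()
threading.Thread(target=visual_memory_loop, args=(state,), daemon=True).start()
threading.Thread(target=stereo_obstacle_loop, args=(state,), daemon=True).start()
motion_controller(state)
```

Running the two loops at different rates mirrors the abstract's design intent: the expensive learned policy updates slowly while the obstacle check stays fast enough for real-time safety overrides.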

2011 ◽  
Vol 55-57 ◽  
pp. 877-880
Author(s):  
Qin Jun Du ◽  
Chao Sun ◽  
Xing Guo Huang

Vision is an important means for a humanoid robot to acquire information about its external environment, and the vision system is an important part of the humanoid robot. A humanoid robot system with both visual perception and object manipulation functions is very complex because the robot's body possesses many joint units and sensors. Two computers linked by a Memolink communication unit are adopted to meet the needs of real-time motion control and visual information processing. The motion control system includes a coordination control computer, distributed DSP joint controllers, DC motor drivers, and sensors. Linux and the real-time RT-Linux OS are used as the operating systems to achieve real-time control capability.
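
The division of labor described above (one node for non-real-time vision, one for motion control, exchanging messages) can be illustrated with a rough sketch. Here a multiprocessing queue stands in for the Memolink hardware link, and all robot I/O is stubbed; this is an assumption-laden illustration, not the paper's software.

```python
# Sketch: vision runs slowly in one process; the control loop runs faster in
# another and always servos toward the most recent vision result.
import multiprocessing as mp
import time, random

def vision_process(q):
    """Stand-in for the vision computer: periodically publishes object poses."""
    while True:
        q.put({"object_xy": (random.uniform(-1, 1), random.uniform(0.5, 2.0))})
        time.sleep(0.2)                  # vision runs slower than control

def control_process(q):
    """Stand-in for the motion-control computer: runs a faster servo loop."""
    target = None
    for _ in range(50):
        while not q.empty():             # consume the latest vision message
            target = q.get()["object_xy"]
        # a real controller would command the DSP joint controllers here
        print("servo tick, current target:", target)
        time.sleep(0.02)                 # 50 Hz control loop

if __name__ == "__main__":
    q = mp.Queue()
    mp.Process(target=vision_process, args=(q,), daemon=True).start()
    control_process(q)
```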


2012 ◽  
pp. 229-246
Author(s):  
Jwu-Sheng Hu ◽  
Yung-Jung Chang

The geometrical relationships among the robot arm, the camera, and the workspace are important for carrying out visual servo tasks. For industrial robots, these relationships are usually fixed and well calibrated by experienced operators. For service robots, however, particularly in mobile applications, the relationships may change. For example, when a mobile robot attempts to use visual information from environmental cameras to perform grasping, it must know these relationships before taking action, and the calibration should be done automatically. This chapter proposes a self-calibration method using a laser distance sensor mounted on the robot arm. The advantage of the method, compared with pattern-based ones, is that the workspace coordinates are also obtained at the same time from the projected laser spot. Furthermore, the robot arm does not need to enter the camera's field of view for calibration, which increases safety when the workspace is initially unknown.
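
A conceptual sketch of the calibration idea follows, with synthetic data in place of real measurements. The premise is that the 3D positions of projected laser spots are known in the robot/workspace frame (from the arm's kinematics plus the measured laser distance) while the camera detects the same spots in the image; the camera pose then follows from a standard PnP solve. The PnP formulation and OpenCV call here are a generic stand-in for the chapter's actual derivation, and the intrinsics and point values are fabricated.

```python
# Sketch: recover a camera's pose from laser-spot correspondences via PnP.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

# 3D laser-spot positions in the workspace frame (from arm pose + laser range)
spots_3d = np.array([[0.2, 0.0, 1.5], [-0.3, 0.1, 1.8],
                     [0.1, -0.2, 2.0], [0.4, 0.3, 1.6],
                     [-0.1, 0.25, 1.7], [0.0, -0.1, 1.9]])

# Ground-truth camera pose, used here only to synthesize image detections
rvec_true = np.array([[0.05], [-0.1], [0.02]])
tvec_true = np.array([[0.1], [-0.05], [0.3]])
spots_2d, _ = cv2.projectPoints(spots_3d, rvec_true, tvec_true, K, None)

# The PnP solve recovers the camera pose relative to the workspace frame
ok, rvec, tvec = cv2.solvePnP(spots_3d, spots_2d, K, None)
print("recovered rotation:", rvec.ravel(), "translation:", tvec.ravel())
```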


2016 ◽  
Vol 28 (4) ◽  
pp. 579-590 ◽  
Author(s):  
Yoshinobu Akimoto ◽  
Eri Sato-Shimokawara ◽  
Yasunari Fujimoto ◽  
Toru Yamaguchi

[Figure: Model-based development with user model]
The service robots now common in daily life are expected to provide highly useful services, with high usability and a good user experience (UX), to their users and to the human beings around them. Human-centered design (HCD) has come into use in a variety of industries, and it will be required in the service robot industry to improve usability and UX. However, to our knowledge, no practical HCD process has been proposed for service robots. We have therefore been studying a development process for service robots that improves quality in use, a process we call model-based development with user models (MBD/UM). We confirmed the viability of MBD/UM as a practical HCD process by applying it to approach-function research on a telepresence robot.


2011 ◽  
Vol 308-310 ◽  
pp. 2084-2094
Author(s):  
Rong Xiong ◽  
Xin Feng Du ◽  
Wen Fei Wang ◽  
Yong Hai Wu ◽  
Jian Chu ◽  
...  

This paper describes the integrated design and techniques of the HAIBAO robot, an interactive service robot developed for Shanghai World Expo 2010. Compared with previous exhibition service robots, the HAIBAO robot has improved flexible motion, anthropomorphic interaction, and intelligent cognitive and decision-making abilities. In addition to the hardware and software system design, key techniques are introduced, including a four-wheeled omnidirectional mechanism with its motion control and compensation algorithm, and multitask scheduling. During the Expo, which lasted 184 days, 37 HAIBAO robots successfully served tourists by providing information, photography services, hall guidance, conversation, and various forms of entertainment. Their robustness, stability, flexibility, and friendliness were highly commended.
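
The abstract mentions a four-wheeled omnidirectional mechanism without detailing it; a common realization of such a base is a mecanum-wheel drive, whose standard inverse kinematics are sketched below for illustration. The wheel geometry values are illustrative assumptions, not HAIBAO's actual parameters.

```python
# Sketch: standard mecanum inverse kinematics mapping a desired body twist
# to the four wheel angular speeds.
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """Map a body twist (vx, vy in m/s, wz in rad/s) to wheel speeds (rad/s).

    r: wheel radius; lx, ly: half wheelbase and half track width.
    Wheel order: front-left, front-right, rear-left, rear-right.
    """
    k = lx + ly
    return [
        (vx - vy - k * wz) / r,   # front-left
        (vx + vy + k * wz) / r,   # front-right
        (vx + vy - k * wz) / r,   # rear-left
        (vx - vy + k * wz) / r,   # rear-right
    ]

# e.g. pure sideways motion: the base translates along y with no rotation
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))
```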


2021 ◽  
Vol 12 ◽  
Author(s):  
Koichi Yamagata ◽  
Jinhwan Kwon ◽  
Takuya Kawashima ◽  
Wataru Shimoda ◽  
Maki Sakamoto

The major goals of texture research in computer vision are to understand, model, and process texture, and ultimately to simulate human visual information processing using computer technologies. The field has witnessed remarkable advances in material recognition using deep convolutional neural networks (DCNNs), which have enabled applications such as self-driving cars, facial and gesture recognition, and automatic number plate recognition. However, it remains difficult for computer vision to “express” texture the way human beings do, because texture description is ambiguous and has no single correct answer. In this paper, we develop a computer vision method using a DCNN that expresses the texture of materials. To achieve this goal, we focus on Japanese “sound-symbolic” words, which can describe differences in texture sensation at a fine resolution and are known to have strong and systematic sensory-sound associations. Because the phonemes of Japanese sound-symbolic words characterize categories of texture sensations, our method generates the phonemes, and the structure comprising them, that probabilistically correspond to the input images. The sound-symbolic words output by our system achieved an accuracy of about 80% in our evaluation.
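
The core idea — a CNN that outputs phoneme distributions from which a sound-symbolic word is composed — can be sketched as follows. This is not the authors' model: the phoneme inventory, the fixed slot template, and the tiny backbone are all placeholder assumptions chosen to keep the sketch self-contained.

```python
# Sketch: a CNN maps a texture image to per-slot phoneme distributions,
# from which a sound-symbolic word is composed slot by slot.
import torch
import torch.nn as nn

PHONEMES = list("aiueoksztnhmyrwgdbp") + ["-"]   # toy phoneme inventory
N_SLOTS = 4                                      # e.g. a CVCV-like template

class TexturePhonemeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one classification head per phoneme slot
        self.heads = nn.ModuleList(
            [nn.Linear(32, len(PHONEMES)) for _ in range(N_SLOTS)]
        )

    def forward(self, x):
        feat = self.backbone(x)
        # (batch, slots, |phonemes|): a distribution over phonemes per slot
        return torch.stack([h(feat) for h in self.heads], dim=1)

model = TexturePhonemeNet()
logits = model(torch.randn(1, 3, 64, 64))        # one fake texture image
word = "".join(PHONEMES[i] for i in logits[0].argmax(dim=-1))
print("predicted sound-symbolic word:", word)
```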


2018 ◽  
Vol 15 (3) ◽  
pp. 172988141878082 ◽  
Author(s):  
Chien Van Dang ◽  
Mira Jun ◽  
Yong-Bin Shin ◽  
Jae-Won Choi ◽  
Jong-Wook Kim

This study aims to interpret and apply Asimov's Three Laws of Robotics to home service robots. An agent is developed with the ability to attend to human beings' health, particularly that of the elderly and the ill, by delivering food. The agent is built on a cognitive agent architecture, Soar (state, operator, and result), to enable effective reasoning and decision-making. This study deals with basic home care services such as food delivery and emergency response; accordingly, common food care and emergency rules are newly proposed based on priority values that correspond to a family's circumstances and/or emergency levels. Asimov's Three Laws are modified to help the home service robot follow a predetermined order in selecting a food item, or in recommending an alternative food item suitable for its user's prevailing health condition. Experimental results confirm that the reasoning and decision-making of the proposed agent are logically and ethically valid for a home service robot and comply with both the original and modified versions of Asimov's Three Laws.
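
The flavor of priority-driven food selection the abstract describes can be illustrated with a toy sketch. These rules, conditions, and food items are invented for illustration and bear no relation to the paper's actual Soar production rules.

```python
# Sketch: pick the requested food if it is safe for the user's condition;
# otherwise fall back through a priority-ordered list of alternatives.
FOOD_RULES = {
    # condition: (forbidden items, alternatives in priority order)
    "diabetes":     ({"cake", "soda"}, ["fruit", "yogurt"]),
    "hypertension": ({"salted snacks"}, ["vegetables"]),
}

def choose_food(requested, condition, pantry):
    forbidden, alternatives = FOOD_RULES.get(condition, (set(), []))
    if requested in pantry and requested not in forbidden:
        return requested              # the request is safe: deliver it
    for alt in alternatives:          # otherwise walk the alternatives
        if alt in pantry:
            return alt
    return None                       # nothing suitable: escalate to a human

print(choose_food("cake", "diabetes", {"cake", "fruit"}))   # -> "fruit"
```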


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110121
Author(s):  
David Portugal ◽  
André G Araújo ◽  
Micael S Couceiro

To move out of the lab, service robots must demonstrate proven robustness so that they can be deployed in operational environments. This means they should function steadily for long periods in real-world areas under uncertainty, without any human intervention, exhibiting a mature technology readiness level. In this work, we describe an incremental methodology for the implementation of an innovative service robot, developed entirely from the outset, to monitor large indoor areas shared by humans and other obstacles. Focusing especially on the long-term reliability of the robot's fundamental localization system, we discuss the incremental software and hardware features, design choices, and adjustments made, and show their impact on the robot's real-world performance in three distinct 24-hour trials, with the ultimate goal of validating the proposed mobile robot solution for indoor monitoring.
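
One commonplace ingredient of the kind of long-term localization reliability this abstract targets is a watchdog that monitors pose uncertainty and triggers recovery when it drifts. The sketch below illustrates that general pattern only; it is an assumption on our part, not a description of the paper's implementation, and the covariance interface and thresholds are invented.

```python
# Sketch: periodically check a scalar pose-uncertainty measure and trigger
# relocalization when it exceeds a threshold.
import time

def localization_watchdog(get_pose_covariance, trigger_relocalization,
                          cov_threshold=0.5, period=1.0, max_ticks=None):
    """get_pose_covariance: callable returning a scalar uncertainty measure.
    trigger_relocalization: callable that re-initializes the localizer."""
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        cov = get_pose_covariance()
        if cov > cov_threshold:
            print(f"uncertainty {cov:.2f} above threshold, relocalizing")
            trigger_relocalization()
        time.sleep(period)
        ticks += 1

# Stubbed usage: uncertainty grows until the recovery hook resets it.
state = {"cov": 0.1}
def fake_cov():   state["cov"] *= 1.5; return state["cov"]
def fake_reloc(): state["cov"] = 0.1

localization_watchdog(fake_cov, fake_reloc, period=0.01, max_ticks=10)
```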


2021 ◽  
Vol 11 (8) ◽  
pp. 3397
Author(s):  
Gustavo Assunção ◽  
Nuno Gonçalves ◽  
Paulo Menezes

Human beings have developed remarkable abilities to integrate information from various sensory sources by exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known “cocktail party” and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key to refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus as the brain region responsible for this modality fusion, and a handful of biological models have been proposed to approach its underlying neurophysiological process. Deriving inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability has a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields by neurons. The validation process employed two publicly available datasets, and the achieved results confirmed and greatly surpassed initial expectations.
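
A highly simplified sketch of the described pipeline follows: two unimodal encoders produce embeddings that a fusion layer combines for a speaking/not-speaking decision. The bilinear cross-term below is a generic stand-in for the paper's superior-colliculus-inspired layer, not a reproduction of it, and all feature dimensions are invented.

```python
# Sketch: audio and visual features are encoded separately, fused by a
# bilinear layer (every audio unit interacts with every visual unit, loosely
# mirroring cross-mapped unimodal fields), then scored for active speaking.
import torch
import torch.nn as nn

class AVSpeakerDetector(nn.Module):
    def __init__(self, a_dim=40, v_dim=512, emb=64):
        super().__init__()
        self.audio_net = nn.Sequential(nn.Linear(a_dim, emb), nn.ReLU())
        self.visual_net = nn.Sequential(nn.Linear(v_dim, emb), nn.ReLU())
        self.fuse = nn.Bilinear(emb, emb, emb)   # cross-modal interaction
        self.head = nn.Linear(emb, 1)            # speaking / not speaking

    def forward(self, audio_feat, visual_feat):
        a = self.audio_net(audio_feat)
        v = self.visual_net(visual_feat)
        return torch.sigmoid(self.head(torch.relu(self.fuse(a, v))))

model = AVSpeakerDetector()
score = model(torch.randn(2, 40), torch.randn(2, 512))  # two candidates
print("speaking probability per candidate:", score.squeeze(-1))
```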

