Self-Calibration of Eye-to-Hand and Workspace for Mobile Service Robot

Robotics ◽  
2013 ◽  
pp. 1482-1499
Author(s):  
Jwu-Sheng Hu ◽  
Yung-Jung Chang

The geometrical relationships among the robot arm, camera, and workspace are important for carrying out visual servo tasks. For industrial robots, the relationships are usually fixed and well calibrated by experienced operators. However, for service robots, particularly in mobile applications, the relationships may change. For example, when a mobile robot attempts to use visual information from environmental cameras to perform grasping, it is necessary to know the relationships before taking action. Moreover, the calibration should be done automatically. This chapter proposes a self-calibration method using a laser distance sensor mounted on the robot arm. The advantage of the method, compared with pattern-based ones, is that the workspace coordinates are also obtained at the same time from the projected laser spot. Further, it is not necessary for the robot arm to enter the camera's field of view for calibration. This increases safety when the workspace is initially unknown.


2020 ◽  
pp. 1831
Author(s):  
Abbas Zedan Khalaf ◽  
Bashar H Alyasery

In this study, an approach inspired by a standardized calibration method was used to test a laser distance meter (LDM). A laser distance sensor (LDS) was tested against the LDM, and a statistical indicator showed that the former functions in a similar manner to the latter. Regression terms were also used to estimate the additive error and scale correction of the sensors. The specified distance was divided into several parts, each a percentage of the longest one, and observed using two sensors, left and right. These sensors were evaluated using the regression between the measured and reference values. The results were computed using the MINITAB 17 package and Microsoft Excel. The accuracy obtained in this work was ±4.4 mm + 50.89 ppm and ±4.96 mm + 99.88 ppm for LDS1 and LDS2, respectively, based on the LDM accuracy computed over the full range (100 m). These sensors can be very effective for industrial use, 3D modeling, and many other applications, especially since they are inexpensive and available in many versions.
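The regression step described above can be sketched in a few lines: fit measured distances against reference distances to obtain a scale factor and an additive offset, then invert the fit to correct new readings. This is a minimal illustration with synthetic numbers, not the paper's data or code.

```python
# Hedged sketch: estimating a distance sensor's additive error (offset)
# and scale correction by linear regression of measured vs. reference
# values, as the abstract describes. All numbers are illustrative.
import numpy as np

def calibrate(reference_m, measured_m):
    """Fit measured = scale * reference + offset; return (scale, offset)."""
    scale, offset = np.polyfit(reference_m, measured_m, deg=1)
    return scale, offset

def correct(measured_m, scale, offset):
    """Invert the fitted model to recover corrected distances."""
    return (np.asarray(measured_m) - offset) / scale

# Synthetic example: a sensor with a +5 mm offset and 100 ppm scale error.
ref = np.linspace(1.0, 100.0, 20)        # reference distances (m)
meas = ref * 1.0001 + 0.005              # simulated sensor readings
s, o = calibrate(ref, meas)
print(round(s, 6), round(o * 1000, 2))   # scale ≈ 1.0001, offset ≈ 5 mm
```

In practice the fit residuals, not just the coefficients, are what support a range-dependent accuracy statement such as ±4.4 mm + 50.89 ppm.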


Robotica ◽  
2012 ◽  
Vol 31 (2) ◽  
pp. 217-224 ◽  
Author(s):  
Seungbin Moon ◽  
Sungsoo Rhim ◽  
Young-Jo Cho ◽  
Kwang-Ho Park ◽  
Gurvinder S. Virk

SUMMARY: This paper summarizes the recent standardization activities in the field of robotics by ISO (International Organization for Standardization), IEC (International Electrotechnical Commission), OMG (Object Management Group), and other organizations. While standards for industrial robots have been developed mainly by ISO, standards for the emerging service robots have been initiated by many organizations. One of the goals of this paper is to coordinate the efforts among these groups so that standardization activity can be executed more effectively. Standardization of the emerging service robots will eventually promote the proliferation of service robot markets in the near future.


Author(s):  
Ali Gürcan Özkil ◽  
Thomas Howard

This paper presents a new and practical method for mapping and annotating indoor environments for mobile robot use. The method makes use of 2D occupancy grid maps for metric representation and topology maps to indicate the connectivity of the 'places-of-interest' in the environment. Novel use of 2D visual tags allows information to be encoded physically at places-of-interest. Moreover, the physical characteristics of the visual tags (i.e., paper size) are exploited to recover the relative poses of the tags in the environment using a simple camera. This method extends tag encoding to simultaneous localization and mapping in topology space, and fuses camera and robot pose estimations to build an automatically annotated global topo-metric map. It was developed as a framework for a hospital service robot and tested in a real hospital. Experiments show that the method is capable of producing the globally consistent, automatically annotated hybrid metric-topological maps that are needed by mobile service robots.
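The idea of exploiting a tag's known physical size (e.g., a sheet of paper) can be illustrated with the pinhole camera model: the range to a tag follows from its known width, its apparent width in pixels, and the focal length. The paper recovers full relative poses; this hedged sketch shows only the range-from-apparent-size step, with assumed numbers.

```python
# Hedged sketch of range recovery from a tag of known physical width
# using the pinhole model; the focal length and pixel measurements
# below are assumptions for illustration.
def tag_range(tag_width_m, pixel_width, focal_px):
    """Pinhole model: range = f * W / w."""
    return focal_px * tag_width_m / pixel_width

# An A4 sheet (0.210 m wide) imaged at 105 px with an 800 px focal length:
print(tag_range(0.210, 105.0, 800.0))  # roughly 1.6 m
```

Combining such range estimates with the tag's image position and orientation is what lets a single camera place tags in a common frame.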


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 722 ◽  
Author(s):  
Steffen Müller ◽  
Tim Wengefeld ◽  
Thanh Quang Trinh ◽  
Dustin Aganian ◽  
Markus Eisenbach ◽  
...  

In order to meet the increasing demands of mobile service robot applications, a dedicated perception module is an essential requirement for interaction with users in real-world scenarios. In particular, multi-sensor fusion and human re-identification are recognized as active research fronts. Through this paper we contribute to the topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that can incorporate face and color-based appearance features in addition to position, in order to realize re-identification of persons when tracking is interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification, and attribute estimation. This allows a stable estimate of a mobile robot's user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvements when face and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
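As a rough illustration of probability-based data association of the kind described above, the sketch below scores track-detection pairs by a Gaussian position likelihood multiplied by an appearance similarity, then matches greedily. The Gaussian model, the score gate, and the greedy matching are simplifying assumptions, not the paper's method.

```python
# Hedged sketch: combining a position likelihood with an appearance cue
# for track-to-detection association. Field names and weights are
# illustrative assumptions.
import math

def association_score(track, detection, sigma_pos=0.5):
    """Higher is better: Gaussian position likelihood x appearance similarity."""
    dx = track["x"] - detection["x"]
    dy = track["y"] - detection["y"]
    pos_lik = math.exp(-(dx * dx + dy * dy) / (2 * sigma_pos ** 2))
    return pos_lik * detection.get("appearance_sim", 1.0)

def associate(tracks, detections, min_score=0.1):
    """Greedy best-match association with a score gate; returns (track id, detection index) pairs."""
    pairs, used = [], set()
    for t in tracks:
        best, best_s = None, min_score
        for i, d in enumerate(detections):
            if i in used:
                continue
            s = association_score(t, d)
            if s > best_s:
                best, best_s = i, s
        if best is not None:
            used.add(best)
            pairs.append((t["id"], best))
    return pairs
```

The appearance term is what allows a track to be re-acquired after an occlusion, when position alone would be ambiguous.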


2014 ◽  
Vol 651-653 ◽  
pp. 831-834
Author(s):  
Xi Pei Ma ◽  
Bing Feng Qian ◽  
Song Jie Zhang ◽  
Ye Wang

The autonomous navigation process of a mobile service robot usually takes place in an uncertain environment. Information from a single sensor alone has been unable to meet the demands of modern mobile robots, so multi-sensor data fusion has been widely used in the field of robots. The platform of this project is the achievement of an important 863 Program national research project: a prototype nursing robot. The aim is to study a mobile service robot's multi-sensor information fusion, path planning, and movement control methods. It can provide a basis and a practical reference for the study of indoor robot localization.


2013 ◽  
Vol 394 ◽  
pp. 448-455 ◽  
Author(s):  
A.A. Nippun Kumaar ◽  
T.S.B. Sudarshan

Learning from Demonstration (LfD) is a technique for teaching a system through demonstration. In areas like service robotics, the robot should be user-friendly in terms of coding, so LfD techniques are of great advantage in this domain. In this paper, two novel approaches, a counter-based technique and an encoder-based technique, are proposed for teaching a mobile service robot to navigate from one point to another, together with a novel state-based obstacle avoidance technique. The main aim of the work is to develop an LfD algorithm that is less complex in terms of hardware and software. Both proposed methods, along with obstacle avoidance, have been implemented and tested using the Player/Stage robotics simulator.
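A counter-based record-and-replay scheme of the kind the abstract names can be sketched as run-length encoding of demonstrated motion commands; the command names and representation here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: during demonstration, motion commands are logged as
# (command, count) pairs; during playback the same sequence is re-issued.
def record(commands):
    """Run-length encode a demonstrated command stream."""
    log = []
    for c in commands:
        if log and log[-1][0] == c:
            log[-1][1] += 1
        else:
            log.append([c, 1])
    return log

def replay(log):
    """Expand the log back into the command stream for playback."""
    return [c for c, n in log for _ in range(n)]

demo = ["fwd", "fwd", "fwd", "left", "fwd", "fwd"]
log = record(demo)
print(log)  # [['fwd', 3], ['left', 1], ['fwd', 2]]
```

An encoder-based variant would store wheel-encoder counts instead of command repetitions, trading timing sensitivity for odometric accuracy.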


2021 ◽  
Vol 2135 (1) ◽  
pp. 012002
Author(s):  
Holman Montiel ◽  
Fernando Martínez ◽  
Fredy Martínez

Abstract: Autonomous mobility remains an open research problem in robotics. This is a complex problem whose characteristics depend on the type of task and the environment intended for the robot's activity. In this sense, service robotics has problems that have not been solved satisfactorily. These robots must interact with human beings in environments designed for human beings, which implies that among the basic sensors for structuring motion control and navigation schemes are those that replicate the human optical sense. In their normal activity, robots are expected to interpret visual information in the environment while following a certain motion policy that allows them to move from one point to another, consistent with their tasks. A good optical sensing system can be structured around digital cameras, with which visual identification routines can be applied to both the trajectory and the environment. This research proposes a parallel control scheme (with two loops) for defining the movements of a service robot from images. On the one hand, there is a control loop based on a visual memory strategy using a convolutional neural network. This system contemplates a deep learning model that is trained on images of the environment containing characteristic elements of the navigation environment (various types of obstacles and different cases of free trajectories with and without a navigation path). To this first loop a second loop is connected in parallel, in charge of determining the specific distances to obstacles using a stereo vision system. The objective of this parallel loop is to quickly identify the obstacle points in front of the robot from the images using a bacterial interaction model. These two loops form an information-feedback motion control framework that quickly analyzes the environment and defines motion strategies from digital images, achieving real-time control driven by visual information. Among the advantages of our scheme are the low processing and memory costs on the robot, and the fact that there is no need to modify the environment to facilitate the robot's navigation. The performance of the system is validated through simulation and laboratory experiments.


2020 ◽  
Vol 17 (6) ◽  
pp. 172988142096852
Author(s):  
Wang Yugang ◽  
Zhou Fengyu ◽  
Zhao Yang ◽  
Li Ming ◽  
Yin Lei

A novel iterative learning control (ILC) for perspective dynamic system (PDS) is designed and illustrated in detail in this article to overcome the uncertainties in path tracking of mobile service robots. PDS, which transmits the motion information of mobile service robots to image planes (such as a camera), provides a good control theoretical framework to estimate the robot motion problem. The proposed ILC algorithm is applied in accordance with the observed motion information to increase the robustness of the system in path tracking. The convergence of the presented learning algorithm is derived as the number of iterations tends to infinity under a specified condition. Simulation results show that the designed framework performs efficiently and satisfies the requirements of trajectory precision for path tracking of mobile service robots.
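The core ILC idea can be illustrated with a simple P-type update, u_{k+1}(t) = u_k(t) + L·e_k(t), applied to a toy first-order plant. The paper's perspective-dynamic-system formulation is more involved; the plant and learning gain below are assumptions chosen so that the iteration provably contracts.

```python
# Hedged sketch of P-type iterative learning control on a toy plant;
# not the article's PDS algorithm. Plant parameters and gain L are
# assumptions satisfying the usual convergence condition |1 - L*b| < 1.
import numpy as np

def run_plant(u, a=0.3, b=1.0):
    """Toy discrete-time plant y[t+1] = a*y[t] + b*u[t], y[0] = 0."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]                     # outputs aligned with inputs

T = 50
y_ref = np.sin(np.linspace(0.0, np.pi, T))  # desired trajectory
u = np.zeros(T)                              # initial input guess
L = 0.8                                      # learning gain (assumed)

for _ in range(100):                         # repeat the trial, learn from error
    e = y_ref - run_plant(u)
    u = u + L * e                            # P-type ILC update

print(float(np.max(np.abs(y_ref - run_plant(u)))))  # tracking error shrinks toward 0
```

The point the abstract makes carries over: the update uses only the observed tracking error from the previous trial, so model uncertainty enters only through the convergence condition, not the update itself.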


Author(s):  
Guixiu Qiao ◽  
Guangkun Li

Abstract Industrial robots play important roles in manufacturing automation for smart manufacturing. Some high-precision applications, for example, robot drilling, robot machining, robot high-precision assembly, and robot inspection, require higher robot accuracy compared with traditional part handling operations. The monitoring and assessment of robot accuracy degradation become critical for these applications. A novel vision-based sensing system for 6-D measurement (six-dimensional x, y, z, yaw, pitch, and roll) is developed at the National Institute of Standards and Technology (NIST) to measure the dynamic high accuracy movement of a robot arm. The measured 6-D information is used for robot accuracy degradation assessment and improvement. This paper presents an automatic calibration method for a vision-based 6-D sensing system. The stereo calibration is separated from the distortion calibration to speed up the on-site adjustment. Optimization algorithms are developed to achieve high calibration accuracy. The vision-based 6-D sensing system is used on a Universal Robots (UR5) to demonstrate the feasibility of using the system to assess the robot’s accuracy degradation.
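The six measured quantities (x, y, z, yaw, pitch, roll) are commonly assembled into a homogeneous transform when comparing commanded and measured poses. The sketch below shows that composition under an assumed Z-Y-X (yaw-pitch-roll) convention, which may differ from the NIST system's actual convention.

```python
# Hedged sketch: building a 4x4 homogeneous transform from a 6-D pose
# and forming a residual transform between commanded and measured poses.
# The Z-Y-X intrinsic angle order is an assumption.
import numpy as np

def pose_to_matrix(x, y, z, yaw, pitch, roll):
    """Compose a 4x4 homogeneous transform from a 6-D pose (Z-Y-X order)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # rotation from yaw, pitch, roll
    T[:3, 3] = [x, y, z]              # translation
    return T

# Residual between a commanded pose and a slightly degraded measured pose:
T_cmd = pose_to_matrix(0.5, 0.0, 0.3, 0.0, 0.0, 0.0)
T_meas = pose_to_matrix(0.501, 0.0, 0.3, 0.001, 0.0, 0.0)
T_err = np.linalg.inv(T_cmd) @ T_meas  # deviation tracked over time
```

Tracking the translation and rotation parts of such residuals over time is one way accuracy degradation can be quantified.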

