Enhanced Reliability of Mobile Robots with Sensor Data Estimation at Edge

Author(s):  
Victor Kathan Sarker ◽  
Prateeti Mukherjee ◽  
Tomi Westerlund


1990 ◽  
Vol 36 (9) ◽  
pp. 1544-1550 ◽  
Author(s):  
W S Lob

Mobile robots perform fetch-and-carry tasks autonomously. An intelligent, sensor-equipped mobile robot does not require dedicated pathways or extensive facility modification. In the hospital, mobile robots can be used to carry specimens, pharmaceuticals, meals, etc. between supply centers, patient areas, and laboratories. The HelpMate (Transitions Research Corp.) mobile robot was developed specifically for hospital environments. To reach a desired destination, HelpMate navigates with an on-board computer that continuously polls a suite of sensors, matches the sensor data against a pre-programmed map of the environment, and issues drive commands and path corrections. A sender operates the robot with a user-friendly menu that prompts for payload insertion and desired destination(s). Upon arrival at its selected destination, the robot prompts the recipient for a security code or physical key and awaits acknowledgement of payload removal. In the future, the integration of HelpMate with robot manipulators, test equipment, and central institutional information systems will open new applications in more localized areas and should help overcome difficulties in filling transport staff positions.
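
Stripped to its structure, the navigation loop described above is "poll sensors, match against the stored map, issue a correction". The Python sketch below illustrates that loop with entirely hypothetical sensor, map-matching, and drive interfaces; it is not the HelpMate software.

```python
# Hypothetical sketch of a sense-match-correct navigation loop; the sensor
# polling, map matching, and drive interfaces are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # metres along the mapped corridor
    y: float        # lateral offset in metres
    heading: float  # radians

def poll_sensors() -> dict:
    """Placeholder for reading the on-board sensor suite (odometry, ranging)."""
    return {"odometry": Pose(1.0, 0.2, 0.05), "range_front_m": 3.4}

def match_against_map(reading: dict, expected: Pose) -> Pose:
    """Estimate drift from the pre-programmed path.

    Here the 'map' is reduced to the expected pose on the planned path; a real
    matcher would compare range data against stored wall geometry.
    """
    odom = reading["odometry"]
    return Pose(odom.x - expected.x, odom.y - expected.y, odom.heading - expected.heading)

def drive_command(error: Pose, speed: float = 0.4) -> tuple:
    """Issue (linear, angular) velocities that steer back onto the mapped path."""
    k_lateral, k_heading = 1.5, 0.8
    angular = -k_lateral * error.y - k_heading * error.heading
    return speed, angular

if __name__ == "__main__":
    expected = Pose(1.0, 0.0, 0.0)          # where the map says the robot should be
    err = match_against_map(poll_sensors(), expected)
    v, w = drive_command(err)
    print(f"path correction: v={v:.2f} m/s, w={w:.2f} rad/s")
```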


Robotica ◽  
1997 ◽  
Vol 15 (6) ◽  
pp. 609-615 ◽  
Author(s):  
Mahieddine Benreguieg ◽  
Philippe Hoppenot ◽  
Hichem Maaref ◽  
Etienne Colle ◽  
Claude Barret

Most motion controls of mobile robots are based on the classical planning-navigation-piloting scheme. The navigation function, whose main part consists of obstacle avoidance, has to react with the shortest possible response time. This real-time constraint strongly limits the complexity of sensor data processing. The navigator described here is built around fuzzy logic controllers. Besides the well-known possibility of taking human know-how into account, the approach provides several contributions: low sensitivity to erroneous or inaccurate measurements and, if the inputs of the controllers are normalised, effective portability across various platforms. To demonstrate these advantages, the same fuzzy navigator has been implemented on two mobile robots whose mechanical structures are similar except for size and sensing system.
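
As an illustration of the normalised-input idea, the sketch below implements a tiny fuzzy obstacle-avoidance rule base in Python. The membership functions, rules, and weighted-average defuzzification are my own illustrative choices, not the controllers from the paper.

```python
# A minimal fuzzy obstacle-avoidance sketch, assuming inputs normalised to [0, 1].
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_turn_rate(d_left: float, d_right: float) -> float:
    """Normalised obstacle distances in -> normalised turn command in [-1, 1] (+1 = left)."""
    near_l, far_l = tri(d_left, -0.1, 0.0, 0.5), tri(d_left, 0.3, 1.0, 1.1)
    near_r, far_r = tri(d_right, -0.1, 0.0, 0.5), tri(d_right, 0.3, 1.0, 1.1)

    # Rule base: obstacle near on one side -> turn towards the freer side.
    rules = [
        (min(near_l, far_r), -1.0),   # near left, clear right -> turn right
        (min(near_r, far_l), +1.0),   # near right, clear left -> turn left
        (min(far_l, far_r), 0.0),     # both clear             -> go straight
        (min(near_l, near_r), 0.0),   # both blocked            -> straight (slow down elsewhere)
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den                   # weighted-average defuzzification

print(fuzzy_turn_rate(0.2, 0.9))       # obstacle close on the left -> negative (turn right)
```

Because both inputs and the output are normalised, the same rule base can be reused on a differently sized robot by rescaling distances and turn rates outside the controller, which is the portability argument made in the abstract.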


2008 ◽  
Vol 20 (2) ◽  
pp. 213-220 ◽  
Author(s):  
Kimitoshi Yamazaki ◽  
Takashi Tsubouchi ◽  
Masahiro Tomono ◽  
...  

In this paper, a modeling method for handling furniture is proposed. Real-life environments are crowded with objects such as drawers and cabinets that, while easily dealt with by people, present mobile robots with problems. If mobile robots can handle such furniture autonomously, it is expected that many daily jobs, for example storing a small object in a drawer, can be performed by robots. However, manually giving robots the necessary knowledge about each piece of furniture is a laborious process; ideally this knowledge should be acquired efficiently and, if possible, autonomously. In our approach, sensor data from a camera and a laser range finder are combined with direct teaching to create a handling model that captures not only how to handle the furniture but also its appearance and 3D shape. Experimental results show the effectiveness of our methods.
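
To make the notion of a "handling model" concrete, here is a hypothetical Python data structure pairing a trajectory recorded by direct teaching with appearance and 3D shape observations; the field names and capture pipeline are assumptions, not the authors' implementation.

```python
# Illustrative container for a furniture handling model: a taught opening
# trajectory plus appearance and shape data. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FurnitureHandlingModel:
    name: str                                     # e.g. "kitchen_drawer_2"
    grasp_pose: Tuple[float, float, float]        # handle position in the furniture frame (m)
    taught_trajectory: List[Tuple[float, float, float]] = field(default_factory=list)
    appearance_patch: List[List[int]] = field(default_factory=list)               # cropped camera image
    shape_points: List[Tuple[float, float, float]] = field(default_factory=list)  # laser range finder points

    def record_teaching_step(self, end_effector_pose: Tuple[float, float, float]) -> None:
        """Append one pose sampled while a person guides the robot's hand (direct teaching)."""
        self.taught_trajectory.append(end_effector_pose)

# Usage: during direct teaching, end-effector poses are logged alongside the
# camera patch and range-finder points observed at the same time.
model = FurnitureHandlingModel("demo_drawer", grasp_pose=(0.42, 0.0, 0.75))
for t in range(3):
    model.record_teaching_step((0.42, -0.05 * t, 0.75))
print(len(model.taught_trajectory), "taught poses")
```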


2013 ◽  
Vol 765-767 ◽  
pp. 1259-1262 ◽  
Author(s):  
Feng Liu ◽  
Jian Yong Wang ◽  
Ming Liu

The Internet of Things (IoT) has become a hot research topic. As an important part of the IoT, wireless sensor networks collect various types of environmental data and form the fundamental infrastructure of IoT applications. To identify the characteristics of this environmental data, in this paper we focus on four types of sensor data, namely temperature, humidity, light, and voltage, and employ statistical methods to analyze and model them. The results of our research can be used to solve the missing sensor data estimation problem, which is inevitable in wireless sensor networks.
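
As a toy illustration of this kind of statistical modelling, the sketch below fits a simple Gaussian (my assumption; the paper may use other models) to past temperature readings and uses the fitted model to fill in a lost sample.

```python
# Fit a per-sensor Gaussian to observed readings and use it to estimate a
# missing value; the data and the choice of model are illustrative only.
import statistics

history = [21.3, 21.5, 21.4, 21.8, 21.6, 21.7, None, 21.9]   # None marks a lost reading

observed = [v for v in history if v is not None]
mu = statistics.fmean(observed)
sigma = statistics.stdev(observed)

# Simplest estimator: replace the missing value with the fitted mean,
# and report the model's spread as a rough confidence band.
estimate = mu
print(f"missing reading ~ {estimate:.2f} degC (model: N({mu:.2f}, {sigma:.2f}^2))")
```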


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Zhipeng Gao ◽  
Weijing Cheng ◽  
Xuesong Qiu ◽  
Luoming Meng

In wireless sensor networks, data loss is inevitable due to their inherent characteristics. This phenomenon is even more serious in some situations, which poses a big challenge to applications of sensor data. Traditional data estimation methods cannot be directly used in wireless sensor networks, and existing estimation algorithms either fail to provide satisfactory accuracy or have high complexity. To address this problem, a Temporal and Spatial Correlation Algorithm (TSCA) is proposed in this paper to estimate missing data as accurately as possible. Firstly, it saves all the data sensed at the same time as a time series and selects the most relevant series as the analysis sample, which significantly improves the efficiency and accuracy of the algorithm. Secondly, it estimates missing values from the temporal and spatial dimensions, assigning different weights to the two dimensions. Thirdly, it provides two strategies for dealing with severe data loss, which improves the applicability of the algorithm. Simulation results on different sensor datasets verify that the proposed approach outperforms existing solutions in terms of estimation accuracy.
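
The sketch below shows the general shape of such a weighted temporal/spatial estimate; the weight, window size, neighbour selection, and fallback behaviour are illustrative stand-ins, not the TSCA parameters from the paper.

```python
# Weighted combination of a temporal estimate (the node's own recent readings)
# and a spatial estimate (neighbour readings at the same time). Illustrative only.
from typing import List, Optional

def estimate_missing(own_series: List[Optional[float]],
                     neighbour_series: List[List[Optional[float]]],
                     t: int,
                     alpha: float = 0.6,
                     window: int = 3) -> Optional[float]:
    """Estimate one node's reading at time t from temporal and spatial dimensions."""
    # Temporal part: average of the last `window` readings that are present.
    past = [v for v in own_series[max(0, t - window):t] if v is not None]
    temporal = sum(past) / len(past) if past else None

    # Spatial part: average of neighbour readings at the same timestamp.
    same_time = [s[t] for s in neighbour_series if s[t] is not None]
    spatial = sum(same_time) / len(same_time) if same_time else None

    # Weighted combination; fall back to whichever dimension is available when
    # data loss is severe (a much-simplified version of the "two strategies" idea).
    if temporal is not None and spatial is not None:
        return alpha * temporal + (1 - alpha) * spatial
    return temporal if temporal is not None else spatial

own = [20.1, 20.3, 20.2, None]
neighbours = [[20.0, 20.2, 20.1, 20.4], [20.3, 20.4, 20.3, 20.5]]
print(estimate_missing(own, neighbours, t=3))   # blends own history with neighbours
```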


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2730 ◽  
Author(s):  
Varuna De Silva ◽  
Jamie Roche ◽  
Ahmet Kondoz

Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors, and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically, and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
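
A minimal sketch of GP-based resolution matching in this spirit is given below: sparse LiDAR depths, assumed already projected into the image plane, are interpolated onto a denser pixel grid with per-pixel uncertainty. The use of scikit-learn and the kernel choice are my assumptions, not the authors' pipeline.

```python
# Interpolate sparse, image-plane-projected LiDAR depths onto a dense pixel grid
# with a Gaussian Process, reporting predictive uncertainty. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Sparse LiDAR returns: (u, v) pixel coordinates and measured depth in metres (synthetic data).
uv_sparse = rng.uniform(0, 100, size=(60, 2))
depth_sparse = 5.0 + 0.02 * uv_sparse[:, 1] + rng.normal(0, 0.05, 60)

# Fit a GP mapping pixel position -> depth, with a white-noise term for sensor error.
kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(uv_sparse, depth_sparse)

# Dense query grid at "camera" resolution (a small 20x20 patch here for brevity).
uu, vv = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
uv_dense = np.column_stack([uu.ravel(), vv.ravel()])

depth_dense, depth_std = gp.predict(uv_dense, return_std=True)
print("interpolated depth range:", depth_dense.min(), depth_dense.max())
print("mean predictive std (quantified uncertainty):", depth_std.mean())
```

The per-pixel predictive standard deviation is what makes the interpolation useful downstream: an uncertainty-aware free space detector can discount pixels whose depth was inferred far from any LiDAR return.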


1996 ◽  
Vol 5 (2) ◽  
pp. 224-240 ◽  
Author(s):  
Robin R. Murphy ◽  
Erika Rogers

This paper describes current work on a cooperative teleassistance system for semiautonomous control of mobile robots. This system combines a robot architecture for limited autonomous perceptual and motor control with a knowledge-based operator assistant that provides strategic selection and enhancement of relevant data. It extends recent developments in artificial intelligence in modeling the role of visual interactions in problem solving for application to an interface permitting the human and the remote robot to cooperate in cognitively demanding tasks such as recovering from execution failures, mission planning, and learning. The design of the system is presented, together with a number of exception-handling scenarios that were constructed as a result of experiments with actual sensor data collected from two mobile robots.


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142091376 ◽  
Author(s):  
Yanyan Dai ◽  
Suk Gyu Lee

A combination of the Internet of Things and multiple robots with sensors has been an attractive research topic in recent years. This article proposes an Internet of Robotic Things system structure to monitor events, fuse sensor data, use local robots to determine the best action, and then act to control multiple mobile robots. The Internet of Robotic Things system includes two main layers: the host controller layer and the multiple-robots layer. The controller layer communicates with the multiple-robots layer through a Wi-Fi module. The Internet of Robotic Things system helps to complete five tasks: localizing robots, planning paths, avoiding obstacles, moving stably to waypoints, and creating a map. Based on depth data from a depth camera and the robot posture, a mapping algorithm is proposed to create a map. Simultaneous localization and mapping (SLAM) based on light detection and ranging sensor data and Google Cartographer is also carried out in this article. A fuzzy sliding mode tracking control method is proposed for each robot to guarantee that the robot moves stably. Simulation results show the effectiveness of the proposed algorithm and are compared with the experimental results. In the experiment, one host computer and two Kobuki mobile robots with light detection and ranging and depth camera sensors are integrated as an Internet of Robotic Things system. The two robots successfully localize themselves and avoid obstacles, and the follower robot simultaneously builds a map.
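
As a structural illustration of fuzzy sliding mode tracking for a differential-drive robot, the sketch below computes one control update from pose and velocity references. The sliding surfaces, the fuzzy gain schedule, and the gains are illustrative stand-ins, not the controller derived in the article.

```python
# One fuzzy sliding mode tracking update for a differential-drive (unicycle) robot.
# The structure (sliding variables + fuzzy-scheduled reaching gains) is illustrative.
import math

def fuzzy_gain(s: float, k_min: float = 0.2, k_max: float = 1.5) -> float:
    """Raise the switching gain smoothly when the sliding variable is large
    (a one-rule 'fuzzy' schedule, used here to soften chattering)."""
    w = min(abs(s) / 0.5, 1.0)           # membership of |s| in 'large'
    return k_min + w * (k_max - k_min)

def tracking_control(x, y, th, xr, yr, thr, vr, wr, lam=1.0):
    """Compute (v, w) toward a reference pose (xr, yr, thr) and velocities (vr, wr)."""
    # Tracking errors expressed in the robot frame.
    ex = math.cos(th) * (xr - x) + math.sin(th) * (yr - y)
    ey = -math.sin(th) * (xr - x) + math.cos(th) * (yr - y)
    eth = thr - th

    # Sliding variables combining position and heading errors.
    s_v = ex
    s_w = eth + lam * ey

    # Reaching law with fuzzy-scheduled gains; tanh() replaces sign() to limit chattering.
    v = vr * math.cos(eth) + fuzzy_gain(s_v) * math.tanh(s_v)
    w = wr + lam * vr * ey + fuzzy_gain(s_w) * math.tanh(s_w)
    return v, w

print(tracking_control(0.0, 0.0, 0.0, 0.5, 0.1, 0.2, vr=0.3, wr=0.0))
```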

