Visual Sensor Fusion Based Autonomous Robotic System for Assistive Drinking

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5419
Author(s):  
Pieter Try ◽  
Steffen Schöllmann ◽  
Lukas Wöhle ◽  
Marion Gebhard

People with severe motor impairments such as tetraplegia are restricted in activities of daily living (ADL) and depend on continuous human assistance. Assistive robots perform physical tasks in the context of ADLs to support people in need of assistance. In this work, a sensor fusion algorithm and a robot control algorithm are proposed that localize the user’s mouth and autonomously navigate a robot arm for the assistive drinking task. The sensor fusion algorithm is implemented in a visual tracking system that consists of a 2-D camera and a single-point time-of-flight distance sensor. The algorithm uses computer vision to combine camera images and distance measurements and achieve reliable localization of the user’s mouth. The robot control algorithm uses visual servoing to navigate a robot-held drinking cup to the mouth and establish physical contact with the lips. The system features an abort command triggered by turning the head, as well as unambiguous tracking of multiple faces, both of which enable safe human–robot interaction. A study with nine able-bodied test subjects shows that the proposed system reliably localizes the mouth and autonomously navigates the cup to establish physical contact with it.
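The abstract's core fusion idea, combining a 2-D face detection with a single-point range measurement, can be sketched as a pinhole back-projection. This is a generic illustration, not the authors' implementation; the function name, the intrinsics, and the assumption that the ToF sensor is aligned with the camera's optical axis are all mine.

```python
import numpy as np

def localize_mouth(mouth_px, distance_m, fx, fy, cx, cy):
    """Back-project a detected mouth pixel into a 3-D camera-frame point
    by scaling the pixel's viewing ray with the ToF range measurement.

    mouth_px   : (u, v) pixel coordinates of the mouth from a 2-D detector
    distance_m : range from the single-point time-of-flight sensor,
                 assumed aligned with the camera's optical axis
    fx, fy, cx, cy : pinhole camera intrinsics (pixels)
    """
    u, v = mouth_px
    # Normalized image-plane ray through the mouth pixel.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Scale the ray so its z-component equals the measured depth.
    return ray * distance_m

# Example: a mouth detected at the image center lies on the optical axis.
p = localize_mouth((320, 240), 0.45, fx=600, fy=600, cx=320, cy=240)
```

The resulting 3-D point could then serve as the target for the visual-servoing loop that moves the cup.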

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yipeng Zhu ◽  
Tao Wang ◽  
Shiqiang Zhu

Purpose
This paper aims to develop a robust person tracking method for human following robots. The tracking system adopts the multimodal fusion results of millimeter wave (MMW) radars and monocular cameras for perception. A prototype human following robot is developed and evaluated using the proposed tracking system.

Design/methodology/approach
Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information for objects in view, but cannot provide spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed for the robot to follow the target person at an expected distance.

Findings
A series of experiments under different circumstances was carried out to validate the fusion-based tracking method. Experimental results show that the average tracking errors are around 0.1 m. It was also found that the robot can handle different situations, overcome short-term interference, and continually track and follow the target person.

Originality/value
This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlapping is handled well with the help of the velocity information from the radars. Compared with other state-of-the-art approaches, the sensor fusion method is cost-effective and requires no additional tags to be carried by the person. Its stable performance shows good application prospects for human following robots.
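The complementarity described above, radar supplies range and radial velocity while the camera supplies a semantic detection, suggests a simple association step: keep the radar returns whose azimuth falls inside the person's bounding box and take the nearest one as the target. The sketch below is my own illustration under that assumption (radar extrinsically aligned with the camera); the function names and the proportional follow controller are hypothetical, not the paper's design.

```python
import numpy as np

def fuse_radar_camera(radar_pts, bbox, fx, cx):
    """Associate sparse MMW radar returns with the camera's person
    detection, returning the nearest return inside the detection.

    radar_pts : list of (range_m, azimuth_rad, radial_vel) returns
    bbox      : (u_min, u_max) horizontal pixel extent of the person
    fx, cx    : horizontal camera intrinsics (radar assumed aligned
                with the camera frame)
    """
    u_min, u_max = bbox
    # Pixel bounds -> azimuth interval via the pinhole model.
    az_lo = np.arctan((u_min - cx) / fx)
    az_hi = np.arctan((u_max - cx) / fx)
    inside = [(r, az, v) for r, az, v in radar_pts if az_lo <= az <= az_hi]
    if not inside:
        return None  # no radar support for this detection
    return min(inside, key=lambda p: p[0])

def follow_cmd(range_m, desired=1.5, kp=0.8):
    """Proportional speed command that regulates the following
    distance toward the desired value."""
    return kp * (range_m - desired)

# Example: one return falls inside the bounding box, one outside.
hit = fuse_radar_camera([(2.0, 0.0, -0.1), (5.0, 0.5, 0.0)],
                        bbox=(200, 440), fx=600, cx=320)
```

The radial-velocity component of the selected return is what lets a tracker distinguish the target from a briefly occluding passer-by, matching the paper's claim about handling overlap.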


2014 ◽  
Vol 8 (2) ◽  
pp. 216-221 ◽  
Author(s):  
Jianhai Han ◽  
◽  
Xiangpan Li ◽  
Qi Qin

A two-wheeled, self-balancing robot is proposed that uses a 6-axis MEMS sensor (MPU6050) to measure its posture. The sensor, which integrates a 3-axis gyroscope and a 3-axis accelerometer, outputs the inclination of the robot through a sensor fusion algorithm. A handheld remote controller sends commands to the robot such as forward, backward, and turning around. From the inclination and the orientation commands, a 16-bit MCU running a PID control algorithm calculates the control voltages required for the motors to adjust the robot’s posture and keep the body balanced. In this paper, the principle of the sensor fusion algorithm is fully described, and its effectiveness is verified through experiments. The experimental results show that the proposed robot is practical and able to balance using inexpensive MEMS sensors.


Actuators ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 105
Author(s):  
Thinh Huynh ◽  
Minh-Thien Tran ◽  
Dong-Hun Lee ◽  
Soumayya Chakir ◽  
Young-Bok Kim

This paper proposes a new method to control the pose of a camera mounted on a two-axis gimbal system for visual servoing applications. In these applications, the camera should remain stable while its line-of-sight points at a target located within the camera’s field of view. One of the most challenging aspects of these systems is the coupling in the gimbal kinematics as well as in the imaging geometry. Such factors must be considered in the control system design process to achieve better control performance. The novelty of this study is that the couplings in both the mechanism’s kinematics and the imaging geometry are decoupled simultaneously by a new technique, so popular control methods can be implemented easily and good tracking performance is obtained. The proposed control configuration includes a calculation of the gimbal’s desired motion that takes the coupling influence into account, and a control law derived by the backstepping procedure. Simulation and experimental studies were conducted, and their results validate the efficiency of the proposed control system. Moreover, comparison studies were conducted between the proposed control scheme, image-based pointing control, and decoupled control. These comparisons demonstrate the superiority of the proposed approach, which requires fewer measurements and yields smoother transient responses.
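One source of the kinematic coupling mentioned above is that a pan rotation shifts the image by an amount that shrinks with the tilt angle, so an uncompensated pan command is too weak when the camera is tilted. The sketch below illustrates that single compensation with a simple proportional pointing law; it is not the paper's backstepping controller, and the function name and gain are mine.

```python
import numpy as np

def decoupled_rates(err_u, err_v, tilt, f, k=2.0):
    """Map the target's pixel error to pan/tilt rate commands,
    compensating the pan/tilt coupling: the pan axis moves the image
    horizontally by a factor of cos(tilt), so the pan command is
    divided by that factor before the proportional law is applied.

    err_u, err_v : pixel error of the target from the image center
    tilt         : current tilt angle (rad), away from +-pi/2
    f            : focal length in pixels
    """
    pan_rate = k * err_u / (f * np.cos(tilt))
    tilt_rate = k * err_v / f
    return pan_rate, tilt_rate

# Example: horizontal error only, camera level -> pure pan command.
pan, tilt = decoupled_rates(60.0, 0.0, tilt=0.0, f=600.0)
```

At zero tilt the compensation factor is 1 and the two axes are already independent; the division matters as the tilt grows.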


2015 ◽  
Vol 764-765 ◽  
pp. 1319-1323
Author(s):  
Rong Shue Hsiao ◽  
Ding Bing Lin ◽  
Hsin Piao Lin ◽  
Jin Wang Zhou

Pyroelectric infrared (PIR) sensors can detect human presence without requiring occupants to carry any device, and they are widely used for presence detection in home/office automation systems to improve energy efficiency. However, PIR detection is based on the movement of occupants, so for occupancy detection PIR sensors have an inherent limitation when occupants remain relatively still. Multisensor fusion technology takes advantage of redundant, complementary, or more timely information from different sensor modalities and is considered an effective approach to the uncertainty and unreliability problems of sensing. In this paper, we propose a simple multimodal sensor fusion algorithm that is well suited to running on the sensor nodes of wireless sensor networks. The inference algorithm was evaluated for detection accuracy and compared with multisensor fusion using dynamic Bayesian networks. The experimental results showed that a room-occupancy detection accuracy of 97% can be achieved, very close to that of the dynamic Bayesian networks.
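The abstract does not spell out the fusion rule or the second modality, so as an illustration only, here is one common lightweight scheme for covering the PIR blind spot with a door sensor: motion latches occupancy, and the latch is released only after a door event with no subsequent motion. All names and the choice of a door sensor are assumptions, not the paper's design.

```python
def fuse_occupancy(pir_motion, door_opened, was_occupied):
    """Rule-based multimodal fusion sketch for room occupancy.

    pir_motion   : True if the PIR sensor fired this interval
    door_opened  : True if the door sensor fired this interval
    was_occupied : latched occupancy state from the previous interval
    """
    if pir_motion:
        return True           # motion is direct evidence of presence
    if door_opened:
        return False          # occupant may have left; wait for new motion
    return was_occupied       # no new evidence: a still occupant stays latched

# Example: no motion and no door event keeps a still occupant "present".
state = fuse_occupancy(pir_motion=False, door_opened=False, was_occupied=True)
```

A rule table like this costs a few comparisons per sensing interval, which is what makes it feasible on resource-constrained wireless sensor nodes, in contrast to inference in a dynamic Bayesian network.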


2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Matthew Rhudy ◽  
Yu Gu ◽  
Jason Gross ◽  
Marcello R. Napolitano

Various methods of calculating the matrix square root were discussed and compared for an Unscented Kalman Filter (UKF) used as the nonlinear estimator within a Global Positioning System/Inertial Navigation System (GPS/INS) sensor fusion algorithm for attitude estimation. Specifically, the diagonalization method, the Schur method, the Cholesky method, and five different iterative methods were compared. Additionally, a different way of handling the matrix square root requirement, the square-root UKF (SR-UKF), was evaluated. The matrix square root calculations were compared on computational requirements and on sensor fusion attitude estimation performance, which was evaluated using flight data from an Unmanned Aerial Vehicle (UAV). The roll and pitch angle estimates were compared with independently measured values from a high-quality mechanical vertical gyroscope. This manuscript represents the first comprehensive analysis of matrix square root calculations in the context of the UKF. From this analysis, it was determined that the best overall matrix square root calculation for UKF applications, in terms of performance and execution time, is the Cholesky method.
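The matrix square root enters the UKF when generating sigma points: the columns of a factor S with S Sᵀ equal to the scaled covariance are used as offsets around the state mean. A minimal sketch using the Cholesky factorization, the method the study found best overall:

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate the 2n+1 UKF sigma points for mean x and covariance P,
    using the Cholesky factorization as the matrix square root of the
    scaled covariance (n + kappa) * P."""
    n = x.size
    # Lower-triangular S with S @ S.T == (n + kappa) * P.
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x]
    for i in range(n):
        pts.append(x + S[:, i])   # positive offset along column i
        pts.append(x - S[:, i])   # negative offset along column i
    return np.array(pts)

# Example: 2-state vector with identity covariance.
pts = sigma_points(np.zeros(2), np.eye(2), kappa=0.0)
```

Cholesky's appeal here matches the paper's conclusion: it is O(n³/3), numerically stable for the symmetric positive-definite covariances the UKF maintains, and far cheaper than diagonalization or Schur decomposition.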


2021 ◽  
Author(s):  
Langping An ◽  
Xianfei Pan ◽  
Ze Chen ◽  
Mang Wang ◽  
Zheming Tu ◽  
...  
