Design of a three-dimensional capacitor-based six-axis force sensor for human-robot interaction

2021 ◽  
pp. 112939
Author(s):  
Zexia He ◽ 
Tao Liu

Micromachines ◽ 
2018 ◽  
Vol 9 (11) ◽  
pp. 576 ◽  
Author(s):  
Gaoyang Pang ◽  
Jia Deng ◽  
Fangjinhua Wang ◽  
Junhui Zhang ◽  
Zhibo Pang ◽  
...  

In industrial manufacturing, robots are increasingly required to work alongside human counterparts on special occasions where human workers share their skills with robots. Intuitive human–robot interaction brings increasing safety challenges, which can be addressed with sensor-based active control technology. In this article, we designed and fabricated a three-dimensional flexible robot skin made of a piezoresistive nanocomposite to enhance the safety performance of collaborative robots. The robot skin endowed a YuMi robot with tactile perception similar to that of human skin. The sensing unit developed for the robot skin showed a one-to-one correspondence between force input and resistance output (percentage change in impedance) in the range of 0–6.5 N. Furthermore, the calibration results indicated that the sensing unit offers a maximum force sensitivity (percentage change in impedance per newton of force) of 18.83% N⁻¹ when loaded with an external force of 6.5 N. The fabricated sensing unit showed good reproducibility under cyclic loading (0–5.5 N) at a frequency of 0.65 Hz for 3500 cycles. In addition, to suppress bypass crosstalk in the robot skin, we designed a readout circuit for sampling the tactile data. Moreover, experiments were conducted to estimate the contact/collision force between an object and the robot in real time. The experimental results showed that the implemented robot skin provides an efficient approach for natural and safe human–robot interaction.
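
As a rough illustration of the sensitivity metric reported above (percentage change in impedance per newton of applied force), the following minimal Python sketch computes it from a calibration table. The force–impedance values and function names are hypothetical placeholders for illustration, not data or code from the paper.

```python
import numpy as np

# Hypothetical calibration samples: applied force (N) and measured impedance
# (ohms). The values below are illustrative placeholders, not data from the
# paper; only the 0-6.5 N range matches the abstract.
force = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.5])                     # N
impedance = np.array([1000.0, 962.0, 908.0, 835.0, 742.0, 628.0, 455.0])  # ohm

def percent_impedance_change(z, z0):
    """Percentage change in impedance relative to the unloaded value z0."""
    return (z0 - z) / z0 * 100.0

def force_sensitivity(force, impedance):
    """Local sensitivity d(% impedance change)/dF, in % per newton."""
    delta = percent_impedance_change(impedance, impedance[0])
    return np.gradient(delta, force)  # handles non-uniform force spacing

sensitivity = force_sensitivity(force, impedance)
print(f"Sensitivity at {force[-1]:.1f} N: {sensitivity[-1]:.2f} %/N")
```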


Sensors ◽  
2017 ◽  
Vol 17 (6) ◽  
pp. 1294 ◽  
Author(s):  
Victor Grosu ◽  
Svetlana Grosu ◽  
Bram Vanderborght ◽  
Dirk Lefeber ◽  
Carlos Rodriguez-Guerrero

2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration; it bridges the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot’s surroundings, which provides limited situational awareness and leaves the operator stressed and fatigued. This research aims to improve the efficiency and naturalness of interaction in human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, which is implemented and verified for rescue robotics. On the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then transformed into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is employed to transmit the NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In more structured regions, the operator can specify a path or even a target point; the robot then follows the path or navigates to the target point autonomously. In other words, these latter two modes rely more on the robot’s autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so that the high-level intelligence of the human and the accurate sensing and modelling ability of the robot are integrated as a whole. Although the method is proposed for rescue robots, it can also be applied to other out-of-sight, teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgical, agricultural, and military operations.
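
The core of the mapping-to-rendering pipeline (voxelize the point cloud, fit one Gaussian per cell, render each cell as a parameterized ellipsoid) can be sketched as follows. This is a minimal illustration assuming numpy; the function names and parameters (build_ndt_cells, cell_size, min_points, scale) are assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

def build_ndt_cells(points, cell_size=1.0, min_points=5):
    """Voxelize a point cloud and fit one Gaussian (mean, covariance) per cell.

    Sketch of a 3D normal distributions transform (NDT) map as described in
    the abstract; cell_size and min_points are assumed parameters.
    """
    keys = np.floor(points / cell_size).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)

    ndt = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < min_points:
            continue  # too few points for a reliable covariance estimate
        ndt[key] = (pts.mean(axis=0), np.cov(pts, rowvar=False))
    return ndt

def ellipsoid_parameters(mean, cov, scale=2.0):
    """Turn one NDT cell into an ellipsoid (center, semi-axes, rotation).

    Eigendecomposition of the covariance yields the principal axes; `scale`
    sets how many standard deviations the rendered ellipsoid spans.
    """
    eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
    radii = scale * np.sqrt(np.maximum(eigvals, 0.0))  # semi-axis lengths
    return mean, radii, eigvecs                        # rotation = eigvecs

# Example: random points stand in for a LiDAR scan; one cell's ellipsoid.
points = np.random.rand(1000, 3) * 5.0
cells = build_ndt_cells(points, cell_size=1.0)
center, radii, rotation = ellipsoid_parameters(*next(iter(cells.values())))
```

Because each cell is summarized by just a mean and a covariance rather than its raw points, sending the map over a wireless link is comparatively lightweight, which is consistent with the incremental transmission scheme the abstract describes.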


2014 ◽  
Author(s):  
Mitchell S. Dunfee ◽  
Tracy Sanders ◽  
Peter A. Hancock
