Application of the Bridge Inspection Robotic Camera to Monitoring

2018
Vol 56 (1)
pp. 100-105
Author(s):  
Y. Fujiwara
K. Umezu
K. Tamaki
K. Tanno
10.29007/zw9k
2020
Author(s):  
Kazuhide Nakata
Kazuki Umemoto
Kenji Kaneko
Ryusuke Fujisawa

This study addresses the development of a robot for the inspection of aging bridges. The robot is suspended by wires, and its movement is realized by controlling the wire lengths. It carries a high-definition camera and aims to detect cracks on the concrete surface of the bridge with this camera. Inspection methods using unmanned aerial vehicles (UAVs) have been proposed; compared to a UAV, however, the wire-suspended robot system has the advantages of insensitivity to wind and the ability to carry heavy equipment. This makes it possible to install a high-definition camera and a cleaning function to find cracks that would otherwise be difficult to detect because of dirt.
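The abstract does not give the control equations, but the idea of localizing a robot from its wire lengths can be sketched with a simple planar model. The sketch below assumes two wire anchor points on the bridge deck at (0, 0) and (span, 0), with y measured downward; the function name and geometry are illustrative, not the authors' actual controller.

```python
import math

def wire_position(l1, l2, span):
    """Recover the (x, y) position of a wire-suspended robot (2D sketch).

    Assumes anchors at (0, 0) and (span, 0), y positive downward.
    l1 and l2 are the lengths of the two suspension wires.
    Solves the intersection of two circles of radii l1 and l2.
    """
    # x-coordinate from the difference of the two circle equations
    x = (l1**2 - l2**2 + span**2) / (2.0 * span)
    y_sq = l1**2 - x**2
    if y_sq < 0:
        raise ValueError("wire lengths are inconsistent with the span")
    return x, math.sqrt(y_sq)
```

For example, with a 10 m span and wire lengths 5 m and √65 m, the robot hangs at (3, 4); conversely, commanding a target position amounts to inverting these relations to obtain the required wire lengths.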


Author(s):  
Nobuhide ISHIDA
Shigeo HIROSE
Michele GUARNIERI

Author(s):  
Martin Wagner
Andreas Bihlmaier
Hannes Götz Kenngott
Patrick Mietkowski
Paul Maria Scheikl
...  

Abstract
Background: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons.
Methods: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon’s learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR.
Results: The duration of each operation decreased with the robot’s increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.
Conclusions: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon’s needs.
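The "interpret according to a knowledge base, then act" pipeline can be illustrated with a deliberately minimal sketch: a knowledge base maps a surgical phase to per-instrument attention weights, and the camera aims at the weighted centroid of the tracked instrument tips. The phase names, instrument names, and weights below are invented for illustration and are not from the paper.

```python
# Hypothetical knowledge base: surgical phase -> instrument weights
# (weights in each phase sum to 1). Purely illustrative values.
KNOWLEDGE_BASE = {
    "dissection": {"scissors": 0.8, "grasper": 0.2},
    "suturing":   {"needle_driver": 0.6, "grasper": 0.4},
}

def camera_target(phase, tips):
    """Return the 2D point the camera should center on.

    phase: key into the knowledge base (the current surgical context).
    tips:  dict mapping instrument name -> (x, y) tip position.
    """
    weights = KNOWLEDGE_BASE[phase]
    wx = sum(weights[name] * pos[0] for name, pos in tips.items())
    wy = sum(weights[name] * pos[1] for name, pos in tips.items())
    return wx, wy  # weights sum to 1, so no normalization is needed
```

In a learning system like the one described, the fixed weights would instead be fitted from recorded human camera guidance, which is what allows the behavior to adapt to a specific task or surgeon.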


Author(s):  
Anil Kumar Agrawal ◽  
Glenn Washer ◽  
Sreenivas Alampalli ◽  
Xu Gong ◽  
Ran Cao

2021
Vol 102 (4)
Author(s):  
Son Thanh Nguyen
Hung Manh La

2010
Vol 15 (4)
pp. 439-444
Author(s):  
Robert A. P. Sweeney
John F. Unsworth
