A method of automatic sensor placement for robot vision in inspection tasks

Author(s):  
S.Y. Chen ◽  
Y.F. Li
2020 ◽  
Vol 14 (1) ◽  
pp. 69-81
Author(s):  
C.H. Li ◽  
Q.W. Yang

Background: Structural damage identification is an important subject in civil, mechanical, and aerospace engineering, as reflected in recent patents. Optimal sensor placement is one of the key problems to be solved in structural damage identification.

Methods: This paper presents a simple and convenient algorithm for optimizing sensor locations for structural damage identification. Unlike other published algorithms, the optimization procedure for sensor placement is divided into two stages. The first stage determines the key parts of the whole structure by their contribution to the global flexibility perturbation. The second stage places sensors on the nodes associated with those key parts to monitor possible damage more efficiently. With the sensor locations determined by the proposed optimization process, structural damage can be readily identified using the incomplete modes yielded by these optimized sensor measurements. In addition, an Improved Ridge Estimate (IRE) technique is proposed in this study to effectively resist the data errors due to modal truncation and measurement noise. Two truss structures and a frame structure are used as examples to demonstrate the feasibility and efficiency of the presented algorithm.

Results: The numerical results show that structural damage can be successfully detected by the proposed method using the partial modes yielded by the optimal measurements at a 5% noise level.

Conclusion: The proposed method is simple to implement and effective for structural damage identification.
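The two-stage placement and the ridge-regularized identification described in the abstract can be illustrated with a minimal NumPy sketch. The function names, the node bookkeeping, and the plain ridge solve below are illustrative assumptions, not the paper's actual IRE formulation:

```python
import numpy as np

def rank_by_flexibility_contribution(flex_perturbations):
    """Stage 1 (sketch): rank structural elements by their contribution
    to the global flexibility perturbation, largest contribution first."""
    return np.argsort(flex_perturbations)[::-1]

def select_sensor_nodes(element_nodes, ranked_elements, n_sensors):
    """Stage 2 (sketch): place sensors on the nodes associated with the
    highest-ranked key elements, skipping nodes already instrumented."""
    chosen = []
    for elem in ranked_elements:
        for node in element_nodes[elem]:
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == n_sensors:
                return chosen
    return chosen

def ridge_estimate(A, b, lam=1e-2):
    """Ridge-regularized least squares, x = (A^T A + lam*I)^{-1} A^T b,
    which damps the effect of modal truncation and measurement noise.
    (A standard ridge solve, shown in place of the paper's improved IRE.)"""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

A usage pattern would be to rank elements from a flexibility-perturbation analysis, instrument the nodes of the top-ranked elements, and then solve the resulting (noisy, truncated) sensitivity equations for the damage parameters with the ridge estimate.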


Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software—machines that try to recreate billions of years of evolution with some of the abilities and characteristics of living beings. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain capacities similar to those of humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.


2021 ◽  
Vol 11 (9) ◽  
pp. 4269
Author(s):  
Kamil Židek ◽  
Ján Piteľ ◽  
Michal Balog ◽  
Alexander Hošovský ◽  
Vratislav Hladký ◽  
...  

The assisted assembly of customized products, supported by collaborative robots combined with mixed reality devices, is a current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing an assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task, with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of the assembled parts during manual assembly are marked and checked by a convolutional neural network (CNN) model. Training of the CNN was based on a new approach using virtual training samples with single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial processing unit with a high-resolution camera sensor. The embedded device redistributes the detected part positions and orientations to the mixed reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.
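The data flow described above, where the embedded unit broadcasts detected part poses to the mixed reality headset and the collaborative robot, can be sketched as a simple message schema. The field names and JSON payload format below are illustrative assumptions, not the article's actual interface:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PartPose:
    """Detected pose of one assembled part, as the embedded processing
    unit might report it (fields are hypothetical, for illustration)."""
    part_id: str       # identifier of the recognized cam-switch part
    x_mm: float        # detected position in the work-cell frame
    y_mm: float
    angle_deg: float   # in-plane orientation from the CNN detector
    confidence: float  # detection confidence score

def encode_detections(poses):
    """Serialize a batch of detections for broadcast to the mixed
    reality device and the collaborative robot controller."""
    return json.dumps({"detections": [asdict(p) for p in poses]})

def decode_detections(payload):
    """Reconstruct PartPose objects on the receiving side."""
    data = json.loads(payload)
    return [PartPose(**d) for d in data["detections"]]
```

A shared, serializable schema like this lets the headset overlay assembly guidance and the robot plan its pick poses from the same detection stream.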

