Vision-Based Intelligent Perceiving and Planning System of a 7-DoF Collaborative Robot

2021 ◽  
Vol 2021 ◽  
pp. 1-25
Author(s):  
Linfeng Xu ◽  
Gang Li ◽  
Peiheng Song ◽  
Weixiang Shao

In this paper, an intelligent perceiving and planning system (IPPS) based on deep learning is proposed for a collaborative robot consisting of a 7-DoF (7-degree-of-freedom) manipulator, a three-finger robot hand, and a vision system. The lack of intelligence has long limited the application of collaborative robots, and a system that realizes the “eye-brain-hand” process is crucial for truly intelligent robots. In this research, a more stable and accurate perceiving process was developed: a well-designed camera system serving as the vision system and a new hand-tracking method were proposed for operation perceiving and for building the recording set, improving applicability, and a visual process was designed to improve the accuracy of environment perceiving. In addition, a faster and more precise planning process was proposed. Deep learning based on a new CNN (convolutional neural network) was designed to realize intelligent grasp planning for the robot hand, and a new trajectory planning method for the manipulator was proposed to improve efficiency. The performance of the IPPS was tested in simulations and in experiments in a real environment. The results show that the IPPS can effectively realize intelligent perceiving and planning, giving collaborative robots higher intelligence and greater applicability.

Author(s):  
Giuseppe Placidi ◽  
Danilo Avola ◽  
Luigi Cinque ◽  
Matteo Polsinelli ◽  
Eleni Theodoridou ◽  
...  

Abstract: Virtual Glove (VG) is a low-cost computer vision system that uses two orthogonal LEAP Motion sensors to provide detailed 4D hand tracking in real time. VG has many potential applications in human-system interaction, such as remote control of machines or tele-rehabilitation. An innovative and efficient data-integration strategy for VG, based on velocity calculation, is proposed for selecting data from one of the LEAPs at each time instant. When a joint of the hand model is obscured to a LEAP, its position is guessed and tends to flicker. Since VG uses two LEAP sensors, two spatial representations are available for each joint at every moment: the method selects the one with the lower velocity at each time instant. Choosing the smoother trajectory stabilizes VG and optimizes precision, reduces occlusions (parts of the hand, or handled objects, obscuring other hand parts) and, when both sensors see the same joint, reduces the number of outliers produced by hardware instabilities. The strategy is evaluated experimentally in terms of outlier reduction with respect to the data-selection strategy previously used in VG, and the results are reported and discussed. In the future, an objective test set has to be imagined, designed, and realized, with the help of external precise positioning equipment, to allow a quantitative and objective evaluation of the gain in precision and, possibly, of the intrinsic limitations of the proposed strategy. Moreover, advanced Artificial Intelligence-based (AI-based) real-time data-integration strategies, specific to VG, will be designed and tested on the resulting dataset.
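The per-joint selection rule described in this abstract (at each instant, keep the reading from whichever sensor reports the lower joint velocity) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the array shapes, joint ordering, and `dt` sampling interval are assumptions:

```python
import numpy as np

def select_joint_positions(prev_a, curr_a, prev_b, curr_b, dt):
    """For each joint of the hand model, keep the reading from the LEAP
    sensor whose joint moved with the lower instantaneous velocity,
    i.e., the smoother (less flickering) trajectory.

    All arrays have shape (n_joints, 3); dt is the sampling interval.
    """
    # Instantaneous speed of each joint as seen by each sensor
    v_a = np.linalg.norm(curr_a - prev_a, axis=1) / dt
    v_b = np.linalg.norm(curr_b - prev_b, axis=1) / dt
    # Per-joint boolean mask: True where sensor A gives the smoother motion
    use_a = v_a <= v_b
    return np.where(use_a[:, None], curr_a, curr_b)
```

A joint that flickers on one sensor (e.g., because it is occluded and its position is being guessed) shows a large frame-to-frame velocity there, so the rule automatically falls back to the other sensor for that joint.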


2021 ◽  
Vol 11 (9) ◽  
pp. 4269
Author(s):  
Kamil Židek ◽  
Ján Piteľ ◽  
Michal Balog ◽  
Alexander Hošovský ◽  
Vratislav Hladký ◽  
...  

The assisted assembly of customized products supported by collaborative robots combined with mixed-reality devices is a current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing the assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of the assembled parts during manual assembly are marked and checked by a convolutional neural network (CNN) model. The CNN was trained using a new approach based on virtual training samples with single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial processing unit with a high-resolution camera sensor. The embedded device redistributes the detected part positions and orientations to the mixed-reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.


Materials ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 67
Author(s):  
Rodrigo Pérez Ubeda ◽  
Santiago C. Gutiérrez Rubert ◽  
Ranko Zotovic Stanisic ◽  
Ángel Perles Ivars

The rise of collaborative robots urges their consideration for different industrial tasks such as sanding. In this context, the purpose of this article is to demonstrate the feasibility of using collaborative robots in processing operations such as orbital sanding. For the demonstration, the tools and working conditions were adjusted to the capacity of the robot, and materials with different characteristics were selected: aluminium, steel, brass, wood, and plastic. An inner/outer control-loop strategy was used, complementing the robot’s motion control with an outer force control loop. After carrying out an exploratory design of experiments, it was observed that the operation can be performed on all materials, without destabilising the control, with a mean force error of 0.32%. Compared with industrial robots, collaborative ones can perform the same sanding task with similar results. An important outcome is that, contrary to what might be thought, an increase in the applied force does not guarantee a better finish. In fact, an increase in the feed rate does not produce significant variation in the finish (less than 0.02 µm); therefore, the process is in a “saturation state” and the feed rate can be increased to raise productivity.
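The inner/outer loop strategy mentioned above can be illustrated with a toy example: the outer force loop converts the force-tracking error into a small position correction along the contact normal, which is then handed to the robot's inner position controller. This is a minimal proportional sketch under assumed gain and contact-stiffness values, not the authors' controller:

```python
def force_control_step(z, f_measured, f_target, kp=0.0005, dt=0.01):
    """One cycle of the outer force loop.

    The force error (N) is turned into a small offset (m) along the
    contact normal; the returned position is the new setpoint for the
    robot's inner position controller. Gain kp and period dt are
    illustrative values, not taken from the paper.
    """
    return z + kp * (f_target - f_measured) * dt
```

Against a stiff contact (force roughly proportional to penetration depth), iterating this update drives the measured force to the target; the proportional gain trades convergence speed against stability of the inner loop.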


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.


Author(s):  
Ahmad Jahanbakhshi ◽  
Yousef Abbaspour-Gilandeh ◽  
Kobra Heidarbeigi ◽  
Mohammad Momeny

Author(s):  
Robert Bogue

Purpose – This paper aims to provide a European perspective on the collaborative robot business and to consider the factors governing future market development. Design/methodology/approach – Following an introduction, the paper first describes the collaborative robots launched recently by European manufacturers and their applications. It then discusses major European research activities and finally considers the factors stimulating the market. Findings – The article shows that collaborative robots are being commercialised by the major European robot manufacturers as well as by several smaller specialists. Although most have low payload capacities, they are inexpensive and offer a number of operational benefits, making them well suited to a range of existing and emerging applications. Europe has a strong research base, and several EU-funded programmes aim to stimulate collaborative robot development and use. Rapid market development is anticipated, driven mainly by applications in electronic product manufacture and assembly; new applications in the automotive industry; uses by small- to medium-sized manufacturers; and companies seeking robots to support agile production methods. Originality/value – This paper provides a timely review of the rapidly developing European collaborative robot industry.


2021 ◽  
Vol 11 (13) ◽  
pp. 6017
Author(s):  
Gerivan Santos Junior ◽  
Janderson Ferreira ◽  
Cristian Millán-Arias ◽  
Ramiro Daniel ◽  
Alberto Casado Junior ◽  
...  

Cracks are pathologies whose appearance in ceramic tiles can cause various kinds of damage, since the coating system loses its water-tightness and impermeability functions. Moreover, a detached ceramic plate not only exposes the building structure but can also strike people moving around the building. Manual inspection is the most common method for addressing this problem; however, it depends on the knowledge and experience of those performing the analysis and demands a long time and a high cost to map the entire area. This work focuses on automated optical inspection to find faults in ceramic tiles, using deep learning to segment cracks in ceramic images. We propose a deep learning architecture for segmenting cracks in facades that includes an image pre-processing step. We also propose the Ceramic Crack Database, a set of images for segmenting defects in ceramic tiles. The proposed model can adequately identify a crack even when it is close to or within the grout.


Author(s):  
Fahad Iqbal Khawaja ◽  
Akira Kanazawa ◽  
Jun Kinugawa ◽  
Kazuhiro Kosuge

Human-Robot Interaction (HRI) for collaborative robots has recently become an active research topic. Collaborative robots assist human workers in their tasks and improve their efficiency, but the worker should also feel safe and comfortable while interacting with the robot. In this paper, we propose a human-following motion planning and control scheme for a collaborative robot that supplies the necessary parts and tools to a worker in a factory assembly process. In the proposed scheme, a 3D sensing system measures the skeletal data of the worker. At each sampling time of the sensing system, an optimal delivery position is estimated from the real-time worker data, and the future positions of the worker are predicted as probabilistic distributions. A Model Predictive Control (MPC)-based trajectory planner calculates a robot trajectory that supplies the required parts and tools to the worker while following the worker's predicted future positions. We installed the proposed scheme on a collaborative robot system with a 2-DoF planar manipulator. Experimental results show that the proposed scheme enables the robot to provide assistance at any time to a worker moving around the workspace while ensuring the worker's safety and comfort.
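A full MPC formulation is beyond the scope of an abstract, but the core idea of a receding-horizon planner that tracks predicted worker positions while penalising control effort can be sketched as follows. This toy example solves each step's quadratic trade-off in closed form and clips the result to a speed limit; it is an illustration under assumed weights and point-robot dynamics, not the authors' planner:

```python
import numpy as np

def plan_trajectory(x0, predicted, v_max, dt, w_track=1.0, w_effort=0.1):
    """Toy receding-horizon planner for a point robot.

    At each step, choose the velocity v minimising
        w_track * ||x + v*dt - target||^2 + w_effort * ||v||^2
    (closed-form minimiser below), then clip to the speed limit v_max.
    Returns the planned positions, one per predicted worker position.
    """
    x = np.asarray(x0, dtype=float)
    traj = []
    for target in predicted:
        # Unconstrained minimiser of the one-step quadratic cost
        v = w_track * dt * (np.asarray(target) - x) / (w_track * dt**2 + w_effort)
        speed = np.linalg.norm(v)
        if speed > v_max:
            v = v * (v_max / speed)  # respect the robot's speed limit
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj)
```

The effort weight keeps the motion smooth (important for worker comfort), while the speed clip stands in for the safety constraints a real MPC would impose explicitly.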


Author(s):  
Dimitrios Chrysostomou ◽  
Antonios Gasteratos

The production of 3D models has been a popular research topic for a long time, and important progress has been made since the early days. Over the last decades, vision systems have become established as the standard and one of the most efficient sensorial assets in industrial and everyday applications. Because vision provides several vital attributes, many applications adopt novel vision systems in domestic, working, industrial, and other environments. To achieve such goals, a vision system should robustly and effectively reconstruct the 3D surface of the working space. This chapter discusses different methods for capturing the three-dimensional surface of a scene. Geometric approaches to three-dimensional scene reconstruction are generally based on knowledge of the scene structure derived from the camera’s internal and external parameters. Another class of methods encompasses the photometric approaches, which evaluate pixel intensities to infer the three-dimensional scene structure. The third and final category, the so-called real-aperture approaches, includes methods that exploit the physical properties of the visual sensors during image acquisition in order to recover the depth information of a scene.
