Center of Gravity Coordinates Estimation Based on an Overall Brightness Average Determined from the 3D Vision System

2021 ◽  
Vol 12 (1) ◽  
pp. 286
Author(s):  
Radovan Holubek ◽  
Marek Vagaš

In advanced manufacturing technologies (including complex automated processes) and their branches of industry, perception and evaluation of object parameters are the most critical factors. Many production machines and workplaces are currently equipped as standard with high-quality sensing devices based on vision systems to detect these parameters. This article focuses on designing an affordable and fully functional vision system based on two standard CCD cameras, with emphasis on the RS-232C communication interface between the two sides (the vision and robotic systems). To this end, we combine principles of the 1D photogrammetric calibration method from two known points at a stable point field with the packages available inside the processing unit of the vision system (such as filtering, edge enhancement and extraction, weak and robust smoothing, etc.). A correlation factor for the camera system (for reliable recognition of the sensed object) was set between 84 and 100%. Pilot communication between the two systems was then proposed and tested through CREAD/CWRITE commands according to protocol 3964R (used for the data transfer). Moreover, the system was validated by successful transmission of the data to the robotic system. Since research gaps in this field still exist and many vision systems are based on PC processing or intelligent cameras, our research aims to provide a price–performance solution for those who cannot regularly invest in the newest vision technology but still need to stay competitive.
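
For illustration, the 3964R exchange can be sketched in a few lines of Python with pyserial. This is a minimal sketch under stated assumptions (port name, baud rate, parity, and the payload format are placeholders; on the robot side the transfer is driven by the CREAD/CWRITE commands): the sender opens the block with STX, waits for a DLE acknowledgement, doubles any DLE bytes in the payload, and closes with DLE ETX followed by a block check character computed as the XOR of the transmitted block.

```python
# Minimal 3964R framing sketch over RS-232 (port, baud rate, and payload
# are assumed placeholders, not the work cell's actual configuration).
import serial

STX, DLE, ETX = 0x02, 0x10, 0x03

def bcc(block: bytes) -> int:
    """Block check character: XOR over the transmitted block plus DLE ETX."""
    acc = 0
    for b in block + bytes([DLE, ETX]):
        acc ^= b
    return acc

def send_3964r(port: serial.Serial, data: bytes) -> bool:
    port.write(bytes([STX]))                  # request to send
    if port.read(1) != bytes([DLE]):          # peer must acknowledge with DLE
        return False
    escaped = data.replace(bytes([DLE]), bytes([DLE, DLE]))  # DLE doubling
    port.write(escaped + bytes([DLE, ETX, bcc(escaped)]))    # end of block
    return port.read(1) == bytes([DLE])       # final acknowledgement

with serial.Serial("/dev/ttyS0", 9600, parity=serial.PARITY_EVEN,
                   timeout=0.5) as com:
    send_3964r(com, b"X=120.5;Y=88.2")        # e.g. detected object coordinates
```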

Author(s):  
Mirko Sgarbi ◽  
Valentina Colla ◽  
Gianluca Bioli

Computer vision is nowadays a key factor in many manufacturing processes. Among possible applications such as quality control, assembly verification and component tracking, robot guidance for pick-and-place operations can play an important role in increasing the automation level of production lines. While 3D vision systems are now emerging as valid solutions in bin-picking applications, where objects are randomly placed inside a box, 2D vision systems are widely and successfully adopted when objects are placed on a conveyor belt and the robot manipulator can grasp the object by exploiting only the 2D information. On the other hand, there are many real-world applications where the third dimension is required by the picking system. For example, the objects can differ in height, or they can be manually placed in front of the camera without any constraint on the distance between the object and the camera itself. Although a 3D vision system could represent a possible solution, 3D systems are more complex, more expensive and less compact than 2D vision systems. This chapter describes a monocular system useful for picking applications. It can estimate the 3D position of a single marker attached to the target object, assuming that the orientation of the object is approximately known.
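
As a rough illustration of how a single calibrated camera can recover the third dimension once the marker's physical size is known, the sketch below applies the pinhole model (the intrinsics and marker width are assumed example values, not taken from the chapter): the marker's apparent width in pixels yields depth by similar triangles, after which the pixel coordinates are back-projected to metric coordinates.

```python
# Illustrative monocular 3D position recovery for a marker of known size.
import numpy as np

fx, fy = 1200.0, 1200.0          # focal lengths in pixels (assumed)
cx, cy = 640.0, 480.0            # principal point (assumed)
MARKER_WIDTH_MM = 40.0           # known physical marker width (assumed)

def marker_position(u: float, v: float, width_px: float) -> np.ndarray:
    """3D position (mm) in the camera frame via the pinhole model."""
    z = fx * MARKER_WIDTH_MM / width_px      # similar triangles give depth
    x = (u - cx) * z / fx                    # back-project pixel to metric
    y = (v - cy) * z / fy
    return np.array([x, y, z])

print(marker_position(700.0, 520.0, 96.0))   # e.g. a detected marker blob
```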


2020 ◽  
Vol 1550 ◽  
pp. 022021
Author(s):  
Lei Qin ◽  
Zhenxing Zheng ◽  
Shipu Diao

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Qian-Bing Zhu ◽  
Bo Li ◽  
Dan-Dan Yang ◽  
Chi Liu ◽  
Shun Feng ◽  
...  

The challenges of developing neuromorphic vision systems inspired by the human eye come not only from how to recreate the flexibility, sophistication, and adaptability of animal systems, but also from how to do so with computational efficiency and elegance. Similar to biological systems, these neuromorphic circuits integrate the functions of image sensing, memory and processing into the device, and process continuous analog brightness signals in real time. High integration, flexibility and ultra-sensitivity are essential for practical artificial vision systems that attempt to emulate biological processing. Here, we present a flexible optoelectronic sensor array of 1024 pixels using a combination of carbon nanotubes and perovskite quantum dots as active materials for an efficient neuromorphic vision system. The device has an extraordinary sensitivity to light, with a responsivity of 5.1 × 10^7 A/W and a specific detectivity of 2 × 10^16 Jones, and demonstrates neuromorphic reinforcement learning by training the sensor array with a weak light pulse of 1 μW/cm².
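
A quick back-of-envelope calculation clarifies what a responsivity of 5.1 × 10^7 A/W implies for the weak training pulse quoted above. The sketch below assumes a pixel area of 100 μm × 100 μm (an illustrative value, not taken from the paper):

```python
# Photocurrent of one pixel under the 1 uW/cm^2 training pulse, I = R * E * A.
R = 5.1e7            # responsivity (A/W), from the abstract
E = 1e-6             # irradiance of the weak training pulse (W/cm^2)
area_cm2 = 1e-4      # assumed pixel area: 100 um x 100 um

optical_power = E * area_cm2          # ~1e-10 W reaching one pixel
photocurrent = R * optical_power      # responsivity converts watts to amperes
print(f"~{photocurrent * 1e3:.1f} mA per pixel")   # ~5.1 mA: easily measurable
```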


2021 ◽  
Vol 11 (9) ◽  
pp. 4269
Author(s):  
Kamil Židek ◽  
Ján Piteľ ◽  
Michal Balog ◽  
Alexander Hošovský ◽  
Vratislav Hladký ◽  
...  

The assisted assembly of customized products supported by collaborative robots combined with mixed reality devices is the current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing the assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of assembled parts during manual assembly are marked and checked by a convolutional neural network (CNN) model. Training of the CNN was based on a new approach using virtual training samples with single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial processing unit with a high-resolution camera sensor. The embedded device redistributes the detected position and orientation of parts to the mixed reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.
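
The detection step can be sketched with a stock instance-segmentation model. The snippet below is illustrative only: it substitutes a pretrained torchvision Mask R-CNN for the authors' CNN trained on virtual samples, and derives a part's position from the mask centroid and its in-plane orientation from the mask's principal axis.

```python
# Illustrative part position/orientation extraction from instance masks.
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def parts_pose(frame_rgb: np.ndarray, score_thr: float = 0.8):
    """Return (cx, cy, angle_deg) for each confidently detected instance."""
    x = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([x])[0]
    poses = []
    for mask, score in zip(out["masks"], out["scores"]):
        if score < score_thr:
            continue
        ys, xs = np.nonzero(mask[0].numpy() > 0.5)       # mask pixels
        cx, cy = xs.mean(), ys.mean()                    # part position
        pts = np.stack([xs - cx, ys - cy])
        # principal axis of the mask gives the in-plane orientation
        evals, evecs = np.linalg.eigh(np.cov(pts))
        major = evecs[:, np.argmax(evals)]
        poses.append((cx, cy, np.degrees(np.arctan2(major[1], major[0]))))
    return poses
```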


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4515
Author(s):  
Rinku Roy ◽  
Manjunatha Mahadevappa ◽  
Kianoush Nazarpour

Humans typically fixate on objects before moving their arm to grasp them. Patients with ALS can also select an object with their intact eye movement, but are unable to move their limb due to the loss of voluntary muscle control. Though several research works have already succeeded in generating the correct grasp type from brain measurements, fine control over an object with a grasp-assistive device (orthosis/exoskeleton/robotic arm) remains an open problem. Object orientation and object width are two important parameters for controlling the wrist angle and the grasp aperture of the assistive device to replicate a human-like stable grasp. Vision systems have already evolved to measure the geometrical attributes of an object to control grasping with a prosthetic hand. However, most existing vision systems are integrated with electromyography and require some amount of voluntary muscle movement to control the vision system. For that reason, those systems do not benefit users of brain-controlled assistive devices. Here, we implemented a vision system that can be controlled through the human gaze. We measured the vertical and horizontal electrooculogram signals and controlled the pan and tilt of a cap-mounted webcam to keep the object of interest in focus and at the centre of the picture. A simple 'signature' extraction procedure was also utilized to reduce the algorithmic complexity and system storage requirements. The developed device was tested with ten healthy participants. We approximated the object orientation and size and determined an appropriate wrist orientation angle and grasp aperture size within 22 ms. The combined accuracy exceeded 75%. Integrating the proposed system with a brain-controlled grasp-assistive device and increasing the number of grasps can offer more natural grasp manoeuvring for ALS patients.
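
A conceptual sketch of the gaze-driven pan-tilt loop is given below. The gains, dead band, and mechanical limits are all assumptions for illustration, not the authors' hardware: each control tick maps the horizontal and vertical EOG amplitudes to proportional pan and tilt steps.

```python
# Conceptual EOG-to-pan/tilt mapping (all constants are assumed values).
def eog_to_pan_tilt(h_eog_uv: float, v_eog_uv: float,
                    pan_deg: float, tilt_deg: float,
                    gain: float = 0.05, dead_band_uv: float = 20.0):
    """One control tick: saccade amplitude drives a proportional step."""
    if abs(h_eog_uv) > dead_band_uv:       # ignore drift below the dead band
        pan_deg += gain * h_eog_uv         # horizontal gaze shift -> pan
    if abs(v_eog_uv) > dead_band_uv:
        tilt_deg += gain * v_eog_uv        # vertical gaze shift -> tilt
    # clamp to the assumed mechanical range of the pan-tilt unit (+/- 60 deg)
    pan_deg = max(-60.0, min(60.0, pan_deg))
    tilt_deg = max(-60.0, min(60.0, tilt_deg))
    return pan_deg, tilt_deg

print(eog_to_pan_tilt(85.0, -30.0, 0.0, 0.0))   # example saccade sample
```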


2009 ◽  
Vol 19 ◽  
pp. s243-s249 ◽  
Author(s):  
Jun-Hyub PARK ◽  
Dong-Joong KANG ◽  
Myung-Soo SHIN ◽  
Sung-Jo LIM ◽  
Son-Cheol YU ◽  
...  

2018 ◽  
Vol 10 (8) ◽  
pp. 1298 ◽  
Author(s):  
Lei Yin ◽  
Xiangjun Wang ◽  
Yubo Ni ◽  
Kai Zhou ◽  
Jilong Zhang

Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters, so it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized by the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system by a global optimization method. A simulation experiment and a physical verification experiment were designed to validate the proposed algorithm. The experimental results show that the method is feasible: the rotation error of the camera extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
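
The unification step can be sketched as a chain of homogeneous transforms. The snippet below is illustrative and assumes the pairwise extrinsics [R|t] have already been calibrated and refined; it simply expresses every camera in the frame of camera 0, taken as the system reference (the paper's global optimization would further refine these chained estimates).

```python
# Chaining pairwise extrinsics to a common reference frame (illustrative).
import numpy as np

def to_homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T

def unify_to_reference(pairwise: list[np.ndarray]) -> list[np.ndarray]:
    """pairwise[i] maps points from camera i to camera i+1;
    returns, for each camera k, the transform camera k -> camera 0."""
    transforms = [np.eye(4)]                  # camera 0 is the reference
    for T in pairwise:
        transforms.append(transforms[-1] @ np.linalg.inv(T))
    return transforms

# Example with two assumed pairwise calibrations (pure 0.5 m translations):
T01 = to_homogeneous(np.eye(3), np.array([0.5, 0.0, 0.0]))
T12 = to_homogeneous(np.eye(3), np.array([0.5, 0.0, 0.0]))
for i, T in enumerate(unify_to_reference([T01, T12])):
    print(f"camera {i} -> reference, translation: {T[:3, 3]}")
```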


Forests ◽  
2018 ◽  
Vol 9 (1) ◽  
pp. 30 ◽  
Author(s):  
Andrzej Sioma ◽  
Jarosław Socha ◽  
Anna Klamerus-Iwan

1997 ◽  
Vol 119 (2) ◽  
pp. 151-160 ◽  
Author(s):  
Y. M. Zhang ◽  
R. Kovacevic

Seam tracking and weld penetration control are two fundamental issues in automated welding. Although seam tracking techniques have matured, weld penetration control remains an unsolved problem. It was found that the full penetration status during GTA welding can be determined with sufficient accuracy using the sag depression. To achieve a new full-penetration sensing technique, a structured-light 3D vision system was developed to extract the sag geometry behind the pool. The laser stripe, which is the intersection of the structured light and the weldment, is thinned and then used to acquire the sag geometry. To reduce possible control delay, a small distance is selected between the pool rear and the laser stripe. An adaptive dynamic search for rapid thinning of the stripe and the maximum principle of slope difference for unbiased recognition of the sag border were proposed to develop an effective real-time image processing algorithm for sag geometry acquisition. Experiments have shown that the proposed sensor and image algorithm can provide reliable feedback information on the sag geometry for the full penetration control system.
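
The stripe-thinning idea can be illustrated with a simple per-column peak search (a stand-in sketch, not the paper's adaptive dynamic search): each image column is reduced to the row of maximum brightness, yielding a one-pixel-wide sag profile.

```python
# Illustrative laser-stripe thinning by per-column intensity peaks.
import numpy as np

def thin_stripe(img: np.ndarray, min_intensity: int = 60) -> np.ndarray:
    """For each column, keep the row of peak brightness; NaN where no stripe."""
    rows = np.argmax(img, axis=0).astype(float)        # brightest row per column
    peak = img[rows.astype(int), np.arange(img.shape[1])]
    rows[peak < min_intensity] = np.nan                # reject columns w/o laser
    return rows                                        # sag profile y(x)

# The sag depth at each column is then the deviation of this profile from
# the undeformed stripe line fitted outside the weld region.
```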

