vision sensor
Recently Published Documents


TOTAL DOCUMENTS

971
(FIVE YEARS 198)

H-INDEX

31
(FIVE YEARS 6)

Author(s):  
Can Cuhadar ◽  
Hoi Nok Tsao

A prominent problem in computer vision is occlusion, which occurs when an object’s key features temporarily disappear behind another crossing body, causing the computer to struggle with detection. While the human brain can compensate for the invisible parts of a blocked object, computers lack such scene interpretation skills. Cloud computing with convolutional neural networks is typically the method of choice for handling this scenario. However, for mobile applications where energy consumption and computational cost are critical, cloud computing should be minimized. In this regard, we propose a computer vision sensor capable of efficiently detecting and tracking covered objects without heavy reliance on occlusion-handling software. Our edge-computing sensor accomplishes this by self-learning the object prior to the moment of occlusion and using this information to “reconstruct” the blocked invisible features. Furthermore, the sensor can track a moving object by predicting the path it will most likely take while travelling out of sight behind an obstructing body. Finally, sensor operation is demonstrated by exposing the device to various simulated occlusion events. Keywords: computer vision, occlusion handling, edge computing, object tracking, dye-sensitized solar cell. Corresponding author email: [email protected] 
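The path-prediction idea in this abstract can be sketched with a simple constant-velocity extrapolator: once the object disappears, its last observed displacement is replayed forward until it re-emerges. This is only an illustrative stand-in for the sensor's self-learned prediction; the function name and data are invented for the sketch.

```python
import numpy as np

def predict_during_occlusion(track, n_steps):
    """Extrapolate an occluded object's path from its last observed
    positions using a constant-velocity model (illustrative only)."""
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]       # last observed per-frame displacement
    start = track[-1]
    return [tuple(map(float, start + (k + 1) * velocity)) for k in range(n_steps)]

# Object moves right at 2 px/frame, then disappears behind an obstacle:
visible = [(0, 5), (2, 5), (4, 5)]
print(predict_during_occlusion(visible, 3))  # [(6.0, 5.0), (8.0, 5.0), (10.0, 5.0)]
```

A real tracker would add uncertainty (e.g. a Kalman filter) so that re-acquisition after occlusion tolerates deviations from the assumed straight path.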


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 480
Author(s):  
Dawid Cekus ◽  
Filip Depta ◽  
Mariusz Kubanek ◽  
Łukasz Kuczyński ◽  
Paweł Kwiatoń

Tracking the trajectory of the load carried by a rotary crane is an important problem, as it reduces the risk of damaging the load by hitting an obstacle in the crane's working area. On the basis of the trajectory, it is also possible to design a control system that allows for safe transport of the load. This work concerns research on the motion of a load carried by a rotary crane. For this purpose, a laboratory crane model was designed in SolidWorks, and numerical simulations were performed using the Motion module. The developed laboratory model is a scaled equivalent of the real Liebherr LTM 1020 crane. The crane control included two movements: changing the inclination angle of the crane's boom and rotating the jib with the platform. On the basis of the developed model, a test stand was built, which allowed for verification of the numerical results. Event visualization and trajectory tracking were performed using a dynamic vision sensor (DVS) and the Tracker program. Based on the obtained experimental results, the developed numerical model was verified. The proposed trajectory tracking method can be used to develop a control system that prevents collisions during the crane's duty cycle.
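A DVS emits asynchronous events rather than frames, so trajectory extraction typically bins events into short time windows and localizes the moving load in each window. The sketch below uses a per-window centroid as a minimal stand-in for the load-tracking step; it is not the authors' Tracker-based pipeline, and the event data are synthetic.

```python
import numpy as np

def track_centroids(events, window):
    """Group DVS events (t, x, y) into fixed time windows and return the
    event centroid of each window as the load's position estimate."""
    events = np.asarray(events, dtype=float)
    bins = (events[:, 0] // window).astype(int)    # time-window index per event
    centroids = []
    for b in np.unique(bins):
        pts = events[bins == b, 1:]                # (x, y) of events in this window
        centroids.append((float(pts[:, 0].mean()), float(pts[:, 1].mean())))
    return centroids

# Two 10 ms windows of synthetic events from a swinging load:
evts = [(1, 0, 0), (2, 2, 0), (11, 4, 2), (12, 6, 2)]
print(track_centroids(evts, 10))  # [(1.0, 0.0), (5.0, 2.0)]
```

The resulting centroid sequence is the measured trajectory that would be compared against the numerical simulation.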


2021 ◽  
Vol 11 (24) ◽  
pp. 11808
Author(s):  
Chunghyup Mok ◽  
Insung Baek ◽  
Yoonsang Cho ◽  
Younghoon Kim ◽  
Seoungbum Kim

As the need for efficient warehouse logistics has increased in manufacturing systems, the use of automated guided vehicles (AGVs) has also increased to reduce travel time. AGVs are controlled by a system using laser sensors or floor-embedded wires to transport pallets and their loads. Because such control systems have only predefined palletizing strategies, AGVs may fail to engage incorrectly positioned pallets. In this study, we consider a vision sensor-based method that addresses this shortcoming by recognizing a pallet's position. We propose a multi-task deep learning architecture that simultaneously predicts distance and rotation from images obtained by a vision sensor. These predictions complement each other during learning, allowing the multi-task model to learn and execute tasks that single-task models cannot. The proposed model accurately predicts the rotation and displacement of pallets, deriving the information the control system needs; this information can then be used to optimize a palletizing strategy. The superiority of the proposed model was verified in an experiment on images of stored pallets collected from a vision sensor attached to an AGV.
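The core of a multi-task architecture like the one described is a shared representation feeding two task-specific heads, trained with a weighted sum of per-task losses. The toy numpy sketch below shows only that structure (shared backbone, distance head, rotation head, joint loss); all dimensions, weights, and names are invented for illustration and stand in for the paper's CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone weights and two task-specific heads (toy dimensions):
W_shared = rng.normal(size=(8, 16))   # image features -> shared representation
w_dist   = rng.normal(size=16)        # head 1: pallet displacement
w_rot    = rng.normal(size=16)        # head 2: pallet rotation

def forward(x):
    """One shared representation feeds both regression heads, so what is
    learned about the pallet's appearance benefits both tasks."""
    h = np.tanh(x @ W_shared)         # shared representation
    return h @ w_dist, h @ w_rot      # (distance, rotation) predictions

def multitask_loss(x, dist_true, rot_true, alpha=0.5):
    """Weighted sum of per-task squared errors; alpha balances the tasks."""
    d, r = forward(x)
    return alpha * (d - dist_true) ** 2 + (1 - alpha) * (r - rot_true) ** 2

x = rng.normal(size=8)                # stand-in for CNN image features
loss = multitask_loss(x, dist_true=1.0, rot_true=0.1)
print(loss >= 0.0)                    # True: the joint loss is nonnegative
```

Gradients of this joint loss flow through `W_shared` from both heads, which is the mechanism behind the "predictions complement each other in learning" claim.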


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8176
Author(s):  
Youngmo Han

Template matching is a simple image detection algorithm that can detect different types of objects simply by changing the template, without tedious training procedures. Despite these advantages, template matching is not currently widely used, because traditional template matching is not very reliable for images that differ from the template. Its reliability can be improved by using additional information (depths for the template) available from the vision sensor system. Methods of obtaining the depth of a template, using stereo vision or using a few (two or more) template images or a short template video via mono vision, are well known in the vision literature and have been commercialized. Building on this strategy, this paper proposes a template matching vision sensor system that can detect various types of objects without prior training. To this end, using the additional information provided by the vision sensor system, we study a method to increase the reliability of template matching even when the template and the image differ in 3D orientation and size. Template images obtained through the vision sensor provide a depth template. Using this depth template, it is possible to predict how the image changes with the 3D orientation and size of the object. Using these predicted changes, the template is calibrated to closely match the given image, and then template matching is performed. For ease of use, the algorithm is formulated as a closed-form solution that avoids tedious recursion or training processes. For wider applicability and more accurate results, the proposed method models the 3D orientation and size difference with a perspective projection model and a general 3D rotation model.
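Two ingredients of this approach can be sketched compactly: matching a template against a patch by normalized cross-correlation, and using depth to pre-scale the template, since under perspective projection apparent size varies inversely with depth. This is a minimal sketch under those assumptions, not the paper's full closed-form calibration (which also handles 3D rotation).

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def rescale_template(template, z_template, z_object):
    """Resize the template by the depth ratio z_template / z_object:
    apparent size scales inversely with depth under perspective
    projection (nearest-neighbour resampling keeps the sketch simple)."""
    scale = z_template / z_object
    h, w = template.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return template[np.ix_(rows, cols)]

tpl = np.array([[1., 2.], [3., 4.]])
big = rescale_template(tpl, z_template=2.0, z_object=1.0)  # object twice as close
print(big.shape)       # (4, 4)
print(ncc(tpl, tpl))   # 1.0 -- a template matches itself perfectly
```

After rescaling, the corrected template is slid over the image and the location with the highest correlation score is reported as the detection.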


Author(s):  
Chao Liu ◽  
Hui Wang ◽  
Yu Huang ◽  
Youmin Rong ◽  
Jie Meng ◽  
...  

Abstract A mobile welding robot with adaptive seam-tracking ability can greatly improve welding efficiency and quality, and has therefore been extensively studied. To further improve automation in multi-station welding, a novel intelligent mobile welding robot consisting of a four-wheeled mobile platform and a collaborative manipulator is developed. With the support of simultaneous localization and mapping (SLAM) technology, the robot can automatically navigate to different stations to perform welding operations. To automatically detect the welding seam, a composite sensor system comprising an RGB-D camera and a laser vision sensor is applied. Based on this sensor system, a multi-layer sensing strategy ensures that the welding seam can be detected and tracked with high precision. By applying a hybrid filter to the RGB-D camera measurements, the initial welding seam can be effectively extracted, and a novel welding start point detection method is proposed. Meanwhile, to guarantee tracking quality, a robust welding seam tracking algorithm based on the laser vision sensor is presented to eliminate the tracking discrepancy caused by the platform's parking error, through which the tracking trajectory can be corrected in real time. Experimental results show that the robot can autonomously detect and track the welding seam effectively at different stations. Multi-station welding efficiency is thereby improved while welding quality is maintained.
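The laser-vision correction step can be illustrated with a common heuristic: locate the seam as the deepest point of the V-groove in the laser-stripe height profile, then shift the planned torch position toward that measurement to cancel the platform's parking error. Both functions below are invented simplifications of what the paper's algorithm does, not its actual implementation.

```python
import numpy as np

def seam_point(profile):
    """Locate the seam in a laser-stripe height profile as the deepest
    point of the V-groove (a common laser-vision heuristic)."""
    return int(np.argmin(profile))

def correct_trajectory(planned_y, measured_seam_y, gain=1.0):
    """Shift the planned torch position toward the measured seam to
    cancel parking error (proportional correction, illustrative)."""
    return planned_y + gain * (measured_seam_y - planned_y)

# V-shaped profile: heights dip at index 3, where the groove lies.
profile = [5.0, 4.0, 2.0, 0.5, 2.0, 4.0, 5.0]
print(seam_point(profile))              # 3
print(correct_trajectory(10.0, 12.5))   # 12.5 -- torch snaps to the measured seam
```

With `gain` below 1.0 the correction becomes a smoothed tracking update, which is one way to keep the real-time trajectory correction stable against noisy stripe measurements.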


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Zhipeng Li ◽  
Jun Wang ◽  
Tao Zhang ◽  
Dave Balne ◽  
Bing Li ◽  
...  

Environmental interference and high speeds cause several problems in ski motion capture, such as inaccurate capture, motion delay, and motion loss, so that the actual motion of the athlete and the motion of the virtual character become inconsistent. To solve these problems, a real-time skiing motion capture method for snowboarders based on a 3D vision sensor is proposed. The method combines a Time of Flight (ToF) camera and a high-speed vision sensor into a motion acquisition system. The collected motion images are fused to form a complete motion image, and the pose is solved. The pose data are bound to the constructed virtual character model to drive the virtual model to complete the snowboarding motion synchronously, realizing real-time capture of skiing motion. The results show that the motion accuracy of the system reaches 98.6%, which improves the capture effect, and that the motion matching proportion is better and more practical. The system also performs well with respect to motion delay and motion loss.
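Fusing a slow ToF depth stream with a high-speed camera requires aligning the two in time; one simple scheme is to interpolate the depth stream at each high-speed frame's timestamp before combining the measurements into a pose. The sketch below shows that synchronization step only; it is an assumed scheme with invented data, not the paper's fusion method.

```python
import numpy as np

def interpolate_depth(t_query, tof_times, tof_depths):
    """Linearly interpolate the (slower) ToF depth stream at the
    timestamp of a high-speed camera frame, so both measurements
    can contribute to one pose estimate."""
    return float(np.interp(t_query, tof_times, tof_depths))

tof_t = [0.0, 30.0, 60.0]     # ToF frames arrive every 30 ms
tof_d = [2.00, 2.30, 2.60]    # skier's distance from the camera (m)
# High-speed frame at t = 15 ms falls midway between ToF samples:
print(interpolate_depth(15.0, tof_t, tof_d))  # midway between 2.00 and 2.30, ≈ 2.15
```

Each interpolated depth can then be paired with the 2D joint positions from the corresponding high-speed frame to recover 3D pose at the camera's full frame rate.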


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Xianhao Zhang ◽  
Yongxiu Shi ◽  
Hua Bai

This article aims to demonstrate the important role of an immersive virtual reality (VR) physical education model in current physical education and to analyze that model, using avatar technology to create the visual settings. Relying on mature VR technology to build a virtual simulation experiment platform also saves experimental teaching costs, and a complete set of VR teaching courseware supports teaching, training, and assessment functions and can be reused to maximize its value. This research mainly describes how the teaching method of the course is optimized in combination with virtual reality technology. To make the data more convincing, recent literature and data on immersive teaching were consulted. The first part discusses immersive teaching, including research on virtual immersive classroom teaching. The second part analyzes virtual reality technology separately. The third part is a practical exercise based on the first two: taking the student's learning effect and attitude as the main subject, the theoretical groundwork of the two parts on visual sensors and immersive VR physical education is transformed into real practice teaching. In the experimental part, to demonstrate the effectiveness of immersive virtual teaching, we examined teachers' teaching on the one hand and investigated students' learning on the other, comparing the desktop virtual environment teaching method with existing classroom teaching methods.
The image processing and analysis of the virtual reality technology are combined with the gray levels of the three-dimensional image to analyze the multi-task algorithm of the vision sensor, and the feasibility of immersive virtual teaching was verified. The research data show that the 10 students participating in the experiment gave a score of 7.9 for their attitude toward immersive VR physical education. Students' interest in learning increases, and efficiency also improves greatly. VR can not only provide students with a new learning experience but also strengthen teachers' teaching skills: because VR can simulate a real teaching environment, teachers can use it to try new course materials and improve their classroom management capabilities.

