A UAV-Based Machine Vision Algorithm for Industrial Gauge Detecting and Display Reading

Author(s):  
Chun Li ◽  
Dehua Zheng ◽  
Lizheng Liu ◽  
Xiaochen Zheng

PLoS ONE ◽
2021 ◽  
Vol 16 (10) ◽  
pp. e0258672
Author(s):  
Gabriel Carreira Lencioni ◽  
Rafael Vieira de Sousa ◽  
Edson José de Souza Sardinha ◽  
Rodrigo Romero Corrêa ◽  
Adroaldo José Zanella

The aim of this study was to develop and evaluate a machine vision algorithm to assess the pain level in horses, using an automatic computational classifier based on the Horse Grimace Scale (HGS) and trained with machine learning methods. Use of the Horse Grimace Scale depends on a human observer, who is often unavailable to evaluate the animal for long periods and must also be well trained to apply the evaluation system correctly. In addition, even with adequate training, the presence of an unknown person near an animal in pain can cause behavioral changes, making the evaluation more complex. As a possible solution, an automatic video-imaging system could monitor pain responses in horses more accurately and in real time, allowing earlier diagnosis and more efficient treatment for the affected animals. This study is based on the assessment of facial expressions of seven horses that underwent castration, collected through a video system positioned on top of the feeder station and capturing images at four distinct timepoints daily for two days before and four days after surgical castration. A labeling process was applied to build a database of pain facial images, and machine learning methods were used to train the computational pain classifier. The machine vision algorithm was developed by training a Convolutional Neural Network (CNN) that achieved an overall accuracy of 75.8% when classifying pain on three levels: not present, moderately present, and obviously present. When classifying between two categories (pain not present and pain present), the overall accuracy reached 88.3%. Although some improvements are still needed before the system can be used in a daily routine, the model appears promising and capable of automatically measuring pain in horses from facial expressions captured in video images.
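The abstract does not disclose the network architecture; the following is a minimal sketch, in PyTorch, of a three-level image classifier of the kind described. The layer sizes, the 128×128 input resolution, and the class names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumed architecture): a small CNN that classifies a
# horse-face image into three pain levels, as described in the abstract.
# Layer sizes, input resolution, and class order are illustrative.
import torch
import torch.nn as nn

PAIN_LEVELS = ["not_present", "moderately_present", "obviously_present"]

class PainClassifier(nn.Module):
    def __init__(self, num_classes=len(PAIN_LEVELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),   # logits for the three pain levels
        )

    def forward(self, x):                  # x: (batch, 3, 128, 128)
        return self.classifier(self.features(x))

# Example: classify one (dummy) 128x128 RGB frame.
model = PainClassifier().eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 128, 128))
    print(PAIN_LEVELS[logits.argmax(dim=1).item()])
```

In practice the network would be trained on the labeled pain image database described above before being used for inference.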


2009 ◽  
Vol 133 (7) ◽  
pp. 546-552 ◽  
Author(s):  
L. O. Solis-Sánchez ◽  
J. J. García-Escalante ◽  
R. Castañeda-Miranda ◽  
I. Torres-Pacheco ◽  
R. Guevara-González

2006 ◽  
Author(s):  
A. R. Tahir ◽  
S. Neethirajan ◽  
D.S. Jayas ◽  
J. Paliwal

2000 ◽  
Vol 12 (1) ◽  
pp. 38-46 ◽  
Author(s):  
Shigehiko HAYASHI ◽  
Katsunobu GANNO ◽  
Yukitsugu ISHII

1999 ◽  
Author(s):  
Wayne D. Daley ◽  
Theodore J. Doll ◽  
Shane W. McWhorter ◽  
Anthony A. Wasilewski

Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1904
Author(s):  
Valentin Koblar ◽  
Bogdan Filipič

Surface roughness is one of the key characteristics of machined components, as it affects surface quality and, consequently, the lifetime of the components. The most common method of measuring surface roughness is contact profilometry. Although this method is still widely applied, it has several drawbacks, such as limited measurement speed, sensitivity to vibrations, and the need for precise positioning of the measured samples. In this paper, machine vision, machine learning, and evolutionary optimization algorithms are used to induce a model for predicting the surface roughness of automotive components. Based on the attributes extracted by a machine vision algorithm, a machine learning algorithm generates the roughness predictive model. In addition, an evolutionary algorithm is used to tune the parameters of the machine vision and machine learning algorithms in order to find the most accurate predictive model. The developed methodology is comparable in accuracy to the existing contact measurement method, but has the advantage of predicting the surface roughness online and in real time.
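As a rough illustration of the tuning loop described above, the sketch below uses a simple (mu + lambda) evolution strategy to optimize two hypothetical model parameters against cross-validated prediction error. The parameter choices, bounds, and synthetic data are assumptions; in the paper, vision-stage parameters are tuned jointly with the learning-stage ones.

```python
# Sketch of evolutionary parameter tuning: an evolution strategy searches
# a parameter space, scoring each candidate by cross-validated error.
# Parameter names, bounds, and the synthetic data are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))                       # stand-in for extracted image attributes
y = X @ rng.random(5) + 0.1 * rng.random(200)  # stand-in for measured roughness (Ra)

def fitness(params):
    """Lower is better: cross-validated mean-squared error of the induced model."""
    max_depth, min_leaf = int(params[0]), int(params[1])
    model = DecisionTreeRegressor(max_depth=max_depth, min_samples_leaf=min_leaf)
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()

# Simple (mu + lambda) evolution strategy over two integer parameters.
bounds = np.array([[2, 20], [1, 20]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))
for _ in range(30):
    children = np.clip(pop + rng.normal(0, 1.5, pop.shape),
                       bounds[:, 0], bounds[:, 1])
    everyone = np.vstack([pop, children])
    pop = everyone[np.argsort([fitness(p) for p in everyone])[:10]]

best = pop[0]
print(f"best params: depth={int(best[0])}, min_leaf={int(best[1])}, "
      f"mse={fitness(best):.4f}")
```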


2020 ◽  
Vol 6 (6) ◽  
pp. 48 ◽  
Author(s):  
Paola Pierleoni ◽  
Alberto Belli ◽  
Lorenzo Palma ◽  
Luisiana Sabbatini

The Industry 4.0 paradigm is based on transparency and co-operation and, hence, on monitoring and pervasive data collection. In highly standardized contexts, it is usually easy to gather data using available technologies, while in complex environments only very advanced and customizable technologies, such as Computer Vision, are intelligent enough to perform such monitoring tasks well. By the term “complex environment”, we refer especially to contexts dominated by human activity that cannot be fully standardized. In this work, we present a Machine Vision algorithm that is able to deal effectively with human interactions inside a framed area. By exploiting inter-frame analysis, image pre-processing, binarization, morphological operations, and blob detection, our solution is able to count the pieces assembled by an operator using a real-time video input. The solution is compared with a more advanced Machine Learning-based custom object detector, which is taken as a reference. The proposed solution demonstrates very good performance in terms of Sensitivity, Specificity, and Accuracy when tested in a real situation at an Italian manufacturing firm. The value of our solution, compared with the reference object detector, is that it requires no training and is therefore extremely flexible, needing only minor changes to its working parameters to adapt to other objects, which makes it appropriate for plant-wide implementation.
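A minimal sketch of the classical pipeline named above, using OpenCV: inter-frame differencing, pre-processing, binarization, morphological cleanup, and blob detection on a live video stream. The thresholds, kernel size, and minimum blob area are assumed values, and counting assembled pieces (rather than per-frame moving blobs) would require additional event logic not shown here.

```python
# Sketch of the counting pipeline named in the abstract. All numeric
# parameters (blur size, threshold, blob area) are assumptions.
import cv2

cap = cv2.VideoCapture(0)                 # real-time video input (camera 0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
prev = cv2.GaussianBlur(prev, (5, 5), 0)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                     # pre-processing
    diff = cv2.absdiff(gray, prev)                               # inter-frame analysis
    _, binary = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # binarization
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill small gaps
    # Blob detection: count connected regions above a minimum area.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 500]
    print(f"moving blobs in frame: {len(blobs)}")
    prev = gray

cap.release()
```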


Author(s):  
Cesar G. Pachon-Suescun ◽  
Carlos J. Enciso-Aragon ◽  
Robinson Jimenez-Moreno

In the field of robotics, it is essential to know the work area in which the agent will operate; for that reason, different methods of mapping and spatial localization have been developed for different applications. In this article, a machine vision algorithm is proposed that identifies objects of interest within a work area and determines their polar coordinates relative to the observer, applicable either with a fixed camera or on a mobile agent such as the one presented in this document. The developed algorithm was evaluated in two situations, determining the position of six objects in total around the mobile agent. The results were compared with the real position of each of the objects, reaching a high level of accuracy with an average error of 1.3271% in distance and 2.8998% in angle.
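A minimal sketch of the final geometric step: converting a detected object's image position into polar coordinates (distance, angle) relative to the observer. It assumes a calibrated top-down view with a known image scale; the paper's camera geometry may differ, and the scale and agent position used here are hypothetical.

```python
# Sketch: pixel centroid -> polar coordinates relative to the observer.
# SCALE_CM_PER_PX and AGENT_PX are assumed calibration values.
import math

SCALE_CM_PER_PX = 0.25          # assumed image scale from calibration
AGENT_PX = (320, 240)           # assumed pixel position of the observer

def to_polar(centroid_px, agent_px=AGENT_PX, scale=SCALE_CM_PER_PX):
    """Return (distance_cm, angle_deg) of an object centroid w.r.t. the agent."""
    dx = (centroid_px[0] - agent_px[0]) * scale
    dy = (agent_px[1] - centroid_px[1]) * scale   # image y grows downward
    distance = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))      # 0 deg = agent's +x axis
    return distance, angle

# Example: an object detected at pixel (480, 120).
d, a = to_polar((480, 120))
print(f"distance = {d:.1f} cm, angle = {a:.1f} deg")
```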

