Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle

2016 ◽  
Vol 17 (5) ◽  
pp. 534-541 ◽  
Author(s):  
Nicolas Leduc ◽  
Vincent Atallah ◽  
Patrick Escarmant ◽  
Vincent Vinh-Hung

2017 ◽  
Vol 123 ◽  
pp. S884-S885
Author(s):  
N. Leduc ◽  
V. Atallah ◽  
A. Petit ◽  
S. Belhomme ◽  
V. Vinh-Hung ◽  
...  

2013 ◽  
Vol 40 (7) ◽  
pp. 071712 ◽  
Author(s):  
Y. Peng ◽  
S. Vedam ◽  
S. Gao ◽  
P. Balter

Author(s):  
Dhiraj J. Pangal ◽  
Guillaume Kugener ◽  
Shane Shahrestani ◽  
Frank Attenello ◽  
Gabriel Zada ◽  
...  

2020 ◽  
Vol 2 (2) ◽  
pp. 280-293
Author(s):  
Mathew G. Pelletier ◽  
Greg A. Holt ◽  
John D. Wanjura

The removal of plastic contamination in cotton lint is an issue of top priority to the U.S. cotton industry. One of the main sources of plastic contamination appearing in marketable cotton bales, at the U.S. Department of Agriculture’s classing office, is plastic from the module wrap used to wrap cotton modules produced by the new John Deere round module harvesters. Despite diligent efforts by cotton ginning personnel to remove all plastic encountered during unwrapping of the seed cotton modules, plastic still finds its way into the cotton gin’s processing system. To help mitigate plastic contamination at the gin, an inspection system was developed that uses low-cost color cameras to detect plastic on the module feeder’s dispersing cylinders, which are normally hidden from view by the incoming feed of cotton modules. This technical note presents the design of an automated, intelligent, machine-vision-guided cotton module-feeder inspection system. The system includes a machine-learning program that automatically detects plastic contamination and alerts cotton gin personnel to its presence on the module feeder’s dispersing cylinders. The system was tested throughout the entire 2019 cotton ginning season at two commercial cotton gins, and at one gin during the 2018 ginning season. This note describes the overall system and mechanical design and provides an overview of key relevant issues. All mechanical engineering design files, as well as the bill-of-materials part source list, are included as attachments to this technical note. The observational impact of the system on reducing plastic contamination is also discussed.
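The note above describes a camera-based alert system but not its internals, which rely on a machine-learning model. As a much simpler illustration of the underlying idea, the sketch below flags image regions whose color deviates strongly from cotton lint and raises an alert when enough of the view is flagged. All names, the reference color, and both thresholds are hypothetical placeholders, not values from the published system.

```python
import numpy as np

def detect_plastic_regions(frame, cotton_rgb=(200, 190, 170), tol=60.0):
    """Flag pixels whose color deviates strongly from typical cotton lint.

    frame: (H, W, 3) uint8 RGB image from the inspection camera.
    Returns a boolean mask of candidate plastic pixels.
    """
    diff = frame.astype(np.float64) - np.array(cotton_rgb, dtype=np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel color distance
    return dist > tol

def should_alert(mask, min_fraction=0.01):
    """Alert gin personnel when enough of the view is flagged as plastic."""
    return mask.mean() > min_fraction
```

A real deployment would use a trained classifier rather than a fixed color threshold, since module wrap comes in several colors and lighting varies; this sketch only shows the detect-then-alert structure.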


2020 ◽  
Vol 47 (8) ◽  
pp. 3567-3572
Author(s):  
Justin Poon ◽  
Kirpal Kohli ◽  
Marc W. Deyell ◽  
Devin Schellenberg ◽  
Stefan Reinsberg ◽  
...  

Author(s):  
Osama Alfarraj ◽  
Amr Tolba

Abstract The computer vision (CV) paradigm is introduced to improve the computational and processing system efficiencies through visual inputs. These visual inputs are processed using sophisticated techniques for improving the reliability of human–machine interactions (HMIs). The processing of visual inputs requires multi-level data computations for achieving application-specific reliability. Therefore, in this paper, a two-level visual information processing (2LVIP) method is introduced to meet the reliability requirements of HMI applications. The 2LVIP method is used for handling both structured and unstructured data through classification learning to extract the maximum gain from the inputs. The introduced method identifies the gain-related features on its first level and optimizes the features to improve information gain. In the second level, the error is reduced through a regression process to stabilize the precision to meet the HMI application demands. The two levels are interoperable and fully connected to achieve better gain and precision through the reduction in information processing errors. The analysis results show that the proposed method achieves 9.42% higher information gain and a 6.51% smaller error under different classification instances compared with conventional methods.


2020 ◽  
Author(s):  
Simon Nachtergaele ◽  
Johan De Grave

Abstract. Artificial intelligence techniques such as deep neural networks and computer vision are developed for fission track recognition and included in a computer program for the first time. These deep neural networks use the YOLOv3 object detection algorithm, currently one of the most powerful and fastest object recognition algorithms, and are incorporated in new software called AI-Track-tive. The program successfully finds most of the fission tracks in microscope images; however, the user still needs to supervise the automatic counting. The success rate of the automatic recognition ranges from 70% to 100%, depending on the areal track density in the apatite and (muscovite) external detector. The success rate generally decreases for images with high areal track densities, because overlapping tracks are harder for computer vision techniques to distinguish.
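YOLO-style detectors are commonly post-processed with non-maximum suppression (NMS), which discards lower-scoring boxes that heavily overlap a kept box. A minimal NMS sketch (not code from AI-Track-tive) also illustrates why dense track areas are harder: overlapping tracks yield overlapping boxes, and suppression can merge two real tracks into one detection.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep only the highest-scoring box among heavily overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Two genuinely distinct but overlapping tracks whose boxes exceed the IoU threshold would be counted once here, which is consistent with the abstract's observation that success rates fall at high areal track densities.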


2020 ◽  
Vol 47 (11) ◽  
pp. 5496-5504
Author(s):  
Dante P. I. Capaldi ◽  
Tomi F. Nano ◽  
Hao Zhang ◽  
Lawrie B. Skinner ◽  
Lei Xing
