Computer Vision for High-Throughput Quantitative Phenotyping: A Case Study of Grapevine Downy Mildew Sporulation and Leaf Trichomes

2017, Vol. 107 (12), pp. 1549-1555
Author(s): Konstantin Divilov, Tyr Wiesner-Hanks, Paola Barba, Lance Cadle-Davidson, Bruce I. Reisch

Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × ‘Horizon’ [RH]) and another that had leaf trichomes (Horizon × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.
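The core measurement described above, percent sporulation area, can be approximated by counting pixels above a brightness cutoff, since white sporulation contrasts with darker leaf tissue. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the function name and threshold are hypothetical.

```python
import numpy as np

def percent_sporulation_area(disc: np.ndarray, threshold: int = 200) -> float:
    """Percent of leaf-disc pixels brighter than `threshold`.

    White downy mildew sporulation contrasts with darker leaf tissue,
    so a simple brightness cutoff approximates the sporulating area.
    """
    mask = disc >= threshold
    return 100.0 * mask.sum() / mask.size

# Synthetic 10x10 grayscale "disc": 25 bright (sporulating) pixels out of 100.
disc = np.full((10, 10), 80, dtype=np.uint8)
disc[:5, :5] = 230
print(percent_sporulation_area(disc))  # 25.0
```

A real pipeline would first segment the disc from the background and would likely use color rather than raw intensity, but the per-pixel fraction is the quantity being estimated.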

2021, pp. 105084
Author(s): Bojana Milovanovic, Ilija Djekic, Jelena Miocinovic, Bartosz G. Solowiej, Jose M. Lorenzo, ...

Sensors
2021, Vol. 21 (2), pp. 343
Author(s): Kim Bjerge, Jakob Bonde Nielsen, Martin Videbæk Sepstrup, Flemming Helsing-Nielsen, Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
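The abstract reports F1-scores for both classification and tracking. For reference, the F1-score is the harmonic mean of precision and recall; a minimal computation from raw detection counts (the example counts are illustrative, not the paper's data):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1-score (harmonic mean of precision and recall) from raw counts:
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 90 true positives, 10 false positives, 5 false negatives.
print(round(f1_score(90, 10, 5), 3))  # 0.923
```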


Metals
2021, Vol. 11 (3), pp. 387
Author(s): Martin Choux, Eduard Marti Bigorra, Ilya Tyapin

The rapidly growing deployment of Electric Vehicles (EVs) puts strong demands not only on the development of Lithium-Ion Batteries (LIBs) but also on their dismantling process, a necessary step for a circular economy. The aim of this study is therefore to develop an autonomous task planner for dismantling EV Lithium-Ion Battery packs to the module level through the design and implementation of a computer vision system. This research contributes to moving closer towards fully automated robotic EV battery dismantling, an inevitable step in a sustainable transition to an electric economy. The main functions of the proposed task planner consist of identifying LIB components and their locations, creating a feasible dismantling plan, and lastly moving the robot to the detected dismantling positions. Results show that the proposed method has measurement errors lower than 5 mm. In addition, the system is able to perform all the steps in the correct order, with a total average time of 34 s. Computer vision, robotics, and battery disassembly have been successfully unified, resulting in a designed and tested task planner well suited for products with large variations and uncertainties.
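A feasible dismantling plan of the kind described above must respect precedence constraints: each step can only run after its prerequisites. One standard way to sketch this is a topological sort over a precedence graph; the step names below are illustrative, not taken from the paper.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Illustrative precedence constraints: each step maps to the set of
# steps that must be completed before it can start.
precedence = {
    "remove_cover": {"unscrew_bolts"},
    "disconnect_busbars": {"remove_cover"},
    "lift_module": {"disconnect_busbars"},
}

# static_order() yields a sequence that satisfies every constraint,
# i.e. a feasible dismantling plan.
plan = list(TopologicalSorter(precedence).static_order())
print(plan)  # ['unscrew_bolts', 'remove_cover', 'disconnect_busbars', 'lift_module']
```

In the actual system the graph would be built from the components the vision system detects, and each plan step would map to a robot motion toward the detected position.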


2019, Vol. 82 (1)
Author(s): Edicley Vander Machado, Priscila Cardoso Cristovam, Denise de Freitas, José Álvaro Pereira Gomes, Vagner Rogério dos Santos

1998, Vol. 16 (8), pp. 533-539
Author(s): J.H.M. Byne, J.A.D.W. Anderson
