Automatic organ-level point cloud segmentation of maize shoots by integrating high-throughput data acquisition and deep learning

2022 ◽  
Vol 193 ◽  
pp. 106702
Author(s):  
Yinglun Li ◽  
Weiliang Wen ◽  
Teng Miao ◽  
Sheng Wu ◽  
Zetao Yu ◽  
...  

GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract
Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so high-throughput segmentation of many shoots remains challenging. Although deep learning could feasibly solve this issue, software tools for annotating 3D point clouds to construct training datasets are lacking.
Results: We propose a top-down point cloud segmentation algorithm for maize shoots based on optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations: stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation.
Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
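The optimal transportation distance that drives the top-down segmentation is only named in the abstract; as a rough illustration of the idea, the Python sketch below computes an exact optimal-transport (earth mover's) distance between two equally sized, uniformly weighted 3D point sets using SciPy's assignment solver. The function name ot_distance and the equal-count, uniform-weight simplification are our assumptions, not the authors' implementation.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def ot_distance(cloud_a: np.ndarray, cloud_b: np.ndarray) -> float:
    # Exact optimal-transport distance between two (N, 3) point clouds
    # with uniform weights and equal point counts (our simplification).
    assert cloud_a.shape == cloud_b.shape
    cost = cdist(cloud_a, cloud_b)            # pairwise Euclidean transport costs
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return float(cost[rows, cols].mean())     # average cost of the matching

# Toy usage: compare a candidate organ segment against a slightly perturbed copy.
rng = np.random.default_rng(0)
segment = rng.normal(size=(200, 3))
reference = segment + rng.normal(scale=0.05, size=(200, 3))
print(f"OT distance: {ot_distance(segment, reference):.4f}")

In practice such a distance could score how well a candidate organ segment matches a template or a previously labeled sample; the toolkit's actual algorithm and interactive steps are described in the paper itself.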


2013 ◽  
Vol 430 ◽  
pp. 012022 ◽  
Author(s):  
Stefan Mangold ◽  
Ralph Steininger ◽  
Thomas Spangenberg

1989 ◽  
Vol 36 (1) ◽  
pp. 760-764 ◽  
Author(s):  
M. Bowden ◽  
H. Gonzalez ◽  
S. Hansen ◽  
A. Baumbaugh

2021 ◽  
Vol 9 ◽  
Author(s):  
Alexander Gerovichev ◽  
Achiad Sadeh ◽  
Vlad Winter ◽  
Avi Bar-Massada ◽  
Tamar Keasar ◽  
...  

Ecology documents and interprets the abundance and distribution of organisms. Ecoinformatics addresses this task by analyzing databases of observational data. Ecoinformatics of insects has high scientific and applied importance, as insects are abundant, speciose, and involved in many ecosystem functions. They also crucially affect human well-being, and human activities dramatically affect insect demography and phenology. Hazards such as pollinator declines, outbreaks of agricultural pests, and the spread of insect-borne diseases raise an urgent need to develop ecoinformatics strategies for their study. Yet insect databases mostly focus on a small number of pest species, as data acquisition is labor-intensive and requires taxonomic expertise. Thus, despite decades of research, we have only a qualitative notion of fundamental questions in insect ecology and only limited knowledge of the spatio-temporal distribution of insects. We describe a novel, high-throughput, cost-effective approach for monitoring flying insects as an enabling step toward “big data” entomology. The proposed approach combines “high tech” deep learning with “low tech” sticky traps that sample flying insects in diverse locations. As a proof of concept, we considered three recent insect invaders of Israel’s forest ecosystem: two hemipteran pests of eucalypts and a parasitoid wasp that attacks one of them. We developed software, based on deep learning, to identify the three species in images of sticky traps from Eucalyptus forests. These image-processing tasks are difficult because the insects are small (<5 mm) and stick to the traps in random poses. The resulting deep learning model discriminated the three focal organisms from one another, as well as from other elements such as leaves and other insects, with high precision. We used the model to compare the abundances of these species among six sites and validated the results by manually counting insects on the traps. Having demonstrated the power of the proposed approach, we have begun a more ambitious study that monitors these insects at larger spatial and temporal scales. We aim to build an ecoinformatics repository of trap images and to generate data-driven models of the populations’ dynamics and morphological traits.
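The study's network architecture and training details are not reproduced here; as a hedged illustration of the classification step, the PyTorch sketch below scores fixed-size crops of a sticky-trap image into the three focal species plus a background class (leaves, other insects, empty trap). The class labels, 64x64 crop size, and small convolutional backbone are assumptions made for the example, not the published model.

import torch
from torch import nn

# Hypothetical label set: three focal species plus everything else on the trap.
CLASSES = ["eucalypt_pest_a", "eucalypt_pest_b", "parasitoid_wasp", "background"]

class TrapCropClassifier(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        # Small convolutional backbone for 64x64 RGB crops of trapped insects.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Toy usage: classify a batch of crops and report the predicted label per crop.
model = TrapCropClassifier().eval()
crops = torch.rand(8, 3, 64, 64)  # placeholder crops; real input would come from trap photos
with torch.no_grad():
    preds = model(crops).argmax(dim=1)
print([CLASSES[i] for i in preds.tolist()])

Counting predictions per class over all crops from a trap would then yield the per-trap abundance estimates that the study validates against manual counts.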


2015 ◽  
Vol 96-97 ◽  
pp. 895-898 ◽  
Author(s):  
J. Nieto ◽  
G. de Arcas ◽  
M. Ruiz ◽  
R. Castro ◽  
J. Vega ◽  
...  

2016 ◽  
Author(s):  
Matthias Vogelgesang ◽  
Lorenzo Rota ◽  
Luis E. Ardila Perez ◽  
Michele Caselle ◽  
Suren Chilingaryan ◽  
...  
