Automated Feature Extraction
Recently Published Documents

Total documents: 80 (five years: 22)
H-index: 15 (five years: 2)

2021
Author(s): Felix M. Bauer, Lena Lärm, Shehan Morandage, Guillaume Lobet, Jan Vanderborght, et al.

Root systems of crops play a significant role in agro-ecosystems. The root system is essential for water and nutrient uptake, plant stability, symbiosis with microbes, and good soil structure. Minirhizotrons, transparent tubes that create windows into the soil, have been shown to be effective for non-invasively investigating the root system. Root traits, such as the root length observed around the minirhizotron tubes, can therefore be obtained throughout the crop growing season. Analyzing minirhizotron datasets with common manual annotation methods and conventional software tools is time-consuming and labor-intensive. An objective method for high-throughput image analysis that provides data for field root phenotyping is therefore necessary. In this study we developed a pipeline combining state-of-the-art software tools, using deep neural networks and automated feature extraction. The pipeline consists of two major components and was applied to large root image datasets from minirhizotrons. First, segmentation is performed by a neural network model trained with a small image sample; training and segmentation are done using “RootPainter”. Then, automated feature extraction from the segments is carried out by “RhizoVision Explorer”. To validate the automated analysis pipeline, root lengths from manually annotated and automatically processed data were compared across more than 58,000 images. The results show a high correlation (R = 0.81) between manually and automatically determined root lengths. With respect to processing time, the new pipeline outperforms manual annotation by 98.1–99.6%. Our pipeline, combining state-of-the-art software tools, significantly reduces the processing time for minirhizotron images; image analysis is thus no longer the bottleneck in high-throughput phenotyping approaches.
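The validation step described in this abstract amounts to correlating per-image root lengths from the two sources. A minimal sketch of that comparison is below; the file names and column names are assumptions for illustration (RhizoVision Explorer exports per-image trait tables as CSV, but the exact schema may differ), not the authors' actual analysis code.

```python
# Sketch: correlate manually annotated root lengths with the lengths
# produced by the automated RootPainter + RhizoVision Explorer pipeline.
# File and column names below are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

manual = pd.read_csv("manual_root_lengths.csv")    # assumed columns: image_id, root_length_mm
auto = pd.read_csv("rhizovision_features.csv")     # assumed columns: image_id, root_length_mm

merged = manual.merge(auto, on="image_id", suffixes=("_manual", "_auto"))
r, p = pearsonr(merged["root_length_mm_manual"], merged["root_length_mm_auto"])
print(f"Pearson R = {r:.2f} (p = {p:.3g}) over {len(merged)} images")
```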


MethodsX
2021, Article 101379
Author(s): Cali L. Roth, Peter S. Coates, K. Benjamin Gustafson, Michael P. Chenaille, Mark A. Ricca, et al.

2021
Author(s): Andrew Griffin, Sean Griffin, Kristofer Lasko, Megan Maloney, S. Blundell, et al.

Feature extraction algorithms are routinely leveraged to extract building footprints and road networks into vector format. When used in conjunction with high-resolution remotely sensed imagery, machine learning enables the automation of such feature extraction workflows. However, many of the feature extraction algorithms currently available have not been thoroughly evaluated in a scientific manner within complex terrain such as the cities of developing countries. This report details the performance of three automated feature extraction (AFE) datasets (Ecopia, Tier 1, and Tier 2) at extracting building footprints and roads from high-resolution satellite imagery, compared to manual digitization of the same areas. To avoid environmental bias, the assessment was conducted in two different regions of the world: Maracay, Venezuela and Niamey, Niger. High-, medium-, and low-urban-density sites were compared between regions. We quantified the accuracy of the data, and the time needed to correct the three AFE datasets, against hand-digitized reference data across ninety tiles in each city, selected by stratified random sampling. Within each tile, the reference data were compared against the three AFE datasets, both before and after analyst editing, using Intersection over Union and F1 score for buildings and roads, as well as Average Path Length Similarity (APLS) to measure road network connectivity. Of the three AFE datasets tested, the Ecopia data most frequently outperformed the others in accuracy and required the least editing time.
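The two footprint metrics named here are standard, so a small sketch can make them concrete. The polygon matching rule and the 0.5 IoU threshold below are illustrative assumptions, not the report's exact protocol; `shapely` is used for the polygon geometry.

```python
# Sketch: Intersection over Union (IoU) for footprint geometry and F1
# score for footprint detection, under an assumed greedy matching rule.
from shapely.geometry import Polygon

def iou(a: Polygon, b: Polygon) -> float:
    """Area-based Intersection over Union of two footprint polygons."""
    union = a.union(b).area
    return a.intersection(b).area / union if union > 0 else 0.0

def f1_score(extracted, reference, iou_threshold=0.5):
    """An extracted footprint counts as a true positive if it overlaps
    some still-unmatched reference footprint with IoU >= threshold."""
    unmatched = list(reference)
    tp = 0
    for ext in extracted:
        match = next((ref for ref in unmatched if iou(ext, ref) >= iou_threshold), None)
        if match is not None:
            tp += 1
            unmatched.remove(match)
    fp = len(extracted) - tp          # extracted footprints with no reference match
    fn = len(unmatched)               # reference footprints never matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```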


2021
Vol. 65, pp. 157-162
Author(s): John Jurkiewicz, Stacie Kroboth, Viviana Zlochiver, Peter Hinow

2021
Vol. 40 (1)
Author(s): Matthew Konnik, Bahar Ahmadi, Nicholas May, Joseph Favata, Zahra Shahbazi, et al.

X-ray computed tomography (CT) is a powerful technique for non-destructive volumetric inspection of objects and is widely used for studying the internal structures of a large variety of sample types. The raw data obtained through an X-ray CT scan is a gray-scale 3D array of voxels. These data must undergo a geometric feature extraction process before they can be used for interpretation. Such a feature extraction process is conventionally done manually, but with ever-increasing image data sizes and growing interest in identifying increasingly miniature features, automated feature extraction methods are sought. Conventional computer-vision-based methods, which attempt to segment images into partitions using techniques such as thresholding, are often only useful for aiding the manual feature extraction process, so machine-learning-based algorithms are becoming popular for developing fully automated feature extraction processes. Nevertheless, machine-learning algorithms require a large pool of labeled data for proper training, which is often unavailable. We propose to address this shortage through a data synthesis procedure: fabricating miniature features of known geometry, position, and orientation on thin silicon wafer layers using a femtosecond laser machining system, stacking these layers to construct a 3D object with internal features, and finally obtaining the X-ray CT image of the resulting 3D object. Because the exact geometry, position, and orientation of the fabricated features are known, the X-ray CT image is inherently labeled and ready to be used for training machine-learning algorithms for automated feature extraction. Through several examples, we showcase (1) the capability of synthesizing features of arbitrary geometries and their corresponding labeled images, and (2) the use of the synthesized data for training machine-learning-based shape classifiers and feature parameter extractors.
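The key idea, a label volume generated for free because the feature geometry is known in advance, can be illustrated purely in software. The sketch below is an in-silico analogue of the physical wafer-stacking procedure, not the authors' method; the array sizes, shapes, and intensity values are assumptions.

```python
# Sketch: build a gray-scale voxel volume and a perfectly aligned label
# volume from a known feature geometry, i.e. "inherently labeled" data.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(0.5, 0.05, size=(64, 64, 64)).astype(np.float32)  # noisy bulk material
labels = np.zeros(volume.shape, dtype=np.uint8)                        # 0 = background

def add_spherical_void(center, radius, class_id):
    """Carve a spherical feature at a known position and record its label."""
    z, y, x = np.ogrid[:64, :64, :64]
    mask = (z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2 <= radius**2
    volume[mask] = 0.1     # lower gray value inside the void
    labels[mask] = class_id

add_spherical_void(center=(32, 20, 40), radius=6, class_id=1)
# (volume, labels) pairs like this need no manual annotation and can be
# fed directly to a segmentation model or shape classifier for training.
```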

