Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review

2020 ◽  
Vol 2020 ◽  
pp. 1-22 ◽  
Author(s):  
Yu Jiang ◽  
Changying Li

Plant phenotyping has been recognized as a bottleneck for improving the efficiency of breeding programs, understanding plant-environment interactions, and managing agricultural systems. In the past five years, imaging approaches have shown great potential for high-throughput plant phenotyping, resulting in more attention paid to imaging-based plant phenotyping. With this increased amount of image data, it has become urgent to develop robust analytical tools that can extract phenotypic traits accurately and rapidly. The goal of this review is to provide a comprehensive overview of the latest studies using deep convolutional neural networks (CNNs) in plant phenotyping applications. We specifically review the use of various CNN architectures for plant stress evaluation, plant development, and postharvest quality assessment. We systematically organize the studies based on technical developments in image classification, object detection, and image segmentation, thereby identifying state-of-the-art solutions for certain phenotyping applications. Finally, we provide several directions for future research in the use of CNN architectures for plant phenotyping purposes.
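All three task families the review organizes — classification, detection, and segmentation — build on the same convolutional feature-extraction primitive. As an illustrative sketch (not taken from the review itself), a minimal convolution + ReLU + max-pooling forward pass in plain NumPy, applied to a toy single-channel "image":

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims rows/cols that don't divide evenly."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 image: left half dark, right half bright (a crude "edge").
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1, 0, 1]] * 3)  # vertical-edge detector

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (2, 2)
```

Real phenotyping CNNs stack many such layers with learned kernels; this sketch only shows the mechanics of one convolution-activation-pooling stage.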

Author(s):  
M. Herrero-Huerta ◽  
S. R. Rahmani ◽  
K. M. Rainey

Abstract. Subsurface agricultural tile lines can greatly affect plant phenotypic characteristics through spatial variation in soil moisture, plant nutrients, and rooting depth. The location of subsurface tile lines therefore plays a critical role in supporting above-ground plant phenotyping and needs to be considered in phenotyping analysis. Unmanned Aerial Systems (UAS) imagery, combined with deep learning methods, can uncover strong relations between vegetation spectra and soil parameters. Here, we evaluate the capability of deep convolutional neural networks (CNNs) to assess crop quality based on biomass production derived from soil moisture differences, using UAS-based multispectral imagery over soybean breeding fields. Results are still being evaluated, with particular attention to the temporal and spatial resolution of the data required to apply our approach.
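The abstract does not specify which spectral features link vegetation status to soil moisture; a common illustrative example for UAS multispectral imagery is the Normalized Difference Vegetation Index (NDVI), sketched below on toy reflectance values (the band values are hypothetical, not from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance grids: healthy vegetation reflects strongly in NIR,
# so high NDVI suggests dense biomass; values near zero suggest bare soil.
nir = np.array([[0.8, 0.7], [0.3, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.2]])
index = ndvi(nir, red)
print(index.round(2))
```

Per-pixel index maps like this are a typical input feature for relating spectra to soil-driven biomass variation, though the paper's actual pipeline may use raw bands or other indices.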


2016 ◽  
Vol 29 (20) ◽  
pp. e3850 ◽  
Author(s):  
Yuran Qiao ◽  
Junzhong Shen ◽  
Tao Xiao ◽  
Qianming Yang ◽  
Mei Wen ◽  
...  

2020 ◽  
Author(s):  
Robail Yasrab ◽  
Michael P Pound ◽  
Andrew P French ◽  
Tony P Pridmore

Abstract. Plant phenotyping using machine learning and computer vision approaches is a challenging task. Deep learning-based systems for plant phenotyping are more efficient at measuring diverse plant traits for genetic discovery than traditional image-based phenotyping approaches. Plant biologists have recently demanded more reliable and accurate image-based phenotyping systems for assessing various features of plants and crops. The core of these image-based phenotyping systems is structural classification and feature segmentation. Deep learning-based systems have shown outstanding results in extracting very complicated features and structures of above-ground plants. The below-ground part of the plant, however, is usually more difficult to analyze due to its complex arrangement and distorted appearance. We propose a deep convolutional neural network (CNN) model named "RootNet" that detects and pixel-wise segments plant root features. A key feature of the proposed method is the detection and segmentation of very thin roots (1-3 pixels wide). The proposed approach segments high-definition images without significantly sacrificing pixel density, leading to more accurate root-type detection and segmentation results. CNNs are hard to train with high-definition images due to GPU memory limitations; the proposed patch-based CNN training setup makes use of the entire image (at maximum pixel density) to recognize and segment a given root system efficiently. We used a wheat (Triticum aestivum L.) seedling dataset, which consists of wheat roots grown in visible pouches. The proposed system segments given root systems and saves them to the Root System Markup Language (RSML) for future analysis. RootNet was trained on this dataset alongside popular semantic segmentation architectures and achieved benchmark accuracy.
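The patch-based training idea described above can be sketched as tiling a high-resolution scan into fixed-size patches so each fits in GPU memory while the full image is still covered. The patch size and image dimensions below are hypothetical, not taken from the paper:

```python
import numpy as np

def extract_patches(image, patch=256, stride=256):
    """Tile a (H, W) image into non-overlapping patches (remainders dropped)."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(image[i:i + patch, j:j + patch])
    return np.stack(patches)

# A hypothetical 1024x768 root-pouch scan yields a 4x3 grid of 256px patches,
# each trained on at full pixel density instead of downsampling the scan.
scan = np.random.rand(1024, 768)
patches = extract_patches(scan)
print(patches.shape)  # (12, 256, 256)
```

A real pipeline would typically use overlapping patches (stride < patch) and stitch per-patch segmentation masks back into a full-image mask before exporting to RSML; this sketch shows only the tiling step.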


2016 ◽  
Vol 30 (1) ◽  
pp. 95-101 ◽  
Author(s):  
Alvin Rajkomar ◽  
Sneha Lingam ◽  
Andrew G. Taylor ◽  
Michael Blum ◽  
John Mongan

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for degradation parameters is also incorporated when the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
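A minimal sketch of the two-input idea, assuming random toy weights rather than the paper's actual architecture: features extracted from the degraded image are concatenated with the scalar degradation parameter before the classification head, so the classifier can condition on the degradation level.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(image, degradation_param, w_img, w_head):
    """Toy two-input classifier: flatten the image, extract crude features,
    append the degradation parameter, then apply a linear softmax head."""
    feats = np.tanh(image.ravel() @ w_img)           # stand-in feature extractor
    joint = np.concatenate([feats, [degradation_param]])
    logits = joint @ w_head
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # class probabilities

n_pixels, n_feats, n_classes = 8 * 8, 16, 3
w_img = rng.normal(size=(n_pixels, n_feats))
w_head = rng.normal(size=(n_feats + 1, n_classes))  # +1 slot for the parameter

degraded = rng.normal(size=(8, 8))                   # e.g. a noisy 8x8 image
noise_level = 0.5                                    # known degradation parameter
probs = forward(degraded, noise_level, w_img, w_head)
print(probs.shape)  # (3,)
```

In the paper's setting the feature extractor is a trained CNN and, when the degradation parameter is unknown, a separate estimation network predicts it; here both are replaced by fixed toy components to show only the two-input wiring.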

