A Deep Learning Semantic Segmentation-Based Approach for Field-Level Sorghum Panicle Counting

2019 ◽  
Vol 11 (24) ◽  
pp. 2939 ◽  
Author(s):  
Lonesome Malambo ◽  
Sorin Popescu ◽  
Nian-Wei Ku ◽  
William Rooney ◽  
Tan Zhou ◽  
...  

Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for collecting high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency in image capture is producing massive datasets, which pose analysis challenges in providing the needed phenotypic data. To complement these high-throughput platforms, there is an increasing need in crop improvement for robust image analysis methods that can handle large amounts of image data. Approaches based on deep learning models are currently the most promising and show unparalleled performance in analyzing large image datasets. This study developed and applied an image analysis approach based on a SegNet deep learning semantic segmentation model to estimate sorghum panicle counts, a critical phenotypic measure in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage, and exposed ground using 462 labeled images of 250 × 250 pixels; the trained model was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained after post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference panicle counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Panicle detection errors stemmed mainly from misclassifications during the semantic segmentation step and from mosaicking errors in the field orthomosaic. Overall, the deep learning semantic segmentation approach shows good promise and, with a larger labeled dataset and extensive hyper-parameter tuning, should provide an even more robust and effective characterization of sorghum panicle counts.
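The post-processing described above (removing small objects and splitting merged panicles before counting) can be sketched with standard morphological tools. The snippet below is a minimal illustration using scikit-image and SciPy, not the authors' implementation; the min_size and min_distance thresholds are assumptions chosen for illustration only.

```python
# Hypothetical post-processing of a binary "panicle" segmentation mask:
# drop small objects, then split touching panicles with a
# distance-transform watershed before counting connected components.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

def count_panicles(panicle_mask: np.ndarray,
                   min_size: int = 50,
                   min_distance: int = 10) -> int:
    """Count individual panicles in a binary panicle-class mask."""
    mask = remove_small_objects(panicle_mask.astype(bool), min_size=min_size)

    # Peaks of the distance transform act as markers for merged panicles.
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    labels = watershed(-distance, markers, mask=mask)
    return int(labels.max())
```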

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract Background Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The typically huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and batch-process image datasets. Results On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter and (VI) Leaves Counting. Taking RGB images of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively. Conclusion Maize-IAS is easy to use and demands neither professional knowledge of computer vision nor of deep learning. All functions support batch processing, enabling automated, labor-saving recording, measurement and quantitative analysis of maize growth traits on large datasets. We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology applied in agriculture and plant science.
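The leaf-counting evaluation reported above (mean and standard deviation of the prediction/ground-truth difference) is straightforward to reproduce for one's own counts. The sketch below uses placeholder numbers, not data from the paper.

```python
# Minimal sketch of the leaf-counting evaluation: mean and standard
# deviation of (predicted - ground truth) leaf counts per plant.
# The counts below are placeholders, not results from Maize-IAS.
import numpy as np

predicted = np.array([12, 10, 14, 9, 11])      # leaves counted by software
ground_truth = np.array([11, 10, 12, 10, 11])  # manually counted leaves

diff = predicted - ground_truth
print(f"mean error: {diff.mean():.2f}, std: {diff.std(ddof=1):.3f}")
```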


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Dominik Jens Elias Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract Background Deep learning contributes to uncovering molecular and cellular processes with highly performant algorithms. Convolutional neural networks have become the state-of-the-art tool for accurate and fast image data processing. However, published algorithms mostly solve only one specific problem, and they typically require considerable coding effort and a machine learning background to apply. Results We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables researchers with a basic computational background to apply debugged and benchmarked state-of-the-art deep learning algorithms to their own data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible and well documented. Conclusions With InstantDL, we hope to empower biomedical researchers to conduct reproducible image processing with a convenient and easy-to-use pipeline.
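Prediction uncertainty of the kind mentioned above is commonly estimated with Monte Carlo dropout: keeping dropout active at inference time and aggregating repeated stochastic forward passes. The sketch below illustrates that generic idea in PyTorch; it is not InstantDL's actual implementation, and the tiny stand-in model and tensor shapes are placeholders.

```python
# Generic Monte Carlo dropout sketch for per-pixel prediction uncertainty.
# Not InstantDL's implementation; the model and input are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(                     # toy segmentation head
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),
)

def mc_dropout_predict(model: nn.Module, image: torch.Tensor, n_samples: int = 20):
    """Return the mean prediction and per-pixel standard deviation."""
    model.train()                          # keep dropout active at inference
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

mean_pred, uncertainty = mc_dropout_predict(model, torch.rand(1, 3, 64, 64))
```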


EDIS ◽  
2021 ◽  
Vol 2021 (5) ◽  
Author(s):  
Amr Abd-Elrahman ◽  
Katie Britt ◽  
Vance Whitaker

This publication presents a guide to image analysis for researchers and farm managers who use ArcGIS software. Anyone with basic geographic information system analysis skills can follow the demonstration and learn to implement the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, a widely used model for object detection and instance segmentation, to delineate strawberry canopies using the ArcGIS Pro Image Analyst Extension in a simple workflow. This process is useful for precision agriculture management.
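For readers outside the ArcGIS ecosystem, the same instance segmentation idea can be sketched with torchvision's pretrained Mask R-CNN. This is an analogous open-source illustration, not the ArcGIS Pro workflow itself; the input tile and the 0.5 thresholds are assumptions.

```python
# Instance segmentation of canopy-like objects with a pretrained Mask R-CNN.
# Analogous to, but not the same as, the ArcGIS Pro Image Analyst workflow.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)          # placeholder for an RGB field tile
with torch.no_grad():
    output = model([image])[0]           # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.5            # confidence threshold (assumed)
masks = output["masks"][keep] > 0.5      # binary canopy masks
print(f"detected {int(keep.sum())} objects")
```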


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1058 ◽  
Author(s):  
Yang-Yang Zheng ◽  
Jian-Lei Kong ◽  
Xue-Bo Jin ◽  
Xiao-Yi Wang ◽  
Min Zuo

Intelligence has been considered the major challenge in promoting the economic potential and production efficiency of precision agriculture. To apply advanced deep-learning technology to various agricultural tasks in online and offline settings, a large number of crop vision datasets with domain-specific annotations are urgently needed. To encourage further progress under challenging, realistic agricultural conditions, we present the CropDeep species classification and detection dataset, consisting of 31,147 images with over 49,000 annotated instances from 31 different classes. In contrast to existing vision datasets, the images were collected with different cameras and equipment in greenhouses and captured in a wide variety of situations. The dataset features visually similar species and periodic changes with more representative annotations, supporting a stronger benchmark for deep-learning-based classification and detection. To further verify the application prospects, we provide extensive baseline experiments using state-of-the-art deep-learning classification and detection models. Results show that current deep-learning-based methods perform well in classification, with accuracy over 99%, but achieve only 92% detection accuracy, illustrating the difficulty of the dataset and the room for improvement of state-of-the-art models applied to crop production and management. Specifically, we suggest that the YOLOv3 network has good potential for agricultural detection tasks.
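As a concrete reference point for the kind of detection baseline mentioned above, a YOLOv3 model can be run through OpenCV's DNN module as sketched below. The configuration and weights file names and the 0.5 confidence threshold are placeholders, and this is a generic illustration rather than the exact benchmark setup used for CropDeep.

```python
# Minimal YOLOv3 inference sketch with OpenCV's DNN module.
# File paths and the confidence threshold are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

image = cv2.imread("crop_image.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

for output in outputs:                     # one output tensor per YOLO scale
    for detection in output:               # [cx, cy, w, h, objectness, class scores...]
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            print(f"class {class_id}: {confidence:.2f}")
```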


2020 ◽  
Author(s):  
Dominik Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract Motivation Deep learning contributes to uncovering and understanding molecular and cellular processes with highly performant image computing algorithms. Convolutional neural networks have become the state-of-the-art tool for accurate, consistent and fast data processing. However, published algorithms mostly solve only one specific problem, and they often require expert skills and a considerable computer science and machine learning background for application. Results We have thus developed a deep learning pipeline called InstantDL for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression and classification. InstantDL enables experts and non-experts to apply state-of-the-art deep learning algorithms to biomedical image data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible. Availability and Implementation InstantDL is available under the terms of the MIT licence. It can be found on GitHub: https://github.com/marrlab/InstantDL. Contact: [email protected]


Author(s):  
Mohd Najib Ahmad ◽  
Abdul Rashid Mohamed Shariff ◽  
Ishak Aris ◽  
Izhal Abdul Halin ◽  
Ramle Moslim

The bagworm species Metisa plana is one of the major leaf-eating insect pests that attack oil palm in Peninsular Malaysia. Without any treatment, a moderate attack may cause 43% yield loss. In 2020, the economic loss due to bagworm attacks was recorded at around RM 180 million. Based on this scenario, it is necessary to closely monitor bagworm outbreaks in infested areas. The accuracy and precision of data collection are questionable due to human error. Hence, the objective of this study is to design and develop a specific machine vision system that incorporates an image processing algorithm according to its functional modes. In this regard, a device, the Automated Bagworm Counter or Oto-BaCTM, is the first in the world to be developed with embedded software based on graphics processing unit computation and a TensorFlow/Theano library setup for the trained dataset. The technology is based on deep learning with the Faster Region-based Convolutional Neural Network (Faster R-CNN) technique for real-time object detection. The Oto-BaCTM uses an ordinary camera. Using self-developed deep learning algorithms, motion tracking and false-colour analysis were applied to detect and count the living and dead larvae and pupae populations per frond, respectively, corresponding to three major groups or size classes. In the first trial, the Oto-BaCTM yielded low detection accuracies for the living and dead G1 larvae (47.0% and 71.7%), G2 larvae (39.1% and 50.0%) and G3 pupae (30.1% and 20.9%). After improvements to the training dataset, the percentages increased in the next field trial, by 40.5% and 7.0% for the living and dead G1 larvae, by 40.1% and 29.2% for the living and dead G2 larvae, and by 47.7% and 54.6% for the living and dead pupae. The ground-based device is a pioneer in the oil palm industry, reducing human error when conducting censuses while promoting precision agriculture practice.
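The detect-and-count workflow described above can be illustrated with a generic pretrained detector: run a detector, keep confident detections, and tally them per class. The sketch below is a hedged, generic illustration with torchvision's Faster R-CNN, not the Oto-BaCTM's embedded implementation; the 0.5 threshold and the placeholder image are assumptions.

```python
# Counting detections per class with a pretrained Faster R-CNN.
# Generic illustration only; not the Oto-BaCTM software.
from collections import Counter

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frond_image = torch.rand(3, 480, 640)          # placeholder frond photo
with torch.no_grad():
    output = model([frond_image])[0]           # boxes, labels, scores

keep = output["scores"] > 0.5                  # confidence threshold (assumed)
counts_per_class = Counter(output["labels"][keep].tolist())
print(counts_per_class)                        # e.g. {class_id: count}
```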


Author(s):  
Aparna

A naturalist is someone who studies the patterns of nature and identifies the different kingdoms of flora and fauna. Being able to identify the flora and fauna around us often leads to an interest in protecting wild species, and collecting and sharing information about the species we see on our travels is very useful for conservation groups such as the NCC. Deep-learning-based techniques and methods are becoming popular in digital naturalist studies, as their performance is superior in image analysis fields such as object detection, image classification, and semantic segmentation. Deep-learning techniques have achieved state-of-the-art performance for automatic segmentation in digital naturalist studies through multi-model image sensing. Our task as naturalists has grown widely within the field of natural history: it has expanded from identification to conservation. Not only identifying flora and fauna, but also learning about their habits, habitats, ways of living and grouping, supports services for their protection.


2021 ◽  
pp. 1-14
Author(s):  
Yan Zhang ◽  
Gongping Yang ◽  
Yikun Liu ◽  
Chong Wang ◽  
Yilong Yin

Detection of cotton bolls in field environments is one of the crucial techniques for many precision agriculture applications, including yield estimation, disease and pest recognition, and automatic harvesting. Because of complex conditions, such as different growth periods and occlusion among leaves and bolls, detection in field environments is a considerably challenging task. Despite this, the development of deep learning technologies has shown great potential to solve this task effectively. In this work, we propose an improved YOLOv5 network, which combines DenseNet, an attention mechanism and a Bi-FPN, to detect unopened cotton bolls in the field accurately and at lower cost. In addition, we modify the network architecture to obtain larger feature maps from shallower layers, enhancing the ability to detect cotton bolls, which are generally small. We collected image data of cotton at Aodu Farm in Xinjiang Province, China, and established a dataset containing 616 high-resolution images. Experimental results show that the proposed method is superior to the original YOLOv5 model and to other methods such as YOLOv3, SSD and Faster R-CNN when detection accuracy, computational cost, model size and speed are considered together. Cotton boll detection can be further applied for purposes such as yield prediction and early identification of diseases and pests, helping farmers take effective measures in time, reduce crop losses and therefore increase production.
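For orientation, the unmodified YOLOv5 baseline against which the improved network is compared can be loaded and run via torch.hub as sketched below. This shows only the stock model, not the paper's DenseNet/attention/Bi-FPN modifications; the image path and confidence threshold are placeholders.

```python
# Running the stock YOLOv5 baseline via torch.hub (not the improved network).
# The image path and confidence threshold are placeholders.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25                              # confidence threshold (assumed)

results = model("cotton_field.jpg")            # placeholder image path
detections = results.pandas().xyxy[0]          # DataFrame: boxes, confidence, class
print(len(detections), "objects detected")
```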


2020 ◽  
Vol 12 (22) ◽  
pp. 3715 ◽  
Author(s):  
Minsoo Park ◽  
Dai Quoc Tran ◽  
Daekyo Jung ◽  
Seunghee Park

To minimize the damage caused by wildfires, a deep learning-based wildfire-detection technology that extracts features and patterns from surveillance camera images was developed. However, many studies of deep-learning-based wildfire-image classification have highlighted the problem of data imbalance between wildfire-image data and forest-image data, which degrades model performance. In this study, wildfire images were generated using a cycle-consistent generative adversarial network (CycleGAN) to eliminate the data imbalance. In addition, a densely-connected-convolutional-networks-based (DenseNet-based) framework was proposed and its performance was compared with pre-trained models. When trained on a training set containing GAN-generated images, the proposed DenseNet-based model achieved the best performance among the compared models, with an accuracy of 98.27% and an F1 score of 98.16 on the test dataset. Finally, the trained model was applied to high-quality drone images of wildfires. The experimental results showed that the proposed framework achieves high wildfire-detection accuracy.
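A DenseNet-based classifier of the kind described above can be sketched with torchvision's pretrained DenseNet-121 and a two-class head. The head size, input resolution and class ordering below are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: binary wildfire/forest classifier on top of DenseNet-121.
# The two-class head, input size and class ordering are assumptions.
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights="DEFAULT")
model.classifier = nn.Linear(model.classifier.in_features, 2)  # forest / wildfire

images = torch.rand(4, 3, 224, 224)     # placeholder batch of camera frames
logits = model(images)
predictions = logits.argmax(dim=1)      # 0 = forest, 1 = wildfire (assumed)
print(predictions)
```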


2019 ◽  
Vol 101 (2) ◽  
pp. 473-483 ◽  
Author(s):  
Eun‐Cheon Lim ◽  
Jaeil Kim ◽  
Jihye Park ◽  
Eun‐Jung Kim ◽  
Juhyun Kim ◽  
...  
