Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning

2020 ◽  
Vol 11 ◽  
Author(s):  
Zhe Lin ◽  
Wenxuan Guo

2021 ◽  
Vol 13 (14) ◽  
pp. 2822
Author(s):  
Zhe Lin ◽  
Wenxuan Guo

An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods in stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. The models were trained with two datasets containing 400 and 900 images with variations in plant size and soil background brightness. Their performance was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixels and testing dataset 2 with 250 by 1200 pixels. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model, with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both models. The results showed that the CenterNet model had the better overall performance for cotton plant detection and counting with 900 training images. The results also indicated that more training images are required when applying object detection models to images with dimensions different from those of the training datasets. The mean absolute percentage error (MAPE), coefficient of determination (R²), and root mean squared error (RMSE) of the cotton plant counts were 0.07%, 0.98, and 0.37, respectively, with testing dataset 1 for the CenterNet model trained with 900 images. Both MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage. This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
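The abstract reports MAPE, R², and RMSE for the plant counts. As a minimal sketch of how these standard counting metrics are computed (the counts below are hypothetical, not the study's data):

```python
import math

def counting_metrics(y_true, y_pred):
    """MAPE (%), R^2, and RMSE between manual and predicted plant counts."""
    n = len(y_true)
    mape = 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / n
    mean_true = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    return mape, r2, rmse

# hypothetical per-image plant counts
manual = [12, 15, 9, 20, 14]
predicted = [12, 14, 9, 21, 14]
mape, r2, rmse = counting_metrics(manual, predicted)
# mape ≈ 2.33, r2 ≈ 0.97, rmse ≈ 0.63
```

A small MAPE with a high R², as in the reported 0.07% and 0.98, indicates the predicted counts track the manual counts almost exactly.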


2021 ◽  
Vol 203 ◽  
pp. 106023
Author(s):  
Qiufu Li ◽  
Yu Zhang ◽  
Hanbang Liang ◽  
Hui Gong ◽  
Liang Jiang ◽  
...  

2020 ◽  
Vol 11 ◽  
Author(s):  
Manya Afonso ◽  
Hubert Fonteijn ◽  
Felipe Schadeck Fiorentin ◽  
Dick Lensink ◽  
Marcel Mooij ◽  
...  

2020 ◽  
Vol 40 (5) ◽  
pp. 735-747
Author(s):  
I. Keren Evangeline ◽  
J. Glory Precious ◽  
N. Pazhanivel ◽  
S. P. Angeline Kirubha

2019 ◽  
Vol 11 (13) ◽  
pp. 1584 ◽  
Author(s):  
Yang Chen ◽  
Won Suk Lee ◽  
Hao Gan ◽  
Natalia Peres ◽  
Clyde Fraisse ◽  
...  

Strawberry growers in Florida suffer from a lack of efficient and accurate yield forecasts for strawberries, which would allow them to allocate optimal labor and equipment, as well as other resources for harvesting, transportation, and marketing. Accurate estimation of the number of strawberry flowers and their distribution in a strawberry field is, therefore, imperative for predicting the coming strawberry yield. Usually, the number of flowers and their distribution are estimated manually, which is time-consuming, labor-intensive, and subjective. In this paper, we develop an automatic strawberry flower detection system for yield prediction with minimal labor and time costs. The system used a small unmanned aerial vehicle (UAV) (DJI Technology Co., Ltd., Shenzhen, China) equipped with an RGB (red, green, blue) camera to capture near-ground images of two varieties (Sensation and Radiance) at two different heights (2 m and 3 m) and built orthoimages of a 402 m² strawberry field. The orthoimages were automatically processed using the Pix4D software and split into sequential pieces for deep learning detection. A faster region-based convolutional neural network (R-CNN), a state-of-the-art deep neural network model, was chosen for the detection and counting of the number of flowers, mature strawberries, and immature strawberries. The mean average precision (mAP) was 0.83 for all detected objects at the 2 m height and 0.72 for all detected objects at the 3 m height. We adopted this model to count strawberry flowers in November and December from 2 m aerial images and compared the results with a manual count. The average deep learning counting accuracy was 84.1% with average occlusion of 13.5%. Using this system could provide accurate counts of strawberry flowers, which can be used to forecast future yields and build distribution maps to help farmers observe the growth cycle of strawberry fields.
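After detection, the counting step described above reduces to filtering the detector's output boxes by confidence and tallying them per class. A minimal sketch of that post-processing, assuming hypothetical class names and a hypothetical score threshold (not the study's configuration):

```python
from collections import Counter

def count_detections(detections, score_threshold=0.5):
    """Tally detections per class, keeping only confident ones.

    detections: list of (class_name, confidence_score) pairs, e.g. the
    post-NMS output of a Faster R-CNN detector on one image tile.
    """
    counts = Counter()
    for cls, score in detections:
        if score >= score_threshold:
            counts[cls] += 1
    return counts

# hypothetical detector output for one orthoimage tile
dets = [("flower", 0.92), ("flower", 0.81), ("mature", 0.64),
        ("flower", 0.43), ("immature", 0.77)]
counts = count_detections(dets)  # flower: 2, mature: 1, immature: 1
```

Summing such per-tile tallies over the split orthoimage pieces yields the field-level flower count that is compared against the manual count.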


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Wenli Zhang ◽  
Kaizhen Chen ◽  
Jiaqi Wang ◽  
Yun Shi ◽  
Wei Guo

Fruit detection and counting are essential tasks for horticulture research. With the development of computer vision technology, fruit detection techniques based on deep learning have been widely used in modern orchards. However, most deep learning-based fruit detection models are built with fully supervised approaches, which means a model trained on one species may not transfer to another. There is always a need to recreate and label the relevant training dataset, but such a procedure is time-consuming and labor-intensive. This paper proposes a domain adaptation method that can transfer an existing model trained on one domain to a new domain without extra manual labeling. The method includes three main steps: (1) transform the source fruit images (with label information) into target fruit images (without label information) through the CycleGAN network; (2) automatically label the target fruit images by a pseudo-label process; (3) improve the labeling accuracy by a pseudo-label self-learning approach. Using a labeled orange image dataset as the source domain and unlabeled apple and tomato image datasets as the target domain, the performance of the proposed method was evaluated from the perspective of fruit detection. Without manual labeling of the target domain images, the mean average precision reached 87.5% for apple detection and 76.9% for tomato detection, which shows that the proposed method can potentially fill the species gap in deep learning-based fruit detection.
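The pseudo-label step in the pipeline above typically keeps only high-confidence detections on the unlabeled target images and feeds them back as training labels. A minimal sketch of one such self-learning round, assuming a hypothetical confidence threshold and a generic detector interface (this is an illustration of confidence-threshold pseudo-labeling, not the paper's exact implementation):

```python
def pseudo_label(predictions, confidence_threshold=0.8):
    """Keep only high-confidence detections as pseudo-labels.

    predictions: list of (box, score) pairs from a detector run on
    unlabeled target-domain images (e.g. apples, after CycleGAN
    translation from the labeled orange domain).
    """
    return [box for box, score in predictions if score >= confidence_threshold]

def self_learning_round(model_predict, images, threshold=0.8):
    """One round of pseudo-label self-learning: label the unlabeled set
    with the current model, keeping only confident boxes for retraining."""
    dataset = []
    for img in images:
        labels = pseudo_label(model_predict(img), threshold)
        if labels:  # skip images with no confident detections
            dataset.append((img, labels))
    return dataset
```

Retraining on the resulting dataset and repeating the round lets labeling accuracy improve iteratively, which is the intuition behind the self-learning step.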


2021 ◽  
Author(s):  
Tanuj Misra ◽  
Alka Arora ◽  
Sudeep Marwaha ◽  
Ranjeet Ranjan Jha ◽  
Mrinmoy Ray ◽  
...  

Background: Computer vision with deep learning is emerging as a major approach for non-invasive and non-destructive plant phenotyping. Spikes are the reproductive organs of wheat plants. Detecting spikes helps identify heading, and counting spikes and measuring their area are useful for determining wheat yield. Hence, detection and counting of spikes, the grain-bearing organ, are of great importance in phenomics studies of large sets of germplasms. Results: In the present study, we developed an online platform, "Web-SpikeSegNet", based on a deep-learning framework for spike detection and counting from visual images of wheat plants. The platform implements the "SpikeSegNet" approach developed by Misra et al. (2020), which has proved to be an effective and robust approach for spike detection and counting. The architecture of Web-SpikeSegNet consists of two layers. The first layer, the client-side interface layer, manages end users' requests and the corresponding responses, while the second layer, the server-side application layer, consists of the spike detection and counting modules. The backbone of the spike detection module is a deep encoder-decoder network with an hourglass module for spike segmentation. The spike counting module implements the "Analyze Particles" function of ImageJ to count the number of spikes. To evaluate the performance of Web-SpikeSegNet, we acquired visual images of wheat plants using the LemnaTec imaging platform installed at the Nanaji Deshmukh Plant Phenomics Centre, ICAR-Indian Agricultural Research Institute, New Delhi, India. Satisfactory segmentation performance was obtained: Type I error 0.00159, Type II error 0.0586, accuracy 99.65%, precision 99.59%, and F1 score 99.65%. Conclusions: In this study, freely available web-based software was developed combining digital image analysis and deep learning techniques. As spike detection and counting in wheat phenotyping are closely related to yield, Web-SpikeSegNet is a significant step forward in wheat phenotyping and will be very useful to researchers and students working in this domain.
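Counting spikes from a binary segmentation mask, as ImageJ's "Analyze Particles" does, amounts to counting connected foreground regions. A minimal, self-contained sketch of that idea using 4-connected flood fill (a simplified stand-in for the ImageJ function, not the platform's code):

```python
from collections import deque

def count_particles(mask, min_size=1):
    """Count connected foreground regions in a binary mask, analogous to
    applying ImageJ's "Analyze Particles" to a spike-segmentation output.

    mask: 2D list of 0/1 values; 4-connectivity; regions smaller than
    min_size pixels are ignored as noise.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one region, measuring its pixel area
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count
```

A `min_size` filter plays the role of the particle-size threshold in Analyze Particles, discarding small segmentation artifacts before counting.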

