CropDeep: The Crop Vision Dataset for Deep-Learning-Based Classification and Detection in Precision Agriculture

Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1058 ◽  
Author(s):  
Yang-Yang Zheng ◽  
Jian-Lei Kong ◽  
Xue-Bo Jin ◽  
Xiao-Yi Wang ◽  
Min Zuo

Intelligence is considered the major challenge in promoting the economic potential and production efficiency of precision agriculture. To apply advanced deep-learning technology to various agricultural tasks, both online and offline, large crop vision datasets with domain-specific annotations are urgently needed. To encourage further progress under challenging, realistic agricultural conditions, we present the CropDeep species classification and detection dataset, consisting of 31,147 images with over 49,000 annotated instances from 31 different classes. In contrast to existing vision datasets, images were collected with different cameras and equipment in greenhouses and captured in a wide variety of situations. The dataset features visually similar species and periodic changes with representative annotations, supporting a stronger benchmark for deep-learning-based classification and detection. To further verify the application prospects, we provide extensive baseline experiments using state-of-the-art deep-learning classification and detection models. Results show that current deep-learning-based methods achieve classification accuracy above 99% but only 92% detection accuracy, illustrating the difficulty of the dataset and the room for improvement in state-of-the-art deep-learning models when applied to crop production and management. Specifically, we suggest that the YOLOv3 network has good application potential in agricultural detection tasks.
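Detection accuracy figures like the 92% reported above are typically computed by matching predicted boxes to ground-truth annotations via intersection over union (IoU). A minimal sketch of that metric; the boxes and the 0.5 threshold mentioned in the comment are illustrative conventions, not values from the paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction is commonly counted as correct when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap, 25/175
```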

2021 ◽  
Vol 11 (2) ◽  
pp. 851
Author(s):  
Wei-Liang Ou ◽  
Tzu-Ling Kuo ◽  
Chin-Chieh Chang ◽  
Chih-Peng Fan

In this study, a pupil tracking methodology based on deep-learning technology is developed for visible-light wearable eye trackers. By applying deep-learning object detection based on the You Only Look Once (YOLO) model, the proposed pupil tracking method can effectively estimate and predict the center of the pupil in visible-light mode. When the developed YOLOv3-tiny-based model is used to test pupil tracking performance, the detection accuracy is as high as 80% and the recall rate is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based deep-learning design are smaller than 2 pixels in the training mode and 5 pixels in the cross-person test, much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. After combination with a calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on the GPU-based embedded software platform.
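The pixel errors reported for pupil tracking can be read as the mean Euclidean distance between predicted and annotated pupil centers. A minimal sketch of that metric; the sample coordinates are illustrative, not data from the study:

```python
import math

def mean_center_error(predicted, ground_truth):
    """Mean Euclidean distance (in pixels) between predicted and true centers."""
    assert len(predicted) == len(ground_truth)
    total = sum(math.dist(p, g) for p, g in zip(predicted, ground_truth))
    return total / len(predicted)

pred = [(100.0, 120.0), (98.0, 118.0)]
truth = [(101.0, 120.0), (98.0, 121.0)]
print(mean_center_error(pred, truth))  # (1 + 3) / 2 = 2.0
```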


2020 ◽  
Vol 12 (14) ◽  
pp. 2229
Author(s):  
Haojie Liu ◽  
Hong Sun ◽  
Minzan Li ◽  
Michihisa Iida

Maize plant detection was conducted in this study with the goals of targeted fertilization and reduction of fertilizer waste in weed spots and gaps between maize plants. The methods compared were color-index-based approaches and deep learning (DL). The four color indices used were excess green (ExG), excess red (ExR), ExG minus ExR, and the hue value from the HSV (hue, saturation, and value) color space, while the DL methods used were YOLOv3 and YOLOv3_tiny. For practical application, this study focused on comparing detection accuracy, robustness to complex field conditions, and detection speed. Detection accuracy was evaluated from the resulting images, which were divided into three categories: true positive, false positive, and false negative. Robustness was evaluated by comparing the average intersection over union of each detection method across different sub-datasets: the original subset and subsets with blur processing, increased brightness, and reduced brightness. Detection speed was evaluated in frames per second. Results demonstrated that the DL methods outperformed the color-index-based methods in detection accuracy and robustness to complex conditions, while they were inferior in detection speed. This research shows the application potential of deep-learning technology in maize plant detection. Future efforts are needed to improve detection speed for practical applications.
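The color indices named above have standard definitions in the literature on normalized chromatic coordinates: ExG = 2g − r − b and ExR = 1.4r − g, where r, g, b are the channel values divided by their sum. A minimal sketch of computing them (the thresholding convention in the comment is the common one, not necessarily the paper's):

```python
import numpy as np

def color_indices(img):
    """Compute ExG, ExR, and ExG - ExR from an RGB image array (H, W, 3).

    Uses normalized chromatic coordinates r, g, b so the indices are
    insensitive to overall brightness.
    """
    img = img.astype(np.float64)
    total = img.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (img[..., i] / total for i in range(3))
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg, exr, exg - exr

# Greenness is often obtained by thresholding ExG (or ExG - ExR) at 0.
pixel = np.array([[[0, 255, 0]]], dtype=np.uint8)
exg, exr, exgr = color_indices(pixel)
print(float(exg[0, 0]))  # 2.0 for a pure-green pixel
```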


CONVERTER ◽  
2021 ◽  
pp. 598-605
Author(s):  
Zhao Jianchao

Behind the rapid development of the Internet industry, Internet security has become a hidden danger. In recent years, the outstanding performance of deep learning in classification and behavior prediction on massive data has prompted research into how deep-learning technology can be applied. This paper therefore applies deep learning to intrusion detection, learning to classify network attacks. On the NSL-KDD dataset, the paper first applies traditional classification methods and several different deep-learning algorithms, then analyzes the correlations among the dataset, algorithm characteristics, and experimental classification results to identify which deep-learning algorithms perform comparatively well. Finally, a normalized coding algorithm is proposed. Experimental results show that the algorithm improves detection accuracy and reduces the false-alarm rate.
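The paper does not detail its normalized coding algorithm, but preprocessing NSL-KDD style data typically combines min-max normalization of numeric features with one-hot encoding of categorical ones (e.g. protocol type). A generic sketch under that assumption; the feature names and sample values are illustrative:

```python
def min_max_normalize(column):
    """Scale a list of numeric values into [0, 1]."""
    lo, hi = min(column), max(column)
    span = hi - lo
    if span == 0:
        return [0.0 for _ in column]
    return [(v - lo) / span for v in column]

def one_hot(column):
    """One-hot encode a list of categorical values (categories sorted for determinism)."""
    categories = sorted(set(column))
    index = {c: i for i, c in enumerate(categories)}
    rows = [[1.0 if index[v] == i else 0.0 for i in range(len(categories))]
            for v in column]
    return rows, categories

durations = [0, 2, 10]               # numeric feature, e.g. connection duration
protocols = ["tcp", "udp", "tcp"]    # categorical feature, e.g. protocol_type
print(min_max_normalize(durations))  # [0.0, 0.2, 1.0]
encoded, cats = one_hot(protocols)
print(cats, encoded)
```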


2020 ◽  
Author(s):  
Vineeth N Balasubramanian ◽  
Wei Guo ◽  
Akshay L Chandra ◽  
Sai Vikas Desai

In light of growing challenges in agriculture and ever-growing food demand across the world, efficient crop management techniques are necessary to increase crop yield. Precision agriculture techniques allow stakeholders to make effective, customized crop management decisions based on data gathered from monitoring crop environments. Plant phenotyping techniques play a major role in accurate crop monitoring, and advancements in deep learning have made previously difficult phenotyping tasks possible. This survey aims to introduce the reader to state-of-the-art research in deep plant phenotyping.


Author(s):  
Mohd Najib Ahmad ◽  
Abdul Rashid Mohamed Shariff ◽  
Ishak Aris ◽  
Izhal Abdul Halin ◽  
Ramle Moslim

The bagworm species Metisa plana is one of the major leaf-eating insect pests that attack oil palm in Peninsular Malaysia. Without treatment, a moderate attack may cause a 43% yield loss. In 2020, the economic loss due to bagworm attacks was recorded at around RM 180 million. Given this scenario, it is necessary to closely monitor bagworm outbreaks in infested areas, and the accuracy and precision of manual data collection are debatable due to human error. Hence, the objective of this study is to design and develop a specific machine vision system that incorporates an image processing algorithm according to its functional modes. In this regard, a device, the Automated Bagworm Counter or Oto-BaCTM, is the first in the world to be developed with embedded software based on graphic-processing-unit computation and a TensorFlow/Theano library setup for the trained dataset. The technology is based on deep learning with the Faster Regions with Convolutional Neural Networks (Faster R-CNN) technique for real-time object detection. The Oto-BaCTM uses an ordinary camera. Using self-developed deep-learning algorithms, motion tracking and false-color analysis were applied to detect and count the living and dead larvae and pupae populations per frond, corresponding to three major group or size classifications. In the first trial, the Oto-BaCTM yielded low detection accuracies for the living and dead G1 larvae (47.0% and 71.7%), G2 larvae (39.1% and 50.0%), and G3 pupae (30.1% and 20.9%). After improvements to the training dataset, the percentages increased in the next field trial, with increases of 40.5% and 7.0% for the living and dead G1 larvae, 40.1% and 29.2% for the living and dead G2 larvae, and 47.7% and 54.6% for the living and dead pupae.
The development of this ground-based device is a pioneer in the oil palm industry, reducing human error during censuses while promoting precision agriculture practice.


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1520 ◽  
Author(s):  
Qian Zhang ◽  
Yeqi Liu ◽  
Chuanyang Gong ◽  
Yingyi Chen ◽  
Huihui Yu

Deep learning (DL) is the state-of-the-art machine learning technology, showing superior performance in computer vision, bioinformatics, natural language processing, and other areas. As a modern image processing technology in particular, DL has been successfully applied to tasks such as object detection, semantic segmentation, and scene analysis. However, with the increasing prevalence of dense scenes in reality, their analysis becomes particularly challenging due to severe occlusion and the small size of objects. To overcome these problems, DL has recently been applied increasingly to dense scenes, including dense agricultural scenes. The purpose of this review is to explore the applications of DL for dense-scene analysis in agriculture. To better elaborate the topic, we first describe the types of dense scenes in agriculture as well as their challenges. Next, we introduce the popular deep neural networks used in these dense scenes. Then, the applications of these structures to various agricultural tasks, including recognition and classification, detection, and counting and yield estimation, are comprehensively introduced. Finally, the surveyed DL applications, their limitations, and future work on the analysis of dense images in agriculture are summarized.


2019 ◽  
Vol 11 (24) ◽  
pp. 2939 ◽  
Author(s):  
Lonesome Malambo ◽  
Sorin Popescu ◽  
Nian-Wei Ku ◽  
William Rooney ◽  
Tan Zhou ◽  
...  

Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for collecting high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency of image capture is producing massive datasets, which pose analysis challenges in providing the needed phenotypic data. To complement these high-throughput platforms, there is an increasing need in crop improvement for robust image analysis methods that can handle large amounts of image data. Approaches based on deep-learning models are currently the most promising and show unparalleled performance in analyzing large image datasets. This study developed and applied an image analysis approach based on a SegNet deep-learning semantic segmentation model to estimate sorghum panicle counts, critical phenotypic data in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage, and exposed ground using 462 labeled images of 250 × 250 pixels, and was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained after post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Panicle detection errors stemmed mainly from misclassifications during the semantic segmentation step and from mosaicking errors in the field orthomosaic.
Overall, the deep-learning semantic segmentation approach shows good promise; with a larger labeled dataset and extensive hyper-parameter tuning, it should provide even more robust and effective characterization of sorghum panicle counts.
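The post-processing step of removing small objects before counting can be sketched as a connected-component pass over a binary panicle mask. A minimal flood-fill version; the minimum-size threshold and toy mask are illustrative, not the study's values:

```python
from collections import deque

def count_components(mask, min_size=2):
    """Count 4-connected foreground components in a binary mask,
    ignoring components smaller than min_size pixels."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill this component and measure its size.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 0],
        [1, 0, 1, 1]]   # the lone bottom-left pixel is discarded as noise
print(count_components(mask))  # 2
```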


2020 ◽  
Vol 12 (7) ◽  
pp. 1128 ◽  
Author(s):  
Kaili Cao ◽  
Xiaoli Zhang

Tree species classification is important for the management and sustainable development of forest resources. Traditional object-oriented tree species classification methods, such as support vector machines, require manual feature selection and generally achieve low accuracy, whereas deep-learning technology can automatically extract image features for end-to-end classification. Therefore, a tree species classification method based on deep learning is proposed in this study. The method combines the semantic segmentation network U-Net and the feature extraction network ResNet into an improved Res-UNet network, in which the convolutional layers of U-Net are replaced by the residual units of ResNet, and linear interpolation is used instead of deconvolution in each upsampling layer. At the output of the network, conditional random fields are used for post-processing. This network model was used to perform classification experiments on airborne orthophotos of the Nanning Gaofeng Forest Farm in Guangxi, China, and the results were compared with those of the U-Net and ResNet networks. The proposed method exhibits higher classification accuracy, with an overall accuracy of 87%. Thus, the proposed model can effectively implement forest tree species classification and provides new opportunities for tree species classification in southern China.
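The key idea behind replacing plain convolutional layers with residual units is that each unit adds a learned correction to its input, y = ReLU(x + F(x)), so very deep networks train more easily. A toy numpy sketch of that skip connection with dense weight matrices standing in for convolutions; the weights are illustrative, not the paper's network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, w1, w2):
    """A minimal residual unit: y = ReLU(x + W2 @ ReLU(W1 @ x)).

    The identity skip connection means the unit only has to learn the
    residual F(x) = W2 @ ReLU(W1 @ x) on top of its input.
    """
    return relu(x + w2 @ relu(w1 @ x))

x = np.array([1.0, 2.0])
w1 = np.zeros((2, 2))  # with zero weights the unit reduces to the identity
w2 = np.zeros((2, 2))
print(residual_unit(x, w1, w2))  # [1. 2.]
```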


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Bhavik N. Patel ◽  
Louis Rosenberg ◽  
Gregg Willcox ◽  
David Baltaxe ◽  
Mimi Lyons ◽  
...  

Abstract Human-in-the-loop (HITL) AI may enable an ideal symbiosis of human experts and AI models, harnessing the advantages of both while overcoming their respective limitations. The purpose of this study was to investigate a novel collective intelligence technology designed to amplify the diagnostic accuracy of networked human groups by forming real-time systems modeled on biological swarms. Using small groups of radiologists, the swarm-based technology was applied to the diagnosis of pneumonia on chest radiographs and compared against human experts alone, as well as against two state-of-the-art deep-learning AI models. Our work demonstrates that both the swarm-based technology and the deep-learning technology achieved higher diagnostic accuracy than the human experts alone, and that when used in combination, the swarm-based and deep-learning technologies outperformed either method alone. The superior diagnostic accuracy of the combined HITL AI solution compared to radiologists and AI alone has broad implications for the surging clinical AI deployment and implementation strategies in future practice.


2021 ◽  
Author(s):  
Zhihao Tan ◽  
Jiawei Shi ◽  
Rongjie Lv ◽  
Qingyuan Li ◽  
Jing Yang ◽  
...  

Abstract Background: Cotton is one of the most economically important crops in the world, and the fertility of its male reproductive organs is a key determinant of yield. Anther dehiscence or indehiscence directly determines the probability of fertilization in cotton. Thus, rapid and accurate identification of cotton anther dehiscence status is important for judging anther growth status and promoting genetic breeding research. The development of computer vision technology and the advent of big data have prompted the application of deep-learning techniques to agricultural phenotype research. Therefore, two deep-learning models (Faster R-CNN and YOLOv5) were proposed to detect the number and dehiscence status of anthers. Results: The single-stage model based on YOLOv5 has higher recognition efficiency and can be deployed on mobile terminals, giving breeding researchers a more intuitive view of cotton anther dehiscence status. Moreover, three improvement strategies for the Faster R-CNN model were proposed, and the improved model has higher detection accuracy than the YOLOv5 model. We made four improvements to the Faster R-CNN model; after ensembling the four models, R2 reaches 0.8765 for "open", 0.8539 for "close", and 0.8481 for "all", higher than the prediction of any single model, so the ensemble can completely replace the manual counting method. We can use this model to quickly extract the dehiscence rate of cotton anthers under high-temperature (HT) conditions. In addition, the percentages of dehiscent anthers of 30 randomly selected cotton varieties were observed under normal and HT conditions through the ensemble of Faster R-CNN models and through manual observation. The results showed that HT decreased the percentage of dehiscent anthers to varying degrees across cotton lines, consistent with the manual method.
Conclusions: Deep-learning technology has been applied for the first time to cotton anther dehiscence status recognition in place of manual methods, making it possible to quickly screen HT-tolerant cotton varieties; it can help explore key genetic improvement genes in the future and promote cotton breeding and improvement.
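The R2 values reported for the ensemble are the coefficient of determination between predicted and manually observed anther counts. A minimal sketch of that statistic; the sample counts are illustrative, not data from the study:

```python
def r_squared(predicted, observed):
    """Coefficient of determination R^2 between predicted and observed counts."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Perfect agreement gives R^2 = 1; predicting the mean gives R^2 = 0.
print(r_squared([3, 5, 8], [3, 5, 8]))  # 1.0
```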

