Unmanned Aerial System-Based Weed Mapping in Sod Production Using a Convolutional Neural Network

2021 ◽  
Vol 12 ◽  
Author(s):  
Jing Zhang ◽  
Jerome Maleski ◽  
David Jespersen ◽  
F. C. Waltz ◽  
Glen Rains ◽  
...  

Weeds are a persistent problem on sod farms, and herbicides to control different weed species are one of the largest chemical inputs. Recent advances in unmanned aerial systems (UAS) and artificial intelligence provide opportunities for weed mapping on sod farms. This study investigates weed type composition and area through both ground- and UAS-based weed surveys and trains a convolutional neural network (CNN) for identifying and mapping weeds in sod fields using UAS-based imagery and a high-level application programming interface (API) implementation (Fastai) of the PyTorch deep learning library. As measured by recall, the performance of the CNN was overall similar to, and in some classes (broadleaf and spurge) better than, that of human observers. In general, the CNN detected broadleaf, grass weeds, spurge, sedge, and no weeds at a precision between 0.68 and 0.87, 0.57 and 0.82, 0.68 and 0.83, 0.66 and 0.90, and 0.80 and 0.88, respectively, when using UAS images at 0.57–1.28 cm per pixel resolution. Recall ranges for the five classes were 0.78–0.93, 0.65–0.87, 0.82–0.93, 0.52–0.79, and 0.94–0.99. Additionally, this study demonstrates that a CNN can achieve precision and recall above 0.9 at detecting different types of weeds during turf establishment when the weeds are mature. The CNN is limited by the image resolution, and more than one model may be needed in practice to improve the overall performance of weed mapping.
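The per-class precision and recall figures reported above can be computed from paired ground-truth and predicted labels. A minimal sketch in plain Python (the class names and evaluation code here are illustrative assumptions, not the authors' implementation):

```python
def per_class_metrics(y_true, y_pred, classes):
    """Per-class (precision, recall) from parallel ground-truth/prediction lists."""
    metrics = {}
    for c in classes:
        # Count true positives, false positives, and false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics[c] = (precision, recall)
    return metrics

# Hypothetical five-class label set matching the study's categories.
CLASSES = ["broadleaf", "grass_weed", "spurge", "sedge", "no_weed"]
```

A recall above 1:1 with human annotators, as reported for broadleaf and spurge, simply means the CNN missed fewer true instances of those classes than the human surveyors did.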

Drones ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 4
Author(s):  
Donovan Flores ◽  
Iván González-Hernández ◽  
Rogelio Lozano ◽  
Jesus Manuel Vazquez-Nicolas ◽  
Jorge Luis Hernandez Toral

We present an automatic agave detection method for counting plants based on aerial data from a UAV (Unmanned Aerial Vehicle). Our objective is to autonomously count the number of agave plants in an area to aid yield management. An orthomosaic is obtained from agave plantations, which is then used to create a database. This database is in turn used to train a Convolutional Neural Network (CNN). The proposed method is based on computer image processing, and the CNN increases the detection performance of the approach. The main contribution of the present paper is a method for agave plant detection with a high level of precision. In order to test the proposed method in a real agave plantation, we developed a UAV platform equipped with several sensors to achieve accurate counting, so our prototype can safely track a desired path while detecting and counting agave plants. For comparison purposes, we performed the same application using a simpler algorithm. The results show that our proposed algorithm has better performance, reaching an F1 score of 0.96 as opposed to 0.57 for the Haar algorithm. The experimental results suggest that the proposed algorithm is robust and has considerable potential to help farmers manage agave agroecosystems.
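The F1 scores compared above are derived from how many detections match ground-truth plants. A minimal sketch of IoU-based matching and F1 computation (the matching protocol, IoU threshold, and counts are illustrative assumptions; the paper does not specify its evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def match_detections(preds, truths, thresh=0.5):
    """Greedily match predicted boxes to ground truth; return (tp, fp, fn)."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    return tp, len(preds) - tp, len(unmatched)

def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

For example, 48 correct detections with 2 false positives and 2 missed plants would yield precision = recall = F1 = 0.96, matching the reported score.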


2018 ◽  
Vol 1085 ◽  
pp. 042040
Author(s):  
Adriano Di Florio ◽  
Felice Pantaleo ◽  
Antonio Carta

2020 ◽  
Vol 10 (15) ◽  
pp. 5333
Author(s):  
Anam Manzoor ◽  
Waqar Ahmad ◽  
Muhammad Ehatisham-ul-Haq ◽  
Abdul Hannan ◽  
Muhammad Asif Khan ◽  
...  

Emotions are a fundamental part of human behavior and can be stimulated in numerous ways. In daily life, we come across many types of objects, such as cakes, crabs, televisions, and trees, which may excite certain emotions. Likewise, object images that we see and share on different platforms are also capable of expressing or inducing human emotions. Inferring emotion tags from these object images has great significance, as it can play a vital role in recommendation systems, image retrieval, human behavior analysis, and advertisement applications. The existing schemes for emotion tag perception are based on visual features, like the color and texture of an image, which are adversely affected by lighting conditions. The main objective of our proposed study is to address this problem by introducing a novel idea of inferring emotion tags from images based on object-related features. In this aspect, we first created an emotion-tagged dataset from the publicly available object detection dataset (i.e., "Caltech-256") using subjective evaluation from 212 users. Next, we used a convolutional neural network-based model to automatically extract high-level features from object images for recognizing nine emotion categories: amusement, awe, anger, boredom, contentment, disgust, excitement, fear, and sadness. Experimental results on our emotion-tagged dataset demonstrate the success of our proposed idea in terms of accuracy, precision, recall, specificity, and F1-score. Overall, the proposed scheme achieved an accuracy rate of approximately 85% and 79% using top-level and bottom-level emotion tagging, respectively. We also performed a gender-based analysis of inferred emotion tags and observed that male and female subjects differ in their emotion perception across object categories.
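One plausible way to build such an emotion-tagged dataset from per-user votes is majority voting per image. A minimal sketch (the aggregation rule is an assumption; the paper does not describe its exact tagging protocol):

```python
from collections import Counter

# The nine emotion categories named in the study.
EMOTIONS = [
    "amusement", "awe", "anger", "boredom", "contentment",
    "disgust", "excitement", "fear", "sadness",
]

def majority_tag(votes):
    """Assign an image the emotion tag chosen by the most users.

    Votes outside the known category set are ignored; returns None
    when no valid votes remain.
    """
    counts = Counter(v for v in votes if v in EMOTIONS)
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```

In practice a tie-breaking rule and a minimum-agreement threshold would also be needed; those details are not specified in the abstract.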


2019 ◽  
Vol 9 (16) ◽  
pp. 3312 ◽  
Author(s):  
Zhu ◽  
Ge ◽  
Liu

In order to realize non-destructive intelligent identification of weld surface defects, an intelligent recognition method based on deep learning is proposed, which combines a convolutional neural network (CNN) with a random forest. First, weld surface defect images are collected and preprocessed by image enhancement and threshold segmentation, and a database of weld surface defects is established from the pre-processed images. Second, high-level features are learned automatically by the CNN, and a random forest is trained on these extracted features to predict the classification results. Finally, comparative experiments are performed on the weld surface defect database. The results show that the accuracy of the method combining the CNN and random forest can reach 0.9875, demonstrating that the method is effective and practical.
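The threshold segmentation step of the preprocessing pipeline can be sketched with Otsu's method, a standard global-thresholding algorithm; the paper does not state which thresholding algorithm it uses, so this choice is an assumption:

```python
def otsu_threshold(pixels):
    """Otsu's method: pick the 0-255 threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0  # running weight and intensity sum of the background class
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (total_sum - sum_b) / w_f    # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment(pixels, t):
    """Binary mask: 1 where a pixel exceeds the threshold."""
    return [1 if p > t else 0 for p in pixels]
```

The resulting binary mask isolates candidate defect regions before the images enter the CNN feature extractor.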


2018 ◽  
Vol 8 (12) ◽  
pp. 2367 ◽  
Author(s):  
Hongling Luo ◽  
Jun Sang ◽  
Weiqun Wu ◽  
Hong Xiang ◽  
Zhili Xiang ◽  
...  

In recent years, trampling events due to overcrowding have occurred frequently, which creates a demand for crowd counting in high-density environments. At present, there are few studies on monitoring crowds in large-scale crowded environments, and existing approaches suffer from technical drawbacks and a lack of mature systems. Aiming to solve the crowd counting problem for high-density crowds in complex environments, a feature fusion-based deep convolutional neural network method, FF-CNN (Feature Fusion of Convolutional Neural Network), was proposed in this paper. The proposed FF-CNN maps a crowd image to its crowd density map and then obtains the head count by integration. Geometry-adaptive kernels were adopted to generate high-quality density maps, which were used as ground truths for network training. The deconvolution technique was used to fuse high-level and low-level features to obtain richer features, and two loss functions, i.e., density map loss and absolute count loss, were used for joint optimization. In order to increase sample diversity, the original images were randomly cropped for each iteration. The experimental results of FF-CNN on the ShanghaiTech public dataset showed that the fusion of low-level and high-level features extracts richer features that improve the precision of density map estimation and further improve the accuracy of crowd counting.
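The ground-truth generation step above — placing a unit-mass Gaussian at each annotated head so that integrating the density map recovers the head count — can be sketched as follows. Fixed-bandwidth kernels are used here for simplicity; the paper's geometry-adaptive kernels instead vary the bandwidth per head according to local crowd geometry:

```python
import math

def gaussian_kernel(size, sigma):
    """Square 2-D Gaussian kernel normalized so its entries sum to 1."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def density_map(shape, heads, size=15, sigma=4.0):
    """Sum one unit-mass Gaussian per annotated head position (row, col).

    For heads away from the image border, integrating the map gives the
    head count.
    """
    h, w = shape
    dmap = [[0.0] * w for _ in range(h)]
    half = size // 2
    kernel = gaussian_kernel(size, sigma)
    for (hy, hx) in heads:
        for dy in range(size):
            for dx in range(size):
                y, x = hy - half + dy, hx - half + dx
                if 0 <= y < h and 0 <= x < w:
                    dmap[y][x] += kernel[dy][dx]
    return dmap
```

The network is then trained to regress this map, and the predicted count is simply the sum of all predicted density values.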


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4931
Author(s):  
Che-Chou Shen ◽  
Jui-En Yang

In ultrasound B-mode imaging, speckle noise decreases the accuracy of estimating the tissue echogenicity of imaged targets from the amplitude of the echo signals. In addition, since the granular size of the speckle pattern is affected by the point spread function (PSF) of the imaging system, the resolution of the B-mode image remains limited, and the boundaries of tissue structures often become blurred. This study proposed a convolutional neural network (CNN) to remove speckle noise and improve spatial resolution in order to reconstruct an ultrasound tissue echogenicity map. The CNN model is trained using an in silico simulation dataset and tested with experimentally acquired images. Results indicate that the proposed CNN method can effectively eliminate speckle noise in the background of B-mode images while retaining the contours and edges of tissue structures. The contrast and the contrast-to-noise ratio of the reconstructed echogenicity map increased from 0.22/2.72 to 0.33/44.14, and the lateral and axial resolutions also improved from 5.9/2.4 to 2.9/2.0, respectively. Compared with other post-processing filtering methods, the proposed CNN method provides a better approximation to the original tissue echogenicity by completely removing speckle noise and improving image resolution, together with the capability for real-time implementation.
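The contrast and contrast-to-noise ratio (CNR) figures above are computed from pixel statistics in a target region and a background region. A minimal sketch using one common set of definitions (the paper's exact formulas are not given, so these definitions are assumptions):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def contrast_and_cnr(target, background):
    """Contrast as absolute difference of region means; CNR normalizes
    that difference by the combined standard deviation of both regions."""
    mt, mb = mean(target), mean(background)
    c = abs(mt - mb)
    cnr = c / math.sqrt(variance(target) + variance(background))
    return c, cnr
```

Under these definitions, the large jump in CNR (2.72 to 44.14) reflects the near-total removal of speckle variance from the background, while the contrast gain reflects better preservation of the mean echogenicity difference.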

