Comparison of Object Detection Methods for Corn Damage Assessment Using Deep Learning

2020 ◽  
Vol 63 (6) ◽  
pp. 1969-1980
Author(s):  
Ali Hamidisepehr ◽  
Seyed V. Mirnezami ◽  
Jason K. Ward

Highlights:
- Corn damage detection was possible using advanced deep learning and computer vision techniques trained with images of simulated corn lodging.
- RetinaNet and YOLOv2 both worked well at identifying regions of lodged corn.
- Automating crop damage identification could provide useful information to producers and other stakeholders from visual-band UAS imagery.

Abstract. Severe weather events can cause large financial losses to farmers. Detailed information on the location and severity of damage will assist farmers, insurance companies, and disaster response agencies in making wise post-damage decisions. The goal of this study was a proof-of-concept to detect areas of damaged corn from aerial imagery using computer vision and deep learning techniques. A specific objective was to compare existing object detection algorithms to determine which is best suited for corn damage detection. Simulated corn lodging was used to create a training and analysis data set. An unmanned aerial system equipped with an RGB camera was used for image acquisition. Three popular object detectors (Faster R-CNN, YOLOv2, and RetinaNet) were assessed for their ability to detect damaged areas. Average precision (AP) was used to compare the object detectors. RetinaNet and YOLOv2 demonstrated robust capability for corn damage identification, with AP ranging from 73.24% to 98.43% and from 55.99% to 97.00%, respectively, across all conditions. Faster R-CNN did not perform as well as the other two models, with AP between 14.47% and 77.29% for all conditions. Detecting corn damage at later growth stages was more difficult for all three object detectors.

Keywords: Computer vision, Faster R-CNN, RetinaNet, Severe weather, Smart farming, YOLO.
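The AP figures above can be made concrete with a small sketch of 11-point interpolated average precision, a common formulation of the metric used to rank detectors such as Faster R-CNN, YOLOv2, and RetinaNet. The precision-recall values below are illustrative, not taken from the study.

```python
def average_precision(recalls, precisions):
    """11-point interpolated AP over recall levels 0.0, 0.1, ..., 1.0."""
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        # interpolated precision: the maximum precision at any recall >= r
        candidates = [p for rec, p in zip(recalls, precisions) if rec >= r]
        ap += max(candidates) if candidates else 0.0
    return ap / 11

# hypothetical precision-recall curve for one detector on one condition
recalls    = [0.1, 0.3, 0.5, 0.7, 0.9]
precisions = [1.0, 0.95, 0.85, 0.70, 0.50]
print(round(average_precision(recalls, precisions), 3))  # 0.727
```

Modern benchmarks often integrate over all recall points instead of 11, but the interpolation idea is the same.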

Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1174
Author(s):  
Ashish Kumar Gupta ◽  
Ayan Seal ◽  
Mukesh Prasad ◽  
Pritee Khanna

Detection and localization of regions of images that attract immediate human visual attention is currently an intensive area of research in computer vision. The capability of automatic identification and segmentation of such salient image regions has immediate consequences for applications in the fields of computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect the salient regions in images. These methods can be broadly divided into two categories based on their feature engineering mechanism: conventional and deep learning-based. In this survey, most of the influential advances in image-based SOD from both the conventional and deep learning-based categories are reviewed in detail. Relevant saliency modeling trends with key issues, core techniques, and the scope for future research work are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. Different metrics considered for assessing the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end.
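Two of the assessment metrics the survey covers are easy to sketch: mean absolute error (MAE) between a predicted saliency map and the binary ground truth, and the F-measure with beta^2 = 0.3, the weighting common in the SOD literature. Maps are flattened to lists here, and the pixel values are illustrative only.

```python
def mae(saliency, gt):
    """Mean absolute error; both inputs are flat lists of values in [0, 1]."""
    return sum(abs(s - g) for s, g in zip(saliency, gt)) / len(saliency)

def f_measure(saliency, gt, threshold=0.5, beta2=0.3):
    """F-measure after binarising the saliency map at a fixed threshold."""
    pred = [s >= threshold for s in saliency]
    tp = sum(1 for p, g in zip(pred, gt) if p and g)
    precision = tp / max(sum(pred), 1)
    recall = tp / max(sum(gt), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

saliency = [0.9, 0.2, 0.8, 0.1]   # predicted per-pixel saliency
gt       = [1, 0, 1, 0]           # binary ground-truth mask
print(round(mae(saliency, gt), 2))   # 0.15
print(f_measure(saliency, gt))       # 1.0
```

In practice the F-measure is usually reported as a maximum or adaptive value over many thresholds rather than at a single fixed one.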


Mekatronika ◽  
2020 ◽  
Vol 2 (2) ◽  
pp. 49-54
Author(s):  
Arzielah Ashiqin Alwi ◽  
Ahmad Najmuddin Ibrahim ◽  
Muhammad Nur Aiman Shapiee ◽  
Muhammad Ar Rahim Ibrahim ◽  
Mohd Azraai Mohd Razman ◽  
...  

Dynamic, fast-paced, and fast-changing gameplay, in which angle shooting (top and bottom corners) offers the best chance of a good goal, is a defining aspect of handball. When shots come from the narrow-angle area, the goalkeeper has trouble blocking the goal. Therefore, this research applies image processing to analyse shooting precision and detect the ball accurately at high speed. Participants had to complete 50 successful shots at each of four target locations in the handball goal. Computer vision was then implemented through a camera to identify the ball and to determine the accuracy of its position (floating, net tangle, and farthest or smallest), using object detection as the accuracy marker. The model was trained using the deep learning (DL) models YOLOv2, YOLOv3, and Faster R-CNN, and the best-precision models for ball detection accuracy were compared. It was found that the Faster R-CNN classifier performed best, producing 99% accuracy for all ball positions.
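The "accuracy marker" style of evaluation rests on intersection-over-union (IoU), the standard overlap measure for comparing a detected ball bounding box with an annotated target location. A minimal sketch, with made-up coordinates in `(x1, y1, x2, y2)` form:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 25/175 ≈ 0.143
```

A detection typically counts as correct when its IoU with the ground-truth box exceeds a threshold such as 0.5.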


2021 ◽  
Author(s):  
Antoine Bouziat ◽  
Sylvain Desroziers ◽  
Abdoulaye Koroko ◽  
Antoine Lechevallier ◽  
Mathieu Feraille ◽  
...  

Automation and robotics raise growing interest in the mining industry. If not already a reality, it is no longer science fiction to imagine autonomous robots routinely participating in the exploration and extraction of mineral raw materials in the near future. Among the various scientific and technical issues to be addressed towards this objective, this study focuses on automating the real-time characterisation of rock images captured in the field, either to discriminate rock types and mineral species or to detect small elements such as mineral grains or metallic nuggets. To do so, we investigate the potential of methods from the Computer Vision community, a subfield of Artificial Intelligence dedicated to image processing. In particular, we aim to assess the potential of Deep Learning approaches and convolutional neural networks (CNN) for the analysis of field sample pictures, highlighting key challenges before industrial use in operational contexts.

In a first initiative, we appraise Deep Learning methods to classify photographs of macroscopic rock samples into 12 lithological families. Using reference CNN architectures and a collection of 2,700 images, we achieve a prediction accuracy above 90% on new pictures of good photographic quality. Nonetheless, we then seek to improve the robustness of the method for on-the-fly field photographs. To do so, we train an additional CNN with a detection algorithm to automatically separate the rock sample from the background. We also introduce a more sophisticated classification method combining a set of several CNN with a decision tree. The CNN are specifically trained to recognise petrological features such as textures, structures, or mineral species, while the decision tree mimics the naturalist methodology for lithological identification.

In a second initiative, we evaluate Deep Learning techniques to spot and delimit specific elements in finer-scale images. We use a data set of carbonate thin sections with various species of microfossils. The data comes from a sedimentology study, but analogies can be drawn with igneous geology use cases. We train four state-of-the-art Deep Learning methods for object detection with a limited data set of 15 annotated images. The results on 130 other thin section images are then qualitatively assessed by expert geologists, and precision and inference times are quantitatively measured. The four models show good capabilities in detecting and categorising the microfossils. However, differences in accuracy and performance are underlined, leading to recommendations for comparable projects in a mining context.

Altogether, this study illustrates the power of Computer Vision and Deep Learning approaches to automate rock image analysis. However, to make the most of these technologies in mining activities, stimulating research opportunities lie in adapting the algorithms to the geological use cases, embedding as much geological knowledge as possible in the statistical models, and minimising the amount of training data to be manually interpreted beforehand.
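The "set of CNN plus a decision tree" idea can be sketched as follows: each hypothetical CNN head predicts one petrological attribute (texture, structure, key minerals), and a hand-written decision tree mimics a naturalist identification key over those attributes. The attribute names and rules below are invented for illustration, not taken from the study.

```python
def identify_lithology(texture, structure, minerals):
    """Toy naturalist-style identification key over CNN-predicted attributes."""
    if texture == "glassy":
        return "obsidian"
    if texture == "crystalline":
        if "quartz" in minerals and "feldspar" in minerals:
            # massive coarse-grained felsic rock vs. banded metamorphic one
            return "granite" if structure == "massive" else "gneiss"
        if "olivine" in minerals:
            return "basalt"
    if texture == "clastic":
        return "sandstone"
    return "unclassified"

# each argument would come from a dedicated CNN trained on that feature
print(identify_lithology("crystalline", "massive", {"quartz", "feldspar"}))  # granite
print(identify_lithology("glassy", "massive", set()))                        # obsidian
```

The appeal of this split is interpretability: a misclassification can be traced back to the individual attribute prediction that went wrong.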


2021 ◽  
Vol 63 (11) ◽  
pp. 1-5
Author(s):  
Hoang Anh Tuan Dang ◽  
Minh Thang Nguyen ◽  

Despite the increasing application of deep learning (DL) models in various socioeconomic domains such as financial analysis and forecasting, intelligent transport, self-driving, and disease diagnosis, the effective use of this technology to support agricultural cultivation is still limited. This paper introduces an implementation of the lightest state-of-the-art YOLOv5 architecture for automatic recognition of important growth stages of Cucumis melo L. from camera images collected in a greenhouse. This image identification initiative achieved an average F1-score of 96% in identifying the five growth stages of Cucumis melo L. using a limited set of training and testing data (2,818 images of Cucumis melo L. in total). These preliminary results lead to the conclusion that the YOLOv5 object detection and classification model is a truly lightweight and promising DL solution after the adoption of the transfer learning technique. Moreover, the YOLOv5 model performs well on edge devices, which may open up a new approach to real-time object detection and classification directly from a smartphone, Jetson Nano, or IP camera.
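The reported F1-score can be illustrated with a small sketch: compute F1 per growth stage from true positives, false positives, and false negatives, then macro-average over the five stages. The stage names and counts below are hypothetical, not the paper's data.

```python
def f1(tp, fp, fn):
    """F1-score from detection counts for one class."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# hypothetical per-stage counts: (true positives, false positives, false negatives)
stages = {
    "germination": (95, 5, 5),
    "vegetative":  (90, 10, 8),
    "flowering":   (97, 3, 2),
    "fruiting":    (92, 6, 9),
    "maturity":    (96, 4, 3),
}
macro_f1 = sum(f1(*counts) for counts in stages.values()) / len(stages)
print(round(macro_f1, 3))  # 0.945
```

Macro-averaging weights each growth stage equally regardless of how many samples it has, which is useful when some stages are underrepresented in the data.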


Author(s):  
S Gopi Naik

Abstract: The plan is to establish an integrated system that can manage high-quality visual information and also detect weapons quickly and efficiently. This is achieved by integrating ARM-based computer vision and optimization algorithms with deep neural networks able to detect the presence of a threat. The whole system is connected to a Raspberry Pi module, which captures a live broadcast and evaluates it using a deep convolutional neural network. Because real-time object identification is intimately tied to video and image analysis, systems built from sophisticated ensembles combining various low-level picture features with high-level information from object detection and scenario classifiers can see their performance quickly plateau. Deep learning models, which can learn semantic, high-level, deeper features, have been developed to overcome the issues present in optimization algorithms. This paper presents a review of deep learning-based object detection frameworks that use convolutional neural network layers for a better understanding of object detection. The Mobile-Net SSD model differs in network design, training methods, and optimization functions, among other things. Weapon detection has reduced the crime rate in suspicious areas; however, security is always a major concern in human life. The Raspberry Pi module with computer vision has been extensively used in the detection and monitoring of weapons. Given the growing demands of human safety, privacy protection, and the integration of live broadcasting systems that can detect and analyse images, monitoring suspicious areas is becoming indispensable in intelligence. This process uses a Mobile-Net SSD algorithm to achieve automatic weapon and object detection. Keywords: Computer Vision, Weapon and Object Detection, Raspberry Pi Camera, RTSP, SMTP, Mobile-Net SSD, CNN, Artificial Intelligence.
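The SMTP keyword above implies an e-mail alert when a weapon is detected. A minimal sketch of assembling such an alert with the standard library, assuming placeholder addresses and a hypothetical frame path; actually sending it would additionally require `smtplib.SMTP(...)` with real server credentials.

```python
from email.message import EmailMessage

def build_alert(camera_id, confidence, frame_path):
    """Assemble (but do not send) a weapon-detection alert e-mail."""
    msg = EmailMessage()
    msg["From"] = "detector@example.com"    # placeholder sender
    msg["To"] = "operator@example.com"      # placeholder recipient
    msg["Subject"] = f"Weapon detected on camera {camera_id} ({confidence:.0%})"
    msg.set_content(
        f"A weapon was detected with confidence {confidence:.2f}.\n"
        f"Saved frame: {frame_path}"
    )
    return msg

alert = build_alert("cam-01", 0.87, "/tmp/frame.jpg")
print(alert["Subject"])  # Weapon detected on camera cam-01 (87%)
```

In a deployed system this would be triggered from the detection loop, typically with a cooldown so one weapon sighting does not flood the operator's inbox.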


2018 ◽  
Vol 7 (3.34) ◽  
pp. 221
Author(s):  
Sooyoung Cho ◽  
Sang Geun Choi ◽  
Daeyeol Kim ◽  
Gyunghak Lee ◽  
Chae BongSohn

The performance of computer vision tasks has improved drastically with the application of deep learning. Tasks such as object recognition, object segmentation, and object tracking have approached super-human levels. Most of the algorithms were trained using supervised learning, and in general the performance of computer vision improves as the size of the data set grows. In this paper, we propose a data set generation method using Unity, one of the popular 3D engines; the collected data was labeled and used as a data set for the YOLO algorithm. The proposed method makes it easy to obtain the data necessary for learning. We classify 2D polymorphic objects and test them against various data using a deep learning model. In classification using a CNN and VGG-16, 90% accuracy was achieved, and using Tiny-YOLO, a variant of the YOLO algorithm, for object recognition, we achieved 78% accuracy. Finally, comparing virtual and real environments, accuracy ranged from 97 to 99 percent.
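A synthetic-data pipeline of this kind has to emit a label file alongside each rendered image. A sketch of converting a pixel-space bounding box into YOLO's label format (class index plus centre and size normalised to the image dimensions); the image size and box coordinates below are invented for illustration.

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) to a YOLO label line."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2 / img_w   # normalised box centre x
    cy = (y_min + y_max) / 2 / img_h   # normalised box centre y
    w = (x_max - x_min) / img_w        # normalised width
    h = (y_max - y_min) / img_h        # normalised height
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# a 3D engine knows each object's exact screen-space bounds, so labels
# like this can be written automatically for every rendered frame
print(to_yolo_label(0, (100, 200, 300, 400), 640, 480))
```

This automatic labelling is the main appeal of engine-generated data: ground truth comes for free instead of requiring manual annotation.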

