Oto-BaCTM: An Automated Artificial Intelligence (AI) Detector and Counter for Bagworm (Lepidoptera: Psychidae) Census

Author(s):  
Mohd Najib Ahmad ◽  
Abdul Rashid Mohamed Shariff ◽  
Ishak Aris ◽  
Izhal Abdul Halin ◽  
Ramle Moslim

The bagworm Metisa plana is one of the major leaf-eating insect pests that attack oil palm in Peninsular Malaysia. Left untreated, a moderate attack may cause 43% yield loss. In 2020, the economic loss due to bagworm attacks was recorded at around RM 180 million. Based on this scenario, it is necessary to closely monitor bagworm outbreaks in infested areas. The accuracy and precision of manual data collection are debatable due to human error. Hence, the objective of this study was to design and develop a specific machine vision system that incorporates an image processing algorithm according to its functional modes. In this regard, a device, the Automated Bagworm Counter or Oto-BaCTM, is the first in the world to be developed with embedded software based on graphic processing unit computation and a TensorFlow/Theano library setup for the trained dataset. The technology is based on deep learning with the Faster Region-based Convolutional Neural Network (Faster R-CNN) technique for real-time object detection. The Oto-BaCTM uses an ordinary camera. Using self-developed deep learning algorithms, motion-tracking and false colour analysis were applied to detect and count the living and dead larvae and pupae per frond, respectively, corresponding to three major group or size classifications. In the first trial, the Oto-BaCTM yielded low detection accuracies for the living and dead G1 larvae (47.0% & 71.7%), G2 larvae (39.1% & 50.0%) and G3 pupae (30.1% & 20.9%). After improvements to the training dataset, the percentages increased in the next field trial, by increments of 40.5% and 7.0% for the living and dead G1 larvae, 40.1% and 29.2% for the living and dead G2 larvae, and 47.7% and 54.6% for the living and dead pupae. The ground-based device is a pioneer in the oil palm industry; it reduces human error when conducting censuses while promoting precision agriculture practice.
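
The detect-and-count step can be pictured as a per-class tally over an object detector's output. Below is a minimal sketch using torchvision's Faster R-CNN for concreteness (the paper's implementation is TensorFlow/Theano-based, so the framework, class names, and confidence threshold here are illustrative assumptions, not the authors' configuration):

```python
# Hypothetical sketch: tally Faster R-CNN detections into living/dead counts
# for the three bagworm size groups (G1/G2/G3). Not the authors' code.
import torch
import torchvision

CLASSES = ["bg", "g1_live", "g1_dead", "g2_live", "g2_dead", "g3_live", "g3_dead"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=len(CLASSES))  # weights would come from a trained checkpoint
model.eval()

def count_bagworms(image, score_threshold=0.5):
    """Run detection on one frond image (3xHxW float tensor in [0, 1]) and tally per class."""
    with torch.no_grad():
        output = model([image])[0]
    counts = {name: 0 for name in CLASSES[1:]}
    for label, score in zip(output["labels"], output["scores"]):
        if score >= score_threshold:
            counts[CLASSES[int(label)]] += 1
    return counts
```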

Agriculture ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 1265
Author(s):  
Mohd Najib Ahmad ◽  
Abdul Rashid Mohamed Shariff ◽  
Ishak Aris ◽  
Izhal Abdul Halin

The bagworm is a vicious leaf-eating insect pest that threatens oil palm plantations in Malaysia. Defoliation of approximately 10% to 13% due to bagworm attack can cause about 33% to 40% yield loss over 2 years. Monitoring and detecting bagworm populations in oil palm plantations is therefore required as a preliminary step to ensure proper planning of control actions in these areas. Hence, an image processing algorithm for the detection and counting of Metisa plana Walker, a species of Malaysia's local bagworm, using image segmentation was researched and completed. The color and shape features from the segmented images for real-time object detection showed an average detection accuracy of 40% and 34% at camera distances of 30 cm and 50 cm, respectively. After improvements to the training dataset and the marking of detected bagworms with bounding boxes, a deep learning algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) was applied, raising the detection accuracy up to 100% at a camera distance of 30 cm in close conditions. The proposed solution is also designed to distinguish between the living and dead larvae of the bagworms using motion detection, which resulted in approximately 73–100% accuracy at a camera distance of 30 cm in close conditions. Through false color analysis, distinct differences in pixel count based on slope were observed for dead and live pupae at 630 nm and 940 nm, with slopes of 0.38 and 0.28, respectively. The higher pixel count and slope corresponded to the dead pupae, while the lower pixel count and slope represented the living pupae.
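
One plausible reading of the false-color slope test is sketched below: count the highlighted pixels at the two wavelengths, take the slope between the counts, and classify against a cut-off between the reported 0.28 (living) and 0.38 (dead) slopes. The intensity threshold, normalisation, and 0.33 cut-off are all assumptions for illustration:

```python
# Hedged sketch of a slope-based live/dead pupa classifier; the exact
# pixel-count definition and cut-off in the paper may differ.
import numpy as np

def pixel_count(image, threshold=128):
    """Count pixels whose intensity exceeds a threshold in a greyscale image."""
    return int(np.count_nonzero(image >= threshold))

def classify_pupa(img_630nm, img_940nm, cutoff=0.33):
    """Slope of pixel count across the two wavelengths; steeper slope -> dead."""
    n630 = pixel_count(img_630nm)
    n940 = pixel_count(img_940nm)
    slope = (n940 - n630) / (940.0 - 630.0)  # counts per nm
    # Reported slopes: ~0.38 for dead pupae, ~0.28 for living pupae.
    return "dead" if slope >= cutoff else "living"
```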


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing due to its all-day, all-weather capability. Ship monitoring in national territorial waters supports national maritime law enforcement, the implementation of maritime traffic control, and the maintenance of national maritime security, so ship detection has been a research hot spot and focus. Since the shift from traditional detection methods to methods combined with deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose more complex and computationally intensive strategies, while transplanting optical image detection has ignored the low signal-to-noise ratio, low resolution, single-channel nature and other characteristics imposed by the SAR imaging principle. Constantly pursuing detection accuracy while ignoring detection speed and the ultimate deployment of the algorithm, almost all algorithms rely on powerful clustered desktop GPUs, which cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of image information and the network's ability to extract features; it is also based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework for modeling architecture and training models. The YOLO-V4-light network was tailored for real-time use and deployment, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the loss of accuracy due to light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical application potential in maritime safety monitoring and emergency rescue.
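
A minimal sketch of one way to read the multi-channel fusion idea: expand the single-channel SAR amplitude image into three complementary channels (raw, despeckled, edge map) so the three-channel detector input carries more information. The specific filters below are assumptions for illustration; the paper's fusion may differ:

```python
# Hedged sketch: build a 3-channel detector input from a 1-channel SAR image.
import cv2
import numpy as np

def fuse_channels(sar_gray):
    """sar_gray: uint8 HxW SAR amplitude image -> HxWx3 array for the detector."""
    despeckled = cv2.medianBlur(sar_gray, 5)         # crude speckle suppression
    edges = cv2.Canny(despeckled, 50, 150)           # coarse structure channel
    return np.dstack([sar_gray, despeckled, edges])  # stack into pseudo-RGB
```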


2021 ◽  
Author(s):  
Sung Hyun Noh ◽  
Chansik An ◽  
Dain Kim ◽  
Seung Hyun Lee ◽  
Min-Yung Chang ◽  
...  

Abstract Background A computer algorithm that automatically detects sacroiliac joint abnormalities on plain radiographs would help radiologists avoid missing sacroiliitis. This study aimed to develop and validate a deep learning model to detect and diagnose sacroiliitis on plain radiographs in young patients with low back pain. Methods This Institutional Review Board-approved retrospective study included 478 and 468 plain radiographs from 241 and 433 young (< 40 years) patients who complained of low back pain with and without ankylosing spondylitis, respectively. They were randomly split into training and test datasets at a ratio of 8:2. Radiologists reviewed the images, labeled the coordinates of a bounding box, and determined the presence or absence of sacroiliitis for each sacroiliac joint. We fine-tuned and optimized the EfficientDet-D4 object detection model pre-trained on the COCO 2017 dataset using the training dataset and validated the final model on the test dataset. Results The mean average precision, an evaluation metric for object detection accuracy, was 0.918 at 0.5 intersection over union. In the diagnosis of sacroiliitis, the area under the curve, sensitivity, specificity, accuracy, and F1-score were 0.932 (95% confidence interval, 0.903–0.961), 96.9% (92.9–99.0), 86.8% (81.5–90.9), 91.1% (87.7–93.7), and 90.2% (85.0–93.9), respectively. Conclusions EfficientDet, a deep learning-based object detection algorithm, could be used to automatically diagnose sacroiliitis on plain radiographs.
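
The per-joint diagnostic metrics reported above all derive from the four confusion counts. A small helper, as a sketch rather than the authors' evaluation code:

```python
# Compute sensitivity, specificity, accuracy, and F1 from confusion counts.
def diagnostic_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)              # recall on sacroiliitis-positive joints
    specificity = tn / (tn + fp)              # correct rejection of normal joints
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}
```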


Author(s):  
Dima M. Alalharith ◽  
Hajar M. Alharthi ◽  
Wejdan M. Alghamdi ◽  
Yasmine M. Alsenbel ◽  
Nida Aslam ◽  
...  

Computer-based technologies play a central role in the dentistry field, as they present many methods for diagnosing and detecting various diseases, such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. In this study, a total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (Faster R-CNN) models using the ResNet-50 Convolutional Neural Network (CNN) were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed model. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study proved the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential usability in the field of dentistry and in reducing the severity of periodontal disease globally through preemptive, non-invasive diagnosis.
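
The two-stage design can be sketched as: stage one locates teeth to define the ROI, stage two searches for inflammation inside it. The sketch below uses torchvision-style detector outputs; `teeth_model`, `inflammation_model`, and the union-of-boxes cropping are illustrative stand-ins, not the authors' implementation:

```python
# Hedged sketch of a two-stage Faster R-CNN pipeline (teeth ROI -> inflammation).
import torch

def detect_inflammation(image, teeth_model, inflammation_model, score_thr=0.5):
    """image: 3xHxW float tensor in [0, 1]; returns inflammation boxes in the ROI."""
    with torch.no_grad():
        teeth = teeth_model([image])[0]                  # stage 1: locate teeth
    boxes = teeth["boxes"][teeth["scores"] >= score_thr]
    if len(boxes) == 0:
        return []
    # ROI assumed here to be the union of all tooth boxes [x1, y1, x2, y2]
    x0, y0 = int(boxes[:, 0].min()), int(boxes[:, 1].min())
    x1, y1 = int(boxes[:, 2].max()), int(boxes[:, 3].max())
    roi = image[:, y0:y1, x0:x1]
    with torch.no_grad():
        found = inflammation_model([roi])[0]             # stage 2: inflammation
    return found["boxes"][found["scores"] >= score_thr]
```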


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1058 ◽  
Author(s):  
Yang-Yang Zheng ◽  
Jian-Lei Kong ◽  
Xue-Bo Jin ◽  
Xiao-Yi Wang ◽  
Min Zuo

Intelligence has been considered the major challenge in promoting the economic potential and production efficiency of precision agriculture. To apply advanced deep-learning technology to various agricultural tasks in online and offline ways, a large number of crop vision datasets with domain-specific annotation are urgently needed. To encourage further progress under challenging realistic agricultural conditions, we present the CropDeep species classification and detection dataset, consisting of 31,147 images with over 49,000 annotated instances from 31 different classes. In contrast to existing vision datasets, the images were collected with different cameras and equipment in greenhouses and captured in a wide variety of situations. The dataset features visually similar species and periodic changes with more representative annotations, which support a stronger benchmark for deep-learning-based classification and detection. To further verify the application prospects, we provide extensive baseline experiments using state-of-the-art deep-learning classification and detection models. Results show that current deep-learning-based methods perform well, with classification accuracy over 99%, but achieve only 92% detection accuracy, illustrating the difficulty of the dataset and the room for improvement in state-of-the-art deep-learning models when applied to crop production and management. Specifically, we suggest that the YOLOv3 network has good potential for application in agricultural detection tasks.
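
The gap between 99% classification and 92% detection accuracy reflects the stricter matching criterion in detection: a prediction must both name the right class and localize it. A minimal sketch of that criterion under the common IoU ≥ 0.5 convention (data structures here are illustrative):

```python
# Hedged sketch: a detection counts as correct when the class matches and
# the predicted box overlaps the ground truth at IoU >= 0.5.
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_correct(pred, truth, thr=0.5):
    """pred/truth: (class_id, box) pairs."""
    return pred[0] == truth[0] and iou(pred[1], truth[1]) >= thr
```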


2020 ◽  
Author(s):  
Zongfeng Yang ◽  
Shang Gao ◽  
Feng Xiao ◽  
Ganghua Li ◽  
Yangfeng Ding ◽  
...  

Abstract Background: Identification and characterization of new traits with a sound physiological foundation is essential for crop breeding and management. Deep learning has been widely used in image data analysis to explore spatial and temporal information on crop growth and development, thus strengthening the power of identification of physiological traits. This study aims to develop a novel trait that indicates the source and sink relation in japonica rice based on deep learning. Results: We applied a deep learning approach to accurately segment leaf and panicle, and subsequently developed the GvCrop procedure to calculate the leaf-to-panicle ratio (LPR) of rice populations during grain filling. Images of the training dataset were captured in field experiments, with large variations in camera shooting angle, the elevation and azimuth angles of the sun, rice genotype, and plant phenological stages. Accurately labeled by manually annotating all panicle and leaf regions, the resulting dataset was used to train FPN-Mask (Feature Pyramid Network Mask) models, consisting of a backbone network and a task-specific sub-network. The model with the highest accuracy was then selected to study the variations in LPR among 192 rice germplasms and among agronomical practices. Despite the challenging field conditions, FPN-Mask models achieved a high detection accuracy, with pixel accuracy being 0.99 for panicles and 0.98 for leaves. The calculated LPRs showed large spatial and temporal variations as well as genotypic differences. Conclusion: Deep learning techniques can achieve high accuracy in simultaneously detecting panicle and leaf data from complex rice field images. The proposed FPN-Mask model is applicable for detecting and quantifying crop performance under field conditions. The newly identified LPR trait should provide a high-throughput protocol for breeders to select superior rice cultivars, as well as for agronomists to precisely manage field crops towards a good balance of source and sink.
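
Once the segmentation model assigns a class to every pixel, the LPR itself reduces to a ratio of pixel counts. A sketch under the assumption that LPR is the leaf-pixel to panicle-pixel ratio (the label ids are hypothetical):

```python
# Hedged sketch: leaf-to-panicle ratio from a per-pixel segmentation mask.
import numpy as np

LEAF, PANICLE = 1, 2   # hypothetical class ids in the segmentation output

def leaf_to_panicle_ratio(mask):
    """mask: HxW integer array of per-pixel class labels."""
    leaf_px = np.count_nonzero(mask == LEAF)
    panicle_px = np.count_nonzero(mask == PANICLE)
    return leaf_px / panicle_px if panicle_px else float("inf")
```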


2021 ◽  
Vol 8 ◽  
Author(s):  
Mohamed Elgendi ◽  
Muhammad Umer Nasir ◽  
Qunfeng Tang ◽  
David Smith ◽  
John-Paul Grenier ◽  
...  

Chest X-ray imaging technology used for the early detection and screening of COVID-19 pneumonia is both accessible worldwide and affordable compared to other non-invasive technologies. Additionally, deep learning methods have recently shown remarkable results in detecting COVID-19 on chest X-rays, making it a promising screening technology for COVID-19. Deep learning relies on a large amount of data to avoid overfitting. While overfitting can result in perfect modeling on the original training dataset, it can fail to achieve high accuracy on a new testing dataset. In the image processing field, an image augmentation step (i.e., adding more training data) is often used to reduce overfitting on the training dataset and improve prediction accuracy on the testing dataset. In this paper, we examined the impact of geometric augmentations as implemented in several recent publications for detecting COVID-19. We compared the performance of 17 deep learning algorithms with and without different geometric augmentations. We empirically examined the influence of augmentation with respect to detection accuracy, dataset diversity, augmentation methodology, and network size. Contrary to expectation, our results show that removing the recently used geometric augmentation steps actually improved the Matthews correlation coefficient (MCC) of the 17 models. The MCC without augmentation (MCC = 0.51) outperformed four recent geometric augmentations (MCC = 0.47 for Data Augmentation 1, MCC = 0.44 for Data Augmentation 2, MCC = 0.48 for Data Augmentation 3, and MCC = 0.49 for Data Augmentation 4). When we retrained a recently published deep learning model without augmentation on the same dataset, the detection accuracy increased significantly, with McNemar's $\chi^2 = 163.2$ and a p-value of 2.23 × 10−37. This is an interesting finding that may improve current deep learning algorithms using geometric augmentations for detecting COVID-19. We also provide clinical perspectives on geometric augmentation to consider regarding the development of a robust COVID-19 X-ray-based detector.
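
The two statistics in the comparison are standard and easy to reproduce: the MCC summarizes each model's confusion matrix, and McNemar's test compares the paired disagreements between the with- and without-augmentation models. A sketch (variable names are illustrative):

```python
# MCC via scikit-learn, plus a continuity-corrected McNemar statistic
# computed on paired predictions from two models over the same test set.
from sklearn.metrics import matthews_corrcoef

def mcnemar_chi2(y_true, pred_a, pred_b):
    """McNemar chi-squared with continuity correction on paired predictions."""
    only_a_wrong = sum(a != t and b == t for t, a, b in zip(y_true, pred_a, pred_b))
    only_b_wrong = sum(a == t and b != t for t, a, b in zip(y_true, pred_a, pred_b))
    return (abs(only_a_wrong - only_b_wrong) - 1) ** 2 / (only_a_wrong + only_b_wrong)

# Usage (hypothetical arrays):
# mcc_no_aug = matthews_corrcoef(y_true, pred_no_aug)
# chi2 = mcnemar_chi2(y_true, pred_aug, pred_no_aug)
```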


2018 ◽  
Vol 10 (11) ◽  
pp. 1690 ◽  
Author(s):  
M Bah ◽  
Adel Hafiane ◽  
Raphael Canals

In recent years, weeds have been responsible for most agricultural yield losses. To deal with this threat, farmers resort to spraying fields uniformly with herbicides. This method not only requires huge quantities of herbicides but also impacts the environment and human health. One way to reduce the cost and environmental impact is to allocate the right dose of herbicide to the right place at the right time (precision agriculture). Nowadays, unmanned aerial vehicles (UAVs) are becoming an interesting acquisition system for weed localization and management due to their ability to obtain images of the entire agricultural field with very high spatial resolution and at low cost. However, despite significant advances in UAV acquisition systems, the automatic detection of weeds remains a challenging problem because of their strong similarity to the crops. Recently, deep learning approaches have shown impressive results in different complex classification problems. However, this approach needs a certain amount of training data, and creating large agricultural datasets with pixel-level annotations by an expert is an extremely time-consuming task. In this paper, we propose a novel fully automatic learning method using convolutional neural networks (CNNs) with an unsupervised training dataset collection for weed detection from UAV images. The proposed method comprises three main phases. First, we automatically detect the crop rows and use them to identify the inter-row weeds. In the second phase, the inter-row weeds are used to constitute the training dataset. Finally, CNNs are trained on this dataset to build a model able to detect the crop and the weeds in the images. The results obtained are comparable to those of traditional supervised training data labeling, with differences in accuracy of 1.5% in the spinach field and 6% in the bean field.
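
The first phase can be sketched as: segment vegetation with a greenness index, then find the dominant row lines so vegetation falling between rows can be labeled as weed without manual annotation. The excess-green index and Hough transform below are standard choices offered as an illustration; the paper's exact row-detection method may differ, and the thresholds are assumptions:

```python
# Hedged sketch: vegetation mask via excess-green (ExG) + Otsu, then crop-row
# line detection with a probabilistic Hough transform.
import cv2
import numpy as np

def vegetation_mask(bgr):
    """ExG = 2g - r - b on a BGR image, thresholded with Otsu -> binary mask."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def crop_row_lines(mask):
    """Detect dominant straight lines (crop rows) in the vegetation mask."""
    return cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=200,
                           minLineLength=300, maxLineGap=50)
```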


2021 ◽  
Author(s):  
Amran Hossain ◽  
Mohammad Tariqul Islam ◽  
Ali F. Almutairi

Abstract Automated classification and detection of brain abnormalities like tumors from microwave head images is essential for investigating and monitoring disease progression. This paper presents the automatic classification and detection of human brain abnormalities in microwave head images using the deep learning-based YOLOv5 model. YOLOv5 is a fast object detection model with a lightweight computational architecture and high accuracy. At the beginning, backscattered signals are collected from the implemented 3D wideband nine-antenna array-based microwave head imaging (MWHI) system, where one antenna operates as a transmitter and the remaining eight antennas operate as receivers. In this research, a fabricated tissue-mimicking head phantom with a benign and a malignant tumor as brain abnormalities is utilized in the MWHI system. Afterwards, the modified-delay-multiply-and-sum (M-DMAS) imaging algorithm is applied to the post-processed scattering parameters to reconstruct head-region images of 640×640 pixels. Three hundred sample images, including benign and malignant tumors at various locations in the head regions, are collected with the MWHI system. The images are then preprocessed and augmented to create a final dataset of 3600 images used for training, validating, and testing the YOLOv5 model. Subsequently, 80% of the images are utilized for training and 20% for testing; from the 80% training dataset, 20% is held out for validation to avoid overfitting. The brain abnormality classification and detection performance on the various datasets is investigated with the YOLOv5s, YOLOv5m, and YOLOv5l variants of YOLOv5. The YOLOv5l model showed the best classification and detection results compared to the other models. The achieved training accuracy, validation loss, precision, recall, F1-score, training and validation classification losses, and mean average precision (mAP) are 99.84%, 9.38%, 93.20%, 94.80%, 94.01%, 0.004, 0.0133, and 96.20%, respectively, for the YOLOv5l model, which confirms the model's classification and detection accuracy. Finally, a testing dataset with different scenarios is evaluated with the three versions of the YOLOv5 model, showing that brain abnormalities are successfully classified and detected along with their locations. Thus, the deep model is applicable in the portable MWHI system.
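
The nested split protocol (80/20 train/test, then 20% of the training portion held out for validation) can be expressed directly with scikit-learn's splitter; a sketch for illustration:

```python
# 80/20 train/test split, then a 20% validation hold-out from the training set.
from sklearn.model_selection import train_test_split

def split_dataset(images, labels, seed=42):
    x_train, x_test, y_train, y_test = train_test_split(
        images, labels, test_size=0.20, random_state=seed)
    x_train, x_val, y_train, y_val = train_test_split(
        x_train, y_train, test_size=0.20, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```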


2019 ◽  
Vol 11 (24) ◽  
pp. 2939 ◽  
Author(s):  
Lonesome Malambo ◽  
Sorin Popescu ◽  
Nian-Wei Ku ◽  
William Rooney ◽  
Tan Zhou ◽  
...  

Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for the collection of high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency in image capture is producing massive datasets, which pose analysis challenges in providing the needed phenotypic data. To complement these high-throughput platforms, there is an increasing need in crop improvement for robust image analysis methods to analyze large amounts of image data. Analysis approaches based on deep learning models are currently the most promising and show unparalleled performance in analyzing large image datasets. This study developed and applied an image analysis approach based on a SegNet deep learning semantic segmentation model to estimate sorghum panicle counts, which are critical phenotypic data in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage, and exposed ground using 462 labeled images of 250 × 250 pixels, and was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained after post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference panicle counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Misclassification of panicles during the semantic segmentation step and mosaicking errors in the field orthomosaic contributed most panicle detection errors. Overall, the approach based on deep learning semantic segmentation shows good promise and, with a larger labeled dataset and extensive hyper-parameter tuning, should provide even more robust and effective characterization of sorghum panicle counts.
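
The post-processing step (remove small objects, then count what remains) maps onto standard connected-component tooling. A sketch under assumed label ids and minimum object size; the merged-panicle splitting step is omitted here:

```python
# Hedged sketch: clean the panicle class of the segmentation output and count
# connected components as individual panicles.
import numpy as np
from scipy import ndimage
from skimage.morphology import remove_small_objects

def count_panicles(mask, panicle_id=1, min_size=50):
    """mask: HxW labelled segmentation output; returns the panicle count."""
    panicle = mask == panicle_id                               # binary panicle mask
    panicle = remove_small_objects(panicle, min_size=min_size) # drop speckle
    _, n_components = ndimage.label(panicle)                   # count blobs
    return n_components
```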

