Oil Palm Tree Detection from High Resolution Drone Image Using Convolutional Neural Network

2019 ◽  
Vol 1 (2) ◽  
pp. 6-9
Author(s):  
Chee Cheong Lee ◽  
See Yee Tan ◽  
Tien Sze Lim ◽  
Voon Chet Koo

We propose a method that combines several image processing techniques with a Convolutional Neural Network (CNN) to perform palm tree detection and counting. This paper focuses on drone imaging, which offers high image resolution and is widely deployed in the plantation industry. Analyzing drone images is challenging because drone flying altitudes vary, resulting in inconsistent tree sizes in the captured images. Counting by template matching or with a fixed sliding-window size therefore often produces an inaccurate count. Instead, our method employs frequency domain analysis to estimate tree size before applying the CNN. The method is evaluated using two images, containing from a few thousand to a few hundred thousand trees per image. We summarize the accuracy of the proposed method by comparing the results with manually labelled ground truth.
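
The abstract does not detail the frequency domain step. As a hedged illustration of how tree spacing, and hence an appropriate detection window size, could be estimated from the periodicity of a regularly planted canopy, a minimal Python sketch might look like the following; all function names, parameters, and the 0.1 m pixel size are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_tree_spacing(gray_image, pixel_size_m=0.1):
    """Estimate the dominant inter-tree spacing from the 2D power spectrum
    of a grayscale canopy image. Illustrative sketch, not the authors' code."""
    # Remove the DC component so the zero-frequency peak does not dominate.
    img = gray_image.astype(np.float64)
    img -= img.mean()

    # 2D FFT and a radially averaged power spectrum.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices(spectrum.shape)
    radius = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2).astype(int)

    # Average power at each radial frequency (skip radius 0 = DC term).
    radial_power = np.bincount(radius.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radius.ravel())
    radial_profile = radial_power[1:] / np.maximum(counts[1:], 1)

    # The strongest radial peak corresponds to the planting periodicity.
    peak_radius = np.argmax(radial_profile) + 1
    spacing_px = max(h, w) / peak_radius  # approximate period in pixels
    return spacing_px, spacing_px * pixel_size_m
```

The estimated spacing (in pixels) could then be used to size the CNN detection window, so that the window scales automatically with flying altitude instead of being fixed.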


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Khaled Z. Abd-Elmoniem ◽  
Inas A. Yassine ◽  
Nader S. Metwalli ◽  
Ahmed Hamimi ◽  
Ronald Ouwerkerk ◽  
...  

Regional soft tissue mechanical strain offers crucial insights into a tissue's mechanical function and vital indicators for related disorders. Tagging magnetic resonance imaging (tMRI) has been the standard method for assessing the mechanical characteristics of organs such as the heart, the liver, and the brain. However, constructing accurate, artifact-free pixelwise strain maps at the native resolution of the tagged images has for decades been a challenging, unsolved task. In this work, we developed an end-to-end deep-learning framework for pixel-to-pixel mapping of the two-dimensional Eulerian principal strains $\varepsilon_{p1}$ and $\varepsilon_{p2}$ directly from 1-1 spatial modulation of magnetization (SPAMM) tMRI at native image resolution using a convolutional neural network (CNN). Four different deep-learning conditional generative adversarial network (cGAN) approaches were examined. Validations were performed using Monte Carlo computational model simulations and in-vivo datasets, and compared to the harmonic phase (HARP) method, a conventional and validated method for tMRI analysis, with six different filter settings. Principal strain maps of Monte Carlo tMRI simulations with various anatomical, functional, and imaging parameters demonstrate artifact-free, solid agreement with the corresponding ground-truth maps. Correlations with the ground-truth strain maps were R = 0.90 and 0.92 for the best proposed cGAN approach, compared to R = 0.12 and 0.73 for the best HARP method, for $\varepsilon_{p1}$ and $\varepsilon_{p2}$, respectively. The error of the proposed cGAN approach was substantially lower than that of the best HARP method across all strain ranges. In-vivo results are presented for both healthy subjects and patients with cardiac conditions (pulmonary hypertension). Strain maps, obtained directly from their corresponding tagged MR images, depict for the first time anatomical, functional, and temporal details at pixelwise native high resolution with unprecedented clarity. This work demonstrates the feasibility of using a deep-learning cGAN for direct myocardial and liver Eulerian strain mapping from tMRI at native image resolution with minimal artifacts.
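
The abstract does not specify the four cGAN variants. As a hedged sketch of the general pixel-to-pixel conditional-GAN idea (a pix2pix-style pairing of an image-to-image generator with a patch discriminator, not the authors' architecture), the following PyTorch code shows a generator mapping a one-channel tagged image to two strain channels; all layer sizes and class names are assumptions:

```python
import torch
import torch.nn as nn

class StrainGenerator(nn.Module):
    """Toy encoder-decoder that maps a 1-channel tagged MR image to a
    2-channel strain map (eps_p1, eps_p2). Illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),  # strains scaled to [-1, 1]
        )

    def forward(self, tagged):
        return self.decoder(self.encoder(tagged))

class PatchDiscriminator(nn.Module):
    """Judges (tagged image, strain map) pairs patch by patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, tagged, strain):
        return self.net(torch.cat([tagged, strain], dim=1))

# A typical conditional-GAN generator objective combines an adversarial term
# with a pixelwise reconstruction term, e.g.:
# g_loss = bce(disc(tagged, gen(tagged)), ones) + 100 * l1(gen(tagged), true_strain)
```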


2019 ◽  
Vol 40 (19) ◽  
pp. 7500-7515 ◽  
Author(s):  
Nurulain Abd Mubin ◽  
Eiswary Nadarajoo ◽  
Helmi Zulhaidi Mohd Shafri ◽  
Alireza Hamedianfar

Author(s):  
Liang Kim Meng ◽  
Azira Khalil ◽  
Muhamad Hanif Ahmad Nizar ◽  
Maryam Kamarun Nisham ◽  
Belinda Pingguan-Murphy ◽  
...  

Background: Bone Age Assessment (BAA) refers to a clinical procedure that aims to identify a discrepancy between the biological and chronological age of an individual by assessing bone age growth. Currently, there are two main methods of performing BAA, known as the Greulich-Pyle and Tanner-Whitehouse techniques. Both techniques involve a manual and qualitative assessment of hand and wrist radiographs, resulting in intra- and inter-operator variability and a time-consuming process. Automatic segmentation can be applied to the radiographs, providing the physician with a more accurate delineation of the carpal bones and a quantitative analysis. Methods: In this study, we propose an image feature extraction technique based on image segmentation with a fully convolutional neural network with an eight-pixel stride (FCN-8). A total of 290 radiographic images, including both female and male subjects aged 0 to 18, were manually segmented and used to train the FCN-8. Results and Conclusion: The results exhibit a high training accuracy of 99.68% and a loss of 0.008619 over 50 epochs of training. The experiments compared 58 images against gold-standard ground truth images. The accuracy of our fully automated segmentation technique is 0.78 ± 0.06, 1.56 ± 0.30 mm, and 98.02% in terms of Dice Coefficient, Hausdorff Distance, and overall qualitative carpal recognition accuracy, respectively.
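
As a hedged illustration of the two boundary-agreement metrics reported above, the following Python helpers compute a Dice coefficient between binary masks and a symmetric Hausdorff distance between boundary point sets; these are generic formulations, not the authors' evaluation code:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred_mask, gt_mask):
    """Dice similarity between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

def hausdorff_distance(pred_points, gt_points):
    """Symmetric Hausdorff distance between two point sets, e.g. the
    boundary pixel coordinates of the predicted and ground-truth masks."""
    d_fwd = directed_hausdorff(pred_points, gt_points)[0]
    d_bwd = directed_hausdorff(gt_points, pred_points)[0]
    return max(d_fwd, d_bwd)
```

The Hausdorff value would be converted from pixels to millimetres using the radiograph's pixel spacing before being reported, as in the 1.56 ± 0.30 mm figure above.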


2021 ◽  
Vol 18 (1) ◽  
pp. 172988142199332
Author(s):  
Xintao Ding ◽  
Boquan Li ◽  
Jinbao Wang

Indoor object detection is a demanding and important task for robot applications. Object knowledge, such as two-dimensional (2D) shape and depth information, can be helpful for detection. In this article, we focus on region-based convolutional neural network (CNN) detectors and propose a geometric property-based Faster R-CNN method (GP-Faster) for indoor object detection. GP-Faster incorporates geometric properties into Faster R-CNN to improve detection performance. In detail, we first use mesh grids that are the intersections of direct and inverse proportion functions to generate appropriate anchors for indoor objects. After the anchors are regressed to the regions of interest produced by a region proposal network (RPN-RoIs), we then use 2D geometric constraints to refine the RPN-RoIs, in which the 2D constraint of every class is a convex hull region enclosing the width and height coordinates of the ground-truth boxes on the training set. Comparison experiments are conducted on two indoor datasets, SUN2012 and NYUv2. Since depth information is available in NYUv2, we incorporate depth constraints in GP-Faster and propose a 3D geometric property-based Faster R-CNN (DGP-Faster) on NYUv2. The experimental results show that both GP-Faster and DGP-Faster improve mean average precision.
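
The convex-hull refinement can be illustrated with a short sketch: per class, the (width, height) pairs of the ground-truth boxes define a hull, and candidate RoIs whose (width, height) falls outside that hull are discarded. This is a hedged, simplified rendering of the idea, not the authors' code; the (x1, y1, x2, y2) box layout and function names are assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_wh_constraint(gt_boxes):
    """Build a convex-hull constraint over the (width, height) pairs of one
    class's ground-truth boxes. Boxes are arrays of (x1, y1, x2, y2)."""
    wh = np.stack([gt_boxes[:, 2] - gt_boxes[:, 0],
                   gt_boxes[:, 3] - gt_boxes[:, 1]], axis=1)
    # Delaunay triangulation of the hull supports fast point-in-hull tests.
    return Delaunay(wh)

def filter_rois(rois, constraint):
    """Keep only RoIs whose (width, height) lies inside the class hull."""
    wh = np.stack([rois[:, 2] - rois[:, 0],
                   rois[:, 3] - rois[:, 1]], axis=1)
    inside = constraint.find_simplex(wh) >= 0
    return rois[inside]
```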


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
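
As a hedged illustration of the Hamming-distance component, a normalized pixelwise disagreement between two segmentation label maps (grader vs. grader, or grader vs. algorithm) can be computed as follows; this is a generic formulation, not the T-REX implementation:

```python
import numpy as np

def hamming_disagreement(label_map_a, label_map_b):
    """Fraction of pixels on which two segmentations disagree,
    i.e. a normalized Hamming distance between label maps."""
    a = np.asarray(label_map_a)
    b = np.asarray(label_map_b)
    return float(np.mean(a != b))
```

Averaging this quantity over all image pairs and grader pairings yields figures comparable to the 1.75% and 2.02% variabilities reported above.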


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 262
Author(s):  
Chih-Yung Huang ◽  
Zaky Dzulfikri

Stamping is one of the most widely used processes in the sheet metalworking industry. Because of the increasing demand for faster processing, ensuring that the stamping process is conducted without compromising quality is crucial. The tool used in the stamping process is central to its efficiency; therefore, effective monitoring of the tool health condition is essential for detecting stamping defects. In this study, vibration measurement was used to monitor the stamping process and tool health. A system was developed for capturing signals during the stamping process, and each stamping cycle was extracted through template matching. A one-dimensional (1D) convolutional neural network (CNN) was developed to classify the tool wear condition. The results revealed that the 1D CNN architecture yielded high accuracy (>99%) and fast adaptability across different models.
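
As a hedged sketch of a 1D CNN classifier for per-cycle vibration windows (the abstract does not publish the architecture, so the layer sizes, window length, and three-class wear labelling below are assumptions), a minimal Keras model might look like this:

```python
import tensorflow as tf

def build_1d_cnn(window_length=2048, n_classes=3):
    """Toy 1D CNN that classifies tool-wear state from one stamping-cycle
    vibration window. Illustrative only; sizes are assumptions."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_length, 1)),
        # Wide first kernel with a large stride to capture low-frequency content.
        tf.keras.layers.Conv1D(16, 64, strides=8, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_1d_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Each template-matched stamping cycle would be resampled or padded to the fixed window length before being fed to the network.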


2022 ◽  
Vol 192 ◽  
pp. 106560
Author(s):  
Thani Jintasuttisak ◽  
Eran Edirisinghe ◽  
Ali Elbattay
