Detection of Potato Disease Using Image Segmentation and Machine Learning

Author(s):  
Md. Asif Iqbal ◽  
Kamrul Hasan Talukder
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization (‘neural recording’). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
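
As a minimal illustration of the distance metric named in the abstract, the sketch below computes pairwise Hamming distances (the fraction of disagreeing pixels) between binary segmentation masks. The masks and grader names are invented toy data, not material from the study.

```python
import numpy as np

def hamming_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of pixels on which two label masks disagree."""
    if mask_a.shape != mask_b.shape:
        raise ValueError("masks must have the same shape")
    return float(np.mean(mask_a != mask_b))

# Toy 2x2 masks standing in for grader annotations and a model output.
annotations = {
    "grader_1": np.array([[0, 1], [1, 1]]),
    "grader_2": np.array([[0, 1], [0, 1]]),
    "model":    np.array([[0, 1], [1, 0]]),
}
names = list(annotations)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = hamming_distance(annotations[a], annotations[b])
        print(f"{a} vs {b}: {d:.2%}")
```

Averaging such pairwise distances over a dataset gives variability figures of the kind the abstract reports (1.75% model-to-grader vs. 2.02% grader-to-grader).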


Bone Reports ◽  
2021 ◽  
Vol 14 ◽  
pp. 100865
Author(s):  
B.K. Davies ◽  
Andrew Hibbert ◽  
Mark Hopkinson ◽  
Gill Holdsworth ◽  
Isabel Orriss

Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4893 ◽  
Author(s):  
Hejar Shahabi ◽  
Ben Jarihani ◽  
Sepideh Tavakkoli Piralilou ◽  
David Chittleborough ◽  
Mohammadtaghi Avand ◽  
...  

Gully erosion is a dominant source of sediment and particulates to the Great Barrier Reef (GBR) World Heritage area. We selected the Bowen catchment, a tributary of the Burdekin Basin, as our study area; the region contains a high density of gully networks. We aimed to develop a semi-automated, object-based gully-network detection process using a combination of multi-source, multi-scale remote sensing and ground-based data. The approach integrated geographic object-based image analysis (GEOBIA) with current machine learning (ML) models, namely artificial neural networks (ANN), support vector machines (SVM), and random forests (RF), together with a stacking ensemble, to address the spatial-scaling problem in gully-network detection. Spectral indices such as the normalized difference vegetation index (NDVI) were generated from Sentinel-2A images, and topographic conditioning factors such as elevation, slope, aspect, topographic wetness index (TWI), slope length (SL), and curvature were derived from the ALOS 12-m digital elevation model (DEM). For image segmentation, the ESP2 tool was used to obtain three optimal scale factors, and the segmentation accuracy of each scale was evaluated with the object pureness index (OPI), object matching index (OMI), and object fitness index (OFI). A scale parameter of 45, with an OFI (a combination of the OPI and OMI indices) of 0.94, proved optimal for image segmentation. Objects segmented at scale 45 were then overlaid with 70% and 30% of a prepared gully inventory map to select the ML models’ training and testing objects, respectively. Precision, recall, and the F1 measure were used for quantitative accuracy assessment. Integrating GEOBIA with the stacking model at scale 45 yielded the highest accuracy in detecting gully networks, with an F1 measure of 0.89. We conclude that adopting an optimal object-definition scale in GEOBIA and applying an ensemble stacking of ML models improved the accuracy of gully-network detection.
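
As a hedged sketch of the ensemble step, the Python example below stacks a random forest, an SVM, and a small neural network with scikit-learn, trains on a 70/30 split, and reports precision, recall, and F1. The synthetic features merely stand in for the per-object spectral and topographic attributes (NDVI, elevation, slope, aspect, TWI, SL, curvature); the hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 7 features standing in for per-object NDVI,
# elevation, slope, aspect, TWI, SL and curvature values.
X, y = make_classification(n_samples=1000, n_features=7, random_state=0)

# 70/30 split, mirroring the paper's training/testing object selection.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("ann", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=0))),
]
# Stacking: a meta-learner combines the base models' predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
pred = stack.predict(X_test)

print(f"Precision: {precision_score(y_test, pred):.2f}")
print(f"Recall:    {recall_score(y_test, pred):.2f}")
print(f"F1:        {f1_score(y_test, pred):.2f}")
```

In the paper's pipeline the classification units are the image objects produced at scale 45 rather than raw pixels, which is what lets the stacking model exploit object-level spectral and topographic statistics.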


2020 ◽  
Vol 33 (5) ◽  
pp. 1224-1241 ◽  
Author(s):  
Imene Mecheter ◽  
Lejla Alic ◽  
Maysam Abbod ◽  
Abbes Amira ◽  
Jim Ji

Abstract The recently emerged hybrid technology of positron emission tomography/magnetic resonance (PET/MR) imaging has generated a great need for accurate MR image-based PET attenuation correction. MR image segmentation, as a robust and simple method for PET attenuation correction, has been clinically adopted in commercial PET/MR scanners. The general approach is to segment the MR image into different tissue types and assign each an attenuation constant, as in an X-ray CT image. Machine learning techniques such as clustering, classification, and deep networks are extensively used for brain MR image segmentation. However, only limited work has been reported on using deep learning for brain PET attenuation correction, and there is a lack of clinical evaluation of machine learning methods in this application. The aim of this review is to study the use of machine learning methods for MR image segmentation and their application in attenuation correction for PET brain imaging. Challenges and future opportunities in MR image-based PET attenuation correction are also discussed.
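
To make the segmentation-based approach concrete, here is a minimal sketch (not a clinical implementation) that clusters MR intensities with k-means and replaces each class with a constant linear attenuation coefficient. The coefficient values are approximate 511 keV figures quoted for illustration only, and the intensity-rank class assignment is a toy assumption; as the review notes, intensity alone cannot reliably separate tissues such as cortical bone from air.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative linear attenuation coefficients at 511 keV (cm^-1),
# one per cluster; real systems use validated, anatomy-aware values.
CLASS_MU = [0.0, 0.096, 0.151]

def mr_to_attenuation_map(mr_image: np.ndarray, n_classes: int = 3) -> np.ndarray:
    """Cluster MR intensities into tissue classes, then replace each
    class label with a constant attenuation coefficient."""
    flat = mr_image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=0).fit_predict(flat)
    # Assign coefficients by intensity rank purely for this sketch;
    # this heuristic is exactly where intensity-only segmentation
    # breaks down (e.g. dark cortical bone vs. air on standard MR).
    order = np.argsort([flat[labels == k].mean() for k in range(n_classes)])
    lut = np.empty(n_classes)
    lut[order] = CLASS_MU
    return lut[labels].reshape(mr_image.shape)

# Toy usage on a random "image" standing in for an MR slice.
mu_map = mr_to_attenuation_map(np.random.rand(128, 128))
```

The deep learning methods the review surveys aim to replace this hand-crafted class-to-coefficient mapping with learned, continuous attenuation estimates.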

