A Two-Phase Image Classification Approach with Very Less Data

Author(s):  
B. N. Deepankan ◽  
Ritu Agarwal
Author(s):  
Vishu Madaan ◽  
Aditya Roy ◽  
Charu Gupta ◽  
Prateek Agrawal ◽  
Anand Sharma ◽  
...  

Abstract: The COVID-19 (also known as SARS-CoV-2) pandemic has spread across the entire world. It is a contagious disease that spreads easily from one person to another through direct contact; experts classify cases into five categories: asymptomatic, mild, moderate, severe, and critical. More than 66 million people had been infected worldwide, with more than 22 million active patients as of 5 December 2020, and the rate was accelerating. More than 1.5 million patients (approximately 2.5% of total reported cases) across the world lost their lives. In many places, COVID-19 detection relies on reverse transcription polymerase chain reaction (RT-PCR) tests, which may take longer than 48 h; this delay is one major reason for the disease's severity and rapid spread. In this paper we propose XCOVNet, a two-phase X-ray image classification approach for early COVID-19 detection using a convolutional neural network model. XCOVNet detects COVID-19 infection in patients' chest X-ray images in two phases. The first phase pre-processes a dataset of 392 chest X-ray images, of which half are COVID-19 positive and half are negative. The second phase trains and tunes the neural network model to achieve 98.44% accuracy in patient classification.
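As a rough illustration of the two-phase idea described in the abstract, the sketch below pre-processes chest X-ray images and then trains a small binary CNN in Keras. The input resolution, layer sizes, and hyperparameters are assumptions for illustration only; the published XCOVNet architecture and training setup are not reproduced here.

```python
# Minimal two-phase sketch (assumed layout, not the published XCOVNet):
# phase 1 pre-processes X-ray images, phase 2 trains a small binary CNN.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 224  # assumed input resolution

def preprocess(images):
    """Phase 1: resize to a fixed shape and scale pixel values to [0, 1]."""
    images = tf.image.resize(images, (IMG_SIZE, IMG_SIZE))
    return tf.cast(images, tf.float32) / 255.0

def build_model():
    """Phase 2: a small convolutional classifier (illustrative architecture)."""
    model = models.Sequential([
        tf.keras.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # COVID-positive probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with a toy batch of grayscale X-rays (real data would be the 392-image set).
x = np.random.rand(8, 256, 256, 1).astype("float32")
y = np.random.randint(0, 2, size=(8,))
model = build_model()
model.fit(preprocess(x), y, epochs=1, verbose=0)
```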


2020 ◽  
Vol 26 (4) ◽  
pp. 405-425
Author(s):  
Javed Miandad ◽  
Margaret M. Darrow ◽  
Michael D. Hendricks ◽  
Ronald P. Daanen

Abstract: This study presents a new methodology to identify landslide and landslide-susceptible locations in Interior Alaska using only geomorphic properties from light detection and ranging (LiDAR) derivatives (i.e., slope, profile curvature, and roughness) and the normalized difference vegetation index (NDVI), focusing on the effect of different resolutions of LiDAR images. We developed a semi-automated object-oriented image classification approach in ArcGIS 10.5 and prepared a landslide inventory from visual observation of hillshade images. The multistage workflow included combining derivatives from 1-, 2.5-, and 5-m-resolution LiDAR, image segmentation, image classification using a support vector machine classifier, and image generalization to remove false positives. We assessed classification accuracy by generating confusion matrix tables. Analysis of the results indicated that LiDAR image scale played an important role in the classification and that the use of NDVI generated better results. Overall, the 5-m-resolution LiDAR image with NDVI generated the best results, with a kappa value of 0.55 and an overall accuracy of 83 percent. The 1-m-resolution LiDAR image with NDVI generated the highest producer accuracy, 73 percent, in identifying landslide locations. We produced a combined overlay map by summing the individual classified maps; it delineated landslide objects better than the individual maps did. The combined classified map from 1-, 2.5-, and 5-m-resolution LiDAR with NDVI generated producer accuracies of 60, 80, and 86 percent and user accuracies of 39, 51, and 98 percent for landslide, landslide-susceptible, and stable locations, respectively, with an overall accuracy of 84 percent and a kappa value of 0.58. This semi-automated object-oriented image classification approach demonstrated potential as a viable tool with further refinement and/or in combination with additional data sources.
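The accuracy assessment described above (overall accuracy, kappa, and producer/user accuracies from a confusion matrix) can be computed as in the minimal sketch below. The matrix values are invented for illustration and are not the study's data.

```python
# Accuracy metrics from a confusion matrix with rows = reference classes
# and columns = mapped (classified) classes.
import numpy as np

def accuracy_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    overall = np.trace(cm) / total
    # Expected agreement by chance, from row and column marginals.
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    producer = np.diag(cm) / cm.sum(axis=1)  # per reference class
    user = np.diag(cm) / cm.sum(axis=0)      # per mapped class
    return overall, kappa, producer, user

# Classes: landslide, landslide-susceptible, stable (illustrative counts).
cm = [[30, 15,  5],
      [ 8, 40,  2],
      [ 4,  6, 90]]
overall, kappa, producer, user = accuracy_metrics(cm)
print(f"overall = {overall:.2f}, kappa = {kappa:.2f}")
print("producer accuracies:", np.round(producer, 2))
print("user accuracies:", np.round(user, 2))
```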


2016 ◽  
Author(s):  
Pu Hong ◽  
Xiao-feng Ye ◽  
Hui Yu ◽  
Zhi-jie Zhang ◽  
Yu-fei Cai ◽  
...  

2021 ◽  
pp. 422-430
Author(s):  
Preethi Harris ◽  
M. Nithin ◽  
S. Nithish Kannan ◽  
R. Gokul Prasanth ◽  
M. Kissore Kumar

Fractals ◽  
2019 ◽  
Vol 27 (05) ◽  
pp. 1950079
Author(s):  
Junying Su ◽  
Yingkui Li ◽  
Qingwu Hu

To maximize the advantages of both spectral and spatial information, we introduce a new joint spectral–spatial hyperspectral image classification approach based on fractal dimension (FD) analysis of the spectral response curve (SRC) in the spectral domain and extended morphological processing in the spatial domain. The approach first calculates an FD image from the whole SRC of the hyperspectral image, then decomposes the SRC into segments and derives an FD image from each SRC segment. These segment-based FD images are composited into a multidimensional FD image set in the spectral domain. Next, extended morphological profiles (EMPs) are derived from the image set through morphological opening and closing operations in the spatial domain. Finally, all EMPs and FD features are combined into one feature vector for probabilistic support vector machine (SVM) classification. The approach was demonstrated using three hyperspectral images covering the university campus and downtown areas of Pavia, Italy, and the Washington DC Mall area in the USA. We assessed its potential and performance by comparing it with a PCA-based method for hyperspectral image classification. Our results indicate that the classification accuracy of the proposed method is much higher than that of classification methods based on the spectral or spatial domain alone, and similar to or slightly higher than that of the PCA-based joint spectral–spatial classification method. The proposed FD approach also provides a new self-similarity measure for land classes in the spectral domain, a unique property representing the self-similarity of SRCs in hyperspectral imagery.
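A minimal sketch of the spectral-domain FD feature is given below: it estimates one fractal dimension per pixel from its spectral response curve, here using Higuchi's estimator as a stand-in (the paper's exact FD estimator and SRC segmentation scheme are not reproduced). Array shapes and the kmax parameter are illustrative assumptions.

```python
# Per-pixel fractal dimension of the spectral response curve (SRC),
# estimated with Higuchi's method as an illustrative choice.
import numpy as np

def higuchi_fd(curve, kmax=8):
    """Estimate the fractal dimension of a 1-D curve (Higuchi's method)."""
    x = np.asarray(curve, dtype=float)
    n = len(x)
    mean_lengths = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Curve length of the subsampled series, with Higuchi's normalization.
            diff = np.abs(np.diff(x[idx])).sum()
            lengths.append(diff * (n - 1) / ((len(idx) - 1) * k) / k)
        mean_lengths.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    # FD is the slope of log(L(k)) against log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
    return slope

def fd_image(cube, kmax=8):
    """One FD value per pixel of a (rows, cols, bands) hyperspectral cube."""
    return np.apply_along_axis(higuchi_fd, 2, cube, kmax)

# Toy cube: 10 x 10 pixels with 100 spectral bands.
cube = np.random.rand(10, 10, 100)
print(fd_image(cube).shape)  # (10, 10)
```

The resulting FD image could then feed the morphological profiles and SVM classifier described in the abstract.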


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Wei Wang ◽  
Yiyang Hu ◽  
Ting Zou ◽  
Hongmei Liu ◽  
Jin Wang ◽  
...  

Because deep neural networks (DNNs) are both memory-intensive and computation-intensive, they are difficult to deploy on embedded systems with limited hardware resources, so DNN models need to be compressed and accelerated. By applying depthwise separable convolutions, MobileNet reduces the number of parameters and the computational complexity with only a small loss of classification precision. Based on MobileNet, three improved MobileNet models with local receptive field expansion in shallow layers, called Dilated-MobileNet (Dilated Convolution MobileNet) models, are proposed, in which dilated convolutions are introduced into a specific convolutional layer of the MobileNet model. Without increasing the number of parameters, the dilated convolutions enlarge the receptive field of the convolution filters to obtain better classification accuracy. Experiments were performed on the Caltech-101, Caltech-256, and Tubingen Animals with Attributes datasets. The results show that the Dilated-MobileNets can obtain up to 2% higher classification accuracy than MobileNet.
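The sketch below illustrates the core idea under the assumption of a Keras implementation: a MobileNet-style depthwise separable block whose depthwise convolution is given a dilation rate, enlarging the receptive field without adding parameters. The block placement and filter counts are illustrative, not the exact Dilated-MobileNet configuration.

```python
# Depthwise separable block with a dilated 3x3 depthwise convolution.
import tensorflow as tf
from tensorflow.keras import layers

def dilated_separable_block(x, filters, dilation_rate=2):
    """Dilated depthwise 3x3 conv followed by a pointwise 1x1 conv."""
    x = layers.DepthwiseConv2D(3, padding="same",
                               dilation_rate=dilation_rate,
                               use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# A 3x3 depthwise kernel with dilation 2 covers a 5x5 neighbourhood
# while keeping the same 3x3 parameter count.
inputs = tf.keras.Input(shape=(224, 224, 32))
outputs = dilated_separable_block(inputs, filters=64, dilation_rate=2)
print(tf.keras.Model(inputs, outputs).output_shape)  # (None, 224, 224, 64)
```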

