Aerial Image Classification using Deep Neural Networks with Discrete Cosine Transform, TSBTC and Augmentation Techniques

Author(s):  
Sudeep D. Thepade ◽  
Abhishek Gokhale ◽  
Aishwarya Patki ◽  
Janhavi Khindkar ◽  
Pooja Chaudhary


Author(s):  
A. Voulodimos ◽  
K. Fokeas ◽  
N. Doulamis ◽  
A. Doulamis ◽  
K. Makantasis

Abstract. Hyperspectral image classification has drawn significant attention in recent years, driven by the increasing abundance of sensor-generated hyper- and multi-spectral data combined with rapid advancements in the field of machine learning. A vast range of techniques, especially those involving deep learning models, have been proposed, attaining high levels of classification accuracy. However, many of these approaches deteriorate significantly in the presence of noise in the hyperspectral data. In this paper, we propose a new model that effectively addresses the challenge of noise residing in hyperspectral images. The proposed model, called DCT-CNN, combines the representational power of Convolutional Neural Networks with the noise-elimination capabilities of frequency-domain filtering enabled by the Discrete Cosine Transform. In particular, the proposed method entails transforming pixel macroblocks to the frequency domain and discarding the information that corresponds to the higher frequencies in every patch, where abrupt pixel changes and noise often reside. Experimental results on the Indian Pines, Salinas, and Pavia University datasets indicate that the proposed DCT-CNN constitutes a promising new model for accurate hyperspectral image classification, offering robustness to different types of noise such as Gaussian and salt-and-pepper noise.
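The frequency-domain filtering at the core of DCT-CNN can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the paper's implementation: the function name `dct_lowpass`, the 8x8 macroblock size, and the `keep` cutoff are assumptions chosen for the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(block: np.ndarray, keep: int) -> np.ndarray:
    """Transform a pixel macroblock to the frequency domain with a 2-D DCT,
    zero every coefficient whose frequency-index sum i + j is >= keep
    (the high-frequency corner, where abrupt changes and noise concentrate),
    and transform back to the pixel domain."""
    coeffs = dctn(block, norm="ortho")
    rows, cols = np.indices(coeffs.shape)
    coeffs[rows + cols >= keep] = 0.0   # discard high-frequency content
    return idctn(coeffs, norm="ortho")

# Example: filter an 8x8 macroblock corrupted with Gaussian noise.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))  # smooth patch
noisy = clean + rng.normal(scale=0.2, size=clean.shape)
filtered = dct_lowpass(noisy, keep=4)
```

For a smooth patch, most signal energy lies in the retained low-frequency coefficients, so the filtered block ends up closer to the clean one than the noisy input; such filtered patches would then be fed to the CNN.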


2021 ◽  
Author(s):  
Akinori Minagi ◽  
Hokuto Hirano ◽  
Kazuhiro Takemoto

Abstract Transfer learning from natural images is widely used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks are expected to be limited because training data, which are often required for such attacks, are generally unavailable for security and privacy reasons. Nevertheless, we hypothesized that adversarial attacks are also possible using natural images, because pre-trained models do not change significantly after fine-tuning. We focused on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigated whether medical DNN models with transfer learning are vulnerable to universal adversarial perturbations (UAPs) generated from natural images. UAPs from natural images were effective for both non-targeted and targeted attacks. Their performance was significantly higher than that of random controls, although slightly lower than that of UAPs generated from the training images. Vulnerability to UAPs from natural images was observed across different natural-image datasets and across different model architectures. The use of transfer learning thus introduces a security hole that decreases the reliability and safety of computer-based disease diagnosis. Model training from random initialization (without transfer learning) reduced the performance of UAPs from natural images; however, it did not completely eliminate the vulnerability. Vulnerability to UAPs from natural images therefore constitutes a notable security threat.
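At inference time a UAP attack is simply input addition: the same small perturbation is added to every image. A minimal sketch of this application step follows; the name `apply_uap` and the L-infinity budget `eps = 8/255` are assumptions for illustration, and the iterative procedure that generates a UAP from natural images is not reproduced here.

```python
import numpy as np

def apply_uap(image: np.ndarray, uap: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """Add a universal adversarial perturbation to an input image.
    The same uap is reused for every input; it is clipped to an
    L-infinity budget eps, and the perturbed result is clipped back
    to the valid pixel range [0, 1]."""
    delta = np.clip(uap, -eps, eps)          # enforce the perturbation budget
    return np.clip(image + delta, 0.0, 1.0)  # keep pixels valid

# Example: perturb a random "image" with a stand-in perturbation.
rng = np.random.default_rng(1)
image = rng.random((32, 32))
uap = rng.normal(scale=0.05, size=(32, 32))  # stand-in, not a trained UAP
adv = apply_uap(image, uap)
```

Because eps is small, the perturbed input stays visually indistinguishable from the original, which is what makes such attacks hard to detect in clinical use.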


Author(s):  
Nasser Edinne Benhassine ◽  
Abdelnour Boukaache ◽  
Djalil Boudjehem

Medical imaging systems are very important in the medical domain. They assist specialists in making the final decision about a patient's condition and strongly help in early cancer detection. The classification of mammogram images is a very important operation for identifying whether a breast cancer is benign or malignant. In this chapter, we propose a new computer-aided diagnostic (CAD) system composed of three steps. In the first step, the input image is pre-processed to remove noise and artifacts and to separate the breast profile from the pectoral muscle. This operation is a difficult task that can affect the final decision. For this reason, a hybrid segmentation method using the seeded region growing (SRG) algorithm applied to a localized triangular region is proposed. In the second step, we propose a feature-extraction method based on the discrete cosine transform (DCT), in which the processed images of the breast profiles are transformed by the DCT and the part containing the highest energy is selected. Then, in the feature-selection step, a new most-discriminative-power-coefficients algorithm is proposed to select the most significant features. In the final step of the proposed system, we use the best-known classifiers in the field of image classification for evaluation: the Support Vector Machine (SVM), Naive Bayes (NB), Artificial Neural Network (ANN), and k-Nearest Neighbors (KNN) classifiers. To evaluate the efficiency and measure the performance of the proposed CAD system, we selected the mini Mammographic Image Analysis Society (MIAS) database. The obtained results show the effectiveness of the proposed algorithm over others recently proposed in the literature, with the new CAD reaching an accuracy of 100% in certain cases using only a small set of selected features.
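The DCT step of the feature-extraction stage can be sketched as follows. This is a hedged illustration: the chapter selects the highest-energy part of the transform and then applies its most-discriminative-power-coefficients algorithm, whereas this sketch uses the standard low-frequency-first (zigzag-style) ordering as a stand-in, and the names `dct_features` and `zigzag_indices` are invented for the example.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n: int):
    """Index pairs of an n x n grid ordered antidiagonal by antidiagonal,
    i.e. from the low-frequency corner toward the high frequencies."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1], ij[0]))

def dct_features(image: np.ndarray, k: int = 32) -> np.ndarray:
    """2-D DCT of a (preprocessed) breast-profile image; keep the first k
    coefficients in low-frequency-first order, where most energy resides."""
    coeffs = dctn(image.astype(float), norm="ortho")
    idx = zigzag_indices(coeffs.shape[0])[:k]
    return np.array([coeffs[i, j] for i, j in idx])

# Example: a constant 16x16 patch has all its energy in the DC coefficient.
features = dct_features(np.ones((16, 16)), k=8)
```

The resulting fixed-length vector can then be fed to any of the classifiers mentioned above (SVM, NB, ANN, KNN).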


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Wei Wang ◽  
Yiyang Hu ◽  
Ting Zou ◽  
Hongmei Liu ◽  
Jin Wang ◽  
...  

Because deep neural networks (DNNs) are both memory-intensive and computation-intensive, they are difficult to deploy on embedded systems with limited hardware resources, so DNN models need to be compressed and accelerated. By applying depthwise separable convolutions, MobileNet decreases the number of parameters and the computational complexity with little loss of classification accuracy. Based on MobileNet, three improved MobileNet models with local receptive-field expansion in shallow layers, called Dilated-MobileNet (Dilated Convolution MobileNet) models, are proposed, in which dilated convolutions are introduced into a specific convolutional layer of the MobileNet model. Without increasing the number of parameters, dilated convolutions are used to enlarge the receptive field of the convolution filters and thereby obtain better classification accuracy. The experiments were performed on the Caltech-101, Caltech-256, and Tübingen Animals with Attributes datasets. The results show that Dilated-MobileNets can obtain up to 2% higher classification accuracy than MobileNet.
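A dilated convolution spaces the k x k kernel taps `dilation` pixels apart, so a 3x3 kernel with dilation 2 covers a 5x5 region (effective extent dilation*(k-1)+1) while still using only nine weights. A minimal NumPy sketch of this idea follows; it is an illustration of the operation, not the MobileNet implementation itself.

```python
import numpy as np

def dilated_conv2d(x: np.ndarray, w: np.ndarray, dilation: int = 1) -> np.ndarray:
    """'Valid' 2-D cross-correlation with a dilated k x k kernel: taps are
    spaced `dilation` pixels apart, so the receptive field per output pixel
    is dilation*(k-1)+1 while the parameter count stays k*k."""
    k = w.shape[0]
    eff = dilation * (k - 1) + 1            # effective kernel extent
    out_h = x.shape[0] - eff + 1
    out_w = x.shape[1] - eff + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + eff:dilation, j:j + eff:dilation]  # dilated taps
            out[i, j] = np.sum(patch * w)
    return out

# Same 3x3 kernel at two dilation rates: the weight count never changes.
x = np.arange(49, dtype=float).reshape(7, 7)
w = np.ones((3, 3))
y1 = dilated_conv2d(x, w, dilation=1)  # 3x3 receptive field
y2 = dilated_conv2d(x, w, dilation=2)  # 5x5 receptive field, still 9 weights
```

This is why the paper's models can widen the receptive field of shallow layers without adding parameters.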


2019 ◽  
Vol 119 ◽  
pp. 11-17 ◽  
Author(s):  
Titus J. Brinker ◽  
Achim Hekler ◽  
Alexander H. Enk ◽  
Carola Berking ◽  
Sebastian Haferkamp ◽  
...  
