SFRNet: Feature Extraction-Fusion Steganalysis Network Based on Squeeze-and-Excitation Block and RepVgg Block

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Guiyong Xu ◽  
Yang Xu ◽  
Sicong Zhang ◽  
Xiaoyao Xie

In the era of big data, convolutional neural networks (CNNs) have been widely used for image classification and have achieved excellent performance. In recent years, more and more researchers have begun combining deep neural networks with steganalysis to improve performance. However, most CNN-based steganalysis algorithms have only been tested against the WOW and S-UNIWARD algorithms; moreover, their versatility is limited by long training times and constraints on image size. This paper proposes a new network architecture, called SFRNet, to solve these problems. The feature extraction and fusion layer extracts more features from the digital image. The RepVgg block accelerates inference and increases memory utilization. The SE block improves the detection accuracy rate because it learns feature weights, giving effective feature maps significant weights and invalid or ineffective feature maps small weights. Experimental results show that SFRNet achieves excellent detection accuracy against four state-of-the-art spatial-domain steganography algorithms, i.e., HUGO, WOW, S-UNIWARD, and MiPOD, under different payloads. SFRNet achieves a detection accuracy rate of 89.6% against the S-UNIWARD algorithm at a payload of 0.4 bpp and 72.5% at 0.2 bpp. At the same time, the training time of our network is greatly reduced, by 35% compared with Yedroudj-Net.
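The channel-reweighting idea behind the SE block can be sketched in a few lines of NumPy; the weights below are random placeholders for illustration, not a trained excitation MLP:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation: reweight channels by learned importance.

    feature_maps: (C, H, W) stack of C feature maps.
    w1: (C//r, C) and w2: (C, C//r) weights of the excitation MLP.
    """
    # Squeeze: global average pooling reduces each map to one scalar.
    squeezed = feature_maps.mean(axis=(1, 2))                # (C,)
    # Excitation: bottleneck MLP + sigmoid yields per-channel weights in (0, 1).
    weights = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (C,)
    # Scale: effective maps keep large weights, weak maps are suppressed.
    return feature_maps * weights[:, None, None]

# Toy example: 4 channels, reduction ratio r = 2, random weights.
rng = np.random.default_rng(0)
maps = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
out = se_block(maps, w1, w2)
print(out.shape)  # (4, 8, 8)
```

Because the sigmoid keeps every channel weight strictly between 0 and 1, the block can only attenuate maps, never amplify them.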

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4403
Author(s):  
Umme Hafsa Billah ◽  
Hung Manh La ◽  
Alireza Tavakkoli

An autonomous concrete crack inspection system is necessary for preventing hazardous incidents arising from deteriorated concrete surfaces. In this paper, we present a concrete crack detection framework to aid the process of automated inspection. The proposed approach employs a deep convolutional neural network architecture for crack segmentation, while addressing the gradient vanishing problem. A feature silencing module, capable of eliminating non-discriminative feature maps from the network, is incorporated in the proposed framework to improve performance. Experimental results support the benefit of incorporating feature silencing within a convolutional neural network architecture for improving the network’s robustness, sensitivity, and specificity. An added benefit of the proposed architecture is its ability to accommodate the trade-off between sensitivity (positive class detection accuracy) and specificity (negative class detection accuracy) with respect to the target application. Furthermore, the proposed framework achieves a higher precision rate and shorter processing time than the state-of-the-art crack detection architectures.
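One plausible reading of the feature silencing idea, zeroing the maps with the weakest activations, can be sketched as follows; the mean-absolute-activation score is an assumption made for illustration, not necessarily the paper's exact criterion:

```python
import numpy as np

def silence_features(feature_maps, keep_ratio=0.5):
    """Zero out the least-discriminative feature maps.

    Hypothetical stand-in for a feature silencing module: maps with the
    smallest mean absolute activation are silenced.
    feature_maps: (C, H, W)
    """
    c = feature_maps.shape[0]
    scores = np.abs(feature_maps).mean(axis=(1, 2))   # one score per map
    keep = max(1, int(round(c * keep_ratio)))
    threshold = np.sort(scores)[::-1][keep - 1]       # keep the top maps
    mask = (scores >= threshold).astype(feature_maps.dtype)
    return feature_maps * mask[:, None, None], mask

rng = np.random.default_rng(1)
maps = rng.standard_normal((8, 4, 4))
silenced, mask = silence_features(maps, keep_ratio=0.5)
print(int(mask.sum()))  # 4 of 8 maps survive
```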




2020 ◽  
Vol 9 (4) ◽  
pp. 1430-1437
Author(s):  
Mohammad Arif Rasyidi ◽  
Taufiqotul Bariyah

Batik is one of Indonesia's cultures that is well known worldwide. Batik is a fabric painted using a canting and liquid wax to form patterns of high artistic value. In this study, we applied a convolutional neural network (CNN) to identify six batik patterns, namely Banji, Ceplok, Kawung, Mega Mendung, Parang, and Sekar Jagad. A total of 994 images from the six categories were collected and then divided into training and test data at a ratio of 8:2. Image augmentation was also performed to provide variation in the training data and to prevent overfitting. Experimental results on the test data showed that the CNN produced excellent performance, with an accuracy of 94% and a top-2 accuracy of 99%, obtained using the DenseNet network architecture.
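The 8:2 train/test split of the 994 images can be reproduced as a quick sketch; the indices here are synthetic, since the actual batik images are not bundled:

```python
import numpy as np

# Shuffle 994 image indices and split 80% / 20%, as described above.
rng = np.random.default_rng(42)
n_images = 994
indices = rng.permutation(n_images)
n_train = int(n_images * 0.8)           # 795 training images
train_idx, test_idx = indices[:n_train], indices[n_train:]
print(len(train_idx), len(test_idx))    # 795 199
```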


2021 ◽  
Author(s):  
Lakpa Dorje Tamang

In this paper, we propose a symmetric series convolutional neural network (SS-CNN), a novel deep convolutional neural network (DCNN)-based super-resolution (SR) technique for ultrasound medical imaging. The proposed model comprises two parts: a feature extraction network (FEN) and an up-sampling layer. In the FEN, the low-resolution (LR) counterpart of the ultrasound image passes through a symmetric series of two different DCNNs. The low-level feature maps obtained from the subsequent layers of both DCNNs are concatenated in a feed-forward manner, aiding robust feature extraction to ensure high reconstruction quality. Subsequently, the final concatenated features serve as an input map to the latter 2D convolutional layers, where the textural information of the input image is conveyed via skip connections. The second part of the proposed model is a sub-pixel convolutional (SPC) layer, which up-samples the output of the FEN by multiplying it with a multi-dimensional kernel, followed by a periodic shuffling operation, to reconstruct a high-quality SR ultrasound image. We validate the performance of the SS-CNN on publicly available ultrasound image datasets. Experimental results show that the proposed model achieves superior reconstruction of ultrasound images over conventional methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), while providing a competitive SR reconstruction time.
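The periodic shuffling step of an SPC layer is the standard pixel-shuffle rearrangement, which a short NumPy sketch makes concrete:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffling used by a sub-pixel convolution layer.

    Rearranges a (C*r*r, H, W) tensor into (C, H*r, W*r), trading
    channel depth for spatial resolution.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4 channels collapse into one channel at twice the resolution.
x = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 6, 6)
```

Each output 2x2 block interleaves one pixel from each of the four input channels, which is exactly how the FEN output would be expanded into the SR image.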


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2846
Author(s):  
Anik Sen ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Recognizing the sport of cricket on the basis of different batting shots can be a significant part of context-based advertisement to users watching cricket, sensor-based commentary systems, and coaching assistants. Due to the similarity between different batting shots, manual feature extraction from video frames is tedious. This paper proposes a hybrid deep-neural-network architecture for classifying 10 different cricket batting shots from offline videos. We composed a novel dataset, CricShot10, comprising batting shots of uneven length filmed under unpredictable illumination conditions. Impelled by the enormous success of deep-learning models, we utilized a convolutional neural network (CNN) for automatic feature extraction and a gated recurrent unit (GRU) to deal with long temporal dependency. Initially, conventional CNN and dilated CNN-based architectures were developed. Following that, different transfer-learning models were investigated (namely VGG16, InceptionV3, Xception, and DenseNet169), with all layers frozen. Experimental results demonstrated that the VGG16–GRU model outperformed the other models by attaining 86% accuracy. We further explored VGG16, developing two more models: one with all but the final 4 VGG16 layers frozen, and another with all but the final 8 layers frozen. On our CricShot10 dataset, both models were 93% accurate. These results verify the effectiveness of our proposed architecture compared with other methods in terms of accuracy.
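The GRU's handling of long temporal dependency follows the standard gate equations; below is a minimal single-step NumPy sketch with random toy weights, not the trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit step (standard GRU equations).

    x: input at time t (e.g. a CNN frame feature), h: previous hidden
    state. The gates let the unit carry information across long frame
    sequences, which suits uneven-length batting-shot clips.
    """
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # interpolate old and new

rng = np.random.default_rng(0)
d_in, d_h = 3, 5
params = [rng.standard_normal(s) for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for t in range(4):                            # run over a short sequence
    h = gru_step(rng.standard_normal(d_in), h, *params)
print(h.shape)  # (5,)
```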


IoT ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 222-235
Author(s):  
Guillaume Coiffier ◽  
Ghouthi Boukli Hacene ◽  
Vincent Gripon

Deep Neural Networks are state-of-the-art in a large number of machine learning challenges. However, to reach the best performance they require a huge number of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, whereas the spatial resolution of inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations is performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network to its fullest, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to maximal parameter factorization. In complement, normalization, non-linearities, downsampling, and shortcut connections ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with less than 40 k parameters in total, 74.3% on CIFAR-100 with less than 600 k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15 M parameters. However, the proposed method typically requires more computations than existing counterparts.
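The recursive reuse of a single parameterized layer can be illustrated with a toy NumPy sketch; the channel-mixing step below is a simplified stand-in for the actual convolution, and the iteration/downsampling schedule is invented for illustration:

```python
import numpy as np

def shared_layer(x, w):
    """One pass of the single shared layer: here a toy channel-mixing
    step plus ReLU (w is reused at every iteration)."""
    return np.maximum(np.einsum('oc,chw->ohw', w, x), 0.0)

def thrifty_forward(x, w, n_iter, downsample_at):
    """Apply the same parameters recursively; downsample occasionally."""
    for t in range(n_iter):
        x = x + shared_layer(x, w)                 # residual shortcut
        if t in downsample_at:                     # 2x2 average pooling
            c, h, v = x.shape
            x = x.reshape(c, h // 2, 2, v // 2, 2).mean(axis=(2, 4))
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w = rng.standard_normal((8, 8)) * 0.1   # the ONLY weight tensor
out = thrifty_forward(x, w, n_iter=4, downsample_at={1, 3})
print(out.shape)  # (8, 4, 4)
```

Note the parameter count stays fixed at `w.size` no matter how many iterations are run, which is the factorization the abstract describes.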


2021 ◽  
Vol 7 ◽  
pp. e451
Author(s):  
Reinel Tabares-Soto ◽  
Harold Brayan Arteaga-Arteaga ◽  
Alejandro Mora-Rubio ◽  
Mario Alejandro Bravo-Ortíz ◽  
Daniel Arias-Garzón ◽  
...  

In recent years, Deep Learning techniques applied to steganalysis have surpassed the traditional two-stage approach by unifying feature extraction and classification in a single model, the Convolutional Neural Network (CNN). Several CNN architectures have been proposed to solve this task, improving the detection accuracy of steganographic images, but it is unclear which computational elements are relevant. Here we present a strategy to improve accuracy, convergence, and stability during training. The strategy involves a preprocessing stage with Spatial Rich Model filters, Spatial Dropout, an Absolute Value layer, and Batch Normalization. Using the strategy improves the performance of three steganalysis CNNs and two image classification CNNs, enhancing accuracy by 2% up to 10% while reducing training time to less than 6 h and improving the networks’ stability.
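The Spatial Rich Models preprocessing stage convolves the image with fixed high-pass filters; the sketch below uses the well-known 5x5 KV kernel, one commonly used SRM filter, purely for illustration:

```python
import numpy as np

# The 5x5 "KV" kernel, a classic Spatial Rich Model high-pass filter
# often used to initialize a steganalysis CNN's preprocessing stage.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def highpass(image, kernel):
    """Valid 2D cross-correlation: suppresses smooth image content and
    leaves the faint, noise-like residual where a stego signal lives."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

flat = np.full((16, 16), 7.0)        # a constant (content-only) patch
residual = highpass(flat, KV)
print(np.allclose(residual, 0.0))    # True: flat content is removed
```

Because the kernel's coefficients sum to zero, any constant region maps to an all-zero residual, which is exactly the content suppression the preprocessing stage relies on.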


2021 ◽  
Vol 50 (2) ◽  
pp. 342-356
Author(s):  
Venkatesan Rajinikanth ◽  
Seifedine Kadry ◽  
Yunyoung Nam

Due to the increased rate of disease occurrence in humans, the need for Automated Disease Diagnosis (ADD) systems has also risen. Most ADD systems are proposed to support the doctor during the screening and decision making process. This research aims at developing a Computer Aided Disease Diagnosis (CADD) scheme to categorize the brain tumour in 2D MRI slices into the Glioblastoma/Glioma class with better accuracy. The main contribution of this research work is a CADD system with Convolutional Neural Network (CNN)-supported segmentation and classification. The proposed CADD framework consists of the following phases: (i) image collection and resizing, (ii) automated tumour segmentation using VGG-UNet, (iii) deep-feature extraction using the VGG16 network, (iv) handcrafted feature extraction, (v) finest feature choice by the firefly algorithm, and (vi) serial feature concatenation and binary classification. The merit of the executed CADD is confirmed through investigations on benchmark as well as clinically collected brain MRI slices. In this work, binary classification with 10-fold cross-validation is implemented using well-known classifiers, and the result attained with SVM-Cubic (accuracy >98%) is superior. This result confirms that the combination of CNN-assisted segmentation and classification helps to achieve enhanced disease detection accuracy.
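The serial concatenation of deep and handcrafted features can be sketched as follows; note that a simple variance filter stands in for the firefly-algorithm feature choice here, purely for illustration, and all feature values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_slices = 20
deep = rng.standard_normal((n_slices, 64))         # stand-in for VGG16 deep features
handcrafted = rng.standard_normal((n_slices, 16))  # stand-in for handcrafted features

# Illustrative feature choice: keep high-variance deep-feature columns
# (the paper uses a firefly algorithm; a variance filter substitutes here).
col_var = deep.var(axis=0)
keep = col_var > np.median(col_var)
selected = deep[:, keep]

# Serial feature concatenation: selected deep + handcrafted features
# form one combined vector per MRI slice for the binary classifier.
fused = np.concatenate([selected, handcrafted], axis=1)
print(fused.shape)
```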


2020 ◽  
Author(s):  
Nihad K Chowdhury ◽  
Muhtadir Rahman ◽  
Muhammad Ashad Kabir

The COVID-19 pandemic continues to severely undermine the prosperity of the global health system. To combat this pandemic, effective screening techniques for infected patients are indispensable. There is no doubt that the use of chest X-ray images for radiological assessment is one of the essential screening techniques. Some early studies revealed that patients’ chest X-ray images showed abnormalities, which is natural for patients infected with COVID-19. In this paper, we propose a parallel-dilated convolutional neural network (CNN)-based COVID-19 detection system using chest X-ray images, named Parallel-Dilated COVIDNet (PDCOVIDNet). First, the publicly available chest X-ray collection is fully pre-processed and enhanced, and then classified by the proposed method. Using different convolution dilation rates in parallel demonstrates the proof of principle for using PDCOVIDNet to extract radiological features for COVID-19 detection. Accordingly, we paired our method with two visualization methods, specifically designed to increase understanding of the key components associated with COVID-19 infection. Both visualization methods compute gradients for a given image category with respect to the feature maps of the last convolutional layer to create a class-discriminative region. In our experiment, we used a total of 2,905 chest X-ray images comprising three cases (COVID-19, normal, and viral pneumonia), and empirical evaluations revealed that the proposed method expeditiously extracted more significant features related to the suspected disease. The experimental results demonstrate that our proposed method significantly improves performance metrics: accuracy, precision, recall, and F1 score reach 96.58%, 96.58%, 96.59%, and 96.58%, respectively, which is comparable to or better than the state-of-the-art methods. We believe that our contribution can support the fight against COVID-19 and can be adopted for COVID-19 screening in AI-based systems.
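A dilated convolution enlarges the receptive field by spacing kernel taps apart without adding parameters, which is the idea behind the parallel branches described above; a minimal "valid" NumPy sketch:

```python
import numpy as np

def dilated_conv2d(image, kernel, rate):
    """'Valid' 2D cross-correlation with a dilation rate: kernel taps
    are spaced `rate` pixels apart, enlarging the receptive field
    while keeping the same number of weights."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective size
    h, w = image.shape
    out = np.empty((h - eh + 1, w - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i+eh:rate, j:j+ew:rate]    # sparse taps
            out[i, j] = (patch * kernel).sum()
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((3, 3)) / 9.0
print(dilated_conv2d(img, k, rate=1).shape)  # (6, 6)
print(dilated_conv2d(img, k, rate=2).shape)  # (4, 4)
```

Running the same 3x3 kernel at several rates in parallel, then merging the outputs, captures radiological features at multiple scales with no extra parameters per branch.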


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7076
Author(s):  
Jun Wang ◽  
Xiaomeng Zhou ◽  
Jingjing Wu

To improve the recognition rate of chip appearance defects, an algorithm based on a convolutional neural network is proposed to identify chip appearance defects of various shapes and features. Furthermore, to address the long training times and low accuracy caused by redundant input samples, an automatic data sample cleaning algorithm based on prior knowledge is proposed to reduce training and classification time, as well as improve the recognition rate. First, defect positions are determined by performing image processing and region-of-interest extraction. Subsequently, interference samples between chip defect classes are analyzed for data cleaning. Finally, a chip appearance defect classification model based on a convolutional neural network is constructed. The experimental results show that the missed-detection rate of this algorithm is zero and its accuracy exceeds 99.5%, thereby fulfilling industry requirements.
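One simple form of redundancy-driven sample cleaning, dropping samples nearly identical to ones already kept, can be sketched as follows; the cosine-similarity rule is an illustrative stand-in, not the paper's exact prior-knowledge criterion:

```python
import numpy as np

def clean_samples(features, threshold=0.99):
    """Greedily keep only samples whose cosine similarity to every
    already-kept sample is below `threshold`, discarding redundant
    near-duplicates that inflate training time."""
    kept = []
    for f in features:
        f = f / np.linalg.norm(f)
        if all(float(f @ g) < threshold for g in kept):
            kept.append(f)
    return np.array(kept)

rng = np.random.default_rng(0)
base = rng.standard_normal((5, 32))                 # 5 distinct samples
noisy_copies = base + 1e-4 * rng.standard_normal((5, 32))
data = np.vstack([base, noisy_copies])              # 10 samples, 5 redundant
cleaned = clean_samples(data)
print(len(cleaned))  # 5
```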

