Convolutional Neural Networks for Steganalysis via Transfer Learning

Author(s): Juan Tian, Yingxiang Li

Recently, a large number of studies have shown that Convolutional Neural Networks (CNNs) can automatically learn effective features for steganalysis. This paper uses transfer learning to aid the training of CNNs for steganalysis. First, a Gaussian high-pass filter is designed to preprocess the images, which enhances the weak stego noise in stego images. Then, the classical Inception-V3 model is improved, and the improved network is applied to steganalysis via transfer learning. To test the effectiveness of the developed model, two spatial-domain content-adaptive steganographic algorithms, WOW and S-UNIWARD, are used. The results show that the proposed CNN achieves better performance at low embedding rates than SRM with ensemble classifiers and SPAM implemented with a Gaussian SVM on BOSSbase. Finally, a steganalysis system based on the trained model is designed, and its generalization ability is tested and discussed through experiments.
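
A minimal sketch of the kind of pipeline this abstract describes, not the authors' implementation: a Gaussian high-pass filter to amplify the weak stego signal, followed by transfer learning on an ImageNet-pre-trained Inception-V3 backbone with a new cover/stego head. The filter sigma, input size, channel handling, and classifier head are illustrative assumptions (using SciPy and TensorFlow/Keras).

import numpy as np
from scipy.ndimage import gaussian_filter
import tensorflow as tf

def gaussian_highpass(img, sigma=1.0):
    # High-pass = original - Gaussian low-pass; this amplifies the weak,
    # high-frequency stego noise relative to the image content.
    img = img.astype(np.float32)
    return img - gaussian_filter(img, sigma=sigma)

# Grayscale BOSSbase-style images would be filtered, resized, and replicated
# to three channels before being fed to the Inception-V3 input below (assumption).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # transfer learning: reuse ImageNet features; fine-tune later if needed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cover vs. stego
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

Freezing the transferred layers first and fine-tuning only the new head is one common choice for small steganalysis training sets; unfreezing the top Inception blocks afterwards is another.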

2020, Vol. 34 (04), pp. 5281-5288
Author(s): Satoshi Nishida, Yusuke Nakano, Antoine Blanc, Naoya Maeda, Masataka Kado, ...

The human brain can effectively learn a new task from a small number of samples, which indicates that the brain can transfer its prior knowledge to solve tasks in different domains. This function is analogous to transfer learning (TL) in the field of machine learning. TL uses a well-trained feature space from a specific task domain to improve performance on new tasks with insufficient training data. TL with rich feature representations, such as those of convolutional neural networks (CNNs), shows high generalization ability across different task domains. However, such TL still falls short of giving machine learning a generalization ability comparable to that of the human brain. To examine whether the internal representation of the brain can be used to achieve more efficient TL, we introduce a method for TL mediated by human brains. Our method transforms feature representations of audiovisual inputs in CNNs into activation patterns of individual brains via an association learned beforehand from measured brain responses. The transformed representations are then used for TL to estimate labels that reflect the human cognition and behavior induced by the audiovisual inputs. We demonstrate that our brain-mediated TL (BTL) achieves higher label-estimation performance than standard TL. In addition, we show that estimations mediated by different brains vary from brain to brain, and that this variability reflects individual variability in perception. Thus, our BTL provides a framework to improve the generalization ability of machine-learning feature representations and to enable machine learning to estimate human-like cognition and behavior, including its individual variability.
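
The brain-mediated TL described above can be sketched, under simplifying assumptions, as a two-stage pipeline: an encoding model that maps CNN features to measured brain responses, and a decoder trained on the resulting brain-like representations. The ridge-regression encoder, logistic-regression decoder, and random placeholder data below are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)
cnn_feats_train = rng.normal(size=(200, 2048))   # CNN features of training stimuli (placeholder)
brain_resp_train = rng.normal(size=(200, 5000))  # measured voxel responses to the same stimuli (placeholder)
labels_train = rng.integers(0, 2, size=200)      # behavioural labels for the new task (placeholder)
cnn_feats_test = rng.normal(size=(50, 2048))     # CNN features of test stimuli (placeholder)

# Step 1: encoding model that maps CNN features to an individual's brain responses.
encoder = Ridge(alpha=1.0).fit(cnn_feats_train, brain_resp_train)

# Step 2: transform inputs into that individual brain's representation space.
brainlike_train = encoder.predict(cnn_feats_train)
brainlike_test = encoder.predict(cnn_feats_test)

# Step 3: transfer learning on the brain-like representation to estimate labels.
decoder = LogisticRegression(max_iter=1000).fit(brainlike_train, labels_train)
predicted_labels = decoder.predict(brainlike_test)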


Author(s): Maryam Abata, Mahmoud Mehdi, Said Mazer, Moulhime El Bekkali, Catherine Algani

2021, Vol. 2 (3)
Author(s): Gustaf Halvardsson, Johanna Peterson, César Soto-Valero, Benoit Baudry

The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model uses a pre-trained InceptionV3 network trained with the mini-batch gradient descent optimization algorithm. We rely on transfer learning during the pre-training of the model and its data. The final accuracy of the model, based on 8 study subjects and 9,400 images, is 85%. Our results indicate that CNNs are a promising approach for interpreting sign languages, and that transfer learning can be used to achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model as a user-friendly web application for interpreting signs.
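
A minimal sketch of a comparable setup, not the authors' code: a frozen, ImageNet-pre-trained InceptionV3 backbone with a new softmax head for the hand-alphabet classes, trained by mini-batch stochastic gradient descent. The class count, image size, and hyper-parameters are assumptions.

import tensorflow as tf

NUM_CLASSES = 26  # assumed number of SSL hand-alphabet signs

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # transfer learning: keep the pre-trained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Mini-batch gradient descent: an SGD optimizer combined with a fixed batch size.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Example usage (hypothetical arrays of sign images and integer labels):
# model.fit(train_images, train_labels, batch_size=32, epochs=20,
#           validation_data=(val_images, val_labels))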

