Damageless Watermark Extraction Using Nonlinear Feature Extraction Scheme Trained on Frequency Domain

Author(s):  
Kensuke Naoe ◽  
Yoshiyasu Takefuji

In this chapter, we propose a new information hiding and extraction method that embeds no information into the target content, using a nonlinear feature extraction scheme trained on the frequency domain. The proposed method detects hidden bit patterns in the content by feeding the coefficients of selected feature subblocks to a trained neural network. The coefficients are taken from the frequency domain of the target content after decomposition by a frequency transform. The bit patterns can be retrieved from the network only when the proper extraction keys are provided. In the proposed method, the extraction keys are the coordinates of the selected feature subblocks and the neural network weights generated by supervised learning. The supervised learning uses the coefficients of the selected feature subblocks as the input values and the hidden bit patterns as the teacher signals of the neural network; these bit patterns constitute the watermark signal in the proposed method. With the proposed method, we are able to introduce a watermarking scheme that causes no damage to the target content.
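The pipeline described above can be illustrated with a minimal sketch. The Python code below is a hypothetical reconstruction, not the chapter's implementation: it assumes 8×8 DCT blocks as the frequency decomposition, uses scikit-learn's MLPClassifier as the nonlinear network, and the helper `block_dct`, the subblock coordinates, and the 4-bit watermark are all invented for illustration. The coordinates and the fitted network weights together play the role of the extraction key.

```python
import numpy as np
from scipy.fft import dct
from sklearn.neural_network import MLPClassifier

def block_dct(image, coords, size=8):
    """Return DCT coefficients of the feature subblocks at the given
    (row, col) block coordinates; these vectors are the network input."""
    feats = []
    for r, c in coords:
        block = image[r*size:(r+1)*size, c*size:(c+1)*size].astype(float)
        d = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
        feats.append(d.ravel())
    return np.array(feats)

# --- Key generation (training): hypothetical 4-bit watermark ---
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64))      # stand-in for the target content
coords = [(1, 2), (3, 5), (4, 4), (6, 1)]   # secret subblock coordinates
bits = [0, 1, 1, 0]                          # teacher signals = watermark

net = MLPClassifier(hidden_layer_sizes=(16,), solver='lbfgs',
                    max_iter=5000, random_state=0)
net.fit(block_dct(image, coords), bits)      # weights become part of the key

# --- Extraction: nothing was embedded in the image itself;
# the key (coords + weights) recovers the hidden bits from it ---
print(net.predict(block_dct(image, coords)))  # expected: [0 1 1 0]
```

Because the content is never modified, "extraction" here is really a keyed recognition step, which is what makes the scheme damageless.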

2015 ◽  
Vol 770 ◽  
pp. 540-546 ◽  
Author(s):  
Yuri Eremenko ◽  
Dmitry Poleshchenko ◽  
Anton Glushchenko

The use of modern intelligent information processing methods for evaluating the filling level of a ball mill is considered. For this purpose, the vibration acceleration signal has been measured on a laboratory mill model with an accelerometer attached to a mill pin. We conclude that the mill filling level cannot be estimated from the amplitude of this signal alone, so the signal spectrum, processed by a neural network, is used instead. A training set for the neural network is formed with the help of spectral analysis methods. The trained neural network is able to find the correlation between the vibration acceleration signal of the mill pin and the mill filling level. A test set, formed from data not included in the training set, is used to evaluate the network's ability to estimate the mill filling degree. The neural network guarantees no more than 7% error in the evaluation of the mill filling level.
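A minimal sketch of such a pipeline, assuming the abstract's general approach rather than its exact parameters: each accelerometer record is reduced to an FFT magnitude spectrum, and a small regression network maps spectra to filling level. The sampling length, network size, and synthetic stand-in data below are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def spectrum(signal):
    """Windowed magnitude spectrum of a vibration acceleration record;
    this is the feature vector fed to the network."""
    return np.abs(np.fft.rfft(signal * np.hanning(len(signal))))

# Hypothetical training records: (accelerometer trace, known filling level %).
# Real data would come from the instrumented laboratory mill.
rng = np.random.default_rng(1)
records = [(rng.standard_normal(1024) * (1 + lvl / 100), lvl)
           for lvl in range(10, 90, 5) for _ in range(20)]

X = np.array([spectrum(sig) for sig, _ in records])
y = np.array([lvl for _, lvl in records])

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                     random_state=1)
model.fit(X, y)

# A held-out trace is scored the same way; the abstract reports <= 7% error
# on such a test set.
test_sig = rng.standard_normal(1024) * 1.4   # stand-in for one test record
print(model.predict([spectrum(test_sig)]))
```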


Author(s):  
Daria Mikhalina ◽  
Aleksandr Kuz'menko ◽  
Konstantin Dergachev ◽  
...  

The article discusses one of the latest approaches to colorizing black-and-white images using deep learning. Colorization is performed by a deep convolutional neural network with a large number of layers, whose architecture incorporates a ResNet model pre-trained on the ImageNet dataset. The neural network receives a black-and-white image and returns a colorized version. Because ResNet expects input dimensions that are a multiple of 255, a routine was written that pads the image with a frame to the required size. The network operates in the CIE Lab color model, which allows the black-and-white (lightness) component of the image to be separated from the color components. The network was trained on the Places365 dataset, which contains 365 classes such as animals, landscape elements, and people; training was carried out on an Nvidia GTX 1080 video card. The result is a trained neural network capable of colorizing images of any size and format; for example, a 256×256-pixel image is colorized in 0.08 seconds. Owing to the composition of the training dataset, the resulting model is oriented toward natural landscapes and urban scenes.
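The Lab-space wrapper described above can be sketched as follows. This is an illustrative reconstruction, not the article's code: `colorize`, `pad_to_multiple`, and the zero-chroma stand-in network are assumptions, and `predict_ab` takes the place of the trained ResNet-based model, which is not reproduced here.

```python
import numpy as np
from skimage import color

def pad_to_multiple(img, m=255):
    """Pad with an edge-replicating frame so both sides are multiples of m,
    as the article does before feeding the image to the network."""
    h, w = img.shape[:2]
    ph, pw = (-h) % m, (-w) % m
    return np.pad(img, ((0, ph), (0, pw), (0, 0)), mode='edge'), (h, w)

def colorize(rgb, predict_ab):
    """Colorize via CIE Lab: keep the input's lightness channel L and
    let the network predict the a/b chroma channels."""
    padded, (h, w) = pad_to_multiple(rgb)
    lab = color.rgb2lab(padded)
    L = lab[..., :1]                      # black-and-white component
    ab = predict_ab(L)                    # network's job: (H, W, 2) chroma
    out = np.concatenate([L, ab], axis=-1)
    return color.lab2rgb(out)[:h, :w]     # crop the frame back off

# Stand-in "network" predicting zero chroma (grayscale output), just to
# show that the padding/Lab wrapper runs end to end on any image size.
gray_net = lambda L: np.zeros(L.shape[:2] + (2,))
print(colorize(np.random.rand(200, 300, 3), gray_net).shape)  # (200, 300, 3)
```

Working in Lab rather than RGB means the network only has to predict two chroma channels while the input's own lightness is preserved exactly, which is what keeps fine detail intact in the colorized result.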


2010 ◽  
Vol 41 (10) ◽  
pp. 29-37 ◽  
Author(s):  
Zhixiong Li ◽  
Xinping Yan ◽  
Chengqing Yuan ◽  
Jiangbin Zhao ◽  
Zhongxiao Peng

Author(s):  
Shaolei Wang ◽  
Zhongyuan Wang ◽  
Wanxiang Che ◽  
Sendong Zhao ◽  
Ting Liu

Spoken language is fundamentally different from written language in that it contains frequent disfluencies, i.e., parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for downstream NLP tasks. Most existing approaches to disfluency detection rely heavily on human-annotated data, which is scarce and expensive to obtain in practice. To tackle this training-data bottleneck, we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data, and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) sentence classification to distinguish original sentences from grammatically incorrect ones. We then combine these two tasks to jointly pre-train a neural network, which is subsequently fine-tuned on human-annotated disfluency detection data. The self-supervised learning method captures task-specific knowledge for disfluency detection and achieves better performance than other supervised methods when fine-tuned on a small annotated dataset. However, because the pseudo training data are generated by simple heuristics and cannot fully cover all disfluency patterns, a performance gap remains compared to supervised models trained on the full training dataset. We further explore how to bridge this gap by integrating active learning into the fine-tuning process. Active learning reduces annotation costs by choosing the most informative examples to label, and thus addresses the weakness of self-supervised learning with a small annotated dataset. We show that by combining self-supervised learning with active learning, our model matches state-of-the-art performance with only about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
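The pseudo-data construction lends itself to a short sketch. The following is an illustrative reading of the two pre-training signals, not the authors' code: `make_pseudo_example`, the noise vocabulary, and the corruption probability are all assumptions. Random insertions produce tagged noise words for the tagging task; any corruption (insertion or deletion) yields a sentence-level label for the classification task.

```python
import random

def make_pseudo_example(sentence, vocab, p=0.15, rng=random.Random(0)):
    """Build a pseudo training example from an unlabeled sentence.
    Returns (tokens, tags, label): tags mark randomly inserted noise
    words (tagging task), label says whether the sentence was corrupted
    at all (sentence-classification task)."""
    tokens, tags, corrupted = [], [], False
    for tok in sentence.split():
        if rng.random() < p:              # random deletion of a real word
            corrupted = True
            continue
        if rng.random() < p:              # random insertion of a noise word
            tokens.append(rng.choice(vocab))
            tags.append('ADD')            # target for the tagging task
            corrupted = True
        tokens.append(tok)
        tags.append('O')
    label = 'CORRUPTED' if corrupted else 'ORIGINAL'
    return tokens, tags, label

vocab = ['uh', 'well', 'you', 'know', 'the']   # assumed noise vocabulary
print(make_pseudo_example('i want to book a flight to denver', vocab))
```

Because both targets are derived mechanically from unlabeled text, this step needs no human annotation, which is exactly what lets the pre-training scale before the small annotated set is used for fine-tuning.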

