A robust video zero-watermarking based on deep convolutional neural network and self-organizing map in polar complex exponential transform domain

Author(s): Yumei Gao, Xiaobing Kang, Yajun Chen
2021, Vol. 2021, pp. 1-13

Author(s): Wenbing Wang, Yan Li, Shengli Liu

Zero-watermarking is one of the solutions for image copyright protection that does not tamper with the image itself, and it is therefore suitable for medical images, which commonly do not allow any distortion. Moment-based zero-watermarking is robust against both image-processing and geometric attacks, but the discrimination of watermarks is often ignored by researchers, leaving a high possibility that the verifier cannot distinguish host images from fake host images. To this end, this paper proposes a zero-watermarking scheme in the PCET (polar complex exponential transform) domain that exploits the stability of the relationships between moment magnitudes of the same order and between moment magnitudes of the same repetition, and that can handle multiple medical images simultaneously. The scheme first calculates the PCET moment magnitudes for each image in an image group. Then, the magnitudes of the same order and the magnitudes of the same repetition are compared to obtain content-related features. All the image features are added together to obtain the features for the image group. Finally, the scheme extracts a robust feature vector with a chaos system and takes the bitwise XOR of the robust feature and a scrambled watermark to generate the zero-watermark. The scheme produces robust features that both resist various attacks and show low similarity among different images. In addition, the one-to-many mapping between magnitudes and robust feature bits reduces the number of moments involved, which not only reduces the computation time but also further improves the robustness. The experimental results show that the proposed scheme meets the performance requirements of zero-watermarking on robustness, discrimination, and capacity, and that it outperforms state-of-the-art methods in robustness, discrimination, and computation time under the same payloads.
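The pipeline described above (compare magnitudes, select robust bits with a chaos system, XOR with a scrambled watermark) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the logistic map stands in for the unspecified chaos system, the key-driven shuffle stands in for the watermark scrambling, and all parameter values are hypothetical.

```python
import random

def magnitudes_to_bits(mags):
    """Compare PCET moment magnitudes of the same order (rows) and of
    the same repetition (columns) pairwise; each comparison yields one
    feature bit. `mags` is a 2-D list indexed as mags[order][repetition]."""
    bits = []
    for row in mags:                       # same order, varying repetition
        for a, b in zip(row, row[1:]):
            bits.append(1 if a > b else 0)
    for col in zip(*mags):                 # same repetition, varying order
        for a, b in zip(col, col[1:]):
            bits.append(1 if a > b else 0)
    return bits

def logistic_select(bits, length, seed=0.7, mu=3.99):
    """Select `length` robust feature bits by iterating a logistic
    chaotic map (a stand-in for the paper's chaos system)."""
    x, out = seed, []
    for _ in range(length):
        x = mu * x * (1 - x)
        out.append(bits[int(x * len(bits)) % len(bits)])
    return out

def zero_watermark(mags, watermark, key=42):
    """XOR the chaos-selected robust feature with a key-shuffled
    (scrambled) binary watermark to form the zero-watermark."""
    feat = logistic_select(magnitudes_to_bits(mags), len(watermark))
    rng = random.Random(key)
    idx = list(range(len(watermark)))
    rng.shuffle(idx)
    scrambled = [watermark[i] for i in idx]
    return [f ^ w for f, w in zip(feat, scrambled)]
```

At verification time the same feature extraction is run on the test image; XOR-ing its feature with the stored zero-watermark and undoing the scramble recovers the watermark whenever the feature bits survived the attack.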


2020, Vol. 2020 (4), pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale serving as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding lens type with its technical parameters using this new CNN-based model.
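The idea of a beam-angle scale, mapping a spotlight's measured beam angle to a named class, can be sketched as a simple lookup. The class names and boundary values below are hypothetical illustration values, not the scale proposed in the article.

```python
def beam_angle_class(angle_deg):
    """Map a spotlight's full beam angle (degrees) to a class on a
    symmetric beam-angle scale. Boundaries and names are hypothetical
    placeholders for the article's actual scale."""
    scale = [
        (10, "very narrow spot"),
        (20, "narrow spot"),
        (35, "spot"),
        (60, "flood"),
        (120, "wide flood"),
    ]
    for upper_bound, name in scale:
        if angle_deg <= upper_bound:
            return name
    return "very wide flood"
```

In the article's setup the CNN would predict such a class directly from a photograph of the light spot, after which the designer looks up lenses whose beam angle falls in that class.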

