Reference Sharing Mechanism-Based Self-Embedding Watermarking Scheme with Deterministic Content Reconstruction

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Dongmei Niu ◽  
Hongxia Wang ◽  
Minquan Cheng ◽  
Canghong Shi

This paper presents a reference sharing mechanism-based self-embedding watermarking scheme. The host image is embedded with watermark bits comprising reference data for content recovery and authentication data for tamper localization. A special encoding matrix, derived from the generator matrix of a selected systematic Maximum Distance Separable (MDS) code, is adopted. The reference data are generated by encoding the representative data of all original image blocks. On the receiver side, tampered image blocks can be located by the authentication data, and the reference data embedded in any one image block can be shared by all image blocks to restore the tampered content, so the tampering coincidence problem is avoided to the greatest possible extent. The maximal tampering rate is deduced theoretically. Experimental results show that, as long as the tampering rate is below this maximum, content recovery is deterministic, and the quality of the recovered content does not degrade as the tampering rate grows toward the maximum.
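The reference-sharing idea can be illustrated with a toy MDS erasure code: the representative data of all blocks are encoded into shared reference symbols so that any sufficiently large subset of surviving symbols recovers every block. This is a minimal sketch using Reed-Solomon-style polynomial evaluation over a small prime field; the field, parameters, and encoding matrix here are illustrative assumptions, not the paper's actual construction.

```python
# Toy MDS reference sharing: k block values -> n reference symbols;
# any k surviving symbols recover all k blocks (deterministic recovery).
P = 257  # prime field large enough for 8-bit representative values

def rs_encode(data, n):
    """Evaluate the degree-(k-1) polynomial with coefficients `data`
    at points 1..n, producing n shared reference symbols."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
            for x in range(1, n + 1)]

def rs_decode(points, k):
    """Lagrange interpolation over GF(P): recover the k coefficients
    from any k surviving (x, y) reference symbols."""
    def poly_mul(a, b):
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % P
        return out
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points[:k]):
        num, den = [1], 1
        for j, (xj, _) in enumerate(points[:k]):
            if i != j:
                num = poly_mul(num, [(-xj) % P, 1])  # factor (x - xj)
                den = den * (xi - xj) % P
        scale = yi * pow(den, P - 2, P) % P          # yi / den mod P
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * num[d]) % P
    return coeffs

blocks = [17, 42, 99, 200]        # representative data of 4 image blocks
shares = rs_encode(blocks, 7)     # 7 reference symbols spread over the image
# Any 4 of the 7 symbols suffice, even if the other 3 were tampered with:
survivors = [(2, shares[1]), (4, shares[3]), (6, shares[5]), (7, shares[6])]
assert rs_decode(survivors, 4) == blocks
```

Because the code is MDS, recovery succeeds for any erasure pattern up to the designed rate, which is the property behind the deterministic reconstruction claim.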

2013 ◽  
Vol 13 (02) ◽  
pp. 1340002 ◽  
Author(s):  
DURGESH SINGH ◽  
SHIVENDRA SHIVANI ◽  
SUNEETA AGARWAL

This paper suggests an efficient fragile watermarking scheme for image content authentication with the capability to restore altered regions. The image is divided into non-overlapping blocks of size 2 × 2, and for each block, eight bits of content recovery data and four bits of authentication data are generated from the five most significant bits (MSBs) of each pixel. These 12 bits are embedded into the least significant bits (LSBs) of the pixels in the block's corresponding mapping block. At the receiver end, tampered blocks are identified by comparing the recalculated and extracted authentication data, and the recovery data are then used to restore them. Experimental results demonstrate that the proposed scheme is effective for both alteration detection and tamper recovery.
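The 2 × 2-block bit budget works out neatly: 4 pixels × 3 LSBs in the mapping block hold exactly the 12 payload bits. This is a minimal sketch of that layout, assuming an 8-bit block-average recovery code and a toy 4-bit authentication hash; the paper's exact bit derivations are not specified here.

```python
# Sketch: 8 recovery bits + 4 authentication bits from the 5 MSBs of a
# 2x2 block, packed into the 3 LSBs of the 4 pixels of its mapping block.
def block_payload(block):
    """block: the 4 pixel values (0..255) of one 2x2 block."""
    recovery = (sum(p & 0xF8 for p in block) // 4) & 0xFF  # 8-bit MSB average
    auth = (sum(p >> 3 for p in block) * 7 + 3) % 16       # toy 4-bit hash
    return (recovery << 4) | auth                          # 12-bit payload

def embed(mapping_block, payload):
    """Write the 12 payload bits into the 3 LSBs of the 4 mapped pixels."""
    out = []
    for i, p in enumerate(mapping_block):
        bits = (payload >> (3 * (3 - i))) & 0b111
        out.append((p & ~0b111) | bits)
    return out

def verify(block, mapping_block):
    """Recompute the payload and compare with the extracted 12 bits."""
    extracted = 0
    for p in mapping_block:
        extracted = (extracted << 3) | (p & 0b111)
    return extracted == block_payload(block)
```

On detection, a mismatch flags the block as tampered, and the 8 recovery bits extracted from the mapping block give the replacement average intensity.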


2021 ◽  
Vol 11 (3) ◽  
pp. 1146
Author(s):  
Cheonshik Kim ◽  
Ching-Nung Yang

Research on self-embedding watermarks is being actively conducted to address the privacy and copyright problems caused by image attacks. In this paper, we propose a self-embedding watermarking technique based on Absolute Moment Block Truncation Coding (AMBTC) for reconstructing images tampered with by cropping attacks and forgery. AMBTC is well suited as the recovery watermark because of its excellent compression performance and image quality. Moreover, to improve the quality of the marked image, the Optimal Pixel Adjustment Process (OPAP) is used when hiding the AMBTC codes in the cover image. To locate damaged blocks in a marked image, authentication data must be hidden in each block along with the watermark; we employ a checksum for this purpose. The watermark is embedded in the pixels of the cover image using the third and second least significant bits, and the checksum is hidden in the LSB. Through the recovery procedure, the original marked image can be restored from the tampered one; at a tampering ratio of 45%, the Lena image could be recovered at 36 dB. The proposed self-embedding method was verified experimentally, and the recovered images showed superior perceptual quality compared to previous methods.
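AMBTC itself is simple enough to sketch: each block is reduced to a bitmap plus two reconstruction levels (the means of the pixels above and below the block mean), which is why it makes a compact recovery watermark. The block size and the omission of the OPAP embedding step are simplifications for illustration.

```python
# Sketch of AMBTC, the compression used as the recovery watermark.
def ambtc_encode(block):
    """block: flat list of pixel values. Returns (low, high, bitmap)."""
    mean = sum(block) / len(block)
    bitmap = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, bitmap) if b]
    lo = [p for p, b in zip(block, bitmap) if not b]
    a = round(sum(lo) / len(lo)) if lo else 0   # low reconstruction level
    b = round(sum(hi) / len(hi)) if hi else 0   # high reconstruction level
    return a, b, bitmap

def ambtc_decode(a, b, bitmap):
    """Rebuild the block from the two levels and the bitmap."""
    return [b if bit else a for bit in bitmap]
```

A 4 × 4 block thus costs 16 bitmap bits plus two 8-bit levels (2 bits per pixel), and the decoded block preserves the block's mean and contrast, which is what keeps the recovered regions perceptually acceptable.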


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2390 ◽  
Author(s):  
Wenhuan Lu ◽  
Zonglei Chen ◽  
Ling Li ◽  
Xiaochun Cao ◽  
Jianguo Wei ◽  
...  

In this paper, a novel imperceptible, fragile, and blind watermarking scheme is proposed for speech tampering detection and self-recovery. The embedded watermark data for content recovery are calculated from the original discrete cosine transform (DCT) coefficients of the host speech, and the watermark information is shared across a group of frames instead of being stored in a single frame, trading off the data waste problem against the tampering coincidence problem. When part of a watermarked speech signal is tampered with, the tampered area can be accurately localized, while the watermark data in the unmodified area can still be extracted. A compressive sensing technique is then employed to retrieve the coefficients by exploiting their sparseness in the DCT domain. The smaller the tampered area, the better the quality of the recovered signal. Experimental results show that the watermarked signal is imperceptible and that the recovered signal remains intelligible at tampering rates of up to 47.6%. A deep learning-based enhancement method is also proposed and implemented to increase the SNR of the recovered speech signal.
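The per-frame recovery data can be sketched as a quantization of a frame's low-frequency DCT coefficients into a short bit string, which is what gets shared across the frames-group. The coefficient count, quantizer step, and bit layout below are assumptions for illustration only; the paper's compressive-sensing recovery stage is not reproduced here.

```python
# Sketch: quantize a speech frame's leading DCT-II coefficients into
# recovery bits (1 sign bit + 3 magnitude bits per kept coefficient).
import math

def dct2(frame):
    """Naive DCT-II (unnormalized), adequate for short frames."""
    N = len(frame)
    return [sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(frame)) for k in range(N)]

def recovery_bits(frame, keep=4, q=64):
    """Keep the first `keep` coefficients; 4 bits each."""
    bits = []
    for c in dct2(frame)[:keep]:
        bits.append(1 if c >= 0 else 0)        # sign bit
        mag = min(int(abs(c) // q), 7)         # 3-bit magnitude bucket
        bits += [(mag >> i) & 1 for i in (2, 1, 0)]
    return bits
```

At the receiver, frames whose authentication fails are marked as erasures, and the surviving shared bits constrain the sparse DCT reconstruction.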


2013 ◽  
Vol 28 (1) ◽  
pp. 18-31 ◽  
Author(s):  
T. G. Fawcett ◽  
C. E. Crowder ◽  
S. N. Kabekkodu ◽  
F. Needham ◽  
J. A. Kaduk ◽  
...  

Eighty specimens of cellulosic materials were analyzed over a period of several years to study the diffraction characteristics resulting from polymorphism, crystallinity, and chemical substitution. The aim of the study was to produce and verify the quality of reference data useful for the diffraction analysis of cellulosic materials. These reference data can be used for material identification, polymorphism studies, and crystallinity measurements. Overall, 13 new references have been characterized for publication in the Powder Diffraction File (PDF), and several others are in the process of publication.


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5540
Author(s):  
Nayeem Hasan ◽  
Md Saiful Islam ◽  
Wenyu Chen ◽  
Muhammad Ashad Kabir ◽  
Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of second-level discrete wavelet transform (2DWT) and discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT was selected based on an analysis of the trade-off between watermark imperceptibility and embedding capacity at various levels of decomposition. A DCT is applied to the selected area, and the image coefficients are gathered into a single vector using a zig-zag scan. The same random bit sequence is used both as the watermark and as the seed for the embedding-zone coefficients. The quality of the reconstructed image was measured by bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrate that the proposed scheme is highly robust under different types of image-processing attacks. Several attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were applied to watermarked images, and the results of our proposed method outstripped those of existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of the quality of the reconstructed image, which demonstrated high imperceptibility (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
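The zig-zag scan mentioned above is the standard ordering that serializes an N × N block of DCT coefficients from low to high frequency. A minimal sketch, assuming the conventional JPEG-style traversal:

```python
# Zig-zag scan: order an NxN coefficient block into one vector,
# walking anti-diagonals and alternating direction on each one.
def zigzag(block):
    n = len(block)
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],                      # anti-diagonal
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[i][j] for i, j in order]
```

Embedding in the early (low-frequency) positions of this vector is what gives robustness to JPEG compression, since those coefficients survive quantization best.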


2019 ◽  
Vol 29 (1) ◽  
pp. 1480-1495
Author(s):  
D. Khalandar Basha ◽  
T. Venkateswarlu

Abstract Image restoration (IR) is a branch of image processing that improves the quality of an image degraded by noise and blur. In this paper, IR is performed using a linear regression-based support vector machine (LR-SVM), which operates in two stages: training and testing. Each stage uses a distinct windowing process to extract blocks from the images, and the LR-SVM is trained through a block-by-block training sequence; the extracted block values enhance the classification process of IR. In training, imperfections in an image are easily identified by setting the target vectors to the original images. In testing, a noisy image is given to the LR-SVM and restored based on the original images in the dictionary. Finally, the image blocks from the testing stage are enhanced using a hybrid Laplacian of Gaussian (HLOG) filter, whose block-by-block denoising further improves the results. This proposed approach is named LR-SVM-HLOG. The method was evaluated on the Berkeley Segmentation Database in terms of peak signal-to-noise ratio (PSNR) and the structural similarity index. The PSNR values for the House and Pepper images (color images) are 40.82 and 36.56 dB, respectively, which are higher than those of the inter- and intra-block sparse estimation method and of block matching and three-dimensional filtering for color images at 20% noise.
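The distinct windowing processes amount to extracting blocks with different strides: overlapping patches (stride smaller than the block size) yield more training samples, while non-overlapping patches tile the test image exactly. A minimal sketch of that extraction, with sizes and strides chosen for illustration rather than taken from the paper:

```python
# Block windowing: slide a size x size window over the image with the
# given stride; stride < size gives overlapping (training-style) blocks,
# stride == size gives a non-overlapping (testing-style) tiling.
def extract_blocks(img, size, stride):
    h, w = len(img), len(img[0])
    return [[row[c:c + size] for row in img[r:r + size]]
            for r in range(0, h - size + 1, stride)
            for c in range(0, w - size + 1, stride)]
```

Each extracted block then becomes one training or test vector for the regressor.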


1993 ◽  
Vol 37 ◽  
pp. 117-121
Author(s):  
Ron Jenkins

While most contemporary methods of qualitative analysis of multi-phase materials are still based on the classic Search/Match/Identify process developed by Hanawalt, Rinn and Frevel in the 1930s, during the past 10 years or so the personal computer, with associated CD-ROM storage, has made a dramatic impact on the ways in which classical procedures are implemented. Until recently, most commercial mainframe- and PC-based software packages for qualitative phase identification were designed to implement a fully automatic search/matching sequence, and all of the major instrument suppliers now offer such programs as part of their Automated Powder Diffractometer (APD) packages. While these programs are extremely useful, the success of their application to a specific problem depends critically on the quality of both the experimental data and the reference data. Until the problems arising from comparing experimental and reference data of variable quality are completely understood, there will likely continue to be interest in user-interactive (computer-aided) manual methods of search/matching. This paper explores the use of the personal computer in the area of computer-aided search/matching.


2020 ◽  
Vol 34 (05) ◽  
pp. 9749-9756
Author(s):  
Junnan Zhu ◽  
Yu Zhou ◽  
Jiajun Zhang ◽  
Haoran Li ◽  
Chengqing Zong ◽  
...  

Multimodal summarization with multimodal output (MSMO) aims to generate a multimodal summary for a multimodal news report, which has been proven to effectively improve user satisfaction. Existing MSMO methods are trained only against a text-modality target, leading to a modality-bias problem in which the quality of the model-selected image is ignored during training. To alleviate this problem, we propose a multimodal objective function, guided by a multimodal reference, that combines the losses from summary generation and image selection. Because multimodal reference data are lacking, we present two strategies, ROUGE-ranking and Order-ranking, to construct the multimodal reference by extending the text reference. Meanwhile, to better evaluate multimodal outputs, we propose a novel evaluation metric based on joint multimodal representation, projecting the model output and the multimodal reference into a joint semantic space during evaluation. Experimental results show that our proposed model achieves a new state of the art on both automatic and manual evaluation metrics, and our proposed evaluation method correlates more strongly with human judgments.
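The ROUGE-ranking strategy can be sketched as scoring each candidate image's accompanying text against the text reference with ROUGE-1 and promoting the top-scoring image into the multimodal reference. The caption-based scoring and the simple F-score below are simplifying assumptions; the paper's exact ranking features are not reproduced here.

```python
# Sketch of ROUGE-ranking: pick the candidate image whose associated
# text best matches the reference summary under ROUGE-1 F-score.
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap F-score between two strings."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec = overlap / sum(c.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def rouge_rank(captions, reference):
    """captions: {image_id: associated text}. Returns best image_id."""
    return max(captions, key=lambda img: rouge1_f(captions[img], reference))
```

The selected image_id then serves as the image-selection target, so the training loss can penalize picking a poorly matching image.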


2021 ◽  
Vol 2131 (3) ◽  
pp. 032029
Author(s):  
I Lipatov ◽  
M Molchanova ◽  
O Lebedev

Abstract The article deals with practical aspects of the mathematical modeling of hydrodynamic processes in the chambers of navigation locks. Direct and inverse Fourier transforms were tested as a means of obtaining representations of non-stationary graphs suitable for analysis. Cross-sections of the water flow filling the chamber of a typical lock in the Volga-Don shipping channel (VDSC) were used as reference data, with control sections chosen at points of qualitatively different hydrodynamic motion. A two-dimensional array of non-stationary data was decomposed into Fourier series, and the resulting amplitude-frequency spectrum was analyzed harmonic by harmonic, with amplitude taken as the criterion for selecting harmonics. After zeroing the insignificant harmonics, the inverse Fourier transform was performed, and the quality of the approximation was checked by visually overlaying the original graphs on the processed ones. In all cases, acceptable approximation results were obtained, creating a reliable basis for scientific analysis and for the development of engineering measures to ensure safe ship passage through locks. The article concludes with a number of data-processing features specific to the various hydrodynamic characteristics of the flow in different sections.
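The forward-transform / zero-weak-harmonics / inverse-transform pipeline described above can be sketched directly. This is a minimal version using a naive DFT; the amplitude threshold is an illustrative assumption, standing in for the article's harmonic-selection criterion.

```python
# Sketch: decompose a non-stationary signal, zero harmonics whose
# amplitude is below a fraction of the maximum, transform back.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def smooth(x, keep_ratio=0.2):
    """Keep only harmonics with amplitude >= keep_ratio * max amplitude."""
    X = dft(x)
    cutoff = keep_ratio * max(abs(c) for c in X)
    X = [c if abs(c) >= cutoff else 0 for c in X]  # zero weak harmonics
    return idft(X)
```

Overlaying `smooth(x)` on the raw series is the visual check the article describes: if the retained harmonics reproduce the graph acceptably, the truncated spectrum is a valid basis for analysis.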

