A Robustness Watermarking Algorithm for Digital Color Image Print-Scan Process

2013 ◽  
Vol 333-335 ◽  
pp. 992-997
Author(s):  
Yun Lu Ge ◽  
Hui Han ◽  
Xiao Dong Sun ◽  
Sheng Pin Wang ◽  
Sheng Yun Ji

Most watermarking algorithms are designed for digital grey-scale images, are not robust against the attacks of the print-scan process, and offer only a small embedding capacity. To solve these problems, a new method based on the DWT and the Walsh orthogonal transform was proposed for the print-scan process of digital color images. The method converts the digital color image from the RGB color space to CIEL*a*b*, and the watermark is embedded in the low-frequency components of the DWT-transformed image. The results show that the correlation of the watermark is improved by the Walsh orthogonal transform, the watermark extraction rate is high, and the extracted watermark remains distinct and readable after the print-scan process. The method is robust against the various attacks of the print-scan process, such as color space conversion, image halftoning, D/A conversion, A/D conversion, scaling, rotation, cropping, skew, and random noise.
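As an illustration of the embedding step described above, the snippet below is a hedged sketch only, not the authors' exact algorithm: it converts an RGB image to CIEL*a*b*, takes a one-level DWT of the L* channel, and adds a binary watermark to the low-frequency band. The wavelet choice and embedding strength are assumptions, and the paper's Walsh orthogonal transform of the watermark is omitted.

```python
# Hedged sketch of DWT-domain embedding in the L* channel (assumed details:
# 'haar' wavelet, additive embedding strength alpha; the Walsh spreading of
# the watermark used in the paper is not reproduced here).
import numpy as np
import pywt
from skimage import color

def embed_watermark(rgb, watermark_bits, alpha=2.0):
    """rgb: float image in [0, 1]; watermark_bits: small 2-D array of 0/1."""
    lab = color.rgb2lab(rgb)                        # RGB -> CIEL*a*b*
    L = lab[..., 0]
    LL, (LH, HL, HH) = pywt.dwt2(L, 'haar')         # one-level DWT
    h, w = watermark_bits.shape
    LL[:h, :w] += alpha * (2 * watermark_bits - 1)  # embed in low frequencies
    L_rec = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
    lab[..., 0] = L_rec[:L.shape[0], :L.shape[1]]   # crop possible padding
    return color.lab2rgb(lab)
```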

Author(s):  
ZHAO Baiting ◽  
WANG Feng ◽  
JIA Xiaofen ◽  
GUO Yongcun ◽  
WANG Chengjun

Background: Aiming at the problems of color distortion, low clarity and poor visibility of underwater images caused by the complex underwater environment, a wavelet fusion method, UIPWF, is proposed for underwater image enhancement. Methods: First, an improved NCB color balance method is designed to identify and remove abnormal pixels and to balance the R, G and B channels by affine transformation. Then the color-corrected image is converted to the CIELab color space, and the L component is equalized with contrast-limited adaptive histogram equalization to obtain a brightness-enhanced map. Finally, different fusion rules are designed for the low-frequency and high-frequency components, and a pixel-level wavelet fusion of the color-balanced image and the brightness-enhanced image is performed to improve edge-detail contrast while preserving the underwater image contours. Results: Experiments demonstrate that, compared with existing underwater image processing methods, UIPWF is highly effective for underwater image enhancement, improves the objective indicators substantially, and produces visually pleasing enhanced images with clear edges and reasonable color information. Conclusion: The UIPWF method effectively mitigates color distortion and improves clarity and contrast, making it applicable to underwater image enhancement in different environments.
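Two of the ingredients above, the CIELab conversion and the contrast-limited histogram equalisation of the L component, can be sketched in a few lines assuming OpenCV; the NCB color balance and the wavelet fusion stage are not reproduced, and the clip limit and tile size are illustrative values only.

```python
# Hedged sketch: brightness-enhancement branch only (CLAHE on the L channel
# of a CIELab conversion); color balance and wavelet fusion are omitted.
import cv2

def brightness_enhance(bgr):
    """bgr: 8-bit BGR image as returned by cv2.imread."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(L), a, b))        # equalised L, original a/b
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```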


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Dina Khattab ◽  
Hala Mousher Ebied ◽  
Ashraf Saad Hussein ◽  
Mohamed Fahmy Tolba

This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation with the automatic GrabCut technique. GrabCut is considered a semiautomatic image segmentation technique, since it requires user interaction to initialize the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no single color space is recommended for every segmentation problem, the automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results on different color images show that RGB is the best color space representation for the set of images used.
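The comparison itself is easy to reproduce in outline. The sketch below, assuming OpenCV, runs GrabCut on the same image expressed in two different color spaces; a plain rectangle initialisation stands in for the paper's Orchard-Bouman initialisation, and 'photo.jpg' is a placeholder file name.

```python
# Hedged sketch: GrabCut on two color-space representations of one image.
# The automatic (Orchard-Bouman) initialisation from the paper is replaced
# by a simple rectangle; 'photo.jpg' is a placeholder.
import cv2
import numpy as np

def grabcut_mask(img, rect, iters=5):
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    return ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)

bgr = cv2.imread('photo.jpg')
rect = (10, 10, bgr.shape[1] - 20, bgr.shape[0] - 20)
mask_rgb = grabcut_mask(bgr, rect)
mask_hsv = grabcut_mask(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV), rect)
```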


2011 ◽  
Vol 143-144 ◽  
pp. 737-741 ◽  
Author(s):  
Hai Bo Liu ◽  
Wei Wei Li ◽  
Yu Jie Dong

The vision system is an important part of the whole robot soccer system. In order to win the game, the robot system must be quicker and more accurate. A color image segmentation method using an improved seed-fill algorithm in the YUV color space is introduced in this paper. The new method dramatically reduces the amount of computation and speeds up image processing. A comparison with the previous method based on the RGB color space is presented. The second step of the vision subsystem is the identification of the color blocks separated in the first step, for which the improved seed-fill algorithm is used. The implementation on the MiroSot Soccer Robot System shows that the new method is fast and accurate.
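A simplified, NumPy-only sketch of the idea follows: threshold a team color in YUV and grow the connected blob from a seed pixel with a queue-based seed fill. The bounds and the 4-neighbour growth rule are illustrative assumptions, not the tuned implementation used on the MiroSot system.

```python
# Hedged sketch: seed-fill of a color blob in a YUV image (illustrative
# thresholds; not the optimised routine from the paper).
import numpy as np
from collections import deque

def find_color_blob(yuv, seed, lo, hi):
    """yuv: HxWx3 uint8 array; seed: (row, col); lo/hi: per-channel bounds."""
    h, w = yuv.shape[:2]
    in_range = np.all((yuv >= lo) & (yuv <= hi), axis=2)
    visited = np.zeros((h, w), dtype=bool)
    blob, queue = [], deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or visited[r, c] or not in_range[r, c]:
            continue
        visited[r, c] = True
        blob.append((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return blob
```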


2021 ◽  
Vol 18 (5) ◽  
pp. 605-617
Author(s):  
Zhennan Yu ◽  
Yang Liu

Abstract: Owing to the robustness of wave equation-based inversion methods, wave equation migration velocity analysis (WEMVA) is stable in overcoming the multipathing problem and has become popular in recent years. As a rapidly developed variant, differential semblance optimisation (DSO) is convenient to implement and can automatically detect the moveout present in common image gathers (CIGs). However, because it operates in the image domain with the target of minimising moveout and improving the coherence of the CIGs, the DSO method often suffers from imaging artefacts caused by uneven illumination and irregular observation geometry, which may produce poor velocity updates contaminated by artefacts. To deal with this issue, we modify the conventional image-matching-based objective function by introducing Wiener-like filters together with the quadratic Wasserstein metric. The new misfit function measures the distance between two distributions obtained from the convolutional filters and the target functions. With the new misfit function, the adjoint sources and the corresponding gradients are improved. We apply the new method to two numerical examples and one field dataset. The results indicate that the new method robustly compensates for the low-frequency components of the velocity models.
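For reference, one standard form of the quadratic Wasserstein metric invoked above, written for one-dimensional signals that have been normalised to non-negative, unit-mass distributions (the paper's exact construction on the filtered image data may differ), is:

```latex
% Quadratic Wasserstein (W2) distance between 1-D distributions f and g
% with cumulative distribution functions F and G:
W_2^2(f, g) \;=\; \int_0^1 \bigl| F^{-1}(t) - G^{-1}(t) \bigr|^2 \, dt
```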


2020 ◽  
Vol 2020 (28) ◽  
pp. 193-198
Author(s):  
Hoang Le ◽  
Mahmoud Afifi ◽  
Michael S. Brown

Color space conversion is the process of converting the color values in an image from one color space to another. It is challenging because different color spaces have differently sized gamuts. For example, when converting an image encoded in a medium-sized color gamut (e.g., AdobeRGB or Display-P3) to a small color gamut (e.g., sRGB), color values may need to be compressed in a many-to-one manner (i.e., multiple colors in the source gamut map to a single color in the target gamut). If we try to convert this sRGB-encoded image back to a wider-gamut color encoding, it can be difficult to recover the original colors due to the loss of color fidelity. We propose a method to address this problem by embedding wide-gamut metadata inside saved images captured by a camera. Our key insight is that in the camera hardware, a captured image is converted to an intermediate wide-gamut color space (i.e., ProPhoto) as part of the processing pipeline. This wide-gamut image representation is then converted to a display color space and saved in an image format such as JPEG or HEIC. Our method includes a small sub-sampling of the color values from the ProPhoto image state in the camera in the final saved JPEG/HEIC image. We demonstrate that having this additional wide-gamut metadata available during color space conversion greatly assists in constructing a color mapping function to convert between color spaces. Our experiments show that our metadata-assisted color mapping method provides a notable improvement (up to 60% in terms of ΔE) over conventional color space methods using perceptual rendering intent. In addition, we show how to extend our approach to perform color space conversion adaptively over the image for additional improvements.
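The core use of the metadata can be illustrated with a deliberately simple stand-in: given a small set of corresponding color samples (the saved sub-sampling of ProPhoto values and their sRGB counterparts), fit a mapping and apply it to the whole image. A global affine least-squares fit is used here purely for illustration; the paper's actual mapping function, and its spatially adaptive variant, are more sophisticated.

```python
# Hedged sketch: fit an affine color mapping from sRGB samples back toward
# their wide-gamut (ProPhoto) counterparts stored as metadata, then apply it
# to every pixel. The affine model is an assumption made for brevity.
import numpy as np

def fit_color_mapping(srgb_samples, prophoto_samples):
    """Both inputs: Nx3 arrays of corresponding colors in [0, 1]."""
    A = np.hstack([srgb_samples, np.ones((len(srgb_samples), 1))])
    M, *_ = np.linalg.lstsq(A, prophoto_samples, rcond=None)    # 4x3 matrix
    return M

def apply_mapping(srgb_image, M):
    """Map an HxWx3 sRGB image toward the wide-gamut encoding."""
    h, w, _ = srgb_image.shape
    flat = srgb_image.reshape(-1, 3)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M
    return np.clip(flat, 0, 1).reshape(h, w, 3)
```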


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Xin Jin ◽  
Rencan Nie ◽  
Dongming Zhou ◽  
Quan Wang ◽  
Kangjian He

This paper proposes an effective multifocus color image fusion algorithm based on the nonsubsampled shearlet transform (NSST) and pulse coupled neural networks (PCNN); the algorithm can be used in different color spaces. In this paper, we take the HSV color space as an example: the H component is clustered by an adaptive simplified PCNN (S-PCNN) and then fused according to the oscillation frequency graph (OFG) of the S-PCNN; meanwhile, the S and V components are decomposed by the NSST, and different fusion rules are used to fuse the resulting subbands. Finally, the inverse HSV transform is performed to obtain the fused RGB color image. The experimental results indicate that the proposed color image fusion algorithm is more efficient than other common color image fusion algorithms.
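A heavily simplified sketch of the framework is given below. It keeps the HSV decomposition and the per-channel fusion structure, but substitutes a one-level wavelet transform with a max-absolute-coefficient rule for the NSST, and a per-pixel brightness comparison for the S-PCNN clustering, so it should be read as an outline of the data flow rather than the authors' method.

```python
# Hedged outline of HSV-domain fusion of two multifocus images. The NSST and
# S-PCNN steps are replaced by simpler stand-ins (one-level DWT, per-pixel
# brightness comparison) purely to show the structure of the pipeline.
import numpy as np
import pywt
from skimage import color

def fuse_multifocus_hsv(rgb_a, rgb_b):
    """rgb_a, rgb_b: float images in [0, 1] with identical shapes."""
    hsv_a, hsv_b = color.rgb2hsv(rgb_a), color.rgb2hsv(rgb_b)
    fused = np.empty_like(hsv_a)
    # H channel: keep the hue from the source with the larger V (toy rule).
    pick_a = hsv_a[..., 2] >= hsv_b[..., 2]
    fused[..., 0] = np.where(pick_a, hsv_a[..., 0], hsv_b[..., 0])
    # S and V channels: wavelet decomposition, max-absolute coefficient rule.
    for ch in (1, 2):
        ca, da = pywt.dwt2(hsv_a[..., ch], 'db2')
        cb, db = pywt.dwt2(hsv_b[..., ch], 'db2')
        lo = np.where(np.abs(ca) > np.abs(cb), ca, cb)
        hi = tuple(np.where(np.abs(x) > np.abs(y), x, y) for x, y in zip(da, db))
        rec = pywt.idwt2((lo, hi), 'db2')
        fused[..., ch] = rec[:fused.shape[0], :fused.shape[1]]
    return color.hsv2rgb(np.clip(fused, 0, 1))
```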


2005 ◽  
Vol 36 (1) ◽  
pp. 474 ◽  
Author(s):  
Sung-Jo Koo ◽  
Chang-Gon Kim ◽  
Jong-Ki An ◽  
Man-Hyo Park ◽  
Sang-Deog Yeo

2013 ◽  
Vol 11 (4) ◽  
pp. 2484-2489
Author(s):  
Rajeev Sunakara ◽  
P.Ravi Sankar

Contrast enhancement plays an important role in image processing applications. This paper presents a color enhancement algorithm based on an adaptive filter technique. The proposed method is divided into three major parts: obtaining the luminance and backdrop images, adaptive modification, and color restoration. Unlike traditional color image enhancement algorithms, the adaptive filter in this algorithm takes color information into consideration. The algorithm exploits the significance of color information in color image enhancement and utilizes color space conversion to obtain much better visibility. In the practical results, the proposed method produces better enhancement and reduces halo distortion compared with bilateral-filter-based methods.
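The three-part structure described above can be sketched roughly as follows, with a Gaussian filter standing in for the paper's adaptive filter and a power-law modification used for illustration; the parameter values and the color-restoration rule are assumptions.

```python
# Hedged sketch of the luminance / backdrop / restoration structure. A
# Gaussian blur replaces the paper's adaptive filter; sigma and gamma are
# illustrative values only.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(rgb, sigma=15, gamma=0.6):
    """rgb: float image in [0, 1]."""
    lum = rgb.max(axis=2) + 1e-6                   # luminance image
    backdrop = gaussian_filter(lum, sigma)         # smooth backdrop estimate
    enhanced = np.power(lum / backdrop, gamma) * backdrop.mean()
    gain = enhanced / lum                          # per-pixel luminance gain
    return np.clip(rgb * gain[..., None], 0, 1)    # color restoration
```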


Author(s):  
Eugenijus Margalikas ◽  
Simona Ramanauskaitė

Abstract: In this paper, we present a novel image steganography method based on color palette transformation in color space. Most existing image steganography methods modify separate image pixels, so random noise appears in the image. By proposing a method that changes the color palette of the image (all pixels of the same color are changed to the same new color), we achieve higher perceptual quality. A comparison of stego-image quality metrics with other image steganography methods shows that the new method is among the best according to the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR). Its embedding capacity is average among the compared methods, but it offers the largest capacity among the methods with better SSIM and PSNR values. The color and pixel capacity can be increased by using standard or adaptive color palette images with smoothing, although this increases the possibility of detecting the embedding.
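A toy illustration of the palette principle follows: every pixel of a given color is changed in exactly the same way, so no pixel-level noise pattern appears. The parity-of-blue encoding below is a made-up stand-in, not the authors' color space transformation.

```python
# Toy, hedged illustration of palette-level embedding: one message bit per
# palette entry is forced into the parity of that entry's blue value, so all
# pixels sharing a color change together. Not the authors' scheme.
import numpy as np

def embed_in_palette(indexed, palette, bits):
    """indexed: HxW array of palette indices; palette: Nx3 uint8; bits: 0/1 list."""
    new_palette = palette.copy()
    for i, bit in enumerate(bits[:len(palette)]):
        b = int(new_palette[i, 2])
        if b % 2 != bit:                           # adjust parity, avoid overflow
            new_palette[i, 2] = b + 1 if b < 255 else b - 1
    return new_palette[indexed]                    # stego image, HxWx3
```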


2012 ◽  
Vol 262 ◽  
pp. 86-91
Author(s):  
Yang Jin ◽  
Zhen Liu ◽  
Peng Fei Wang ◽  
San Guo Liu ◽  
Hong Jie Zhai

A color image is an information carrier composed of color components, among which correlation exists. To investigate the internal relevance of the information in a color image and to reveal the dependence between its components, it is significant to study the correlation of the color component images. To examine the relationship between the color space and the correlation of the component images, the RGB, LCH, LAB, OHTA and YCC color spaces are selected, and the correlation coefficients and cross-correlations of the component images are computed and analyzed on the MATLAB platform. The results show that the statistical correlation coefficients of the component images are highest in the RGB color space and lowest in the OHTA color space, while those in the LAB and LCH spaces are relatively low. In the opponent color spaces, the correlation coefficients between the two opponent color component images are higher than those between the lightness image and either opponent color component image. For the cross-correlation of the color component images, a weak negative exponential relationship between pixel distance and cross-correlation is observed. The average cross-correlation of the component images in the LCH space is clearly lower than in the other spaces, whereas the levels of cross-correlation in the other spaces are similar. The relationship between the cross-correlation and the color characteristics of the image is close in the RGB color space, while in the OHTA space the differences in cross-correlation among the component images are usually small. In the LCH space, the differences in cross-correlation among the component images are obvious, and the cross-correlations between the chroma component and the other components (lightness and hue) are much lower.
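The measurement itself reduces to computing correlation coefficients between the flattened component images, which the short sketch below does for the RGB and CIELab representations (a random array stands in for a real photograph, so its coefficients are near zero; a natural image would reproduce the trends described above).

```python
# Hedged sketch: 3x3 correlation matrices of the component images of one
# color image, in RGB and in CIELab. The random array is only a stand-in.
import numpy as np
from skimage import color

def channel_correlations(img3):
    """img3: HxWx3 array; returns the 3x3 correlation matrix of its channels."""
    return np.corrcoef(img3.reshape(-1, 3).T.astype(float))

rgb = np.random.rand(64, 64, 3)                    # stand-in for a photograph
print(channel_correlations(rgb))                   # R/G/B correlations
print(channel_correlations(color.rgb2lab(rgb)))    # L*/a*/b* correlations
```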

