GA-Based Optimized Image Watermarking Method With Histogram and Butterworth Filtering

2020 ◽  
Vol 10 (2) ◽  
pp. 59-80
Author(s):  
Sunesh Malik ◽  
Rama Kishore Reddlapalli ◽  
Girdhar Gopal

The present paper proposes a new optimization method for digital image watermarking that combines Genetic Algorithms (GA), histogram processing, and Butterworth filtering. In the proposed method, the histogram range selection of low-frequency components serves as a key parameter that improves imperceptibility and robustness against attacks. The tradeoff between perceptual transparency and robustness is formulated as an optimization problem and solved with a Genetic Algorithm. Experimental results show that the approach is secure and robust to various attacks such as rotation, cropping, scaling, additive noise, and filtering. The peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NC) are analyzed and assessed over a set of images, with the experiments carried out in MATLAB R2016b.
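As a rough illustration of the GA-driven tradeoff described above, the sketch below evolves a single embedding strength for a 1-D stand-in signal: imperceptibility (PSNR, capped) and robustness to a fixed additive-noise attack (bit agreement, standing in for NC) are combined into one fitness score. The stand-in signal, frequency band, attack, fitness weighting, and GA settings are all illustrative assumptions; the paper's histogram range selection and Butterworth filtering are omitted.

```python
import numpy as np

# Toy GA tradeoff sketch: all quantities below are hypothetical stand-ins,
# not the paper's actual embedding, attack model, or GA configuration.
rng = np.random.default_rng(0)
host = rng.normal(0.0, 1.0, 256)        # stand-in for image pixels
wm = rng.choice([-1.0, 1.0], 32)        # bipolar watermark bits
attack = rng.normal(0.0, 0.05, 256)     # fixed additive-noise attack
band = slice(1, 33)                     # assumed low-frequency band

def embed(alpha):
    spec = np.fft.rfft(host)
    spec[band] += alpha * wm            # additive low-frequency embedding
    return np.fft.irfft(spec, n=host.size)

def psnr(marked):
    mse = np.mean((marked - host) ** 2)
    return 10 * np.log10(np.ptp(host) ** 2 / mse)

def nc(alpha):
    diff = np.fft.rfft(embed(alpha) + attack) - np.fft.rfft(host)
    return np.mean(np.sign(diff[band].real) == wm)   # bit-agreement score

def fitness(alpha):
    # imperceptibility capped at 45 dB, traded off against robustness
    return min(psnr(embed(alpha)), 45.0) / 45.0 + nc(alpha)

pop = rng.uniform(0.01, 2.0, 20)        # initial embedding strengths
for _ in range(30):
    keep = pop[np.argsort([fitness(a) for a in pop])[-10:]]  # elitist selection
    pop = np.concatenate([keep, np.clip(keep + rng.normal(0, 0.05, 10), 0.01, 2.0)])
best = max(pop, key=fitness)
```

With the capped-PSNR fitness, very small strengths score poorly on robustness and very large ones on imperceptibility, so the GA settles on an intermediate strength.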


2014 ◽  
Vol 12 (10) ◽  
pp. 3997-4013 ◽  
Author(s):  
H. B. Kekre ◽  
Tanuja Sarode ◽  
Shachi Natu

Digital image watermarking is aimed at copyright protection of digital images. The strength of the embedded watermark plays an important role in the robustness and invisibility of a watermarking technique. In this paper, the effect of two parameters is studied: the watermark strength and the middle-frequency coefficients of the host image used for embedding. In the given watermarking technique, the watermark is normalized before embedding. This reduces the strength of the watermark so that distortion in the watermarked image is minimized. However, we observed in our previous paper that such embedding responds poorly to various image processing attacks such as compression, cropping, resizing, and noise addition. Hence, in this paper an attempt is made to increase the strength of the embedded watermark by using a suitable weight factor, so that the robustness of the technique proposed in our previous paper is further increased with a small, acceptable decrease in imperceptibility. The middle-frequency elements of the host image selected for embedding are also varied by selecting different rows of the host, gradually moving from middle-frequency components towards high-frequency components. For certain attacks, such as image cropping, the selection of middle-frequency coefficients affects the robustness achieved. Increasing the weight factor improves the performance of the technique proposed in our previous paper, where the weight factor was 25, by more than 50%.
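A minimal sketch of weighted embedding in selected coefficient rows is given below, assuming a non-blind scheme, an orthonormal 2-D DCT, and arbitrary host size, row band, and weight factor k; the paper's actual transform, normalization, and row choices are not reproduced here.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis (rows are basis vectors)
    j = np.arange(n)
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C

rng = np.random.default_rng(1)
host = rng.uniform(0, 255, (64, 64))   # stand-in host image
wm = rng.uniform(0, 1, (8, 64))        # normalized watermark
k = 50.0                               # weight factor (larger -> more robust)
rows = slice(24, 32)                   # hypothetical middle-frequency rows
T = dct_matrix(64)

def embed(host, wm, k, rows):
    X = T @ host @ T.T                 # 2-D DCT of host
    X[rows] += k * wm                  # weighted embedding in selected rows
    return T.T @ X @ T                 # inverse 2-D DCT

def extract(marked, host, k, rows):
    D = T @ (marked - host) @ T.T      # non-blind: subtract host spectrum
    return D[rows] / k

marked = embed(host, wm, k, rows)
recovered = extract(marked, host, k, rows)
```

Increasing k scales the embedded perturbation, and hence both robustness and distortion, linearly.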


Author(s):  
Gundula B. Runge ◽  
Al Ferri ◽  
Bonnie Ferri

This paper considers an anytime strategy to implement controllers that react to changing computational resources. The anytime controllers developed in this paper are suitable for cases where the time scale of switching is on the order of the task execution time, that is, on the time scale commonly found with sporadically missed deadlines. This paper extends prior work by developing frequency-weighted anytime controllers. The selection of the weighting function is driven by the expectation of the situations that would require anytime operation. For example, if the anytime operation is due to occasional, isolated missed deadlines, then the weighting on high frequencies should be larger than that on low frequencies: low-frequency components change less over one sample time, so failing to update them for one sample period has less effect than for the high-frequency components. An example applies the anytime control strategy to a model of a DC motor with deadzone and saturation nonlinearities.
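The hold-the-slow-components idea can be sketched with a toy two-path controller: a slow (low-frequency) path and a fast path, where a missed deadline skips only the slow update and holds its last output. This is an illustrative construction, not the paper's controller or weighting design; the pole values and constant input are arbitrary.

```python
class AnytimeController:
    """Toy two-path sketch: slow and fast first-order paths (illustrative)."""
    def __init__(self, a_slow=0.98, a_fast=0.5):
        self.a_slow, self.a_fast = a_slow, a_fast
        self.x_slow = self.x_fast = 0.0
        self.y_slow = 0.0                         # held output of the slow path
    def step(self, e, deadline_met=True):
        self.x_fast = self.a_fast * self.x_fast + e       # always updated
        if deadline_met:
            self.x_slow = self.a_slow * self.x_slow + e   # skipped when late
            self.y_slow = (1 - self.a_slow) * self.x_slow
        return self.y_slow + (1 - self.a_fast) * self.x_fast

full, degraded = AnytimeController(), AnytimeController()
for i in range(300):
    u_full = full.step(1.0, True)
    u_degraded = degraded.step(1.0, deadline_met=(i != 150))  # one missed deadline
```

In this toy run, a single skipped slow-path update barely perturbs the steady-state output, which is the rationale for sacrificing the low-frequency computation first.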


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. R989-R1001 ◽  
Author(s):  
Oleg Ovcharenko ◽  
Vladimir Kazei ◽  
Mahesh Kalita ◽  
Daniel Peter ◽  
Tariq Alkhalifah

Low-frequency seismic data are crucial for convergence of full-waveform inversion (FWI) to reliable subsurface properties. However, it is challenging to acquire field data with an appropriate signal-to-noise ratio in the low-frequency part of the spectrum. We have extrapolated low-frequency data from the respective higher frequency components of the seismic wavefield by using deep learning. Through wavenumber analysis, we find that extrapolation per shot gather has broader applicability than per-trace extrapolation. We numerically simulate marine seismic surveys for random subsurface models and train a deep convolutional neural network to derive a mapping between high and low frequencies. The trained network is then tested on sections from the BP and SEAM Phase I benchmark models. Our results indicate that we are able to recover 0.25 Hz data from the 2 to 4.5 Hz frequencies. We also determine that the extrapolated data are accurate enough for FWI application.
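The band split that defines the network's inputs and targets can be sketched as follows. The sampling rate, record length, and hard spectral masks are assumptions, the trace is random noise standing in for a shot gather, and the CNN itself is omitted; the input band matches the 2 to 4.5 Hz range from the abstract and the target band brackets the 0.25 Hz goal.

```python
import numpy as np

# Sketch of building (input, target) pairs for frequency extrapolation.
fs = 50.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 8, 1 / fs)                # 8 s record (assumed)
trace = np.random.default_rng(2).normal(size=t.size)  # stand-in wavefield

def bandpass(x, lo, hi, fs):
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec = np.fft.rfft(x)
    spec[(freqs < lo) | (freqs > hi)] = 0  # hard spectral mask
    return np.fft.irfft(spec, n=x.size)

net_input = bandpass(trace, 2.0, 4.5, fs)  # high-band network input
target = bandpass(trace, 0.0, 0.5, fs)     # low-frequency target band
```

A real pipeline would generate such pairs from simulated shot gathers over random subsurface models, as the abstract describes.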


Methodology ◽  
2013 ◽  
Vol 9 (2) ◽  
pp. 41-53 ◽  
Author(s):  
Michael P. McAssey ◽  
Jonathan Helm ◽  
Fushing Hsieh ◽  
David A. Sbarra ◽  
Emilio Ferrer

A defining feature of many physiological systems is their synchrony and reciprocal influence. An important challenge, however, is how to measure such features. This paper presents two new approaches for identifying synchrony between the physiological signals of individuals in dyads. The approaches are adaptations of two recently-developed techniques, depending on the nature of the physiological time series. For respiration and thoracic impedance, signals that are measured continuously, we use Empirical Mode Decomposition to extract the low-frequency components of a nonstationary signal, which carry the signal’s trend. We then compute the maximum cross-correlation between the trends of two signals within consecutive overlapping time windows of fixed width throughout each of a number of experimental tasks, and identify the proportion of large values of this measure occurring during each task. For heart rate, which is output discretely, we use a structural linear model that takes into account heteroscedastic measurement error on both series. The results of this study indicate that these methods are effective in detecting synchrony between physiological measures and can be used to examine emotional coherence in dyadic interactions.
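The windowed maximum cross-correlation step can be sketched as below. The EMD trend extraction is omitted, so the inputs are assumed to already be the low-frequency trends of the two partners' signals, and the window width, lag range, and "large value" threshold are arbitrary placeholders.

```python
import numpy as np

def max_crosscorr(x, y, max_lag):
    # maximum normalized cross-correlation over lags -max_lag..max_lag
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = x.size
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        xs = x[max(0, -lag): n - max(0, lag)]   # overlapping part of x
        ys = y[max(0, lag): n - max(0, -lag)]   # y shifted by `lag`
        best = max(best, float(np.mean(xs * ys)))
    return best

def sync_proportion(x, y, width, step, max_lag=10, thresh=0.8):
    # proportion of fixed-width windows with a "large" max cross-correlation
    vals = [max_crosscorr(x[s:s + width], y[s:s + width], max_lag)
            for s in range(0, x.size - width + 1, step)]
    return float(np.mean(np.array(vals) > thresh))
```

In this toy setting, a signal paired with a slightly lagged copy of itself yields a proportion of 1, while independent noise yields a value near 0.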


Author(s):  
Himani A. Shah ◽  
Mr. Dipak Agrawal ◽  
Mr. Nimit Modi ◽  
Dr. Sheshang Degadwala

This work improves compressive sensing based image reconstruction by combining two transforms, the DWT and the DCT. First, the wavelet transform decomposes the image into low-frequency and high-frequency sub-band coefficients. Second, the DCT is applied to the low-frequency coefficients to construct the transformation matrix. A measurement matrix then measures the high-frequency coefficients, which are combined with the DCT-transformed low-frequency components, and the resulting sparse signal is passed to the compressive sensing stage. In compressive sensing, random measurement matrices are generally used, and ℓ1-minimisation algorithms often rely on linear programming to recover sparse signal vectors; however, explicitly constructible measurement matrices with performance guarantees are scarce, and ℓ1-minimisation is computationally demanding for applications involving very large problem dimensions. To improve the peak signal-to-noise ratio (PSNR) of the reconstructed image, different entropy codings such as Huffman and arithmetic coding are used.
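The ℓ1 recovery step can be illustrated on a small random instance. ISTA (iterative shrinkage-thresholding) is used here as a simple stand-in for the ℓ1-minimisation solver, and the problem sizes, sparsity level, and regularisation weight are arbitrary; the DWT/DCT split and the entropy-coding stage from the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 128, 64, 5                             # signal length, measurements, sparsity
support = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))    # random measurement matrix
y = A @ x_true                                   # compressed measurements

def ista(A, y, lam=0.01, iters=1000):
    # iterative shrinkage-thresholding for l1-regularised least squares
    L = np.linalg.norm(A, 2) ** 2                # step-size bound (spectral norm^2)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L            # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
```

ISTA trades the linear-programming formulation for cheap matrix-vector iterations, one common way around the computational burden noted above.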

