Estimating Digital Watermark Synchronization Signal Using Partial Pixel Least Squares

2020, Vol 2020 (4), pp. 24-1-24-7
Author(s): Robert Lyons, Brett Bradley

Reading a digital watermark from printed images requires that the watermarking system decode correctly after affine distortion. One way to recover from affine distortions is to add a synchronization signal in the Fourier frequency domain and use this synchronization signal to estimate the applied affine distortion. If the synchronization signal contains a collection of frequency impulses, then a least squares match of frequency impulse locations yields a reasonably accurate linear transform estimate. Nearest neighbor frequency impulse peak location estimation provides a good rough estimate of the linear transform, and a more accurate refinement of the least squares estimate is accomplished with partial pixel peak location estimates. In this paper we show how to estimate peak locations to any desired accuracy using only the complex frequencies computed by the standard DFT. We show that these improved peak location estimates result in a more accurate linear transform estimate. We conclude with an assessment of the detector robustness that results from this improved linear transformation accuracy.
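
The abstract does not reproduce the estimator itself. As a rough sketch of the general idea, refining a nearest-neighbor peak bin to sub-bin accuracy using only the complex values of the standard DFT, the following uses parabolic interpolation on log-magnitudes (a common textbook refinement, not the authors' partial-pixel method; the function name and test signal are illustrative):

```python
import numpy as np

def subbin_peak_frequency(signal):
    """Locate a single spectral peak to sub-bin accuracy using only the
    complex values of the standard DFT.

    Coarse step: nearest-neighbor bin of maximum magnitude.
    Refinement: parabolic interpolation on the log-magnitudes of the
    three bins around that maximum (a textbook refinement; the paper
    derives its own, more accurate partial-pixel estimator)."""
    X = np.fft.rfft(signal)
    mag = np.abs(X)
    k = int(np.argmax(mag[1:-1])) + 1          # skip DC and Nyquist edges
    a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)  # sub-bin offset in (-0.5, 0.5)
    return (k + delta) / len(signal)           # frequency in cycles per sample

# A sinusoid whose true frequency falls between DFT bins: plain argmax
# is off by 0.3 bins; the refined estimate lands closer.
n = 256
f_true = 40.3 / n
x = np.cos(2 * np.pi * f_true * np.arange(n))
f_est = subbin_peak_frequency(x)
```

The same three-magnitude refinement extends per-impulse to the 2-D synchronization peaks before the least squares fit of the linear transform.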

Geophysics, 1989, Vol 54 (5), pp. 570-580
Author(s): Keith A. Meyerholtz, Gary L. Pavlis, Sally A. Szpakowski

This paper introduces convolutional quelling as a technique to improve imaging of seismic tomography data. We show that the result amounts to a special type of damped, weighted, least-squares solution. This insight allows us to implement the technique in a practical manner using a sparse matrix, conjugate gradient equation solver. We applied the algorithm to synthetic data using an eight-nearest-neighbor smoothing filter for the quelling. The results were found to be superior to a simple least-squares solution because convolutional quelling suppresses side bands in the resolving function that lead to imaging artifacts.
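
The quelling idea can be illustrated in miniature: fold a smoothing filter F into the forward operator, solve the modified system (G F) m' = d with a conjugate-gradient least-squares solver, and form the image m = F m'. This is a minimal 1-D sketch, assuming a three-point smoother in place of the paper's eight-nearest-neighbor 2-D filter and a hand-rolled CGLS loop in place of their sparse solver:

```python
import numpy as np

def cgls(A, b, iters=500):
    """Conjugate-gradient least squares: iteratively minimizes ||A x - b||
    (the conjugate-gradient solver role described in the abstract)."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < 1e-24:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Toy 1-D "tomography": 30 random rays through 50 model cells.
rng = np.random.default_rng(0)
G = (rng.random((30, 50)) < 0.2).astype(float)
cells = np.arange(50)
m_true = np.exp(-((cells - 25.0) / 6.0) ** 2)   # smooth slowness anomaly
d = G @ m_true + 0.02 * rng.normal(size=30)

# Quelling filter F: three-point smoother (1-D stand-in for the paper's
# eight-nearest-neighbor 2-D filter).
F = 0.5 * np.eye(50) + 0.25 * (np.eye(50, k=1) + np.eye(50, k=-1))

m_plain = cgls(G, d)          # simple least-squares image
m_prime = cgls(G @ F, d)      # solve the quelled system (G F) m' = d
m_quelled = F @ m_prime       # quelled image m = F m'
```

Because m_quelled is constrained to the range of F, high-wavenumber artifacts of the plain solution are suppressed, which is the side-band suppression the abstract describes.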


2021, Vol 17 (11), pp. 155014772110559
Author(s): Zelin Ren, Yongqiang Tang, Wensheng Zhang

Fault diagnosis approaches based on the k-nearest neighbor rule have been widely researched for industrial processes and achieve excellent performance. However, for quality-related fault diagnosis, approaches using the k-nearest neighbor rule remain insufficiently studied. To tackle this problem, in this article we propose a novel quality-related fault diagnosis framework made up of two stages: fault detection and fault isolation. In the fault detection stage, we propose a novel non-linear quality-related fault detection method called kernel partial least squares-k-nearest neighbor (KPLS-kNN), which organically incorporates the k-nearest neighbor rule with kernel partial least squares. Specifically, we first employ kernel partial least squares to establish a non-linear regression model between quality variables and process variables. After that, the statistics and thresholds corresponding to the process space and the predicted quality space are designed by adopting the k-nearest neighbor rule. In the fault isolation stage, to match the proposed detection method seamlessly, we propose a modified version of the variable contributions by k-nearest neighbor (VCkNN) fault isolation method, called modified variable contributions by k-nearest neighbor (MVCkNN). It introduces the idea of the accumulative relative contribution rate into VCkNN, so that the smearing effect caused by the normal distribution hypothesis of VCkNN can be mitigated effectively. Finally, a widely used numerical example and the Tennessee Eastman process are employed to verify the effectiveness of the proposed approach.
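
The detection-stage use of the k-nearest neighbor rule can be sketched as follows: the statistic for a sample is its summed squared distance to its k nearest training neighbors, and the control limit is a quantile of leave-one-out statistics on normal operating data. This is a simplified single-space illustration with assumed function names; it omits the kernel partial least squares regression the paper places in front of it:

```python
import numpy as np

def knn_statistic(train, x, k=3):
    """kNN-rule statistic: sum of squared Euclidean distances from x
    to its k nearest neighbors in the training set."""
    d2 = np.sum((train - x) ** 2, axis=1)
    return np.sum(np.sort(d2)[:k])

def knn_threshold(train, k=3, alpha=0.99):
    """Leave-one-out statistics on the training set; the control limit is
    their alpha-quantile. (The paper designs separate statistics and
    thresholds for the process space and the predicted quality space;
    this sketch uses a single space.)"""
    stats = [knn_statistic(np.delete(train, i, axis=0), train[i], k)
             for i in range(len(train))]
    return np.quantile(stats, alpha)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 4))   # in-control operating data
limit = knn_threshold(normal, k=3, alpha=0.99)

fault = np.full(4, 6.0)                        # sample far from the training cloud
flagged = knn_statistic(normal, fault, k=3) > limit
```

In the paper's framework the same rule is applied to KPLS scores and residuals, so both quality-related and quality-unrelated faults get their own statistic and threshold.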


Geophysics, 1984, Vol 49 (11), pp. 1869-1880
Author(s): William S. Harlan, Jon F. Claerbout, Fabio Rocca

A signal/noise separation must recognize the lateral coherence of geologic events and their statistical predictability before extracting those components most useful for a particular process, such as velocity analysis. Events with recognizable coherence we call signal; the rest we term noise. Let us define “focusing” as increasing the statistical independence of samples with some invertible, linear transform L. By the central limit theorem, focused signal must become more non‐Gaussian; the same L must defocus noise and make it more Gaussian. A measure F defined from cross entropy measures non‐Gaussianity from local histograms of an array, and thereby measures focusing. Local histograms of the transformed data and of transformed, artificially incoherent data provide enough information to estimate the amplitude distributions of transformed signal and noise; errors only increase the estimate of noise. These distributions allow the recognition and extraction of samples containing the highest percentage of signal. Estimating signal and noise iteratively improves the extractions of each. After the removal of bed reflections and noise, F will determine the best migration velocity for the remaining diffractions. Slant stacks map lines to points, greatly concentrating continuous reflections. We extract samples containing the highest concentration of this signal, invert, and subtract from the data, leaving diffractions and noise. Next, we migrate with many velocities, extract focused events, and invert. Then we find the least‐squares sum of these events best resembling the diffractions in the original data. Migration of these diffractions maximizes F at the best velocity. We successfully extract diffractions and estimate velocities for a window of data containing a growth fault. A spatially variable least‐squares superposition allows spatially variable velocity estimates. Local slant stacks allow a laterally adaptable extraction of locally linear events. 
For a stacked section we successfully extract weak signal with highly variable coherency from behind strong Gaussian noise. Unlike normal moveout (NMO), wave‐equation migration of a few common midpoint (CMP) gathers can image the skewed hyperbolas of dipping reflectors correctly. Short local slant stacks along midpoint will extract reflections with different dips. A simple Stolt (1978) (f-k) type algorithm migrates these dipping events with appropriate dispersion relations. This migration may then be used to extract events containing velocity information over offset. Offset truncations become another removable form of noise. One may remove non‐Gaussian noise from shot gathers by first removing the most identifiable signal, then estimating the samples containing the highest percentage of noise. Those samples containing a significant percentage of signal may be zeroed; what remains represents the most identifiable noise and may be subtracted from the original data. With this procedure we successfully remove ground roll and other noise from a shot (field) gather.
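
A minimal sketch of a histogram-based non-Gaussianity measure in the spirit of F, assuming a negentropy form (entropy of a same-variance Gaussian minus the histogram entropy of the data); the paper's exact cross-entropy definition and local-histogram windowing are not reproduced here:

```python
import numpy as np

def focusing_measure(x, bins=64):
    """Histogram-based negentropy: differential entropy of a Gaussian with
    the same variance, minus a histogram estimate of the data's entropy.
    Larger values mean more non-Gaussian, i.e. more "focused" in the
    sense of the abstract. (Illustrative stand-in for the paper's F.)"""
    x = np.asarray(x, dtype=float)
    hist, edges = np.histogram(x, bins=bins, density=True)
    widths = np.diff(edges)
    p = hist * widths                       # bin probabilities
    nz = p > 0
    h_data = -np.sum(p[nz] * np.log(p[nz] / widths[nz]))   # entropy estimate
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * np.var(x))   # max-entropy reference
    return h_gauss - h_data

rng = np.random.default_rng(1)
gaussian_noise = rng.normal(size=20000)     # defocused: measure near zero
focused = rng.normal(size=20000) ** 3       # sparse, heavy-tailed "signal"
```

Since the Gaussian maximizes entropy at fixed variance, the measure is near zero for noise and grows as a transform concentrates signal into few large samples, which is how the abstract's best migration velocity maximizes F.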

