Auto-Layering Wavelet Transform to Remove the Ambient Light Effect and Noise

2013 ◽  
Vol 798-799 ◽  
pp. 624-629
Author(s):  
Pang Da Dai ◽  
Yu Jun Zhang ◽  
Chang Hua Lu ◽  
Yi Zhou ◽  
Wei Zhang ◽  
...  

The accuracy of visibility measurements derived from images of light sources at night is usually degraded by ambient light and noise. This paper presents an auto-layering wavelet-transform method that removes the ambient light effect and the noise simultaneously. First, light propagation through fog at night is formulated, giving a model of the night image together with the features of the ambient light effect and the noise. Second, we use the multi-scale structure of the wavelet transform to decompose the image and remove the ambient light effect and noise, where an auto-layering rule based on the energy ratio of the wavelet coefficients selects the number of decomposition levels. Experiments show that our method removes the ambient light effect and noise simultaneously and adjusts the number of decomposition layers automatically. The method works with many wavelet functions and preserves the light sources, as well as their glows, in the digital images. The relative error is 3.16% with db4 and 2.02% with sym2.
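A minimal sketch of the layer-selection idea in Python, assuming the PyWavelets library: detail layers whose energy ratio falls below a threshold are treated as noise or ambient light and zeroed before reconstruction. The threshold `ratio_min`, the zeroing step, and the function name are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pywt  # PyWavelets

def auto_layer_denoise(img, wavelet="db4", ratio_min=0.01, max_levels=6):
    """Decompose the image, zero out detail layers whose energy ratio
    is below ratio_min (treated as noise / ambient light), and rebuild."""
    total_energy = np.sum(img.astype(float) ** 2)
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=max_levels)
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    kept = [coeffs[0]]
    for details in coeffs[1:]:
        energy = sum(np.sum(d ** 2) for d in details)
        if energy / total_energy < ratio_min:
            # Low-energy layer: suppress it entirely.
            kept.append(tuple(np.zeros_like(d) for d in details))
        else:
            kept.append(details)
    return pywt.waverec2(kept, wavelet)
```

The energy-ratio test plays the role of the paper's auto-layering rule: the effective number of retained layers adapts to the image rather than being fixed in advance.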

Author(s):  
Stefan Von Weber ◽  
Alexander Von Eye

The Cosmic Membrane theory states that the space in which the cosmic microwave background radiation has no dipole is identical with Newton's absolute space, and that light propagates in this space only. In a moving inertial frame of reference, by contrast, light propagation is inhomogeneous, i.e. it depends on direction. The derivation of time dilation in the sense of Einstein's special relativity, together with the derivation of length contraction under the constraint of constant cross dimensions, therefore loses its plausibility, and one has to search for new physical foundations for the relativistic contraction and the dilation of time. The Cosmic Membrane theory also states that light paths always remain constant, independent of the orientation and speed of the moving inertial frame of reference; effects arise through the dilation of time. We predict a long-term effect in the Kennedy-Thorndike experiment, but we also show that this effect is undetectable with today's means, because the line width of the light sources hides it; the use of lasers, cavities, and Fabry-Pérot etalons does not change this. We propose a light clock of special construction that could indicate Newton's absolute time t0 almost exactly.
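For orientation, the standard transverse light-clock relation that any discussion of time dilation builds on (this is textbook special relativity, not the Cosmic Membrane derivation itself): a clock whose light path has rest length L and which moves at speed v ticks slower by the Lorentz factor.

```latex
t \;=\; \frac{2L}{c\sqrt{1 - v^{2}/c^{2}}} \;=\; \gamma\, t_{0},
\qquad t_{0} = \frac{2L}{c}, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```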


2011 ◽  
Vol 19 (2) ◽  
Author(s):  
A. Roy ◽  
S. Mitra ◽  
R. Agrawal

Manipulation of images has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity, or chaos. Image forensic science detects these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its own limitations, and very few methods exploit the way the image was captured by the camera. We propose a new method based on light and its shade, since light and shade are the fundamental inputs that carry all the information in the image. The proposed method measures the direction of the light source and uses this light-based technique to identify any intentional partial manipulation of the digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of an angle. The experimental results show the robustness of the methodology.
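A minimal sketch of this kind of estimator, assuming the classic Lambertian least-squares formulation (intensity ≈ N·L + ambient) sampled along an occluding contour; the function name, the 2D simplification, and the NumPy solver are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares fit of I ~ N.L + A for a Lambertian surface.
    normals: (k, 2) unit surface normals along an occluding contour.
    intensities: (k,) observed brightness at those samples.
    Returns the light angle in degrees and the ambient term A."""
    k = normals.shape[0]
    M = np.hstack([normals, np.ones((k, 1))])   # unknowns: (Lx, Ly, A)
    sol, *_ = np.linalg.lstsq(M, intensities, rcond=None)
    lx, ly, ambient = sol
    angle = np.degrees(np.arctan2(ly, lx)) % 360.0
    return angle, ambient
```

Two regions of the same photo whose fitted angles disagree strongly are a cue that one of them was spliced in, which is the forensic use the abstract describes.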


Placenta ◽  
2014 ◽  
Vol 35 (9) ◽  
pp. A56
Author(s):  
Nen Huynh ◽  
Jen-Mei Chang ◽  
Philip Katzmann ◽  
Richard Miller ◽  
John Moye ◽  
...  

2014 ◽  
Vol 511-512 ◽  
pp. 490-494 ◽  
Author(s):  
Yi Min Qiu ◽  
Shi Hong Chen ◽  
Yi Zhou ◽  
Xin Hai Liu

This paper proposes a new image-enhancement algorithm for stereoscopic images based on edge sharpening of wavelet coefficients. Our scheme uses the multi-scale property of the wavelet transform to decompose the original image into a low-frequency approximation sub-image and several high-frequency directional sub-images. At each scale, the low-frequency approximation is processed by an edge-sharpening method and then decomposed again. Finally, after four levels of decomposition and sharpening, the low-frequency approximation and the high-frequency sub-images are reconstructed to obtain the new image. Experimental results show that, whether judged by PSNR, visual quality, or subjective DMOS scores, the proposed method enhances images better than conventional edge sharpening and the plain wavelet transform, providing good edge enhancement and detail preservation. At the same time, the proposed algorithm has the same computational complexity as the wavelet transform.
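A rough sketch of the pipeline, assuming PyWavelets and unsharp masking as the sharpening step; the four-level depth follows the abstract, while `unsharp` and its parameters are an illustrative stand-in for the paper's edge-sharpening operator.

```python
import numpy as np
import pywt
from scipy import ndimage

def unsharp(a, amount=0.6, sigma=1.0):
    """Illustrative sharpening step: add back the high-pass residual."""
    return a + amount * (a - ndimage.gaussian_filter(a, sigma))

def sharpen_by_wavelets(img, wavelet="db4", levels=4):
    """Sharpen the low-frequency approximation at each of `levels` scales,
    then reconstruct with the untouched high-frequency sub-bands."""
    approx = img.astype(float)
    details = []
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(approx, wavelet)
        details.append((cH, cV, cD))
        approx = unsharp(cA)                 # sharpen, then decompose again
    for cH, cV, cD in reversed(details):
        # Reconstruction can overshoot by a pixel; crop to the band shape.
        h, w = cH.shape
        approx = pywt.idwt2((approx[:h, :w], (cH, cV, cD)), wavelet)
    return approx
```

Because only the approximation band is modified, the high-frequency sub-bands pass through unchanged, which is why the cost stays that of a plain wavelet transform.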


Author(s):  
Victor Olexandrovych Makarichev ◽  
Vladimir Vasilyevich Lukin ◽  
Iryna Victorivna Brysina

Discrete atomic compression (DAC), a lossy algorithm for compressing digital images, is considered. The aim of this paper is to obtain a mechanism for controlling quality loss. Among the many metrics used to assess loss of quality, the maximum absolute deviation (MAD) is chosen, since it is the most sensitive to even the smallest changes in the processed data. In DAC, the main loss of quality occurs when quantizing the atomic wavelet coefficients, which is the subject of this paper. The goal is to investigate the effect of the quantization procedure on the atomic wavelet coefficients by deriving estimates of these coefficients. We use methods from atomic function theory and digital image processing. Using the properties of generalized atomic wavelets, we obtain estimates of the generalized atomic wavelet expansion coefficients. These inequalities give the dependence of quality loss, measured by the MAD metric, on the quantization parameters in the form of upper bounds, and they are confirmed by DAC processing of the test images. Loss of quality measured by root mean square error (RMS) and peak signal-to-noise ratio (PSNR) is also computed. Analyzing experiments carried out with the computer program "Discrete Atomic Compression: Research Kit", we obtain the following results: 1) the deviation of the expected MAD from its actual value is large in some cases; 2) the accuracy of the estimates depends on the quantization parameters, the depth of the atomic wavelet expansion, and the type of digital image (full color or grayscale); 3) the discrepancies can be reduced by applying a correction coefficient; and 4) the ratio of the expected MAD to its actual value stays roughly constant, whereas its ratios to RMS and PSNR do not. Conclusions: discrete atomic compression of digital images combined with the proposed method of quality-loss control yields results of the desired quality, and its further development, research, and application are promising.
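The three quality measures the paper compares are standard; a small sketch of their usual definitions for 8-bit images (DAC itself is not reproduced here):

```python
import numpy as np

def quality_metrics(original, compressed, peak=255.0):
    """MAD, RMS error, and PSNR between two images of equal shape."""
    diff = original.astype(float) - compressed.astype(float)
    mad = np.max(np.abs(diff))            # maximum absolute deviation
    rms = np.sqrt(np.mean(diff ** 2))     # root mean square error
    psnr = 20.0 * np.log10(peak / rms) if rms > 0 else float("inf")
    return mad, rms, psnr
```

MAD bounds the single worst pixel, which is why a per-coefficient quantization bound translates directly into an upper bound on it, while RMS and PSNR average over the whole image.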


2011 ◽  
Vol 1 (3) ◽  
pp. 240-250 ◽  
Author(s):  
K. Koch

Digital Images with 3D Geometry from Data Compression by Multi-scale Representations of B-Spline Surfaces

To build a 3D (three-dimensional) model of the surface of an object, the heights of points on the surface are measured, for instance, by a laser scanner. The intensities of the reflected laser beam at these points can be used to visualize the 3D model as a range image. It is proposed here to fit a two-dimensional B-spline surface to the measured heights and intensities by the lofting method. To fully use the geometric information of the laser scan, points on the fitted surface, with their intensities, are computed at a density higher than that of the measurements. This gives a high-resolution 3D model that is visualized by the intensities of the points on the B-spline surface. For a realistic view of the 3D model, the coordinates of a digital photo of the object are transformed to the coordinate system of the 3D model so that the points take on the colors of the digital image. To compute and store the 3D model efficiently, data compression derived from the multi-scale representation of the dense grid of points on the B-spline surface is applied. The proposed method is demonstrated on an example.
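A compact sketch of the densification step, using SciPy's RectBivariateSpline as a stand-in for the paper's lofting procedure; the grid names and the upsampling factor are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def densify_surface(x, y, heights, factor=4):
    """Fit a bicubic B-spline surface to gridded heights and resample it
    on a grid `factor` times denser than the measurements."""
    spline = RectBivariateSpline(x, y, heights, kx=3, ky=3)
    xd = np.linspace(x[0], x[-1], factor * len(x))
    yd = np.linspace(y[0], y[-1], factor * len(y))
    return xd, yd, spline(xd, yd)   # dense height grid for the 3D model
```

The measured intensities can be fitted and resampled the same way, so each dense surface point carries both a height and an intensity for visualization.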


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3610
Author(s):  
Haonan Su ◽  
Cheolkon Jung ◽  
Long Yu

We formulate multi-spectral fusion and denoising for the luminance channel as a maximum a posteriori estimation problem in the wavelet domain. To deal with the discrepancy between RGB and near infrared (NIR) data in fusion, we build a discrepancy model and introduce the wavelet scale map. The scale map adjusts the wavelet coefficients of NIR data to have the same distribution as the RGB data. We use the priors of the wavelet scale map and its gradient as the contrast preservation term and gradient denoising term, respectively. Specifically, we utilize the local contrast and visibility measurements in the contrast preservation term to transfer the selected NIR data to the fusion result. We also use the gradient of NIR wavelet coefficients as the weight for the gradient denoising term in the wavelet scale map. Based on the wavelet scale map, we perform fusion of the RGB and NIR wavelet coefficients in the base and detail layers. To remove noise, we model the prior of the fused wavelet coefficients using NIR-guided Laplacian distributions. In the chrominance channels, we remove noise guided by the fused luminance channel. Based on the luminance variation after fusion, we further enhance the color of the fused image. Our experimental results demonstrated that the proposed method successfully performed the fusion of RGB and NIR images with noise reduction, detail preservation, and color enhancement.
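A simplified sketch of the scale-map idea, assuming PyWavelets; matching only the per-band standard deviation and fusing by coefficient magnitude is an illustrative reduction of the full MAP formulation with its contrast-preservation and gradient-denoising priors.

```python
import numpy as np
import pywt

def fuse_luminance(rgb_y, nir, wavelet="db2", levels=3, eps=1e-8):
    """Scale NIR detail bands toward the RGB luminance statistics, then
    fuse by keeping the larger-magnitude coefficient in each band."""
    cy = pywt.wavedec2(rgb_y.astype(float), wavelet, level=levels)
    cn = pywt.wavedec2(nir.astype(float), wavelet, level=levels)
    fused = [cy[0]]                              # keep the RGB base layer
    for dy, dn in zip(cy[1:], cn[1:]):
        bands = []
        for by, bn in zip(dy, dn):
            scale = by.std() / (bn.std() + eps)  # crude "wavelet scale map"
            bn = bn * scale                      # match NIR to RGB stats
            bands.append(np.where(np.abs(bn) > np.abs(by), bn, by))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```

Rescaling before fusion is the key step: without it, the raw NIR coefficients follow a different distribution than the RGB ones and would dominate or vanish in the magnitude comparison.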

