Edge-Aware Color Image Manipulation by Combination of Low-Pass Linear Filter and Morphological Processing of Its Residuals

Author(s):  
Marcin Iwanowski


Author(s):  
Peilin Li ◽  
Sang-Heon Lee ◽  
Hung-Yao Hsu

In this paper, an image fusion method is presented to improve citrus identification by filtering the incoming data from two cameras. The citrus images were acquired with a portable bi-camera cold-mirror acquisition system: a prototype of a customized fixture was manufactured to position and align a classical cold mirror with two CCD cameras in a fixed relative kinematic position. Algorithmic registration of the image pairs was bypassed by spatially aligning the two cameras with the aid of software calibration and by synchronizing their triggers during acquisition. The paired frames were fused using Daubechies wavelet decomposition filters. A pixel-level fusion index rule is proposed that combines the low-pass coefficients of the visible image with the low-pass coefficients of the near-infrared image weighted by the complement of an entropy filter computed on the visible low-pass coefficients. In the study, the fused color image and the non-fused color image were processed and compared with several classification methods, including low-dimensional projection, the self-organizing map, and the support vector machine.
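A minimal sketch of this kind of pixel-level wavelet fusion is given below, assuming PyWavelets and scikit-image; the entropy-complement weighting and the max-absolute detail rule are simplified stand-ins for the fusion index rule of the paper, not a reproduction of it.

# Sketch: fuse a visible and a near-infrared frame at the wavelet level.
# Assumes both frames are registered, single-channel float arrays in [0, 1].
import numpy as np
import pywt
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def fuse_vis_nir(vis, nir, wavelet="db2"):
    # One-level 2-D Daubechies decomposition of each frame.
    LL_v, (LH_v, HL_v, HH_v) = pywt.dwt2(vis, wavelet)
    LL_n, (LH_n, HL_n, HH_n) = pywt.dwt2(nir, wavelet)

    # Local entropy of the visible low-pass band, normalised to [0, 1];
    # its complement weights the NIR low-pass contribution (assumed rule).
    ent = entropy(img_as_ubyte(np.clip(LL_v / LL_v.max(), 0, 1)), disk(5))
    w = 1.0 - ent / ent.max()

    LL_f = LL_v + w * LL_n                                # fused approximation band
    detail = [np.where(np.abs(a) > np.abs(b), a, b)       # max-abs detail rule
              for a, b in zip((LH_v, HL_v, HH_v), (LH_n, HL_n, HH_n))]
    return pywt.idwt2((LL_f, tuple(detail)), wavelet)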


2016 ◽  
Vol 8 (2) ◽  
Author(s):  
Marc Wildi ◽  
Tucker McElroy

The classic model-based paradigm in time series analysis is rooted in the Wold decomposition of the data-generating process in terms of an uncorrelated white noise process. By design, this universal decomposition is indifferent to particular features of a specific prediction problem (e.g., forecasting or signal extraction), or to features driven by the priorities of the data users. A single optimization principle (one-step-ahead forecast error minimization) is proposed by this classical paradigm to address a plethora of prediction problems. In contrast, this paper proposes to reconcile prediction problem structures, user priorities, and optimization principles into a general framework whose scope encompasses the classic approach. We introduce the linear prediction problem (LPP), which in turn yields an LPP objective function. One can then fit models via LPP minimization, or one can directly optimize the linear filter corresponding to the LPP, yielding the Direct Filter Approach. We provide theoretical results and practical algorithms for both applications of the LPP and discuss the merits and limitations of each. Our empirical illustrations focus on trend estimation (low-pass filtering) and seasonal adjustment in real time, i.e., constructing filters that depend only on present and past data.
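As a rough illustration of the Direct Filter Approach side of the LPP, the sketch below fits the coefficients of a causal (real-time) filter to an ideal low-pass target by weighted least squares over a frequency grid, using the periodogram as the spectral weight; the target, filter length, and weighting scheme are assumptions, not the authors' algorithm.

# Sketch: direct frequency-domain fit of a causal filter to a low-pass target,
# weighted by the periodogram of the observed series (illustrative only).
import numpy as np

def direct_filter(x, cutoff=np.pi / 6, length=24):
    x = np.asarray(x, dtype=float)
    n = len(x)
    omega = 2 * np.pi * np.arange(n // 2 + 1) / n        # frequency grid
    target = (omega <= cutoff).astype(float)             # ideal low-pass response
    perio = np.abs(np.fft.rfft(x - x.mean()))**2 / n     # spectral weights

    # Transfer function of a causal filter b_0..b_{L-1} on the grid.
    E = np.exp(-1j * np.outer(omega, np.arange(length)))

    # Weighted least squares, stacking real and imaginary parts.
    w = np.sqrt(perio)[:, None]
    A = np.vstack([E.real * w, E.imag * w])
    y = np.concatenate([target * w[:, 0], np.zeros_like(target)])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b                                             # concurrent filter weights

# Applying np.convolve(x, b)[:len(x)] gives a one-sided, real-time trend estimate.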


Author(s):  
S. Borguet ◽  
O. Léonard ◽  
P. Dewallef

Gas-path measurements used to assess the health condition of an engine are corrupted by noise. Generally, a data-cleaning step occurs before proceeding with fault detection and isolation. Classical linear filters such as the exponentially weighted moving average (EWMA) filter are traditionally used for noise removal. Unfortunately, these low-pass filters distort the trend shifts indicative of faults, which increases the detection delay. The present paper investigates two new approaches to non-linear filtering of time series. On the one hand, the synthesis approach reconstructs the signal as a combination of elementary signals chosen from a pre-defined library. On the other hand, the analysis approach imposes a constraint on the shape of the signal (e.g., piecewise constant). The two approaches incorporate prior information about the signal in different ways, but both lead to trend filters that are very effective at noise removal while preserving sharp edges in the signal. This is highlighted through a comparison with a classical linear filter on a batch of synthetic data representative of typical engine fault profiles.
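As an illustration of the analysis approach with a piecewise-constant shape constraint, a minimal total-variation trend filter can be written with cvxpy; the solver, the quadratic data term, and the penalty weight are assumptions for illustration rather than the filters studied in the paper.

# Sketch: "analysis" trend filter enforcing a piecewise-constant shape
# by penalising the l1 norm of first differences (total variation).
import numpy as np
import cvxpy as cp

def tv_trend(y, lam=5.0):
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # (n-1) x n first-difference matrix
    x = cp.Variable(n)
    objective = cp.Minimize(cp.sum_squares(x - y) + lam * cp.norm1(D @ x))
    cp.Problem(objective).solve()
    return x.value                                 # denoised, edge-preserving trend

# Example: a noisy step profile, qualitatively similar to an abrupt fault.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(100), np.ones(100)]) + 0.2 * rng.standard_normal(200)
trend = tv_trend(y)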


1996 ◽  
Vol 76 (5) ◽  
pp. 3425-3441 ◽  
Author(s):  
M. Carandini ◽  
F. Mechler ◽  
C. S. Leonard ◽  
J. A. Movshon

1. To study the encoding of input currents into output spike trains by regular-spiking cells, we recorded intracellularly from slices of the guinea pig visual cortex while injecting step, sinusoidal, and broadband noise currents.
2. When measured with sinusoidal currents, the frequency tuning of the spike responses was markedly band-pass. The preferred frequency was between 8 and 30 Hz, and grew with stimulus amplitude and mean intensity.
3. Stimulation with broadband noise currents dramatically enhanced the gain of the spike responses at low and high frequencies, yielding an essentially flat frequency tuning between 0.1 and 130 Hz.
4. The averaged spike responses to sinusoidal currents exhibited two nonlinearities: rectification and spike synchronization. By contrast, no nonlinearity was evident in the averaged responses to broadband noise stimuli.
5. These properties of the spike responses were not present in the membrane potential responses. The latter were roughly linear, and their frequency tuning was low-pass and well fit by a single-compartment passive model of the cell membrane composed of a resistance and a capacitance in parallel (RC circuit).
6. To account for the spike responses, we used a “sandwich model” consisting of a low-pass linear filter (the RC circuit), a rectification nonlinearity, and a high-pass linear filter. The model is described by six parameters and predicts analog firing rates rather than discrete spikes. It provided satisfactory fits to the firing rate responses to steps, sinusoids, and broadband noise currents.
7. The properties of spike encoding are consistent with temporal nonlinearities of the visual responses in V1, such as the dependence of response frequency tuning and latency on stimulus contrast and bandwidth. We speculate that one of the roles of the high-frequency membrane potential fluctuations observed in vivo could be to amplify and linearize the responses to lower, stimulus-related frequencies.
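A minimal sketch of such a sandwich (low-pass, rectification, high-pass) cascade is given below, assuming first-order discrete filters and half-wave rectification; the parameter values are placeholders, not the fitted six-parameter model of the paper.

# Sketch: low-pass (RC) -> rectification -> high-pass cascade mapping an
# injected current to an analog firing rate (structure only, not fitted values).
import numpy as np
from scipy.signal import lfilter

def sandwich_rate(current, dt=1e-3, tau_rc=20e-3, tau_hp=5e-3,
                  threshold=0.0, gain=50.0):
    # First-order low-pass: discrete equivalent of the passive RC circuit.
    a = dt / (tau_rc + dt)
    v = lfilter([a], [1, -(1 - a)], current)       # membrane-potential-like signal

    # Static nonlinearity: half-wave rectification above a threshold.
    r = gain * np.maximum(v - threshold, 0.0)

    # First-order high-pass applied to the rectified signal.
    b = tau_hp / (tau_hp + dt)
    rate = lfilter([b, -b], [1, -b], r)            # emphasises fast rate changes
    return np.maximum(rate, 0.0)                   # firing rates are non-negative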


2013 ◽  
Vol 2013 ◽  
pp. 1-18 ◽  
Author(s):  
Lih-Jen Kau ◽  
Tien-Lin Lee

An efficient approach to the sharpening of color images is proposed in this paper. The image to be sharpened is first transformed into the HSV color model, and only the Value channel is used in the sharpening process while the other channels are left unchanged. A proposed edge detector and a low-pass filter are then applied to the Value channel to pick out pixels around boundaries. After that, the pixels detected as lying on or near edges are adjusted so that the boundaries are sharpened, while non-edge pixels are kept unaltered. The increment or decrement to be added to the edge pixels is determined adaptively from global statistics of the image and local statistics of the pixel being sharpened. With the proposed approach, discontinuities can be highlighted while most of the original information contained in the image is retained. Finally, the adjusted Value channel and the original Hue and Saturation channels are recombined to obtain the sharpened color image. Extensive experiments on natural images are presented to highlight the effectiveness and efficiency of the proposed approach.
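A minimal sketch of this style of edge-selective sharpening on the Value channel is given below, assuming scikit-image; the Sobel detector, the gradient-quantile edge test, and the adaptive gain are illustrative substitutes for the detector and the statistics-based rule proposed in the paper.

# Sketch: sharpen only the V channel of HSV, and only near detected edges.
import numpy as np
from skimage import color, filters

def sharpen_hsv(rgb, strength=1.5, edge_quantile=0.9):
    hsv = color.rgb2hsv(rgb)
    v = hsv[..., 2]

    blurred = filters.gaussian(v, sigma=1.0)       # low-pass reference
    grad = filters.sobel(v)                        # edge detector on V only
    edges = grad > np.quantile(grad, edge_quantile)

    # Adaptive increment: scale the unsharp residual by local gradient strength
    # relative to global statistics (assumed rule, not the paper's formula).
    local_gain = strength * grad / (grad.mean() + v.std() + 1e-8)
    v_sharp = np.where(edges, v + local_gain * (v - blurred), v)

    hsv[..., 2] = np.clip(v_sharp, 0.0, 1.0)       # H and S are left unchanged
    return color.hsv2rgb(hsv)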


2020 ◽  
Vol 6 (4) ◽  
pp. 16 ◽  
Author(s):  
Mihai Ivanovici ◽  
Radu-Mihai Coliban ◽  
Cosmin Hatfaludi ◽  
Irina Emilia Nicolae

It is said that image segmentation is a very difficult or complex task. First of all, we emphasize the subtle difference between the notions of difficulty and complexity. In this article, we then focus on how two widely used color image complexity measures correlate with the number of segments produced by over-segmentation. We study the evolution of both the image complexity measures and the number of segments as the image complexity is gradually decreased by means of low-pass filtering. In this way, we address the possibility of predicting the difficulty of color image segmentation from image complexity measures. We analyze the complexity of images in terms of color entropy and color fractal dimension, and, for color fractal images and the Berkeley data set, we correlate these two metrics with the segmentation results, specifically the number of quasi-flat zones and the number of JSEG regions in the resulting segmentation map. We report our experimental results and draw conclusions.
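One of the two complexity measures, color entropy, can be sketched as the Shannon entropy of a quantised joint RGB histogram, as below; the quantisation level and the use of a Pearson correlation against segment counts are assumptions made for illustration, not the exact protocol of the article.

# Sketch: color entropy of an 8-bit RGB image, to be correlated with segment counts.
import numpy as np
from scipy.stats import pearsonr

def color_entropy(rgb, bins_per_channel=8):
    # Quantise each channel, build a joint 3-D histogram, take Shannon entropy.
    q = (rgb.reshape(-1, 3) // (256 // bins_per_channel)).astype(int)
    idx = q[:, 0] * bins_per_channel**2 + q[:, 1] * bins_per_channel + q[:, 2]
    p = np.bincount(idx, minlength=bins_per_channel**3).astype(float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))                 # bits per pixel

# entropies, n_segments: one value per test image, where the segment counts come
# from whichever segmentation is being studied (e.g. quasi-flat zones or JSEG).
# r, _ = pearsonr(entropies, n_segments)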


Geophysics ◽  
1979 ◽  
Vol 44 (9) ◽  
pp. 1531-1540 ◽  
Author(s):  
Oliver G. Jensen ◽  
Alex Becker

Low‐pass filters based on reactive integrators necessarily introduce temporal phase delays into data. The use of such filters in airborne geophysical surveys, therefore, produces equivalent spatial phase delays which displace the high‐wavenumber Fourier components of anomalies downstream along the flight line, causing distortion of the anomaly shapes. A linear filter is described which eliminates this distortion by introducing a compensating, frequency‐dependent phase advance. The restored data provide anomalies as would be seen from an aircraft moving with zero speed. Exact phase compensation for each stage of reactive low‐pass filtering requires an acausal filter whose coefficients are a weighted sum of zeroth‐ and first‐order modified Hankel functions, the weighting being determined by the integrator’s time constant. A very short and useful approximation to the ideal phase‐compensating filter is also described and subsequently applied in the restoration of data obtained from an airborne electromagnetic (EM) survey flown near Hawksbury, Ontario. The example clearly demonstrates the usefulness of applying phase compensation to geophysical data obtained in high‐resolution airborne surveys.
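A minimal frequency-domain sketch of the underlying idea, compensating the phase delay of a single first-order (RC) low-pass stage with an acausal phase advance, is given below; it assumes the integrator time constant is known and is not the Hankel-function filter derived in the paper.

# Sketch: undo the phase delay of a first-order RC low-pass along a flight line
# by applying the opposite phase in the frequency domain (acausal, zero-delay).
import numpy as np

def phase_compensate(signal, dt, tau):
    n = len(signal)
    f = np.fft.rfftfreq(n, d=dt)                   # temporal frequencies (Hz)
    H = 1.0 / (1.0 + 2j * np.pi * f * tau)         # RC low-pass transfer function

    spectrum = np.fft.rfft(signal)
    # Multiply by the conjugate phase of H: amplitudes untouched, delay removed.
    compensated = spectrum * np.exp(-1j * np.angle(H))
    return np.fft.irfft(compensated, n)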

