Comparing samples from the G distribution using a geodesic distance

2020 ◽  
Author(s):  
Alejandro Frery ◽  
J Gambini

© 2019, Sociedad de Estadística e Investigación Operativa. The G distribution is widely used for monopolarized SAR image modeling because it can accurately characterize regions with different degrees of texture. It is indexed by three parameters: the number of looks (which can be estimated for the whole image), a scale parameter, and a texture parameter. This paper presents a new proposal for comparing samples from the G distribution using a geodesic distance (GD) as a measure of dissimilarity between models. The objective is to quantify the difference between pairs of samples of SAR data using both local parameters (scale and texture) of the G distribution. We propose three tests based on the GD that combine the tests presented in Naranjo-Torres et al. (IEEE J Sel Top Appl Earth Obs Remote Sens 10(3):987–997, 2017), and we estimate their probability distributions using permutation methods.
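
As a rough illustration of the permutation machinery described above, the sketch below organizes such a test in Python; `fit_params` and `geodesic_distance` are hypothetical placeholders standing in for the paper's G-distribution estimator and closed-form geodesic distance, not the authors' implementation.

```python
import numpy as np

def fit_params(sample):
    # Placeholder estimator: crude moment-style summaries standing in for the
    # G-distribution scale and texture estimates used in the paper.
    sample = np.asarray(sample, dtype=float)
    return sample.mean(), sample.var() / sample.mean() ** 2

def geodesic_distance(theta_a, theta_b):
    # Placeholder dissimilarity between parameter vectors; the paper derives
    # the actual geodesic distance from the Fisher information metric.
    return float(np.linalg.norm(np.asarray(theta_a) - np.asarray(theta_b)))

def permutation_test(x, y, n_perm=999, seed=0):
    """Approximate the null distribution of the GD statistic by permutation."""
    rng = np.random.default_rng(seed)
    observed = geodesic_distance(fit_params(x), fit_params(y))
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[: len(x)], pooled[len(x):]
        if geodesic_distance(fit_params(xs), fit_params(ys)) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)  # permutation p-value

# Example with synthetic amplitude-like samples:
rng = np.random.default_rng(1)
x, y = rng.gamma(4.0, 1.0, 500), rng.gamma(4.0, 1.3, 500)
print(permutation_test(x, y))
```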


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 878
Author(s):  
C. T. J. Dodson ◽  
John Soldera ◽  
Jacob Scharcanski

Secure user access to devices and datasets is widely enabled by fingerprint or face recognition. Organization of the necessarily large secure digital object datasets, with objects whose content may consist of images, text, video or audio, involves efficient classification and feature retrieval processing. This usually requires multidimensional methods applicable to data represented through a family of probability distributions. Information geometry then provides an appropriate context for such analytic work, whether with maximum-likelihood fitted distributions or empirical frequency distributions. The key provision is a natural geometric measure structure on families of probability distributions, obtained by representing them as Riemannian manifolds. The distributions are then points lying in this geometrical manifold; different features can be identified and dissimilarities computed, so that neighbourhoods of objects near a given example object can be constructed. This can reveal clustering and projections onto smaller eigen-subspaces, which can make comparisons easier to interpret. Geodesic distances can be used as a natural dissimilarity metric over data described by probability distributions. Exploiting this property, we propose a new face recognition method that scores dissimilarities between face images by multiplying geodesic-distance approximations between the 3-variate RGB Gaussians representing the colour face images, and also by obtaining joint probabilities. The experimental results show that this new method achieves higher recognition rates than published comparative state-of-the-art methods.
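
As a loose illustration of comparing colour images through fitted Gaussians, the sketch below fits a 3-variate Gaussian to the RGB pixels of each image and uses the affine-invariant geodesic distance between the covariance matrices (a standard geodesic distance on the manifold of symmetric positive-definite matrices). The paper's own approximation additionally involves the mean vectors, multiplication of distances, and joint probabilities, which are omitted here.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def fit_rgb_gaussian(image):
    """image: H x W x 3 array; returns mean (3,) and covariance (3, 3)."""
    pixels = image.reshape(-1, 3).astype(float)
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def spd_geodesic_distance(cov_a, cov_b):
    """Affine-invariant Riemannian distance between two SPD matrices."""
    a_inv_sqrt = np.linalg.inv(np.real(sqrtm(cov_a)))
    middle = a_inv_sqrt @ cov_b @ a_inv_sqrt
    return float(np.linalg.norm(logm(middle), "fro"))

# Usage with two random arrays standing in for colour face crops:
rng = np.random.default_rng(0)
img1, img2 = rng.random((64, 64, 3)), rng.random((64, 64, 3))
_, c1 = fit_rgb_gaussian(img1)
_, c2 = fit_rgb_gaussian(img2)
print(spd_geodesic_distance(c1, c2))
```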


1998 ◽  
Vol 5 (2) ◽  
pp. 93-104 ◽  
Author(s):  
D. Harris ◽  
M. Menabde ◽  
A. Seed ◽  
G. Austin

Abstract. The theory of scale similarity and breakdown coefficients is applied here to intermittent rainfall data consisting of time series and spatial rain fields. The probability distributions (pdfs) of the logarithm of the breakdown coefficients are the principal descriptor used. Rain fields are distinguished as being either multiscaling or multiaffine depending on whether the pdfs of breakdown coefficients are scale similar or scale dependent, respectively. Parameter estimation techniques are developed which are applicable to both multiscaling and multiaffine fields. The scale parameter (width), σ, of the pdfs of the log-breakdown coefficients is a measure of the intermittency of a field. For multiaffine fields, this scale parameter is found to increase with scale in a power-law fashion consistent with a bounded-cascade picture of rainfall modelling. The resulting power-law exponent, H, is indicative of the smoothness of the field. Some details of breakdown coefficient analysis are addressed, and a theoretical link between this analysis and moment scaling analysis is also presented. Breakdown coefficient properties of cascades are also investigated in the context of parameter estimation for modelling purposes.
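
A hedged sketch of this kind of breakdown-coefficient analysis for a 1-D rainfall series: breakdown coefficients are computed as ratios of half-interval to parent-interval rainfall accumulations, the width σ of the log-breakdown-coefficient distribution is tracked across scales, and a power law is fitted to recover an exponent playing the role of H. The handling of zero-rain intervals and the estimators below are simplifications, not the authors' procedure.

```python
import numpy as np

def log_breakdown_coefficients(series, scale):
    """Log breakdown coefficients for parent intervals of length `scale` (even)."""
    n = (len(series) // scale) * scale
    parents = series[:n].reshape(-1, scale).sum(axis=1)
    halves = series[:n].reshape(-1, 2, scale // 2).sum(axis=2)  # two halves per parent
    mask = parents > 0
    ratios = halves[mask] / parents[mask, None]
    ratios = ratios[ratios > 0]
    return np.log(ratios)

def intermittency_exponent(series, scales):
    """Fit sigma(scale) ~ scale**H in log-log coordinates and return H."""
    sigmas = [np.std(log_breakdown_coefficients(series, s)) for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(sigmas), 1)
    return slope

rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.3, scale=2.0, size=4096)  # synthetic intermittent series
print(intermittency_exponent(rain, scales=[8, 16, 32, 64, 128]))
```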


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2265 ◽  
Author(s):  
Qingqing Feng ◽  
Huaping Xu ◽  
Zhefeng Wu ◽  
Wei Liu

Deceptive jamming against synthetic aperture radar (SAR) can effectively create false targets or deceptive scenes in the image. Based on the difference in interferometric phase between the target and deceptive jamming signals, a novel method for detecting deceptive jamming using cross-track interferometry is proposed, where the echoes with deceptive jamming are received by two SAR antennas simultaneously and the false targets are identified through SAR interferometry. Since the derived false phase is close to a constant in the interferogram, it is extracted through phase filtering and frequency detection. Finally, the false targets in the SAR image are identified according to the detected false part of the interferogram. The effectiveness of the proposed method is validated by simulation results based on the TanDEM-X system.
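
A minimal sketch of the interferometric idea, with assumed window sizes and thresholds rather than the paper's parameters: form the interferogram from two co-registered single-look complex images, smooth the wrapped phase, and flag pixels whose phase stays close to a single constant value, as expected for deceptively jammed (false) targets whose phase does not follow the terrain.

```python
import numpy as np
from scipy.signal import convolve2d

def interferogram_phase(slc_master, slc_slave):
    """Interferometric phase of two co-registered single-look complex images."""
    return np.angle(slc_master * np.conj(slc_slave))

def flag_false_targets(phase, window=5, tol=0.1):
    """Mark pixels whose smoothed phase sits near the dominant constant phase."""
    kernel = np.ones((window, window)) / window**2
    # average the wrapped phase through its complex representation to avoid wrap-around
    smooth = np.angle(convolve2d(np.exp(1j * phase), kernel, mode="same", boundary="symm"))
    dominant = np.angle(np.mean(np.exp(1j * smooth)))  # estimated constant false phase
    return np.abs(np.angle(np.exp(1j * (smooth - dominant)))) < tol
```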


Author(s):  
Xiaoqian Yuan ◽  
Chao Chen ◽  
Shan Tian ◽  
Jiandan Zhong

In order to improve the contrast of the difference image and reduce the interference of speckle noise in the synthetic aperture radar (SAR) image, this paper proposes a SAR image change detection algorithm based on multi-scale feature extraction. First, a weighted kernel matrix is used to extract features from the two original images; the logarithmic-ratio method is then applied to obtain the difference image of the two images, and the change area is extracted. Next, kernel matrices of different sizes are used to extract abstract features of the difference image at different scales, which gives the difference image higher contrast. Finally, a cumulative weighted average is computed to obtain the final difference image, which further suppresses the speckle noise in the image.
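
A minimal sketch under stated assumptions (the kernel sizes and weights below are illustrative, not the paper's): the log-ratio difference image is smoothed with kernels of several sizes, and the multi-scale responses are combined by a weighted average to raise contrast and suppress speckle.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image of two co-registered SAR intensity images."""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

def multiscale_difference(img1, img2, sizes=(3, 5, 7), weights=(0.5, 0.3, 0.2)):
    d = log_ratio(img1, img2)
    scales = [uniform_filter(d, size=s) for s in sizes]   # features at each scale
    return sum(w * s for w, s in zip(weights, scales))    # cumulative weighted average
```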


Author(s):  
Yanjun Zhang ◽  
Tingting Xia ◽  
Mian Li

Abstract Various types of uncertainty, such as parameter uncertainty, model uncertainty, and metamodeling uncertainty, may lead to low robustness. Parameter uncertainty can be either epistemic or aleatory in physical systems, and the two kinds have been widely represented by intervals and probability distributions, respectively. Model uncertainty is formally defined as the difference between the true value of the real-world process and the code output of the simulation model at the same value of inputs. Additionally, metamodeling uncertainty is introduced by the use of metamodels. To reduce the effects of uncertainties, robust optimization (RO) algorithms have been developed to obtain solutions that are not only optimal but also less sensitive to uncertainties. Based on how parameter uncertainty is modeled, there are two categories of RO approaches: interval-based and probability-based. In real-world engineering problems, both interval and probabilistic parameter uncertainties are likely to exist simultaneously in a single problem. However, few works have considered mixed interval and probabilistic parameter uncertainties together with other types of uncertainties. In this work, a general RO framework is proposed to deal with mixed interval and probabilistic parameter uncertainties, model uncertainty, and metamodeling uncertainty simultaneously in design optimization problems, using intervals-of-statistics approaches. Considering multiple types of uncertainties improves the robustness of optimal designs and reduces the risk of inappropriate decision-making, low robustness, and low reliability in engineering design. Two test examples are used to demonstrate the applicability and effectiveness of the proposed RO approach.
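
A toy sketch of mixing the two uncertainty treatments, not the paper's framework: an interval (epistemic) parameter is handled with a worst-case criterion, while a probabilistic (aleatory) parameter is handled with a mean-plus-k-sigma measure estimated by Monte Carlo sampling. The model, bounds, and sampling scheme below are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def performance(x, a, b):
    # toy simulation model standing in for the true response or its metamodel
    return (x - a) ** 2 + 0.5 * np.sin(b * x)

def robust_objective(x, a_interval=(0.8, 1.2), k=2.0, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    b = rng.normal(loc=3.0, scale=0.2, size=n_samples)    # aleatory (probabilistic) uncertainty
    worst = -np.inf
    for a in np.linspace(*a_interval, 11):                 # epistemic (interval) uncertainty
        vals = performance(x, a, b)
        worst = max(worst, vals.mean() + k * vals.std())   # worst-case robust measure
    return worst

result = minimize_scalar(robust_objective, bounds=(0.0, 2.0), method="bounded")
print(result.x, result.fun)
```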


2018 ◽  
Vol 33 (2) ◽  
pp. 186-204 ◽  
Author(s):  
Jianping Yang ◽  
Wanwan Xia ◽  
Taizhong Hu

The relation between extropy and variational distance is studied in this paper. We determine the distribution that attains the minimum or maximum extropy among the distributions within a given variational distance from any given probability distribution, obtain the tightest upper bound on the difference of extropies of any two probability distributions subject to the variational distance constraint, and establish an analytic formula for the confidence interval of an extropy. Such a study parallels that of Ho and Yeung [3] concerning entropy. However, the proofs of the main results in this paper are different from those in Ho and Yeung [3]. In fact, our arguments can simplify several proofs in Ho and Yeung [3].
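
For reference, a small sketch of the quantities involved, using the standard discrete definition of extropy and one common convention for the variational distance; the paper's extremal distributions, bounds, and confidence intervals are not reproduced here.

```python
import numpy as np

def extropy(p):
    """Discrete extropy J(p) = -sum_i (1 - p_i) * log(1 - p_i)."""
    q = np.clip(1.0 - np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-np.sum(q * np.log(q)))

def variational_distance(p, q):
    """Variational distance sum_i |p_i - q_i| (one common convention)."""
    return float(np.sum(np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))))

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
print(extropy(p), extropy(q), variational_distance(p, q))
```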


2014 ◽  
Vol 15 (3) ◽  
pp. 1274-1292 ◽  
Author(s):  
Viviana Maggioni ◽  
Mathew R. P. Sapiano ◽  
Robert F. Adler ◽  
Yudong Tian ◽  
George J. Huffman

Abstract This study proposes a new framework, Precipitation Uncertainties for Satellite Hydrology (PUSH), to provide time-varying, global estimates of errors for high-time-resolution, multisatellite precipitation products using a technique calibrated with high-quality validation data. Errors are estimated for the widely used Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product at daily/0.25° resolution, using the NOAA Climate Prediction Center (CPC) Unified gauge dataset as the benchmark. PUSH estimates the probability distribution of reference precipitation given the satellite observation, from which the error can be computed as the difference (or ratio) between the satellite product and the estimated reference. The framework proposes different modeling approaches for each combination of rain and no-rain cases: correct no-precipitation detection (both satellite and gauges measure no precipitation), missed precipitation (satellite records a zero, but the gauges detect precipitation), false alarm (satellite detects precipitation, but the reference is zero), and hit (both satellite and gauges detect precipitation). Each case is explored and explicitly modeled to create a unified approach that combines all four scenarios. Results show that the estimated probability distributions are able to reproduce the probability density functions of the benchmark precipitation, in terms of both expected values and quantiles of the distribution. The spatial pattern of the error is also adequately reproduced by PUSH, and good agreement between observed and estimated errors is observed. The model is also able to capture missed precipitation and false detection uncertainties, whose contribution to the total error can be significant. The resulting error estimates could be attached to the corresponding high-resolution satellite precipitation products.
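
A hedged sketch of the four-case decomposition described above; the rain threshold and the per-case error summaries below are placeholders, not PUSH's calibrated probability models.

```python
import numpy as np

def classify_cases(satellite, gauge, rain_threshold=0.1):
    """Boolean masks for the four satellite/gauge rain combinations."""
    sat_rain, ref_rain = satellite >= rain_threshold, gauge >= rain_threshold
    return {
        "correct_no_rain": ~sat_rain & ~ref_rain,
        "missed_precipitation": ~sat_rain & ref_rain,
        "false_alarm": sat_rain & ~ref_rain,
        "hit": sat_rain & ref_rain,
    }

def per_case_error_summary(satellite, gauge):
    """Mean additive error (satellite minus reference) within each case."""
    cases = classify_cases(satellite, gauge)
    return {name: (satellite[mask] - gauge[mask]).mean() if mask.any() else np.nan
            for name, mask in cases.items()}

rng = np.random.default_rng(2)
gauge = rng.gamma(0.2, 5.0, size=10_000)              # synthetic "reference" rainfall
satellite = gauge * rng.lognormal(0.0, 0.5, 10_000)   # synthetic satellite estimate
print(per_case_error_summary(satellite, gauge))
```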

