discrepancy measure
Recently Published Documents

Total documents: 39 (five years: 0)
H-index: 9 (five years: 0)

Entropy, 2020, Vol 22 (11), pp. 1258
Author(s): Yong Wang, Xiao Guo

This paper studies simultaneous inference for factor loadings in the approximate factor model. We propose a test statistic based on the maximum discrepancy measure. Taking advantage of the fact that the test statistic can be approximated by a sum of independent random variables, we develop a multiplier bootstrap procedure to calculate the critical value, and establish the asymptotic size and power of the test. Finally, we apply our result to multiple testing problems by controlling the family-wise error rate (FWER). The conclusions are confirmed by simulations and real data analysis.
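The multiplier bootstrap step can be sketched as follows. This is a generic illustration of the technique (Gaussian multipliers applied to estimated independent summands, with the critical value taken as a quantile of the bootstrapped max statistic), not the authors' exact procedure; the `influence` matrix is a hypothetical input standing in for the estimated summands.

```python
import math
import random

def multiplier_bootstrap_critical_value(influence, alpha=0.05, n_boot=1000, seed=0):
    """Critical value for a max-type statistic via the Gaussian multiplier bootstrap.

    `influence` is an n x p list of lists: row i holds the (estimated) independent
    summands whose scaled column sums approximate the p test statistics.
    """
    rng = random.Random(seed)
    n = len(influence)
    p = len(influence[0])
    stats = []
    for _ in range(n_boot):
        e = [rng.gauss(0.0, 1.0) for _ in range(n)]  # i.i.d. N(0,1) multipliers
        # bootstrap statistic: max over coordinates of |sum_i e_i * x_ij| / sqrt(n)
        m = max(abs(sum(e[i] * influence[i][j] for i in range(n))) / math.sqrt(n)
                for j in range(p))
        stats.append(m)
    stats.sort()
    # empirical (1 - alpha)-quantile of the bootstrap distribution
    return stats[int(math.ceil((1 - alpha) * n_boot)) - 1]
```

Rejecting when the observed max statistic exceeds this quantile gives the level-alpha test; the same bootstrap draws can be reused for step-down FWER control.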





2019, Vol 36 (9), pp. 3029-3046
Author(s): Islam A. ElShaarawy, Essam H. Houssein, Fatma Helmy Ismail, Aboul Ella Hassanien

Purpose The purpose of this paper is to propose an enhanced elephant herding optimization (EEHO) algorithm that improves the exploration phase to overcome the fast, unjustified convergence toward the origin exhibited by the native EHO. The exploration and exploitation of the proposed EEHO are achieved by updating both the clan and separation operators. Design/methodology/approach The original EHO shows fast, unjustified convergence toward the origin; specifically, a constant function is used as a benchmark for inspecting the biased convergence of evolutionary algorithms. Furthermore, the star discrepancy measure is adopted to quantify the quality of the exploration phase of evolutionary algorithms in general. Findings In experiments, EEHO showed a better convergence rate than the original EHO. The reasons behind this performance are that EEHO uses a more exploitative search method than EHO and balances exploration and exploitation by fixing the clan updating and separating operators. An operator γ added to EEHO helps the search escape the local optima that commonly exist in the search space. The proposed EEHO controls the convergence rate and the random walk independently. Eventually, the quantitative and qualitative results revealed that the proposed EEHO outperforms the original EHO.
Research limitations/implications The pros and cons are reported as follows. Pros of EEHO compared to EHO: (1) unbiased exploration of the whole search space, thanks to the proposed clan updating operator, which fixes the unjustified convergence of EHO toward the origin, and the proposed separating operator, which fixes the tendency of EHO to introduce new elephants at the boundary of the search space; and (2) the ability to control the exploration-exploitation trade-off by independently controlling the convergence rate and the random walk using different parameters. Cons of EEHO compared to EHO: (1) suitable values for three parameters (rather than only two) have to be found to use EEHO. Originality/value Because the original EHO shows fast, unjustified convergence toward the origin, the search method adopted in EEHO is made more exploitative than the one used in EHO through the balanced control of exploration and exploitation based on fixing the clan updating and separating operators. Further, the star discrepancy measure is adopted to quantify the quality of the exploration phase of evolutionary algorithms in general. The operator γ added to EEHO allows successive local and global searching (exploitation and exploration) and helps the search escape the local minima that commonly exist in the search space.
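The star discrepancy the authors adopt measures how far a point set deviates from perfect uniformity over anchored boxes; low discrepancy indicates good coverage of the search space. In one dimension it has a closed form over the order statistics. A minimal sketch of that one-dimensional case (illustrative only, not the paper's multi-dimensional implementation):

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a point set in [0, 1] (one-dimensional case).

    D*_n = sup_t |#{x_i < t}/n - t|, which for sorted points reduces to a
    maximum over the order statistics.
    """
    xs = sorted(points)
    n = len(xs)
    return max(max(i / n - xs[i - 1], xs[i - 1] - (i - 1) / n)
               for i in range(1, n + 1))
```

Evenly spread points (2i - 1)/(2n) attain the minimum value 1/(2n), while points clumped toward the origin, like the biased EHO populations the authors describe, score close to 1.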



Author(s): Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, ...

Unsupervised domain adaptation is the problem setting where the data-generating distributions in the source and target domains are different and labels in the target domain are unavailable. An important question in unsupervised domain adaptation is how to measure the difference between the source and target domains. Existing discrepancy measures for unsupervised domain adaptation either require high computation costs or have no theoretical guarantee. To mitigate these problems, this paper proposes a novel discrepancy measure called source-guided discrepancy (S-disc), which exploits labels in the source domain, unlike the existing ones. As a consequence, S-disc can be computed efficiently with a finite-sample convergence guarantee. In addition, it is shown that S-disc can provide a tighter generalization error bound than the one based on an existing discrepancy measure. Finally, experimental results demonstrate the advantages of S-disc over the existing discrepancy measures.
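For context on the computational cost the abstract mentions: the classical discrepancy distance of Mansour et al. takes a supremum over pairs of hypotheses, which is only tractable by enumeration for small finite classes. A minimal sketch under 0-1 loss (illustrative only; this is the existing measure, not S-disc, whose definition is not given in the abstract):

```python
from itertools import product

def empirical_discrepancy(h_class, xs_src, xs_tgt):
    """Empirical discrepancy distance for a finite hypothesis class, 0-1 loss:

        disc(S, T) = max_{h, h'} | err_S(h, h') - err_T(h, h') |

    Enumerating all hypothesis pairs is what makes such measures expensive
    for rich classes.
    """
    def disagree(h, hp, xs):
        # fraction of points on which h and h' disagree
        return sum(1 for x in xs if h(x) != hp(x)) / len(xs)
    return max(abs(disagree(h, hp, xs_src) - disagree(h, hp, xs_tgt))
               for h, hp in product(h_class, repeat=2))
```

The measure is zero when source and target samples are identical and grows as the domains pull the disagreement patterns apart.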



ORiON, 2019, Vol 35 (1), pp. 33-56
Author(s): IJH Visagie, GL Grobler

A technique known as calibration is often used when a given option pricing model is fitted to observed financial data. This entails choosing the parameters of the model so as to minimise some discrepancy measure between the observed option prices and the prices calculated under the model in question. This procedure does not take the historical values of the underlying asset into account. In this paper, the density function of the log-returns obtained using the calibration procedure is compared to a density estimate of the observed historical log-returns. Three models within the class of geometric Lévy process models are fitted to observed data; the Black-Scholes model as well as the geometric normal inverse Gaussian and Meixner process models. The numerical results obtained show a surprisingly large discrepancy between the resulting densities when using the latter two models. An adaptation of the calibration methodology is also proposed based on both option price data and the observed historical log-returns of the underlying asset. The implementation of this methodology limits the discrepancy between the densities in question.
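As a toy version of the calibration step described above: under the Black-Scholes model the procedure reduces to choosing the volatility that minimises the squared distance between model and market prices. A minimal sketch using grid search (the quotes are hypothetical inputs, and this stands in for, rather than reproduces, the paper's Lévy-model calibration):

```python
import math

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def calibrate_sigma(quotes, S, r, grid=None):
    """Pick sigma minimising the summed squared price error over a coarse grid.

    `quotes` is a list of (strike, maturity, observed_price) tuples.
    """
    grid = grid or [0.01 * k for k in range(1, 101)]
    def loss(sig):
        return sum((bs_call(S, K, r, T, sig) - p) ** 2 for K, T, p in quotes)
    return min(grid, key=loss)
```

Note that, exactly as the abstract points out, nothing in this objective involves the historical log-returns of the underlying asset: only option prices enter the loss.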



2019, Vol 485 (3), pp. 4343-4358
Author(s): Germán Chaparro-Molano, Juan Carlos Cuervo, Oscar Alberto Restrepo Gaitán, Sergio Torres Arzayús

We propose the use of robust, Bayesian methods for estimating extragalactic distance errors in multimeasurement catalogues. We seek to improve upon the more commonly used frequentist propagation-of-error methods, as they fail to explain both the scatter between different measurements and the effects of skewness in the metric distance probability distribution. For individual galaxies, the most transparent way to assess the variance of redshift independent distances is to directly sample the posterior probability distribution obtained from the mixture of reported measurements. However, sampling the posterior can be cumbersome for catalogue-wide precision cosmology applications. We compare the performance of frequentist methods versus our proposed measures for estimating the true variance of the metric distance probability distribution. We provide pre-computed distance error data tables for galaxies in three catalogues: NED-D, HyperLEDA, and Cosmicflows-3. Additionally, we develop a Bayesian model that considers systematic and random effects in the estimation of errors for Tully–Fisher (TF) relation derived distances in NED-D. We validate this model with a Bayesian p-value computed using the Freeman–Tukey discrepancy measure as a posterior predictive check. We are then able to predict distance errors for 884 galaxies in the NED-D catalogue and 203 galaxies in the HyperLEDA catalogue that do not report TF distance modulus errors. Our goal is that our estimated and predicted errors are used in catalogue-wide applications that require acknowledging the true variance of extragalactic distance measurements.
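The Freeman–Tukey posterior predictive check works by comparing the discrepancy of replicated data against that of the observed data across posterior draws; a Bayesian p-value near 0 or 1 signals misfit. A minimal sketch for Poisson-distributed counts (the cell model and the `posterior_rates` input are illustrative assumptions, not the paper's TF-error model):

```python
import math
import random

def freeman_tukey(observed, expected):
    """Freeman-Tukey discrepancy: sum over cells of (sqrt(obs) - sqrt(exp))^2."""
    return sum((math.sqrt(o) - math.sqrt(e)) ** 2 for o, e in zip(observed, expected))

def bayesian_p_value(y_obs, posterior_rates, n_rep=2000, seed=0):
    """Posterior predictive p-value with the Freeman-Tukey discrepancy.

    `posterior_rates` is a list of posterior draws, each a list of Poisson
    rates (one per cell). Returns Pr[T(y_rep) >= T(y_obs)] under the
    posterior predictive distribution.
    """
    rng = random.Random(seed)
    def poisson(lam):
        # inverse-CDF-free sampler (Knuth's method); fine for small rates
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    hits = 0
    for _ in range(n_rep):
        rates = rng.choice(posterior_rates)       # draw from the posterior
        y_rep = [poisson(lam) for lam in rates]   # replicate the data
        if freeman_tukey(y_rep, rates) >= freeman_tukey(y_obs, rates):
            hits += 1
    return hits / n_rep
```

Data consistent with the posited rates yield moderate-to-high p-values, while grossly inconsistent counts drive the p-value toward zero.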



2018, Vol 2 (4), pp. 73
Author(s): Tianyi Li, Jean-François Luyé

In this paper, we propose a novel systematic procedure to minimize the discrepancy between the numerically predicted and the experimentally measured fiber orientation results on an injection-molded part. Fiber orientation model parameters are optimized simultaneously using Latin hypercube sampling and kriging-based adaptive surrogate modeling techniques. Via an adequate discrepancy measure, the optimized solution possesses the correct skin–shell–core structure and global orientation evolution throughout the considered center-gated disk. Some non-trivial interactions between these parameters and flow-fiber coupling effects, as well as their quantitative importance, are illustrated. The parametric fine-tuning of orientation models mostly leads to a better agreement in the skin and shell regions, while the coupling effect via a fiber-dependent viscosity improves prediction in the core.
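Latin hypercube sampling, the first ingredient of the procedure above, stratifies each input dimension into n equal bins and places exactly one sample per bin in every dimension, giving better one-dimensional coverage than plain random sampling. A minimal sketch on the unit hypercube (real use would rescale each coordinate to the physical range of the corresponding model parameter):

```python
import random

def latin_hypercube(n, d, seed=0):
    """n points in [0, 1]^d: each axis is split into n equal strata and every
    stratum is hit exactly once, via an independent random permutation per
    dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                 # which stratum each point lands in
        cols.append([(p + rng.random()) / n for p in perm])  # jitter within stratum
    return [tuple(col[i] for col in cols) for i in range(n)]
```

These design points would then seed the kriging surrogate, which is refined adaptively where the discrepancy measure is most uncertain.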



Author(s): Kazushi Ikeda, Takatomi Kubo, Hiroaki Sasaki, Masataka Mori, Kentaro Hitomi, ...


Complexity, 2018, Vol 2018, pp. 1-18
Author(s): R. E. Rolón, I. E. Gareis, L. E. Di Persia, R. D. Spies, H. L. Rufiner

In recent years, an increasing interest in the development of discriminative methods based on sparse representations with discrete dictionaries for signal classification has been observed. It is still unclear, however, what the most appropriate way to introduce discriminative information into the sparse representation problem is. It is also unknown which discrepancy measure is best for classification purposes. In the context of feature selection problems, several complexity-based measures have been proposed. The main objective of this work is to explore a method that uses such measures for constructing discriminative subdictionaries for detecting apnea-hypopnea events using pulse oximetry signals. Besides traditional discrepancy measures, we study a simple one called Difference of Conditional Activation Frequency (DCAF). We additionally explore the combined effect of overcompleteness and redundancy of the dictionary as well as the sparsity level of the representation. Results show that complexity-based measures are capable of adequately pointing out discriminative atoms. In particular, DCAF yields competitive averaged detection accuracy rates of 72.57% at low computational cost. Additionally, ROC curve analyses show averaged diagnostic sensitivity and specificity of 81.88% and 87.32%, respectively. This shows that discriminative subdictionary construction methods for sparse representations of pulse oximetry signals constitute a valuable tool for apnea-hypopnea screening.
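A minimal reading of the DCAF idea as described in the abstract: score each dictionary atom by how differently often it activates (has a nonzero coefficient) across the two classes. The sketch below is an assumption-laden illustration, not the authors' exact definition:

```python
def dcaf_scores(codes, labels):
    """Difference of Conditional Activation Frequency (a minimal reading):
    for each atom, |activation frequency in class 1 - in class 0|.

    `codes` is a list of sparse-coefficient vectors (one per signal) and
    `labels` the corresponding 0/1 class labels. Atoms with large scores
    are the discriminative candidates for the subdictionary."""
    n_atoms = len(codes[0])
    scores = []
    for j in range(n_atoms):
        freqs = []
        for c in (0, 1):
            idx = [i for i, y in enumerate(labels) if y == c]
            freqs.append(sum(1 for i in idx if codes[i][j] != 0) / len(idx))
        scores.append(abs(freqs[1] - freqs[0]))
    return scores
```

Keeping the top-scoring atoms yields a discriminative subdictionary at essentially the cost of one pass over the training codes, consistent with the low computational cost the abstract reports.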


