A Simple Method for Testing Independence of High-Dimensional Random Vectors

2016 ◽  
Vol 37 (1) ◽  
Author(s):  
Gintautas Jakimauskas ◽  
Marijus Radavičius ◽  
Jurgis Sušinskas

A simple, data-driven and computationally efficient procedure for testing independence of high-dimensional random vectors is proposed. The procedure is based on an interpretation of goodness-of-fit testing as a classification problem, a special sequential partition procedure, elements of sequential testing, resampling, and randomization. Monte Carlo simulations are carried out to assess the performance of the procedure.
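The resampling-and-randomization idea behind such a test can be sketched with a simple permutation test. This is an illustrative toy (the statistic, sample sizes and data are invented here), not the authors' actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def independence_pvalue(x, y, n_perm=500, rng=rng):
    """Permutation test for independence of two random vectors.

    Toy statistic: the largest absolute cross-correlation between any
    component of x and any component of y. Permuting the rows of y
    breaks any dependence, which yields the null distribution.
    """
    def stat(x, y):
        xc = (x - x.mean(0)) / x.std(0)
        yc = (y - y.mean(0)) / y.std(0)
        return np.abs(xc.T @ yc / len(x)).max()

    observed = stat(x, y)
    null = [stat(x, y[rng.permutation(len(y))]) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

# Dependent pair: y shares its first coordinate with x.
x = rng.standard_normal((200, 5))
y = rng.standard_normal((200, 5))
y[:, 0] += x[:, 0]
print(independence_pvalue(x, y))   # small p-value: dependence detected
```

The randomization step (row permutation) is what makes the test data-driven: no parametric null model for the joint distribution is required.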

2009 ◽  
Vol 50 ◽  
Author(s):  
Gintautas Jakimauskas

Let us have a sample satisfying a d-dimensional Gaussian mixture model (d is supposed to be large). The problem of classification of the sample is considered. Because of the large dimension it is natural to project the sample to k-dimensional (k = 1, 2, . . .) linear subspaces using the projection pursuit method, which gives the best selection of these subspaces. Having an estimate of the discriminant subspace, we can perform classification using the projected sample, thus avoiding the 'curse of dimensionality'. An essential step in this method is testing goodness-of-fit of the estimated d-dimensional model assuming that the distribution on the complement space is standard Gaussian. We present a simple, data-driven and computationally efficient procedure for testing goodness-of-fit. The procedure is based on the well-known interpretation of testing goodness-of-fit as a classification problem, a special sequential data partition procedure, randomization and resampling, and elements of sequential testing. Monte Carlo simulations are used to assess the performance of the procedure.
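The core idea of classifying via a low-dimensional projection can be illustrated with a crude projection-pursuit search. The data, the non-Gaussianity index (negative excess kurtosis) and the random-search strategy below are all illustrative assumptions, not the method of the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian clusters in d = 10 dimensions; all discriminant
# information lies in a one-dimensional subspace (coordinate 0).
d, n = 10, 400
labels = rng.integers(0, 2, n)
x = rng.standard_normal((n, d))
x[:, 0] += 4.0 * labels

# Crude projection-pursuit search: among random unit directions,
# keep the one whose 1-D projection looks most bimodal
# (most negative excess kurtosis).
def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return (z ** 4).mean() - 3.0

best_w, best_score = None, np.inf
for _ in range(2000):
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    score = excess_kurtosis(x @ w)
    if score < best_score:
        best_w, best_score = w, score

# Classify in the projected (1-D) space by thresholding at the mean,
# sidestepping the 'curse of dimensionality'.
proj = x @ best_w
pred = (proj > proj.mean()).astype(int)
accuracy = max((pred == labels).mean(), (pred != labels).mean())
print(f"|w[0]| = {abs(best_w[0]):.2f}, accuracy = {accuracy:.2f}")
```

In the abstract's setting the remaining d - k coordinates are then checked for standard Gaussianity, which is where the goodness-of-fit test enters.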


Author(s):  
Hongyi Xu ◽  
Zhen Jiang ◽  
Daniel W. Apley ◽  
Wei Chen

Data-driven random process models have become increasingly important for uncertainty quantification (UQ) in science and engineering applications, due to their merit of capturing both the marginal distributions and the correlations of high-dimensional responses. However, the choice of a random process model is neither unique nor straightforward. To quantitatively validate the accuracy of random process UQ models, new metrics are needed to measure their capability in capturing the statistical information of high-dimensional data collected from simulations or experimental tests. In this work, two goodness-of-fit (GOF) metrics, namely, a statistical moment-based metric (SMM) and an M-margin U-pooling metric (MUPM), are proposed for comparing different stochastic models, taking into account their capabilities of capturing the marginal distributions and the correlations in spatial/temporal domains. This work demonstrates the effectiveness of the two proposed metrics by comparing the accuracies of four random process models (Gaussian process (GP), Gaussian copula, Hermite polynomial chaos expansion (PCE), and Karhunen–Loève (K–L) expansion) in multiple numerical examples and an engineering example of stochastic analysis of microstructural materials properties. In addition to the new metrics, this paper provides insights into the pros and cons of various data-driven random process models in UQ.
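A toy version of a moment-based comparison conveys the flavor of such a metric. The summary statistics (pointwise means, pointwise standard deviations, lag-1 correlation) and the simulated processes below are assumptions for illustration, not the paper's SMM definition:

```python
import numpy as np

rng = np.random.default_rng(2)

def moment_metric(reference, model_samples):
    """Illustrative moment-based discrepancy between a reference
    ensemble and samples from a candidate random-process model.

    Compares pointwise means, pointwise standard deviations, and the
    average lag-1 correlation, aggregated into a single distance.
    """
    def summary(s):
        lag1 = np.mean([np.corrcoef(r[:-1], r[1:])[0, 1] for r in s])
        return np.concatenate([s.mean(0), s.std(0), [lag1]])
    return np.linalg.norm(summary(reference) - summary(model_samples))

# Reference: smooth correlated process (moving average of white noise).
def smooth_process(n, m, rng):
    z = rng.standard_normal((n, m + 4))
    return sum(z[:, k:k + m] for k in range(5)) / np.sqrt(5)

ref = smooth_process(500, 50, rng)
good = smooth_process(500, 50, rng)     # same family as the reference
bad = rng.standard_normal((500, 50))    # white noise: wrong correlation
print(moment_metric(ref, good), moment_metric(ref, bad))
```

A model that matches the marginals but misses the correlation structure (the white-noise "model" above) is penalized through the correlation term, which is the kind of deficiency marginal-only metrics cannot see.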


2014 ◽  
Vol 109 (506) ◽  
pp. 600-612 ◽  
Author(s):  
Guangming Pan ◽  
Jiti Gao ◽  
Yanrong Yang

2020 ◽  
Vol 10 (3) ◽  
pp. 306-315
Author(s):  
Rupa Mazumder ◽  
Swarnali Das Paul

Background: Atenolol is a commonly used antihypertensive drug of the BCS Class III category. It suffers from poor intestinal absorption and permeability, and thus low bioavailability. The objective of the present study was to enhance the permeability of atenolol using a suitable technique that is economical and avoids the use of any organic solvent.
Methods: Nanocrystal technology by high-pressure homogenization, an inexpensive and simple method, was chosen for this purpose; no organic solvent was used in this technique. The study further aimed to characterize the prepared nanocrystals in the solid state by Fourier Transform Infrared Spectroscopy (FTIR), Powder X-Ray Diffraction (PXRD) patterns, particle size, zeta potential, % yield, and drug permeation through isolated goat intestine. An in-vivo study was carried out to determine the pharmacokinetic properties in comparison to the pure drug powder, using rats as experimental animals. The formulation design was optimized by a 3² factorial design, in which two factors, surfactant amount (X1) and homogenizer speed (X2), were evaluated against three dependent variables: particle size (y1), zeta potential (y2) and production yield (y3).
Results: The PXRD study indicated a high crystal content in the prepared formulation. The nanocrystal formulations showed a narrow size range from 125 nm to 652 nm and a positive zeta potential of 16-18 mV. Optimized formulations showed almost 90% production yield. The permeability study revealed 90.88% drug release for the optimized formulation, compared to 31.22% for the pure drug. The FTIR study showed no disturbance of the principal peaks of pure atenolol, confirming the integrity of the drug and its compatibility with the excipients used. A significant increase in the area under the concentration-time curve, Cpmax, and MRT was observed for the nanocrystals in comparison to the pure drug.
The high values of the determination coefficient (R²) for all three parameters indicated the goodness of fit of the 3² factorial model. The factorial analysis also revealed that homogenizer speed had a bigger effect on particle size (-0.2812), zeta potential (-0.0004) and production yield (0.0192), whereas the amount of surfactant had a lesser effect on production yield (-370.4401), zeta potential (-43.3651) and particle size (-6169.2601).
Conclusion: The selected method of nanocrystal formation and its optimization by factorial design was effective in increasing both the solubility and the permeability of atenolol. Further, the systematic factorial-design approach allows rational evaluation and prediction of the nanocrystal formulation from a limited number of well-chosen experiments.
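The 3² factorial analysis amounts to a least-squares fit of a two-factor model over nine runs. The sketch below uses coded factor levels and entirely invented response values (it does not reproduce the study's data or coefficients); it shows how the coefficients and R² of such a model are obtained:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 3^2 factorial design: two factors at three coded
# levels (-1, 0, +1), giving nine runs. X1 = surfactant amount,
# X2 = homogenizer speed (coded; values illustrative only).
levels = [-1, 0, 1]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()

# Simulated particle-size response (nm) with additive noise.
y = 380 - 60 * x1 - 110 * x2 + 15 * x1 * x2 + rng.normal(0, 5, 9)

# Least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2.
A = np.column_stack([np.ones(9), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 1), " R^2:", round(r2, 3))
```

Because the coded design matrix is orthogonal, each coefficient isolates one effect, which is what makes the factor-by-factor comparison in the abstract meaningful.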


2016 ◽  
Vol 76 (4) ◽  
pp. 512-531 ◽  
Author(s):  
Xiaoguang Feng ◽  
Dermot Hayes

Purpose: Portfolio risk in crop insurance due to the systemic nature of crop yield losses has inhibited the development of private crop insurance markets. Government subsidy or reinsurance has therefore been used to support crop insurance programs. The purpose of this paper is to investigate the possibility of converting systemic crop yield risk into "poolable" risk. Specifically, this study examines whether it is possible to remove the co-movement as well as the tail dependence of crop yield variables by enlarging the risk pool across different crops and countries.
Design/methodology/approach: Hierarchical Kendall copula (HKC) models are used to model potential non-linear correlations of the high-dimensional crop yield variables. A Bayesian estimation approach is applied to account for estimation risk in the copula parameters. A synthetic insurance portfolio is used to evaluate the systemic risk and the diversification effect.
Findings: The results indicate that the systemic nature of crop yield risks, both positive correlation and lower tail dependence, can be eliminated by combining crop insurance policies across crops and countries.
Originality/value: The study applies the HKC in the context of agricultural risks. Compared to other advanced copulas, the HKC achieves both flexibility and parsimony. Its flexibility makes it appropriate for precisely representing various correlation structures of crop yield risks, while its parsimony makes it computationally efficient in modeling high-dimensional correlation structures.
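The diversification effect at the heart of the paper can be illustrated with a much simpler dependence model. The one-factor Gaussian setup below is a deliberate simplification standing in for the HKC, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

def pool_tail_risk(n_units, rho, n_sims=20000, rng=rng):
    """95% VaR of the average loss across a pool of n_units yield
    risks with pairwise Gaussian correlation rho (one-factor model).
    Illustrates the diversification effect, not the paper's HKC.
    """
    common = rng.standard_normal((n_sims, 1))       # systemic factor
    idio = rng.standard_normal((n_sims, n_units))   # unit-specific risk
    losses = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
    return np.quantile(losses.mean(axis=1), 0.95)

# A highly systemic pool (rho = 0.8) barely diversifies;
# a near-independent pool (rho = 0.05) diversifies strongly.
print(pool_tail_risk(50, 0.8), pool_tail_risk(50, 0.05))
```

Enlarging the pool across crops and countries is, in this picture, a way of driving the effective correlation between pooled units toward zero, so that the idiosyncratic part averages out.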


Author(s):  
Alexandr Klimchik ◽  
Anatol Pashkevich ◽  
Stéphane Caro ◽  
Damien Chablat

The paper focuses on the extension of the virtual-joint-based stiffness modeling technique to the case of different types of loadings applied both to the robot end-effector and to intermediate points of the manipulator (auxiliary loading). It is assumed that the manipulator can be represented as a set of compliant links separated by passive or active joints. The paper proposes a computationally efficient procedure able to obtain a non-linear force-deflection relation taking into account the internal and external loadings; it also produces the Cartesian stiffness matrix. This makes it possible to extend the classical stiffness mapping equation to manipulators with auxiliary loading. The results are illustrated by numerical examples.
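The classical (unloaded, end-effector-only) stiffness mapping that the paper extends is K_C = J^{-T} K_theta J^{-1}. A minimal sketch for a planar two-link arm with virtual torsional joint springs, using illustrative numbers:

```python
import numpy as np

# Classical stiffness mapping for a 2-link planar manipulator with
# virtual torsional springs in the joints (the unloaded baseline case;
# the paper extends this to auxiliary/internal loadings).
l1, l2 = 0.5, 0.4                 # link lengths (m), illustrative
q1, q2 = 0.3, 0.7                 # joint angles (rad), illustrative
k_theta = np.diag([2e4, 1e4])     # joint stiffnesses (N*m/rad)

# Jacobian of the end-effector position w.r.t. the joint angles.
s1, c1 = np.sin(q1), np.cos(q1)
s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
              [ l1 * c1 + l2 * c12,  l2 * c12]])

# Cartesian stiffness: K_C = J^{-T} K_theta J^{-1}
J_inv = np.linalg.inv(J)
K_c = J_inv.T @ k_theta @ J_inv

# End-effector deflection under a 100 N force along x.
f = np.array([100.0, 0.0])
dx = np.linalg.solve(K_c, f)
print("deflection (m):", dx)
```

With auxiliary loading, the deflection relation becomes non-linear and K_C acquires load-dependent terms, which is precisely the extension the abstract describes.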


1982 ◽  
Vol 19 (A) ◽  
pp. 359-365 ◽  
Author(s):  
David Pollard

The theory of weak convergence has developed into an extensive and useful, but technical, subject. One of its most important applications is in the study of empirical distribution functions: the explication of the asymptotic behavior of the Kolmogorov goodness-of-fit statistic is one of its greatest successes. In this article a simple method for understanding this aspect of the subject is sketched. The starting point is Doob's heuristic approach to the Kolmogorov-Smirnov theorems, and the rigorous justification of that approach offered by Donsker. The ideas can be carried over to other applications of weak convergence theory.
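Donsker's rigorous version of Doob's heuristic says the scaled empirical process, and hence sqrt(n) times the Kolmogorov statistic D_n, converges in distribution to the supremum of a Brownian bridge. A small simulation (with assumed sample sizes and discretization) makes the agreement visible:

```python
import numpy as np

rng = np.random.default_rng(5)

# Scaled Kolmogorov statistic sqrt(n) * D_n for a uniform sample.
def ks_stat(n, rng):
    u = np.sort(rng.uniform(size=n))
    grid = np.arange(1, n + 1) / n
    return np.sqrt(n) * max(np.max(grid - u), np.max(u - (grid - 1 / n)))

# Supremum of a (discretized) Brownian bridge on [0, 1].
def bridge_sup(m, rng):
    steps = rng.standard_normal(m) / np.sqrt(m)
    w = np.cumsum(steps)                  # Brownian motion on [0, 1]
    t = np.arange(1, m + 1) / m
    return np.abs(w - t * w[-1]).max()    # tie it down: Brownian bridge

ks = np.array([ks_stat(500, rng) for _ in range(2000)])
br = np.array([bridge_sup(500, rng) for _ in range(2000)])
print("medians:", np.median(ks), np.median(br))  # both near the Kolmogorov median of about 0.83
```

The two empirical distributions agree closely, which is exactly the content of the Kolmogorov-Smirnov limit theorem that the weak convergence theory explains.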


1985 ◽  
Vol 107 (1) ◽  
pp. 141-146
Author(s):  
G. Umasankar ◽  
C. R. Mischke

A simple method is presented for computing the effect of a dimensional change at a particular element of a stepped shaft on two bearings upon the bending deflections and the slopes of the neutral axis at any node of interest. The changes in deflection and in slope of the neutral axis are derived as incremental quantities, as functions of the dimensional change and of the prior deflections and slopes of the neutral axis of the shaft. For shaft synthesis, the implication is that one can begin with a uniform-diameter bar subjected to the loading and make a complete deflection analysis with superposed closed-form relations. The geometry can then be modified element by element and the deflection changes easily updated, which is computationally efficient. Further, deflections and deflection changes computed using the proposed method are identical to those obtained using a finite beam element model of the shaft.
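Why an element-wise diameter change has a cleanly computable effect can be seen from the unit-load (virtual work) expression for deflection, delta = integral of M(x) m(x) / (E I(x)) dx: the diameter enters only through I(x) on that element's span. The sketch below evaluates this integral directly for a simply supported shaft under a midspan load (all values illustrative; this is the standard unit-load method, not the authors' incremental formulation):

```python
import numpy as np

E = 200e9     # steel elastic modulus (Pa), illustrative
L = 1.0       # span between bearings (m)
P = 1000.0    # midspan load (N)
x = np.linspace(0, L, 2001)

def deflection(diameters):
    """Midspan deflection of a shaft made of len(diameters) equal-length
    elements, via the unit-load method: integral of M*m/(E*I) dx."""
    n = len(diameters)
    d = np.asarray(diameters, float)[np.minimum((x / L * n).astype(int), n - 1)]
    I = np.pi * d ** 4 / 64                # second moment per element
    M = P / 2 * np.minimum(x, L - x)       # bending moment, midspan load
    m = 0.5 * np.minimum(x, L - x)         # moment from a unit midspan load
    f = M * m / (E * I)
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))   # trapezoidal rule

uniform = deflection([0.05] * 10)                            # uniform 50 mm bar
stepped = deflection([0.05] * 4 + [0.06] * 2 + [0.05] * 4)   # thicker middle
print(uniform, stepped)
```

Starting from the uniform bar (whose deflection matches the closed form P L^3 / (48 E I)) and re-evaluating after each element change mirrors the synthesis workflow the abstract describes, with only the changed element's contribution to the integral actually differing.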

