REFINED LATINIZED STRATIFIED SAMPLING: A ROBUST SEQUENTIAL SAMPLE SIZE EXTENSION METHODOLOGY FOR HIGH-DIMENSIONAL LATIN HYPERCUBE AND STRATIFIED DESIGNS

Author(s):  
Michael D. Shields
CATENA ◽  
2021 ◽  
Vol 206 ◽  
pp. 105509
Author(s):  
Shuangshuang Shao ◽  
Huan Zhang ◽  
Manman Fan ◽  
Baowei Su ◽  
Jingtao Wu ◽  
...  

Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
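
As a rough illustration of combining an outcome model with a treatment-selection model under high-dimensional covariates, the sketch below uses a Robinson-style partialling-out estimator with lasso fits for both nuisance models. It is not the authors' exact bias-corrected procedure; the variable names, data-generating process, and use of scikit-learn are assumptions made for the example.

```python
# Minimal sketch (not the authors' exact estimator): partialling-out with a lasso
# outcome model and a lasso model for the treatment selection mechanism.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 500                                        # high-dimensional: p > n
X = rng.standard_normal((n, p))
A = X[:, 0] + rng.standard_normal(n)                   # treatment depends sparsely on X
y = 1.5 * A + 2.0 * X[:, 1] + rng.standard_normal(n)   # true conditional effect = 1.5

# Step 1: lasso fits for the outcome and for the treatment given covariates.
y_res = y - LassoCV(cv=5).fit(X, y).predict(X)
a_res = A - LassoCV(cv=5).fit(X, A).predict(X)

# Step 2: regress residual on residual; the slope estimates the treatment effect,
# and a sandwich-type standard error gives an (approximately) valid interval.
theta = np.sum(a_res * y_res) / np.sum(a_res ** 2)
se = np.sqrt(np.sum((y_res - theta * a_res) ** 2 * a_res ** 2)) / np.sum(a_res ** 2)
print(f"effect ~ {theta:.2f} +/- {1.96 * se:.2f}")
```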


2003 ◽  
Vol 125 (2) ◽  
pp. 210-220 ◽  
Author(s):  
G. Gary Wang

This paper addresses the difficulty that the previously developed Adaptive Response Surface Method (ARSM) has with high-dimensional design problems. ARSM was developed to search for the global design optimum for computation-intensive design problems. This method utilizes Central Composite Design (CCD), which results in an exponentially increasing number of required design experiments. In addition, ARSM generates a completely new set of CCD points in each gradually reduced design space. These two factors greatly undermine the efficiency of ARSM. In this work, Latin Hypercube Design (LHD) is utilized to generate saturated design experiments. Because of the use of LHD, historical design experiments can be inherited in later iterations. As a result, ARSM requires only a limited number of design experiments even for high-dimensional design problems. The improved ARSM is tested using a group of standard test problems and then applied to an engineering design problem. In both testing and design application, significant improvement in the efficiency of ARSM is realized. The improved ARSM demonstrates strong potential to be a practical global optimization tool for computation-intensive design problems. Inheriting LHD points, as a general sampling strategy, can be integrated into other approximation-based design optimization methodologies.
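
The point-inheritance idea can be sketched in a few lines: keep previously evaluated LHD points that still lie inside the reduced design space and draw only the missing points from a new Latin hypercube. The sketch below uses scipy.stats.qmc; the bounds, design size, and fill-in rule are illustrative assumptions rather than the paper's exact algorithm.

```python
# Minimal sketch of inheriting Latin Hypercube points across a reduced design space.
import numpy as np
from scipy.stats import qmc

d = 4
n_total = (d + 1) * (d + 2) // 2                      # saturated size for a quadratic model
sampler = qmc.LatinHypercube(d=d, seed=1)

lo0, hi0 = np.zeros(d), np.ones(d)                    # initial design space
pts = qmc.scale(sampler.random(n_total), lo0, hi0)    # initial LHD experiments

lo1, hi1 = np.full(d, 0.2), np.full(d, 0.7)           # reduced design space
inherited = pts[np.all((pts >= lo1) & (pts <= hi1), axis=1)]

n_new = n_total - len(inherited)                      # draw only what is missing
new_pts = qmc.scale(sampler.random(n_new), lo1, hi1)
design = np.vstack([inherited, new_pts])
print(len(inherited), "inherited +", n_new, "new =", len(design), "points")
```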


2021 ◽  
Author(s):  
Xin Chen ◽  
Qingrun Zhang ◽  
Thierry Chekouo

Abstract Background: DNA methylations in critical regions are highly involved in cancer pathogenesis and drug response. However, identifying causal methylations among a large number of potential polymorphic DNA methylation sites is challenging. Such high-dimensional data bring two obstacles: first, many established statistical models do not scale to so many features; second, multiple testing and overfitting become serious problems. To this end, a method to quickly filter candidate sites and narrow down targets for downstream analyses is urgently needed. Methods: BACkPAy is a pre-screening Bayesian approach that detects biologically meaningful clusters of potential differential methylation levels with small sample sizes. BACkPAy prioritizes potentially important biomarkers using the Bayesian false discovery rate (FDR) approach. It filters out non-informative (i.e., non-differential) sites with flat methylation level patterns across experimental conditions. In this work, we applied BACkPAy to a genome-wide methylation dataset with 3 tissue types, each containing 3 gastric cancer samples. We also applied LIMMA (Linear Models for Microarray and RNA-Seq Data) to compare its results with those achieved by BACkPAy. Then, Cox proportional hazards regression models were utilized to assess prognostically significant markers with The Cancer Genome Atlas (TCGA) data for survival analysis. Results: Using BACkPAy, we identified 8 biologically meaningful clusters/groups of differential probes from the DNA methylation dataset. Using TCGA data, we also identified five prognostic genes (i.e., predictive of the progression of gastric cancer) that contain differential methylation probes, whereas no significant results were identified using the Benjamini-Hochberg FDR in LIMMA. Conclusions: We showed the importance of using BACkPAy for the analysis of DNA methylation data with extremely small sample sizes in gastric cancer. We revealed that RDH13, CLDN11, TMTC1, UCHL1 and FOXP2 can serve as predictive biomarkers for gastric cancer treatment, and that the promoter methylation levels of these five genes in serum could have prognostic and diagnostic functions in gastric cancer patients.
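
Since the BACkPAy model itself is not reproduced here, the sketch below only illustrates the pre-screening idea in its simplest form: probes whose mean methylation level is essentially flat across the three tissue types are filtered out before any downstream Bayesian FDR analysis. The data layout and the flatness threshold are assumptions made for the example.

```python
# Minimal stand-in for the pre-screening idea (not the BACkPAy model itself):
# drop probes whose group means are essentially flat across conditions.
import numpy as np

rng = np.random.default_rng(0)
n_probes = 10_000
groups = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])               # 3 tissue types, 3 samples each
beta = rng.beta(2, 5, size=(n_probes, len(groups)))          # methylation beta values

group_means = np.stack([beta[:, groups == g].mean(axis=1) for g in (0, 1, 2)], axis=1)
spread = group_means.max(axis=1) - group_means.min(axis=1)   # between-condition range

keep = spread > 0.2                                          # flat probes are filtered out
print(f"kept {keep.sum()} of {n_probes} probes for downstream Bayesian FDR analysis")
```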


Author(s):  
Ken Kobayashi ◽  
Naoki Hamada ◽  
Akiyoshi Sannai ◽  
Akinori Tanaka ◽  
Kenichi Bannai ◽  
...  

Multi-objective optimization problems require simultaneously optimizing two or more objective functions. Many studies have reported that the solution set of an M-objective optimization problem often forms an (M − 1)-dimensional topological simplex (a curved line for M = 2, a curved triangle for M = 3, a curved tetrahedron for M = 4, etc.). Since the dimensionality of the solution set increases as the number of objectives grows, an exponentially large sample size is needed to cover the solution set. To reduce the required sample size, this paper proposes a Bézier simplex model and its fitting algorithm. These techniques can exploit the simplex structure of the solution set and decompose a high-dimensional surface fitting task into a sequence of low-dimensional ones. An approximation theorem of Bézier simplices is proven. Numerical experiments with synthetic and real-world optimization problems demonstrate that the proposed method achieves an accurate approximation of high-dimensional solution sets with small samples. In practice, such an approximation will be conducted in the post-optimization process and enable a better trade-off analysis.
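
A Bézier simplex itself is straightforward to evaluate: a point of the (M − 1)-simplex, given in barycentric coordinates, is pushed through Bernstein polynomials attached to a set of control points. The sketch below does this for M = 3 and degree 2 with random control points; in the paper the control points would instead be fitted to sampled Pareto-optimal solutions.

```python
# Minimal sketch of a Bézier simplex (M = 3 objectives, degree 2) with
# illustrative (random) control points, not a fitted model.
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

M, D = 3, 2                                       # number of objectives, polynomial degree
# all multi-indices d = (d1, d2, d3) with d1 + d2 + d3 = D
multi_idx = sorted({tuple(int(x) for x in np.bincount(c, minlength=M))
                    for c in combinations_with_replacement(range(M), D)})
rng = np.random.default_rng(0)
ctrl = {d: rng.random(M) for d in multi_idx}      # one control point per multi-index

def bezier_simplex(t):
    """Evaluate the Bézier simplex at barycentric coordinates t (sum(t) == 1)."""
    out = np.zeros(M)
    for d, p in ctrl.items():
        coef = factorial(D) / np.prod([factorial(k) for k in d])
        out += coef * np.prod(np.asarray(t) ** np.asarray(d)) * p
    return out

print(bezier_simplex([1/3, 1/3, 1/3]))            # a point in the interior of the surface
```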


2019 ◽  
Vol 35 (14) ◽  
pp. i31-i40 ◽  
Author(s):  
Erfan Sayyari ◽  
Ban Kawas ◽  
Siavash Mirarab

Abstract Motivation Learning associations of traits with the microbial composition of a set of samples is a fundamental goal in microbiome studies. Recently, machine learning methods have been explored for this goal, with some promise. However, in comparison to other fields, microbiome data are high-dimensional and not abundant, leading to a high-dimensional, low-sample-size, under-determined system. Moreover, microbiome data are often unbalanced and biased. Given such training data, machine learning methods often fail to perform a classification task with sufficient accuracy. Lack of signal is especially problematic when classes are represented in an unbalanced way in the training data, with some classes under-represented. The presence of inter-correlations among subsets of observations further compounds these issues. As a result, machine learning methods have had only limited success in predicting many traits from the microbiome. Data augmentation, which consists of building synthetic samples and adding them to the training data, is a technique that has proved helpful for many machine learning tasks. Results In this paper, we propose a new data augmentation technique for classifying phenotypes based on the microbiome. Our algorithm, called TADA, uses available data and a statistical generative model to create new samples augmenting existing ones, addressing the issue of low sample size. In generating new samples, TADA takes into account phylogenetic relationships between microbial species. On two real datasets, we show that adding these synthetic samples to the training set improves the accuracy of downstream classification, especially when the training data have an unbalanced representation of classes. Availability and implementation TADA is available at https://github.com/tada-alg/TADA. Supplementary information Supplementary data are available at Bioinformatics online.
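
As a rough illustration of augmentation for compositional count data (not TADA's phylogeny-aware generative model), the sketch below draws synthetic samples from a Dirichlet-multinomial centred on an observed sample of an under-represented class; the pseudo-count, concentration, and sequencing-depth choices are assumptions made for the example.

```python
# Minimal sketch of count-vector augmentation for an under-represented class.
import numpy as np

rng = np.random.default_rng(0)

def augment(counts, label, n_new, concentration=50.0):
    """Generate n_new synthetic microbiome samples around one observed count vector."""
    depth = counts.sum()
    alpha = concentration * (counts + 1) / (counts + 1).sum()   # smoothed composition
    synthetic = [rng.multinomial(depth, rng.dirichlet(alpha)) for _ in range(n_new)]
    return np.array(synthetic), [label] * n_new

observed = rng.poisson(5, size=200)              # one sample over 200 microbial features
X_aug, y_aug = augment(observed, label="case", n_new=10)
print(X_aug.shape, y_aug[:3])                    # synthetic samples keep the original label
```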


Econometrica ◽  
2019 ◽  
Vol 87 (3) ◽  
pp. 1055-1069 ◽  
Author(s):  
Anders Bredahl Kock ◽  
David Preinerstorfer

Fan, Liao, and Yao (2015) recently introduced a remarkable method for increasing the asymptotic power of tests in high‐dimensional testing problems. If applicable to a given test, their power enhancement principle leads to an improved test that has the same asymptotic size, has uniformly non‐inferior asymptotic power, and is consistent against a strictly broader range of alternatives than the initially given test. We study under which conditions this method can be applied and show the following: In asymptotic regimes where the dimensionality of the parameter space is fixed as sample size increases, there often exist tests that cannot be further improved with the power enhancement principle. However, when the dimensionality of the parameter space increases sufficiently slowly with sample size and a marginal local asymptotic normality (LAN) condition is satisfied, every test with asymptotic size smaller than 1 can be improved with the power enhancement principle. While the marginal LAN condition alone does not allow one to extend the latter statement to all rates at which the dimensionality increases with sample size, we give sufficient conditions under which this is the case.
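
The power enhancement construction can be illustrated directly: a non-negative screening component J1 that vanishes with high probability under the null is added to a base statistic J0, leaving the asymptotic size unchanged while boosting power against sparse alternatives with a few large coordinates. The sketch below uses illustrative thresholds and constants, not the exact choices of Fan, Liao, and Yao (2015).

```python
# Minimal sketch of a power-enhanced test statistic J = J0 + J1 for testing
# whether a high-dimensional mean vector is zero (illustrative constants).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 400
delta = 2.0 * np.sqrt(np.log(p) / n)            # high threshold: rarely exceeded under H0

def enhanced_stat(X):
    m = X.mean(axis=0)                          # coordinate-wise sample means
    J0 = np.sum(n * m**2 - 1) / np.sqrt(2 * p)  # standardized sum-of-squares statistic
    J1 = np.sqrt(p) * np.sum(m**2 * (np.abs(m) > delta))   # screening / enhancement term
    return J0 + J1

null = enhanced_stat(rng.standard_normal((n, p)))
alt = enhanced_stat(rng.standard_normal((n, p)) + np.r_[np.full(3, 0.5), np.zeros(p - 3)])
print(f"J under H0: {null:.2f}   J under sparse alternative: {alt:.2f}")
```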

