Improving Bayesian statistics understanding in the age of Big Data with the bayesvl R package

2020 ◽  
Vol 4 ◽  
pp. 100016 ◽  
Author(s):  
Quan-Hoang Vuong ◽  
Viet-Phuong La ◽  
Minh-Hoang Nguyen ◽  
Manh-Toan Ho ◽  
Manh-Tung Ho ◽  
...  
2020 ◽  
Vol 11 ◽  
Author(s):  
Jure Demšar ◽  
Grega Repovš ◽  
Erik Štrumbelj

SoftwareX ◽  
2022 ◽  
Vol 17 ◽  
pp. 100944
Author(s):  
Parichit Sharma ◽  
Hasan Kurban ◽  
Mehmet Dalkilic

2021 ◽  
Vol 4 ◽  
Author(s):  
Frédéric Bertrand ◽  
Myriam Maumy-Bertrand

Fitting Cox models in a big data context (data whose volume, intensity, and complexity exceed the capacity of usual analytic tools) is often challenging, and even more so when some data are missing. We proposed algorithms that fit Cox models in high-dimensional settings using extensions of partial least squares (PLS) regression to the Cox model, some of which can cope with missing data. We recently extended our most recent algorithms to big data, thus allowing Cox models to be fitted to big data with missing values. When cross-validating standard or extended Cox models, the commonly used criterion is the cross-validated partial log-likelihood, computed with either a naive scheme or van Houwelingen's scheme, which makes efficient use of the death times of the left-out data in relation to the death times of all the data. Surprisingly, we show, through an extensive simulation study involving three different data-simulation algorithms, that both cross-validation methods fail with the extensions, whether straightforward or more involved, of PLS regression to the Cox model. This result is interesting for at least two reasons. First, several attractive features of PLS-based models (regularization, interpretability of the components, missing-data support, data visualization through biplots of individuals and variables, and even parsimony or group parsimony for sparse PLS or sparse group PLS based models) explain why these extensions are commonly used by statisticians, who usually select their hyperparameters by cross-validation. Second, these models are almost always featured in benchmarking studies that assess the performance of a new estimation technique in a high-dimensional or big data context, and they often show poor statistical properties there. We carried out a vast simulation study to evaluate more than a dozen potential cross-validation criteria, based either on the AUC or on prediction error.
Several of these criteria lead to the selection of a reasonable number of components. Using these newly identified cross-validation criteria to fit extensions of PLS regression to the Cox model, we performed a benchmark reanalysis that showed enhanced performance of these techniques. In addition, we proposed sparse group extensions of our algorithms and defined a new robust measure based on the Schmid score and the R coefficient of determination for least absolute deviation: the integrated R Schmid Score weighted. The R package used in this article is available on CRAN at http://cran.r-project.org/web/packages/plsRcox/index.html. The R package bigPLS will soon be available on CRAN and, until then, can be found on GitHub at https://github.com/fbertran/bigPLS.
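The naive and van Houwelingen cross-validation schemes discussed above can be made concrete with a small sketch. The authors' implementation is the plsRcox R package; what follows is only a minimal stdlib-Python illustration for a one-covariate Cox model without ties, with hypothetical simulated data, a crude grid-search fit, and Breslow-style risk sets. The naive scheme scores each held-out fold on its own; the van Houwelingen scheme scores l(all data) − l(training data), so held-out deaths are ranked against the risk sets of the full sample.

```python
import math
import random

def cox_partial_loglik(beta, x, time, event):
    """Cox partial log-likelihood for one covariate (no ties).
    Each event contributes eta_i - log(sum of exp(eta_j) over the
    subjects still at risk at that event time)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: time[i])
    ll = 0.0
    for pos, i in enumerate(order):
        if event[i]:
            risk = sum(math.exp(beta * x[j]) for j in order[pos:])
            ll += beta * x[i] - math.log(risk)
    return ll

def fit_beta(x, time, event):
    """Crude 1-D fit: maximise the partial log-likelihood on a grid."""
    grid = [b / 20 for b in range(-80, 81)]
    return max(grid, key=lambda b: cox_partial_loglik(b, x, time, event))

def cv_partial_loglik(x, time, event, k=5, scheme="vh", seed=1):
    """Cross-validated partial log-likelihood.
    scheme="naive": evaluate each held-out fold on its own.
    scheme="vh" (van Houwelingen): l(all data) - l(training data)."""
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sub = lambda ids, v: [v[i] for i in ids]
    total = 0.0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        beta = fit_beta(sub(train, x), sub(train, time), sub(train, event))
        if scheme == "naive":
            total += cox_partial_loglik(beta, sub(fold, x),
                                        sub(fold, time), sub(fold, event))
        else:
            total += (cox_partial_loglik(beta, x, time, event)
                      - cox_partial_loglik(beta, sub(train, x),
                                           sub(train, time), sub(train, event)))
    return total

# Hypothetical toy data: higher x raises the hazard (true beta = 0.8).
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(60)]
time = [rng.expovariate(math.exp(0.8 * xi)) for xi in x]
event = [1] * len(x)
print(cv_partial_loglik(x, time, event, scheme="naive"))
print(cv_partial_loglik(x, time, event, scheme="vh"))
```

Both criteria are sums of non-positive terms, so higher (closer to zero) is better; the sketch only illustrates the two scoring schemes, not the PLS extensions whose failure the paper reports.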


2018 ◽  
Author(s):  
Dominique Makowski

There is now general agreement that the Bayesian statistical framework is the right way forward for psychological science. Nevertheless, its flexibility is both its strength and its weakness, for there is no agreement about which indices should be computed or reported. Moreover, the lack of a consensual index of effect existence, analogous to the frequentist p value, likely contributes to the unnecessary murkiness that many unfamiliar readers perceive in Bayesian statistics. This study therefore describes and compares several indices of effect existence and provides intuitive visual representations of the "behaviour" of these indices in relation to traditional metrics such as sample size, effect size, and frequentist significance. The results help develop an intuitive understanding of the values that researchers report and support recommendations for describing Bayesian statistics, which is critical for the standardization of scientific reporting. We also provide a beginner-friendly implementation of automatic reports within the psycho R package.
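One widely discussed index of effect existence of the kind this abstract compares is the probability of direction (pd): the proportion of the posterior distribution that shares the sign of its median. The paper's implementation lives in the psycho R package; the following is only an illustrative stdlib-Python sketch with simulated posterior draws, not the package's code.

```python
import random
import statistics

def probability_of_direction(posterior):
    """Probability of direction (pd): the share of posterior draws that
    have the same sign as the posterior median. Ranges from about 0.5
    (no directional evidence) up to 1.0 (effect clearly non-zero)."""
    sign = 1 if statistics.median(posterior) >= 0 else -1
    return sum(1 for draw in posterior if draw * sign > 0) / len(posterior)

# Simulated posteriors: one effect clearly away from zero, one centred on it.
rng = random.Random(0)
clear_effect = [rng.gauss(0.4, 0.2) for _ in range(4000)]
null_effect = [rng.gauss(0.0, 0.2) for _ in range(4000)]
print(probability_of_direction(clear_effect))  # close to 1.0
print(probability_of_direction(null_effect))   # close to 0.5
```

The index depends only on posterior draws, which is why it can be visualized against sample size and effect size in the way the abstract describes.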


2021 ◽  
Vol 1 (1) ◽  
pp. 28-59
Author(s):  
V. L. Gorokhov ◽  
Yu. V. Baryshev ◽  
Pekka Teerikorpi ◽  
V. V. Vitkovsky ◽  
...  

The article offers an overview of, and a methodology for, combining Neyman–Pearson statistics and Bayesian statistics with integrated visualization of cognitive images for processing multidimensional astronomical observation data. These methods have been applied very successfully in astrophysics and can be used for a wide range of big data problems. Such a combination can be oriented towards identifying and forecasting emergency situations in complex systems. In the proposed approach, Bayesian integration and visualization of cognitive images rest on the ability of statistical algorithms and programs to identify, and make objective within cognitive probabilistic images, signs of differences in the spatial or temporal structure of the objects under observation.


PLoS ONE ◽  
2014 ◽  
Vol 9 (9) ◽  
pp. e108425 ◽  
Author(s):  
Alexey Miroshnikov ◽  
Erin M. Conlon
Keyword(s):  
Big Data ◽  

2019 ◽  
Author(s):  
Justin L. Balsor ◽  
David G. Jones ◽  
Kathryn M. Murphy

Abstract
New techniques for quantifying large numbers of proteins or genes are transforming the study of plasticity mechanisms in visual cortex (V1) into the era of big data. With those changes comes the challenge of applying new analytical methods designed for high-dimensional data. Studies of V1, however, can take advantage of the known functions that many proteins have in regulating experience-dependent plasticity to link big data analyses with neurobiological functions. Here we discuss two workflows and provide example R code for analyzing high-dimensional changes in a group of proteins (or genes) using two data sets. The first data set includes 7 neural proteins, 9 visual conditions, and 3 regions in V1 from an animal model of amblyopia. The second includes 23 neural proteins and 31 ages (20d-80yrs) from human post-mortem samples of V1. Each data set presents different challenges, and we describe using PCA, tSNE, and various clustering algorithms, including sparse high-dimensional clustering. We also describe a new approach for identifying high-dimensional features and using them to construct a plasticity phenotype that identifies neurobiological differences among clusters. We include an R package, "v1hdexplorer", that aggregates the various coding packages and custom visualization scripts written in RStudio.
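The PCA step in workflows like this reduces a proteins-by-samples matrix to a few components before clustering. The paper's workflow is the v1hdexplorer R package; the following is only a minimal stdlib-Python sketch of extracting the first principal component via power iteration on the sample covariance matrix, using hypothetical "protein expression" rows.

```python
import math
import random

def first_principal_component(rows, iters=200):
    """First PCA loading vector via power iteration: centre the data,
    form the sample covariance matrix, then repeatedly apply it to a
    vector and renormalise until it aligns with the top eigenvector."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    centred = [[r[j] - means[j] for j in range(p)] for r in rows]
    cov = [[sum(a[i] * a[j] for a in centred) / (n - 1) for j in range(p)]
           for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Hypothetical data: variables 0 and 1 co-vary strongly (a shared factor),
# variables 2 and 3 are independent noise, so the leading component
# should load mainly on the first two variables.
rng = random.Random(3)
data = []
for _ in range(100):
    shared = rng.gauss(0, 3)
    data.append([shared + rng.gauss(0, 0.3), shared + rng.gauss(0, 0.3),
                 rng.gauss(0, 0.3), rng.gauss(0, 0.3)])
pc1 = first_principal_component(data)
print([round(abs(loading), 2) for loading in pc1])
```

In a real analysis one would keep several components (and complement them with tSNE and clustering, as the abstract describes); power iteration is shown here only because it makes the linear-algebra core of PCA visible in a few lines.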


ASHA Leader ◽  
2013 ◽  
Vol 18 (2) ◽  
pp. 59-59
Keyword(s):  

Find Out About 'Big Data' to Track Outcomes

