Statistical Inference, Learning and Models in Big Data

2016 ◽  
Vol 84 (3) ◽  
pp. 371-389 ◽  
Author(s):  
Beate Franke ◽  
Jean‐François Plante ◽  
Ribana Roscher ◽  
En‐shiun Annie Lee ◽  
Cathal Smyth ◽  
...  
Technometrics ◽  
2016 ◽  
Vol 58 (3) ◽  
pp. 393-403 ◽  
Author(s):  
Elizabeth D. Schifano ◽  
Jing Wu ◽  
Chun Wang ◽  
Jun Yan ◽  
Ming-Hui Chen

Author(s):  
J.L. Peñaloza Figueroa ◽  
C. Vargas Perez

The increasing automation of data collection, in both structured and unstructured formats, together with the development of reading, concatenation and comparison algorithms and the growing analytical skills that characterize the era of Big Data, should not be considered only a technological achievement, but also an organizational, methodological and analytical challenge for knowledge, one that is necessary to generate opportunities and added value.

In fact, exploiting the potential of Big Data reaches into every field of community activity; and given its ability to extract behaviour patterns, we are interested in the challenges it poses for teaching and learning, particularly in the fields of statistical inference and economic theory.

Big Data can improve the understanding of the concepts, models and techniques used in both statistical inference and economic theory, and it can also generate reliable and robust short- and long-term predictions. These facts have driven the demand for analytical capabilities, which in turn encourages teachers and students to demand access to the massive information produced by individuals, companies, and public and private organizations in their transactions and inter-relationships.

Mass data (Big Data) is changing the way people access, understand and organize knowledge, which in turn is causing a shift in the approach to teaching statistics and economics, treating them as genuine ways of thinking rather than merely operational and technical disciplines. Hence, the question is how teachers can use automated collection and analytical skills to their advantage when teaching statistics and economics, and whether this will change what is taught and how it is taught.


Author(s):  
Jonathan I Watson

We present a novel technique for learning behaviors from a human-provided feedback signal that is distorted by systematic bias. Our technique, which we refer to as BASIL, models the feedback signal as separable into a heuristic evaluation of the utility of an action and a bias value drawn probabilistically from a parametric distribution, where the distribution is defined by unknown parameters. We present the general form of the technique as well as a specific algorithm that integrates it with the TAMER algorithm for bias values drawn from a normal distribution. We test our algorithm against standard TAMER in the domain of Tetris, using a synthetic oracle that provides feedback under varying levels of distortion. We find that our algorithm can learn very quickly under bias distortions that entirely stymie the learning of classic TAMER.
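The feedback model underlying this abstract can be illustrated with a toy sketch. This is not the authors' code, and it is not BASIL or TAMER; the action names, utilities, and bias parameters below are invented. It shows only the separability assumption: feedback = heuristic utility H(a) plus a bias drawn from a normal distribution, so a systematic bias shifts every action's feedback by the same amount and relative utilities remain recoverable.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: three actions with unknown true utilities, and a
# synthetic oracle whose feedback is utility plus a systematic Gaussian
# bias, mirroring the model feedback = H(a) + b with b ~ N(mu, sigma^2).
true_utility = {"left": 1.0, "right": 3.0, "drop": 2.0}
BIAS_MU, BIAS_SIGMA = -5.0, 0.5  # assumed (invented) bias parameters

def oracle(action):
    """Biased feedback: every raw signal here comes out strongly negative."""
    return true_utility[action] + random.gauss(BIAS_MU, BIAS_SIGMA)

# Average many biased feedback signals per action: the shared bias shifts
# every estimate equally, so the ranking of actions (what a policy needs)
# survives even though the absolute feedback values are badly distorted.
est = {a: statistics.mean(oracle(a) for _ in range(2000)) for a in true_utility}
best = max(est, key=est.get)
print(best)  # the highest-utility action despite the negative bias
```

The point of the sketch is only that bias and utility are separable in expectation; BASIL's actual contribution, per the abstract, is estimating the unknown bias-distribution parameters while learning.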


2021 ◽  
Vol 13 ◽  
pp. 164-169
Author(s):  
Yan Liu

Under current statistical conditions and technology, statistical data are published with a certain lag: reports are completed only after a delay, which can postpone judgments about the current economic situation. Real-time network analysis built on big-data methods has therefore gradually become the main mode of data analysis. This paper puts forward a basic framework for the statistical inference problem posed by non-probability sampling. Sample selection may rely on methods such as sample matching or link-tracing sampling, so that the non-probability sample obtained behaves like a probability sample and the statistical inference theory for probability samples can be applied. Both random and non-random sampling techniques still have many applicable scenarios: not only the traditional sampling-survey settings of the past, but also, keeping pace with the times, more modern information-rich settings.
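Sample matching, one of the methods the abstract mentions, can be sketched as follows. This is a toy illustration with invented numbers, not the paper's procedure: each unit of a small probability reference sample is matched to its nearest non-probability unit on a covariate, so the matched subsample mimics the reference sample's covariate distribution and can be analyzed roughly like a probability sample.

```python
import random
import statistics

random.seed(1)

# Invented setup: the outcome y increases with age, the non-probability
# (web) sample over-represents the young, and a small probability
# reference sample is representative of the population.
def y(age):
    """Outcome model: y depends on the matching covariate (age)."""
    return 2.0 * age + random.gauss(0, 1)

# Web sample skewed toward ages near 20; population ages uniform on 20-70.
web = [(a, y(a)) for a in (random.triangular(20, 70, 20) for _ in range(2000))]
ref_ages = [random.uniform(20, 70) for _ in range(300)]  # representative

# Sample matching: each reference unit is matched to the nearest web
# respondent by age, so the matched subsample inherits the reference
# sample's age distribution.
matched = [min(web, key=lambda u: abs(u[0] - a)) for a in ref_ages]

naive = statistics.mean(v for _, v in web)         # biased toward the young
matched_est = statistics.mean(v for _, v in matched)
true = 2.0 * 45                                    # population mean of y
print(round(naive), round(matched_est), true)      # matched estimate is closer
```

The correction works only to the extent that the matching covariate explains both selection into the web sample and the outcome; that caveat is exactly why the abstract frames non-probability inference as a problem requiring its own framework.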


2020 ◽  
Vol 18 (1) ◽  
pp. 2-35
Author(s):  
Miodrag M. Lovric

The Jeffreys-Lindley paradox is the most frequently cited divergence between the frequentist and Bayesian approaches to statistical inference. It is embedded in the very foundations of statistics and divides frequentist and Bayesian inference in an irreconcilable way. This paradox is the Gordian Knot of statistical inference and data science in the Zettabyte Era. If statistical science is to be ready for a revolution driven by the challenges of massive data set analysis, the first step is finally to resolve this anomaly. For more than sixty years, the Jeffreys-Lindley paradox has been under active discussion and debate. Many solutions have been proposed, none entirely satisfactory. The paradox and its extent have frequently been misunderstood by statisticians and non-statisticians alike. This paper aims to reassess the paradox, shed new light on it, and indicate how often it occurs in practice when dealing with Big Data.
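The paradox is easy to exhibit numerically in the textbook normal-mean setting (this is the standard construction, not code from the paper). Testing H0: θ = 0 against H1: θ ~ N(0, τ²) with known σ, a sample mean sitting exactly z standard errors from zero gives BF01 = √(1 + nτ²/σ²) · exp(−z²τ² / (2(τ² + σ²/n))). Holding z = 1.96 fixed (two-sided p ≈ 0.05, so the frequentist test rejects H0 at every n), the Bayes factor in favor of H0 grows like √n:

```python
import math

def bf01(n, z=1.96, sigma=1.0, tau=1.0):
    """Bayes factor for H0: theta = 0 vs H1: theta ~ N(0, tau^2),
    given a sample mean exactly z standard errors from zero."""
    s2 = sigma ** 2 / n                  # variance of the sample mean
    shrink = tau ** 2 / (tau ** 2 + s2)  # tends to 1 as n grows
    return math.sqrt(1 + tau ** 2 / s2) * math.exp(-0.5 * z ** 2 * shrink)

# Same p-value (~0.05, frequentist rejection) at every n, yet the
# Bayesian evidence swings ever more strongly toward H0:
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}   BF01 = {bf01(n):8.1f}")
```

This is precisely why the paradox matters at Big Data scale: at the sample sizes of the Zettabyte Era, a fixed significance threshold and a Bayes factor can point in opposite directions almost by construction.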


ASHA Leader ◽  
2013 ◽  
Vol 18 (2) ◽  
pp. 59-59
Find Out About 'Big Data' to Track Outcomes

