statistical knowledge
Recently Published Documents


TOTAL DOCUMENTS

195
(FIVE YEARS 58)

H-INDEX

18
(FIVE YEARS 2)

2022 ◽  
Author(s):  
Andrea Kóbor ◽  
Karolina Janacsek ◽  
Petra Hermann ◽  
Zsofia Zavecz ◽  
Vera Varga ◽  
...  

Previous research recognized that humans could extract statistical regularities of the environment to automatically predict upcoming events. However, it has remained unexplored how the brain encodes the distribution of statistical regularities if it continuously changes. To investigate this question, we devised an fMRI paradigm where participants (N = 32) completed a visual four-choice reaction time (RT) task containing statistical regularities. Two types of blocks involving the same perceptual elements alternated with one another throughout the task: While the distribution of statistical regularities was predictable in one block type, it was unpredictable in the other. Participants were unaware of the presence of statistical regularities and of their changing distribution across the subsequent task blocks. Based on the RT results, although statistical regularities were processed similarly in both the predictable and unpredictable blocks, participants acquired less statistical knowledge in the unpredictable as compared with the predictable blocks. Whole-brain random-effects analyses showed increased activity in the early visual cortex and decreased activity in the precuneus for the predictable as compared with the unpredictable blocks. Therefore, the actual predictability of statistical regularities is likely to be represented already at the early stages of visual cortical processing. However, decreased precuneus activity suggests that these representations are imperfectly updated to track the multiple shifts in predictability throughout the task. The results also highlight that the processing of statistical regularities in a changing environment could be habitual.


2021 ◽  
Vol 20 (2) ◽  
pp. 12
Author(s):  
FRANCISCA M. UBILLA ◽  
CLAUDIA VÁSQUEZ ◽  
FRANCISCO ROJAS ◽  
NÚRIA GORGORIÓ

We consider the ability to complete an investigative cycle as an indicator of the robustness of students' statistical knowledge. From this standpoint, we analyzed the written reports produced by primary education student teachers as they developed an investigative cycle at a Chilean and a Spanish university. In their development of the stages of the cycle we observed characteristics common to both institutions (for example, summary-type research questions and conclusions that are a simple concatenation of results) as well as differential features (among others, the data collection tools and techniques). Knowing how future teachers approach and understand an investigative cycle allows us to contribute ideas that can shape their training, building bridges between what they learn and what they will have to teach.


2021 ◽  
Vol 1 (2) ◽  
pp. 108-112
Author(s):  
Nyoman Sridana ◽  
Amrullah Amrullah ◽  
Hapipi Hapipi ◽  
Deni Hamdani ◽  
Nourma Pramestie Wulandari

This community service activity responds to teachers' still-low competency in statistical knowledge and in the evaluation of learning outcomes, so teachers need guidance on applying statistics to analyze data from evaluations of students' learning outcomes. Professional teachers should continually try to improve their knowledge of teaching materials and pedagogy. The method used in this service activity is mentoring, both offline and online (including lectures, question-and-answer sessions, assignments, and presentations). The activity is arranged in four stages: one stage is carried out offline and three stages are carried out online. The offline activities use lecture and question-and-answer methods, while the online activities use focus group discussions (FGD), question-and-answer sessions, and presentations. The expected output of this service activity is that the teachers become proficient in using statistical application programs to analyze valid and significant evaluation data on students' learning outcomes.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Sijia Chen ◽  
Zhizeng Luo ◽  
Tong Hua

Electromyography (EMG) signals can be used for clinical diagnosis and biomedical applications, so reducing noise and acquiring accurate signals is very important for their use in biomedical engineering. Since EMG signal noise has time-varying and random characteristics, the present study proposes an adaptive Kalman filter (AKF) denoising method based on an autoregressive (AR) model. The AR model is built from the EMG signal, and the relevant parameters are integrated to obtain the state-space model required for optimal AKF estimation, eliminating the noise in the EMG signal and restoring the damaged signal. Specifically, the AR model provides dynamic modeling and repair of signals distorted by noise, while the AKF adaptively filters the time-varying noise. The denoising method, based on the self-learning mechanism of the AKF, exhibits signal-tracking and adaptive-filtering capabilities: it adaptively regulates the model parameters in the absence of any prior statistical knowledge of the signal and noise, aiming at a stable denoising effect. A comparative analysis of the denoising effects of different methods demonstrates that the EMG signal denoising method based on the AR-AKF model has obvious advantages.
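The AR-plus-Kalman idea above can be sketched compactly: write the AR(p) model in companion (state-space) form and run a standard Kalman filter over the noisy signal. This is an illustrative, fixed-parameter sketch (the function name and the fixed noise variances q and r are our assumptions); the paper's AKF additionally adapts the noise statistics online.

```python
import numpy as np

def ar_kalman_denoise(y, a, q, r):
    """Denoise a 1-D signal with a Kalman filter whose state model is the
    AR(p) process x_t = a1*x_{t-1} + ... + ap*x_{t-p} + w_t.

    y : noisy observations, a : AR coefficients, q : process-noise
    variance, r : measurement-noise variance (fixed here; an adaptive
    filter would re-estimate q and r as it runs)."""
    p = len(a)
    # Companion-form state-transition matrix for the AR(p) model.
    F = np.zeros((p, p))
    F[0, :] = a
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0   # we observe the first state
    Q = np.zeros((p, p)); Q[0, 0] = q     # process noise enters the AR term
    x = np.zeros(p)
    P = np.eye(p)
    out = np.empty(len(y))
    for t, yt in enumerate(y):
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (yt - H @ x)).ravel()
        P = (np.eye(p) - K @ H) @ P
        out[t] = x[0]
    return out
```

With AR coefficients fitted to the clean signal (or estimated from data), the filtered output tracks the signal while suppressing wide-band measurement noise.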


2021 ◽  
Vol 31 (4) ◽  
pp. 1-15
Author(s):  
Christine S. M. Currie ◽  
Thomas Monks

We describe a practical two-stage algorithm, BootComp, for multi-objective optimization via simulation. Our algorithm finds a subset of good designs that a decision-maker can compare to identify the one that works best when considering all aspects of the system, including those that cannot be modeled. BootComp is designed to be straightforward to implement by a practitioner with basic statistical knowledge in a simulation package that does not support sequential ranking and selection. These requirements restrict us to a two-stage procedure that works with any distributions of the outputs and allows for the use of common random numbers. Comparisons with sequential ranking and selection methods suggest that it performs well, and we also demonstrate its use by analyzing a real simulation that determines the optimal ward configuration for a UK hospital.
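The distribution-free, common-random-numbers flavor of such a comparison can be sketched with a simple bootstrap over replications; resampling whole replications (rows) preserves the correlation that common random numbers induce across designs. The function name and scoring rule below are illustrative assumptions, not the published BootComp procedure.

```python
import numpy as np

def bootstrap_best_probs(outputs, n_boot=2000, minimize=True, seed=0):
    """Distribution-free comparison of simulation designs.

    outputs : dict mapping design name -> 1-D array of replication
    results, where replication i used common random numbers across all
    designs. Returns the bootstrap probability that each design has the
    best mean output."""
    rng = np.random.default_rng(seed)
    names = list(outputs)
    data = np.column_stack([outputs[k] for k in names])  # reps x designs
    n = data.shape[0]
    wins = np.zeros(len(names))
    for _ in range(n_boot):
        # Resample whole replications to preserve CRN-induced correlation.
        sample = data[rng.integers(0, n, n)]
        means = sample.mean(axis=0)
        best = means.argmin() if minimize else means.argmax()
        wins[best] += 1
    return dict(zip(names, wins / n_boot))
```

A decision-maker can then shortlist every design whose probability of being best exceeds a chosen threshold, rather than committing to a single winner.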


Author(s):  
Alexander Pastukhov ◽  
Lisa Koßmann ◽  
Claus-Christian Carbon

Abstract. When several multistable displays are viewed simultaneously, their perception is synchronized, as they tend to be in the same perceptual state. Here, we investigated the possibility that perception may reflect embedded statistical knowledge of physical interaction between objects for specific combinations of displays and layouts. We used a novel display with two ambiguously rotating gears and an ambiguous walker-on-a-ball display. Both stimuli produce a physically congruent perception when an interaction is possible (i.e., the gears counterrotate, and the ball rolls under the walker's feet). Next, we gradually manipulated the stimuli to either introduce abrupt changes to the potential physical interaction between objects or keep it constant despite changes in the visual stimulus. We characterized the data using four different models that assumed (1) independence of perception from the stimulus, (2) dependence on the stimulus's properties, (3) dependence on the physical configuration alone, and (4) an interaction between stimulus properties and physical configuration. We observed that for the ambiguous gears, perception correlated with the stimulus changes rather than with the possibility of physical interaction. The perception of the walker-on-a-ball was independent of the stimulus but depended instead on whether participants reported the relative motion of the two objects (perception was biased towards physically congruent motion) or the absolute motion of the walker alone (perception was independent of the rotation of the ball). Neither experiment supported the idea of embedded knowledge of physical interaction.


Synthese ◽  
2021 ◽  
Author(s):  
Alexandru Baltag ◽  
Soroush Rafiee Rad ◽  
Sonja Smets

Abstract. We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent's beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or more generally, truth in all the worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule), but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities, without changing their plausibility ordering. We look at the stability of beliefs under either of these types of learning, defining two related notions (safe belief and statistical knowledge), as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent's beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.
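One natural way to spell out the ‘plausibilistic’ Bayes rule mentioned above (a sketch under our own reading; the paper's exact formulation may differ): if $pl$ ranks the candidate distributions $\mu$, then sampling an outcome $o$ from the unknown distribution reweights plausibility by likelihood, leaving the set of candidates untouched:

```latex
pl'(\mu) \;\propto\; pl(\mu) \cdot \mu(o)
```

Higher-order information, by contrast, conditions by elimination: candidate distributions inconsistent with the learned constraint are removed, while the plausibility order on the survivors is unchanged, matching the two update types (1) and (2) in the abstract.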


2021 ◽  
Author(s):  
Harry Drewes ◽  
Gotthold Flaeschner ◽  
Peter Moeller

The Covid-19 pandemic has impacted human life all over the globe since its emergence in 2019. An epidemiological key indicator that gained particular recognition in politics and decision making is the time-dependent reproduction number R_t, which is commonly calculated by institutions responsible for disease control following a method presented by Cori et al. Here, we propose an improved as well as an alternative method, both of which make the calculation more stable against oscillations arising from daily variations in testing. Both methods can be used without great statistical knowledge or effort. They provide a smoother result without increasing the time lag, which is an advantage particularly on the timescale of weeks and might serve as a better basis for forecasts and the raising of alarms.
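The estimator that such methods build on relates new cases to the total infectiousness of earlier cases, weighted by the serial-interval distribution. A minimal sketch of this idea follows; the function name and the plain windowed ratio are our simplifications (the Cori et al. method places a Gamma posterior on R_t rather than taking a raw ratio):

```python
import numpy as np

def rt_windowed(incidence, serial_interval, window=7):
    """Estimate the time-dependent reproduction number R_t from daily
    case counts: R_t is the ratio of new infections to the total
    infectiousness of earlier cases, smoothed over a trailing window.

    incidence : daily new cases; serial_interval : discretized
    serial-interval weights w_1..w_k summing to 1. Returns an array of
    R_t values (NaN where the estimate is undefined)."""
    I = np.asarray(incidence, dtype=float)
    w = np.asarray(serial_interval, dtype=float)
    k = len(w)
    # Total infectiousness Lambda_t = sum_{s=1..k} w_s * I_{t-s}
    lam = np.full(len(I), np.nan)
    for t in range(k, len(I)):
        lam[t] = w @ I[t - k:t][::-1]
    rt = np.full(len(I), np.nan)
    for t in range(k + window - 1, len(I)):
        num = I[t - window + 1:t + 1].sum()
        den = lam[t - window + 1:t + 1].sum()
        rt[t] = num / den if den > 0 else np.nan
    return rt
```

With flat incidence the estimate settles at R_t = 1; growing incidence yields R_t > 1. The window length trades smoothness against time lag, which is exactly the tension the proposed methods address.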


2021 ◽  
Vol 13 (8) ◽  
pp. 203
Author(s):  
Klaus Kammerer ◽  
Manuel Göster ◽  
Manfred Reichert ◽  
Rüdiger Pryss

A deep understanding of a field of research is valuable for academic researchers. In addition to technical knowledge, this includes knowledge about subareas, open research questions, and social communities (networks) of individuals and organizations within a given field. With bibliometric analyses, researchers can acquire quantitatively valuable knowledge about a research area by using bibliographic information on academic publications provided by bibliographic data providers. Bibliometric analyses include the calculation of bibliometric networks to describe affiliations or similarities of bibliometric entities (e.g., authors) and to group them into clusters representing subareas or communities. Calculating and visualizing bibliometric networks is a nontrivial and time-consuming data science task that requires highly skilled individuals. In addition to domain knowledge, researchers often need statistical knowledge and programming skills, or must use software tools with limited functionality and usability. In this paper, we present the ambalytics bibliometric platform, which reduces the complexity of bibliometric network analysis and the visualization of results. It accompanies users through the process of bibliometric analysis and eliminates the need for programming skills and statistical knowledge, while preserving advanced functionality, such as algorithm parameterization, for experts. As a proof of concept, and as an example of bibliometric analysis outcomes, the calculation of research-fronts networks based on a hybrid similarity approach is shown. Being designed to scale, ambalytics makes use of distributed-systems concepts and technologies: it is based on the microservice architecture concept and uses the Kubernetes framework for orchestration. This paper presents the initial building block of a comprehensive bibliometric analysis platform, ambalytics, which aims at high usability as well as scalability.
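As a toy illustration of the bibliometric networks such a platform computes, a weighted co-authorship network can be built directly from the author lists of bibliographic records. This is a deliberately minimal, stdlib-only sketch of the general idea, not ambalytics' research-fronts or hybrid-similarity pipeline:

```python
from collections import Counter
from itertools import combinations

def coauthorship_network(papers):
    """Build a weighted co-authorship network from bibliographic records.

    papers : list of author-name lists, one per publication. Returns a
    Counter mapping each sorted author pair to the number of joint
    papers; edge weights of this kind feed clustering algorithms that
    reveal communities within a field."""
    edges = Counter()
    for authors in papers:
        # De-duplicate names within a paper and avoid self-edges.
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1
    return edges
```

From these weighted edges, community-detection or clustering steps (which a platform like ambalytics parameterizes for expert users) can then group authors into subareas.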


2021 ◽  
Vol 14 (7) ◽  
pp. 5199-5224
Author(s):  
Niklas Benedikt Blum ◽  
Bijan Nouri ◽  
Stefan Wilbert ◽  
Thomas Schmidt ◽  
Ontje Lünsdorf ◽  
...  

Abstract. Cloud base height (CBH) is an important parameter for many applications such as aviation, climatology or solar irradiance nowcasting (forecasting for the next seconds to hours ahead). The latter application is of increasing importance for the operation of distribution grids and photovoltaic power plants, energy storage systems and flexible consumers. To nowcast solar irradiance, systems based on all-sky imagers (ASIs), cameras monitoring the entire sky dome above their point of installation, have been demonstrated. Accurate knowledge of the CBH is required to nowcast the spatial distribution of solar irradiance around the ASI's location at a resolution down to 5 m. To measure the CBH, two ASIs located at a distance of usually less than 6 km can be combined into an ASI pair. However, the accuracy of such systems is limited. We present and validate a method to measure the CBH using a network of ASIs to enhance accuracy. To the best of our knowledge, this is the first method to measure the CBH with a network of ASIs that has been demonstrated experimentally. In this study, the deviations of 42 ASI pairs from a ceilometer reference are studied and characterized by camera distance. The ASI pairs are formed from seven ASIs and feature camera distances of 0.8…5.7 km. Each of the 21 two-ASI combinations yields two independent ASI pairs, as the roles of main and auxiliary camera can be swapped. The deviations found are compiled into conditional probabilities that give how probable a certain CBH reading from an ASI pair is, given that the true CBH takes on a specific value. Based on such statistical knowledge, the inference step estimates the likeliest actual CBH from the readings of all 42 ASI pairs. The validation shows that ASI pairs with a small camera distance (especially <1.2 km) are accurate for low clouds (CBH<4 km), whereas ASI pairs with a camera distance of more than 3 km provide smaller deviations for greater CBH. No single ASI pair provides the most accurate measurements under all conditions. The presented network of ASIs at different distances yields CBH measurements that are more accurate under all cloud conditions than those of any single ASI pair.
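The inference step described above — combining many pair readings via their conditional probabilities — amounts to maximum-likelihood estimation over a discretized CBH grid if the pairs' errors are treated as independent. A sketch under that assumption (the paper's conditional probabilities are estimated empirically per camera distance; the Gaussian likelihood used in the example is ours):

```python
import numpy as np

def infer_cbh(readings, likelihood, cbh_grid):
    """Pick the candidate CBH that maximizes the joint likelihood of all
    ASI-pair readings: argmax_h sum_i log P(reading_i | CBH = h).

    readings : CBH readings from the individual ASI pairs;
    likelihood(r, h) : P(reading r | true CBH h) for one pair;
    cbh_grid : candidate CBH values to evaluate."""
    loglik = np.zeros(len(cbh_grid))
    for r in readings:
        lk = np.array([likelihood(r, h) for h in cbh_grid])
        # Clip to avoid log(0) for grid points the likelihood rules out.
        loglik += np.log(np.clip(lk, 1e-300, None))
    return cbh_grid[np.argmax(loglik)]
```

In practice each pair would carry its own empirically estimated likelihood (a short-baseline pair being sharp at low CBH and diffuse at high CBH, and vice versa), which is precisely why the network outperforms any single pair.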

