A Simple Model to Estimate the Percentage of Motor Plan Reuse From Hysteresis Effect Size

2019 ◽  
Vol 10 ◽  
Author(s):  
Christoph Schütz ◽  
Thomas Schack


2020 ◽  
Vol 12 (1) ◽  
pp. 14-29
Author(s):  
László Bognár ◽  
Antal Joós ◽  
Bálint Nagy

Abstract. In this paper, the conditions and findings of a simulation study are presented for assessing the effect of users' security awareness on computer network vulnerability in risky cyber-attack situations at a given business. First, a simple model is set up to classify groups of users according to their skills and awareness; then probabilities are assigned to each class describing the likelihood of a dangerous reaction in case of a cyber attack. To quantify the level of network vulnerability, a metric developed in earlier work is used. This metric gives the approximate probability of an infection at a given business with well-specified parameters for its location, the type of attack, the protections in place, etc. The findings mirror the expected tendencies, namely that if the number of conscious users is on the
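The model described in the abstract (user classes with assigned probabilities of a dangerous reaction, combined into an overall infection probability) can be sketched roughly as follows. The function name, class shares, and probabilities are illustrative assumptions, and independence across users is an added simplification, not a claim from the paper:

```python
# Hypothetical sketch of the user-class model: each awareness class has a
# share of the workforce and a per-user probability of reacting dangerously
# to an attack; the business-level risk is the chance that at least one
# user reacts dangerously. All numbers are illustrative.

def infection_probability(class_shares, risky_reaction_probs, n_users):
    """P(at least one dangerous reaction) among n_users drawn from the classes."""
    # Expected per-user probability of a dangerous reaction
    p_user = sum(s * p for s, p in zip(class_shares, risky_reaction_probs))
    # Simplifying assumption: users react independently
    return 1.0 - (1.0 - p_user) ** n_users

# Three illustrative classes: unaware, average, security-conscious
shares = [0.2, 0.5, 0.3]
probs = [0.30, 0.10, 0.02]
risk = infection_probability(shares, probs, 20)
```

Even with a modest per-user risk, the business-level probability grows quickly with the number of users, which is why raising the share of conscious users matters in such a model.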


2017 ◽  
Author(s):  
Andrei Gorea ◽  
Lionel Granjon ◽  
Dov Sagi

Abstract. Are we aware of the outcome of our actions? Participants pointed rapidly at a screen location marked by a transient visual target (T), with and without seeing their hand, and were asked to estimate (E) their landing location (L) using the same finger but without time constraints. We found that L and E are systematically and idiosyncratically shifted away from their corresponding targets (T, L), suggesting unawareness. Moreover, E was biased away from L, toward T (by 21% and 37%, with and without visual feedback, respectively), in line with a putative Bayesian account of the results assuming a strong prior in the absence of vision. However, the precisions of L (the assumed prior) and E (the assumed posterior) were practically identical, arguing against such an account. Instead, the results are well accounted for by a simple model positing that participants' E is set to the planned rather than the actual L. When asked to estimate their landing location, participants appeared to reenact their original motor plan.
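The precision argument rests on standard Bayesian cue combination: a posterior formed by combining a prior with sensory evidence must be more precise than the prior alone, since precisions (inverse variances) add. A minimal sketch with illustrative values:

```python
# Standard Gaussian cue combination: precisions add, and the posterior
# mean is the precision-weighted average of prior and likelihood means.
# Values are illustrative, not data from the study.

def posterior(mu_prior, prec_prior, mu_like, prec_like):
    prec_post = prec_prior + prec_like  # posterior precision exceeds the prior's
    mu_post = (prec_prior * mu_prior + prec_like * mu_like) / prec_post
    return mu_post, prec_post

mu, prec = posterior(mu_prior=0.0, prec_prior=4.0, mu_like=1.0, prec_like=1.0)
# the posterior mean is pulled toward the prior, and prec > prec_prior
```

Because the observed precisions of E and L were practically identical rather than E being sharper, this additive-precision signature is missing, which is the abstract's case against the Bayesian account.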


2016 ◽  
Vol 1 (15) ◽  
pp. 79-83
Author(s):  
Ed Bice ◽  
Kristine E. Galek

Dysphagia is common in patients with dementia. Dysphagia occurs as a result of changes in the sensory and motor function of the swallow (Easterling, 2007). It is known that the central nervous system can undergo experience-dependent plasticity, even in individuals with dementia (Park & Bischof, 2013). The purpose of this study was to explore whether the use of neuroplastic principles would improve the swallow motor plan and produce positive outcomes for a patient in severe cognitive decline. The disordered swallow motor plan was manipulated by focusing on the neuroplastic principles of frequency (repetition), velocity of movement (speed of presentation), reversibility ("use it or lose it"), specificity and adaptation, intensity (bolus size), and salience (Crary & Carnaby-Mann, 2008). After five therapeutic sessions, the patient progressed from holding solids in her mouth with decreased swallow initiation to independently consuming a regular diet with a full range of liquids, with no oral retention and no verbal cues.


2000 ◽  
Vol 8 (1) ◽  
pp. 18-24 ◽  
Author(s):  
Gert Kaluza ◽  
Hans-Henning Schulze

Abstract. The evaluation of interventions for prevention and health promotion is a central task of health psychology research. Frequent methodological problems in such evaluation studies concern 1. baseline differences in non-randomized study designs, 2. dependence of observations in group intervention studies, 3. capitalization on error probabilities due to a large number of dependent variables, and 4. judging the practical relevance of statistically significant intervention effects. As pragmatic solutions, the following are recommended, among others: 1. the use of analysis-of-covariance strategies, 2. the computation of intraclass correlations and, where necessary, data analysis at the level of group means, 3. a reduction of the number of dependent variables by principal component analysis, together with an alpha adjustment that takes test power into account ("compromise power analysis"), and 4. the conversion of common effect sizes into percentage success rates ("binomial effect size display").
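The "binomial effect size display" mentioned in point 4 re-expresses a correlation r as two success rates, 0.50 + r/2 for the treatment group and 0.50 - r/2 for the control group; a Cohen's d can first be converted to r. These are the standard textbook formulas, and the values below are illustrative:

```python
import math

# Binomial effect size display (BESD): turn an effect size into a pair of
# percentage success rates. The conversion d -> r assumes equal group sizes.

def d_to_r(d):
    return d / math.sqrt(d * d + 4)

def besd(r):
    return 0.50 + r / 2, 0.50 - r / 2

r = d_to_r(0.5)           # a medium effect, d = 0.5
treat, control = besd(r)  # roughly 62% vs. 38% success
```

The appeal of the display is practical: a "medium" effect that sounds abstract as d = 0.5 reads as a clear difference in success rates.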


2006 ◽  
Vol 20 (3) ◽  
pp. 186-194 ◽  
Author(s):  
Susanne Mayr ◽  
Michael Niedeggen ◽  
Axel Buchner ◽  
Guido Orgs

Responding to a stimulus that previously had to be ignored is usually slowed down (negative priming effect). This study investigates the reaction time and ERP effects of the negative priming phenomenon in the auditory domain. Thirty participants had to categorize sounds as musical instruments or animal voices. Reaction times were slowed down in the negative priming condition relative to two control conditions. This effect was stronger for slow reactions (above the intraindividual median) than for fast reactions (below the intraindividual median). ERP analysis revealed a parietally located negativity in the negative priming condition compared to the control conditions between 550 and 730 ms poststimulus. This replicates the findings of Mayr, Niedeggen, Buchner, and Pietrowsky (2003). The ERP correlate was more pronounced for slow trials (above the intraindividual median) than for fast trials (below the intraindividual median). The dependency of the negative priming effect size on reaction time level, found in both the reaction time and the ERP analyses, is consistent with both the inhibition and the episodic retrieval accounts of negative priming. A methodological artifact explanation of this effect-size dependency is discussed and discarded.


Methodology ◽  
2019 ◽  
Vol 15 (3) ◽  
pp. 97-105
Author(s):  
Rodrigo Ferrer ◽  
Antonio Pardo

Abstract. In a recent paper, Ferrer and Pardo (2014) tested several distribution-based methods designed to assess when test scores obtained before and after an intervention reflect a statistically reliable change. However, we still do not know how these methods perform from the point of view of false negatives. For this purpose, we simulated change scenarios (different effect sizes in a pre-post-test design) with distributions of different shapes and with different sample sizes. For each simulated scenario, we generated 1,000 samples. In each sample, we recorded the false-negative rate of the five distribution-based methods with the best performance from the point of view of false positives. Our results reveal unacceptable rates of false negatives even for effects of very large size, ranging from 31.8% in an optimistic scenario (effect size of 2.0 and a normal distribution) to 99.9% in the worst scenario (effect size of 0.2 and a highly skewed distribution). Our results therefore suggest that the widely used distribution-based methods must be applied with caution in clinical contexts, because they need huge effect sizes to detect a true change. We also offer some considerations regarding effect sizes and the commonly used cut-off points that allow for more precise estimates.
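The simulation logic can be illustrated with one such distribution-based method, the Jacobson-Truax reliable change index: generate pre-post pairs with a true change of a given effect size, apply the 1.96 cut-off, and count the misses. The reliability, SD, and function name below are assumptions for the sketch, not the authors' actual parameters, so the rates differ from those reported:

```python
import math
import random

# Illustrative re-creation of the false-negative simulation: every case has
# a true change of effect_size * sd, so any non-significant RCI is a miss.

def false_negative_rate(effect_size, n=1000, sd=10.0, rel=0.8, seed=1):
    rng = random.Random(seed)
    se_diff = sd * math.sqrt(2.0 * (1.0 - rel))  # SE of the difference score
    misses = 0
    for _ in range(n):
        pre = rng.gauss(50.0, sd)
        post = pre + effect_size * sd + rng.gauss(0.0, se_diff)
        rci = (post - pre) / se_diff             # Jacobson-Truax index
        if abs(rci) < 1.96:                      # true change not detected
            misses += 1
    return misses / n

rate = false_negative_rate(2.0)  # misses remain even for a huge effect
```

Even under these favorable assumptions (normal scores, good reliability), a true effect of d = 2.0 is missed in a nontrivial share of cases, which mirrors the abstract's central point.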


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
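The inflation mechanism the abstract starts from can be shown in a few lines: simulate primary studies of a small true effect, keep only "significant" results, and compare the surviving mean to the truth. Sample sizes, the significance filter, and the function name are illustrative assumptions, not the paper's simulation design:

```python
import math
import random

# Sketch of how a significance filter inflates meta-analytic estimates:
# studies estimating a small true effect are generated, and only results
# with z > 1.96 "get published".

def simulate(true_d=0.2, n_per_group=30, n_studies=2000, seed=7):
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n_per_group)  # large-sample SE of d, equal groups
    all_d, published = [], []
    for _ in range(n_studies):
        d = rng.gauss(true_d, se)
        all_d.append(d)
        if d / se > 1.96:              # publication bias: significant only
            published.append(d)
    return sum(all_d) / len(all_d), sum(published) / len(published)

unbiased, biased = simulate()
# the published-only mean sits far above the true effect of 0.2
```

Bias-detection tools must recover the truth from the published column alone, which is why heterogeneity and strong selection make the task so hard.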


2018 ◽  
Vol 49 (5) ◽  
pp. 303-309 ◽  
Author(s):  
Jedidiah Siev ◽  
Shelby E. Zuckerman ◽  
Joseph J. Siev

Abstract. In a widely publicized set of studies, participants who were primed to consider unethical events preferred cleansing products more than did those primed with ethical events ( Zhong & Liljenquist, 2006 ). This tendency to respond to moral threat with physical cleansing is known as the Macbeth Effect. Several subsequent efforts, however, did not replicate this relationship. The present manuscript reports the results of a meta-analysis of 15 studies testing this relationship. The weighted mean effect size was small across all studies (g = 0.17, 95% CI [0.04, 0.31]), and nonsignificant across studies conducted in independent laboratories (g = 0.07, 95% CI [−0.04, 0.19]). We conclude that there is little evidence for an overall Macbeth Effect; however, there may be a Macbeth Effect under certain conditions.
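A weighted mean effect size of the kind reported here is typically a fixed-effect inverse-variance average with a normal-approximation confidence interval. A minimal sketch with made-up study values (not the 15 studies from the manuscript):

```python
import math

# Fixed-effect inverse-variance meta-analysis: each study's g is weighted
# by 1/variance, and the pooled SE is sqrt(1 / sum of weights).

def weighted_mean_effect(gs, variances):
    weights = [1.0 / v for v in variances]
    mean = sum(w * g for w, g in zip(weights, gs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

gs = [0.45, 0.10, -0.05, 0.30]        # illustrative Hedges' g values
variances = [0.04, 0.02, 0.03, 0.05]  # illustrative sampling variances
mean, (lo, hi) = weighted_mean_effect(gs, variances)
```

A confidence interval that includes zero, as in the independent-laboratory subset above (g = 0.07, 95% CI [-0.04, 0.19]), is what makes the pooled effect nonsignificant.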

