The Effects of Mental Practice on Motor Skill Learning and Performance: A Meta-analysis

1983 ◽  
Vol 5 (1) ◽  
pp. 25-57 ◽  
Author(s):  
Deborah L. Feltz ◽  
Daniel M. Landers

A longstanding research question in the sport psychology literature has been whether a given amount of mental practice prior to performing a motor skill will enhance one's subsequent performance. The research literature, however, has not provided clear-cut answers to this question, which prompted the present, more comprehensive review of existing research using the meta-analytic strategy proposed by Glass (1977). From the 60 studies yielding 146 effect sizes, the overall average effect size was .48, which suggests, as did Richardson (1967a), that mentally practicing a motor skill improves performance somewhat relative to no practice at all. Effect sizes were also compared on a number of variables thought to moderate the effects of mental practice. These comparisons indicated that studies employing cognitive tasks had larger average effect sizes than those employing motor or strength tasks, and that published studies had larger average effect sizes than unpublished studies. These findings are discussed in relation to several existing explanations for mental practice, and four theoretical propositions are advanced.
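As a hedged illustration of the meta-analytic strategy described above, the sketch below computes a standardized effect size for each study (Glass's delta: treatment mean minus control mean, divided by the control-group standard deviation) and averages them. The per-study numbers are invented for illustration, not data from this review.

```python
# Minimal sketch of Glass's (1977) meta-analytic strategy: each study
# contributes one standardized effect size, and the review reports the
# average across studies. All numbers below are hypothetical.

def glass_delta(mean_treatment, mean_control, sd_control):
    """Standardized effect size for one study (Glass's delta)."""
    return (mean_treatment - mean_control) / sd_control

# Hypothetical per-study summaries: (treatment mean, control mean, control SD).
studies = [
    (12.0, 10.0, 4.0),   # e.g., mental practice vs. no practice
    (8.5, 8.0, 2.0),
    (30.0, 24.0, 10.0),
]

effect_sizes = [glass_delta(mt, mc, sd) for mt, mc, sd in studies]
overall = sum(effect_sizes) / len(effect_sizes)
print(round(overall, 2))
```

Moderator comparisons like those in the abstract amount to computing this average separately for subsets of studies (e.g., cognitive vs. motor tasks) and contrasting the results.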

2019 ◽  
Vol 43 (3-4) ◽  
pp. 111-151 ◽  
Author(s):  
Richard P. Phelps

Background: Test frequency, stakes associated with educational tests, and feedback from test results have been identified in the research literature as relevant factors in student achievement. Objectives: Summarize the separate and joint contribution to student achievement of these three treatments and their interactions via multivariable meta-analytic techniques using a database of English-language studies spanning a century (1910–2010), comprising 149 studies and 509 effect size estimates. Research design: Analysis employed robust variance estimation. Considered as potential moderators were hundreds of study features comprising various test designs and test administration, demographic, and source document characteristics. Subjects: Subjects were students at all levels, from early childhood to adult, mostly from the United States but also eight other countries. Results: We find a summary effect size of 0.84 for the three treatments collectively. Further analysis suggests benefits accrue to the incremental addition of combinations of testing and feedback or stakes and feedback. Moderator analysis shows higher effect sizes associated with the following study characteristics: more recent year of publication, summative (rather than formative) testing, constructed (rather than selected) item response formats, alignment of subject matter between pre- and posttests, and recognition/recall (rather than core subjects, art, or physical education). Conversely, lower effect sizes are associated with postsecondary students (rather than early childhood–upper secondary), special education population, larger study population, random assignment (rather than another sampling method), use of shadow test as outcome measure, designation of individuals (rather than groups) as units of analysis, and academic (rather than corporate or government) research.


2016 ◽  
Vol 38 (5) ◽  
pp. 441-457 ◽  
Author(s):  
Jean-Charles Lebeau ◽  
Sicong Liu ◽  
Camilo Sáenz-Moncaleano ◽  
Susana Sanduvete-Chaves ◽  
Salvador Chacón-Moscoso ◽  
...  

Research linking the “quiet eye” (QE) period to subsequent performance has not been systematically synthesized. In this paper we review the literature on the link between the two through nonintervention (Synthesis 1) and intervention (Synthesis 2) studies. In the first synthesis, 27 studies with 38 effect sizes resulted in a large mean effect (d = 1.04) reflecting differences between experts’ and novices’ QE periods, and a moderate effect size (d = 0.58) comparing QE periods for successful and unsuccessful performances within individuals. Studies reporting QE duration as a percentage of the total time revealed a larger mean effect size than studies reporting an absolute duration (in milliseconds). The second synthesis of 9 articles revealed very large effect sizes for both the quiet-eye period (d = 1.53) and performance (d = 0.84). QE also showed some ability to predict performance effects across studies.


2015 ◽  
Vol 24 (2) ◽  
pp. 237-255 ◽  
Author(s):  
Patricia L. Cleave ◽  
Stephanie D. Becker ◽  
Maura K. Curran ◽  
Amanda J. Owen Van Horne ◽  
Marc E. Fey

Purpose This systematic review and meta-analysis critically evaluated the research evidence on the effectiveness of conversational recasts in grammatical development for children with language impairments. Method Two different but complementary reviews were conducted and then integrated. Systematic searches of the literature resulted in 35 articles for the systematic review. Studies that employed a wide variety of study designs were involved, but all examined interventions where recasts were the key component. The meta-analysis only included studies that allowed the calculation of effect sizes, but it did include package interventions in which recasts were a major part. Fourteen studies were included, 7 of which were also in the systematic review. Studies were grouped according to research phase and were rated for quality. Results Study quality and thus strength of evidence varied substantially. Nevertheless, across all phases, the vast majority of studies provided support for the use of recasts. Meta-analyses found average effect sizes of .96 for proximal measures and .76 for distal measures, reflecting a positive benefit of about 0.75 to 1.00 standard deviation. Conclusion The available evidence is limited, but it is supportive of the use of recasts in grammatical intervention. Critical features of recasts in grammatical interventions are discussed.


2021 ◽  
Vol 12 ◽  
Author(s):  
Peipei Mao ◽  
Zhihui Cai ◽  
Jinbo He ◽  
Xinjie Chen ◽  
Xitao Fan

Science education is attracting increasing attention, and many researchers have focused on the attitude-achievement relationship in science, but no consistent conclusion has emerged. Using a three-level meta-analytic approach, the current study investigated the relationship between attitude toward science and academic achievement in science learning among primary and secondary school students, and explored whether study characteristics could account for the inconsistent findings regarding this relationship in the research literature. A total of 37 studies with 132 effect sizes, involving 1,042,537 participants, were identified. The meta-analytic results revealed an overall positive and moderate relationship between attitude toward science and science learning achievement (r = 0.248, p < 0.001). This association was moderated by the type of attitude: effect sizes were larger for self-efficacy than for interest, the societal relevance of attitude toward science, or mixed attitude. Moreover, effect sizes were larger for studies using unstandardized measures of science achievement than for those using standardized measures. Possible explanations for these findings and their implications for future research directions are also discussed in this review.


2021 ◽  
Author(s):  
Loretta Gasparini ◽  
Sho Tsuji ◽  
Christina Bergmann

Meta-analyses provide researchers with an overview of the body of evidence on a topic, quantifying effect sizes and the role of moderators while weighting studies according to their precision. We provide a guide for conducting a transparent and reproducible meta-analysis in the field of developmental psychology within the framework of the MetaLab platform, in 10 steps: 1) Choose a topic for your meta-analysis, 2) Formulate your research question and specify inclusion criteria, 3) Preregister and carefully document all stages of your meta-analysis, 4) Conduct the literature search, 5) Collect and screen records, 6) Extract data from eligible studies, 7) Read the data into analysis software and compute effect sizes, 8) Create meta-analytic models to assess the strength of the effect and investigate possible moderators, 9) Visualize your data, 10) Write up and promote your meta-analysis. Meta-analyses can inform future studies through power calculations, by identifying robust methods, and by exposing research gaps. By adding a new meta-analysis to MetaLab, datasets across multiple topics of developmental psychology can be synthesized, and each dataset can be maintained as a living, community-augmented meta-analysis to which researchers add new data, allowing for a cumulative approach to evidence synthesis.
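The effect-size pooling in steps 7 and 8 above can be sketched as a minimal fixed-effect (inverse-variance) model, in which more precise studies receive more weight. This is a hedged illustration with invented numbers; in practice, analyses of this kind are usually run with dedicated software such as the R package metafor.

```python
import math

# Inverse-variance pooling: weight each study's effect size by the
# reciprocal of its sampling variance, so precise studies count for more.

def pooled_effect(effects, variances):
    """Weighted mean effect size and its standard error (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total_w
    se = math.sqrt(1.0 / total_w)
    return mean, se

# Hypothetical effect sizes (Cohen's d) and their sampling variances.
effects = [0.40, 0.55, 0.20]
variances = [0.04, 0.10, 0.02]

mean, se = pooled_effect(effects, variances)
ci = (mean - 1.96 * se, mean + 1.96 * se)  # approximate 95% confidence interval
```

A random-effects model, which MetaLab-style syntheses typically prefer, extends this by adding an estimate of between-study variance to each study's variance before weighting.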


1992 ◽  
Vol 17 (4) ◽  
pp. 363-374 ◽  
Author(s):  
Donald B. Rubin

A traditional meta-analysis can be thought of as a literature synthesis, in which a collection of observed studies is analyzed to obtain summary judgments about overall significance and size of effects. Many aspects of the current set of statistical tools for meta-analysis are highly useful—for example, the development of clear and concise effect-size indicators with associated standard errors. I am less happy, however, with more esoteric statistical techniques and their implied objects of estimation (i.e., their estimands) which are tied to the conceptualization of average effect sizes, weighted or otherwise, in a population of studies. In contrast to these average effect sizes of literature synthesis, I believe that the proper estimand is an effect-size surface, which is a function only of scientifically relevant factors, and which can only be estimated by extrapolating a response surface of observed effect sizes to a region of ideal studies. This effect-size surface perspective is presented and contrasted with the literature synthesis perspective. The presentation is entirely conceptual. Moreover, it is designed to be provocative, thereby prodding researchers to rethink traditional meta-analysis and ideally stimulating meta-analysts to attempt effect-surface estimations.


1982 ◽  
Vol 4 (1) ◽  
pp. 52-63 ◽  
Author(s):  
Steven G. Zecker

Although mental practice has often been demonstrated to result in improved learning of a motor skill, theoretical accounts of the reasons for this improvement are lacking. The present experiment examined the role of knowledge of results (KR) in motor skill learning, because KR is believed to be crucial to such learning, yet is lacking during mental practice. Subjects in four conditions (mental practice, physical practice, physical practice without KR, and control) tossed beanbags at a target. Results showed that of the four conditions, mental practice showed the largest performance increment, whereas physical practice showed a decrement attributed to massed practice without adequate rest periods. Results suggest that (a) knowledge of results is not always essential for improved performance; (b) mental practice is most beneficial following sufficient experience with the task; and (c) mental practice may be best suited for a massed practice learning situation.


BMJ Open ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. e045841 ◽  
Author(s):  
David Matthews ◽  
Edith Elgueta Cancino ◽  
Deborah Falla ◽  
Ali Khatibi

Introduction: Motor skill learning is intrinsic to living. Pain demands attention and may disrupt non-pain-related goals such as learning new motor skills. Although rehabilitation approaches have used motor skill learning for individuals in pain, there is uncertainty about the impact of pain on learning motor skills. Methods and analysis: The protocol of this systematic review has been designed and is reported in accordance with criteria set out by the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols guidelines. Web of Science, Scopus, MEDLINE, Embase and CINAHL databases; key journals; and grey literature will be searched up until March 2021, using subject-specific searches. Two independent assessors will oversee searching, screening and extraction of data and assessment of risk of bias. Both behavioural and activity-dependent plasticity outcome measures of motor learning will be synthesised and presented. The quality of evidence will be assessed using the Grading of Recommendations Assessment, Development and Evaluation approach. Ethics and dissemination: No patient data will be collected, and therefore ethical approval was not required for this review. The results of this review will provide further understanding of the complex effects of pain and may guide clinicians in their use of motor learning strategies for the rehabilitation of individuals in pain. The results will be published in a peer-reviewed journal and presented at scientific conferences. PROSPERO registration number: CRD42020213240.


2018 ◽  
Author(s):  
Timothy Bartkoski ◽  
Ellen Herrmann ◽  
Chelsea Witt ◽  
Cort Rudolph

Muslim and Arab individuals are discriminated against in almost all domains. Recently, there has been a focus on examining the treatment of these groups in the work setting. Despite the great number of primary studies examining this issue, there has not yet been a quantitative review of the research literature. To fill this gap, this meta-analysis examined the presence and magnitude of hiring discrimination against Muslim and Arab individuals. Using 46 independent effect sizes from 26 sources, we found evidence of discrimination against Muslim and Arab people in employment judgments, behaviors, and decisions across multiple countries. Moderator analyses revealed that discrimination is stronger in field settings, when actual employment decisions are made, and when experimental studies used "Arab" (vs. "Muslim") targets. However, primary studies provide inconsistent and inaccurate distinctions between Arabs and Muslims; future work should therefore be cautious in categorizing the exact aspect of identity being studied.

