Intra- and Inter-Rater Reliability of a Ballet-based Dance Technique Screening Instrument

2020
Vol 35 (1)
pp. 28-34
Author(s):
Shaw Bronner
Ivetta Lassey
Jessie R Lesar
Zachary G Shaver
Catherine Turner

OBJECTIVES: To investigate intra- and inter-rater reliability of a ballet-based Dance Technique Screening Instrument used by physical therapists (PTs) and student PTs (SPTs) with prior dance medicine or dance experience. METHODS: Ten pre-professional dancers were video-recorded in the sagittal and frontal planes while performing four dance sequences: 1) second position grand plié; 2) développé à la seconde; 3) single-limb passé relevé balance; and 4) jumps in first position. Dance videos and electronic versions of the demographics and scoring forms were provided through a secure online survey to 28 PTs and SPTs who served as raters. Raters reviewed a training video prior to scoring the 10 dancers. Raters were asked to repeat their assessments 1–2 wks later. Intraclass correlations (ICC) were assessed for all-raters, PTs, and SPTs for total and sequence scores. RESULTS: Twenty-eight raters assessed the videos one time. Inter-rater reliability was ICC=0.98 (CI95=0.96–0.99) (all-raters), with PTs and SPTs displaying similar values (ICC=0.96 and 0.96, respectively). Eighteen raters (11 PTs, 7 SPTs) repeated the video analysis. Intra-rater reliability was ICC=0.78 (CI95=0.72–0.83) with PTs ICC=0.81 and SPTs ICC=0.70. CONCLUSIONS: Correlations were high for all-raters. SPTs were as reliable as PTs in inter-rater comparisons. PTs exhibited higher intra-rater reliability compared to SPTs. These results substantiate the reliability of a standardized testing instrument to conduct dance technique assessment. Validity of this instrument was demonstrated in a previous study which found dancers with better technique were less likely to sustain injury. The ability to identify technique deficits can guide preventative programs that may reduce injury risk. LEVEL OF EVIDENCE: Level III.
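
As a rough illustration of the statistic reported above, the following sketch computes an inter-rater ICC from rater scores arranged in long format (one row per dancer-rater pair). The pingouin library, the column names, and the toy scores are illustrative assumptions, not data or code from the study.

```python
# Illustrative sketch only: toy scores, not data from the study.
import pandas as pd
import pingouin as pg  # assumed choice of library for ICC

scores = pd.DataFrame({
    "dancer": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],             # target being rated
    "rater":  ["A", "B"] * 5,                              # two raters, fully crossed
    "total":  [42, 44, 37, 36, 48, 47, 40, 41, 45, 46],    # total screening score
})

# Returns a table of ICC variants (ICC1-ICC3, single and average measures)
icc = pg.intraclass_corr(data=scores, targets="dancer",
                         raters="rater", ratings="total")
print(icc[["Type", "ICC", "CI95%"]])
```

In practice, the ICC form chosen (e.g., two-way random, absolute agreement) should match the rating design described in the paper.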

2021
pp. bjsports-2020-103131
Author(s):
Celeste Geertsema
Liesel Geertsema
Abdulaziz Farooq
Joar Harøy
Chelsea Oester
...

Objectives: This study assessed knowledge, beliefs, and practices of elite female footballers regarding injury prevention. Methods: A survey was sent to players participating in the FIFA Women’s World Cup France 2019. Questions covered three injury prevention domains: (1) knowledge; (2) attitudes and beliefs; (3) prevention practices in domestic clubs. Additionally, ACL injury history was assessed. Results: Of 552 players, 196 women responded (35.5%). More than 80% of these considered injury risk to be moderate or high. Players listed knee, ankle, thigh, head, and groin as the most important injuries in women’s football. The most important risk factors identified were low muscle strength, followed by poor pitch quality, playing on artificial turf, too much training, reduced recovery, and hard tackles. In these elite players, 15% did not have any permanent medical staff in their domestic clubs, yet more than 75% had received injury prevention advice and more than 80% performed injury prevention exercises in their clubs. Players identified the two most important implementation barriers as player motivation and coach attitude. Two-thirds of players used the FIFA 11+ programme in their clubs. Conclusions: This diverse group of elite players demonstrated good knowledge of risk level and injury types in women’s football. Of the risk factors emphasised by players, only one was intrinsic (strength); several were outside their control (pitch quality and type, training volume, and hard tackles). Still, players had positive attitudes and beliefs regarding injury prevention exercises and indicated a high level of implementation, despite a lack of medical support.


2021
pp. 194173812110560
Author(s):
Neeru Jayanthi
Stacey Schley
Sean P. Cumming
Gregory D. Myer
Heather Saffel
...

Context: Most available data on athletic development training models focus on adult or professional athletes, where increasing workload capacity and performance is a primary goal. Development pathways in youth athletes generally emphasize multisport participation rather than sport specialization to optimize motor skill acquisition and to minimize injury risk. Other models emphasize the need for accumulation of sport- and skill-specific hours to develop elite-level status. Despite recommendations against sport specialization, many youth athletes still specialize and need guidance on training and competition. Medical and sport professionals also recommend progressive, gradual increases in workloads to enhance resilience to the demands of high-level competition. There is no accepted model of risk stratification and return to play for training a specialized youth athlete through periods of injury and maturation. In this review, we present individualized training models for specialized youth athletes that (1) prioritize performance for healthy, resilient youth athletes and (2) are adaptable through vulnerable maturational periods and injury. Evidence Acquisition: Nonsystematic review with critical appraisal of existing literature. Study Design: Clinical review. Level of Evidence: Level 4. Results: A number of factors must be considered when developing training programs for young athletes: (1) the effect of sport specialization on athlete development and injury, (2) biological maturation, (3) motor and coordination deficits in specialized youth athletes, and (4) workload progressions and response to load. Conclusion: Load-sensitive athletes with multiple risk factors may need medical evaluation, frequent monitoring, and a program designed to restore local tissue and sport-specific capacity. Load-naive athletes, who are often skeletally immature, will likely benefit from serial monitoring and should train and compete with caution, while load-tolerant athletes may only need occasional monitoring and progress to optimum loads. Strength of Recommendation Taxonomy (SORT): B.


2020
Vol 13
pp. 2031-2041
Author(s):
Masushi Kohta
Takehiko Ohura
Kunio Tsukada
Yoshinori Nakamura
Mishiho Sukegawa
...

2020
Vol 9 (6)
pp. 1958
Author(s):
Anna Folli
Alessandro Schneebeli
Simone Ballerini
Francesca Mena
Emiliano Soldini
...

Dry needling (DN) is a minimally invasive treatment technique widely used by physical therapists to treat myofascial trigger points (MTrP). Although its safety is commonly asserted and the majority of adverse events are considered mild, serious adverse events cannot be excluded, and DN treatment of several trunk muscles can potentially result in pneumothorax. Ultrasound imaging (US) skin-to-rib measurement could help ensure the safety of this treatment procedure. Therefore, the aim of this study was to determine the inter-rater reliability of depth measurement of different trunk muscles (i.e., rhomboid, lower trapezius, iliocostalis, and pectoralis major) between an expert and two novice physiotherapists. The skin-to-rib distance of 26 asymptomatic, normal-weight subjects was consecutively, independently, and randomly measured for each muscle by the three examiners (1 expert and 2 novice physical therapists) with a handheld wireless US probe. The intraclass correlation coefficient (ICC3,k) and standard error of measurement (SEM) were used to assess inter-rater reliability. Inter-rater reliability of skin-to-rib measurements between the three examiners was good to excellent or excellent for every muscle, with ICC3,k values ranging from 0.92 to 0.98 (95% CI 0.86–0.99). The SEM never exceeded 10% of the skin-to-rib distance. In conclusion, skin-to-rib US measurements of the trunk muscles can be reliably performed by novice physical therapists using a handheld US device. These measurements could serve as an innovative and reliable technique to improve the safety of some potentially dangerous DN treatments.
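
For readers unfamiliar with how the SEM relates to the ICC, the sketch below applies the common formula SEM = SD × sqrt(1 − ICC); the standard deviation and ICC values are assumed for illustration and are not taken from the paper.

```python
# Illustrative values only; the SD and ICC below are assumptions, not study data.
import math

sd_skin_to_rib_mm = 4.0   # assumed between-subject SD of skin-to-rib distance
icc = 0.95                # assumed ICC within the reported 0.92-0.98 range

sem_mm = sd_skin_to_rib_mm * math.sqrt(1 - icc)
print(f"SEM = {sem_mm:.2f} mm")   # about 0.89 mm for these assumed values
```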


2019
Vol 7 (9)
pp. 232596711987012
Author(s):
Alison E. Field
Frances A. Tepolt
Daniel S. Yang
Mininder S. Kocher

Background: Sports specialization has become increasingly common among youth. Purpose/Hypothesis: To investigate the relative importance of specialization vs volume of activity in increasing risk of injury. The hypotheses were that specialization increases the risk of injury and that risk varies by sport. Study Design: Cohort study; Level of evidence, 2. Methods: A prospective analysis was conducted with data collected from 10,138 youth in the Growing Up Today Study (a prospective cohort study of youth throughout the United States) and their mothers. Activity was assessed via questionnaires in 1997, 1998, 1999, and 2001. Sports specialization was defined as engaging in a single sport in the fall, winter, and spring. Injury history was provided by participants’ mothers via questionnaire in 2004. The outcome was incident stress fracture, tendinitis, chondromalacia patella, anterior cruciate ligament tear, or osteochondritis dissecans or osteochondral defect. Results: Females who engaged in sports specialization were at increased risk of injury (hazard ratio [HR], 1.31; 95% CI, 1.07-1.61), but risk varied by sport. Sports specialization was associated with a greater volume of physical activity in both sexes (P < .0001). Total hours per week of vigorous activity was predictive of developing injury, regardless of what other variables were included in the statistical model (males: HR, 1.04; 95% CI, 1.02-1.06; females: HR, 1.06; 95% CI, 1.05-1.08). Among females, even those whose weekly hours of vigorous activity were 3 to 3.9 hours below their age in years were at a significantly increased risk of injury (HR, 1.93; 95% CI, 1.34-2.77). In males, there was no clear pattern of risk. Conclusion: Sports specialization is associated with a greater volume of vigorous sports activity and an increased risk of injury. Parents, coaches, and medical providers need to be made aware of the volume threshold above which physical activity becomes excessive.
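
Hazard ratios like those reported above are typically estimated with a Cox proportional hazards model. The sketch below shows that general approach using the lifelines library on a toy dataset; the variable names, values, and model specification are assumptions for illustration, not the study's actual data or analysis code.

```python
# Toy data for illustration; not the Growing Up Today Study data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "follow_up_years": [3.2, 5.0, 2.1, 4.4, 5.0, 1.8, 3.7, 5.0, 2.9, 4.1],
    "injured":         [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],      # 1 = incident injury
    "specialized":     [1, 0, 1, 0, 1, 1, 0, 0, 1, 1],      # single sport in all three seasons
    "vigorous_hrs_wk": [9.5, 4.0, 12.0, 6.5, 8.0, 14.0, 5.5, 6.0, 10.5, 11.5],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="injured")
cph.print_summary()   # the exp(coef) column gives the hazard ratio per covariate
```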


2020
Vol 12 (2)
pp. 132-138
Author(s):
Joshua K. Helmkamp
Garrett S. Bullock
Allison Rao
Ellen Shanley
Charles Thigpen
...

Context: Humeral torsion (HT) has been linked to various injuries and benefits. However, the exact interplay between HT, shoulder range of motion (ROM), competition level differences, and injury risk is unclear. Objective: To determine the relationship between HT, ROM, and injury risk in baseball players. Secondarily, to determine HT based on competition level. Data Sources: PubMed, Embase, Web of Science, CINAHL, and Cochrane databases were searched from inception until November 4, 2018. Study Selection: Inclusion criteria consisted of (1) HT measurements and (2) arm injury or shoulder ROM. Study Design: Systematic review. Level of Evidence: Level 3. Data Extraction: Two reviewers recorded patient demographics, competition level, HT, shoulder ROM, and injury data. Results: A total of 32 studies were included. There was no difference between baseball players with shoulder and elbow injuries and noninjured players (side-to-side HT difference: mean difference [MD], 1.75 [95% CI, –1.83 to 2.18]; dominant arm: MD, 0.17 [95% CI, –1.83 to 2.18]). Meta-regression determined that for every 1° increase in shoulder internal rotation (IR), there was a subsequent increase of 0.65° in HT (95% CI, 0.28 to 1.02). HT did not explain external rotation (ER ROM: 0.19 [95% CI, –0.24 to 0.61]) or horizontal adduction (HA ROM: 0.18 [95% CI, –0.46 to 0.82]). There were no differences between HT at the high school, college, or professional levels. Conclusion: No relationship was found between HT and injury risk. However, HT explained 65% of IR ROM but did not explain ER ROM or HA ROM. There were no differences in HT pertaining to competition level. The majority of IR may be nonmodifiable. Treatment to restore and maintain clinical IR may be important, especially in players with naturally greater torsion. HT adaptation may occur prior to high school, which can assist in decisions regarding adolescent baseball participation.
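
The slope quoted above (about 0.65° of HT per 1° of IR) is the kind of coefficient a meta-regression produces. As a hedged sketch of that general idea, the weighted least-squares fit below regresses study-level mean HT on mean IR ROM, weighting by sample size; the study values, weights, and model form are assumptions for illustration, not data extracted from this review.

```python
# Toy study-level summaries; not values extracted from the included studies.
import numpy as np
import statsmodels.api as sm

ir_rom_deg = np.array([35.0, 42.0, 48.0, 55.0, 60.0])   # mean dominant-arm IR ROM per study
ht_deg     = np.array([20.0, 26.0, 29.0, 34.0, 37.0])   # mean humeral torsion per study
weights    = np.array([40, 55, 30, 80, 60])             # e.g., study sample sizes

X = sm.add_constant(ir_rom_deg)                 # intercept + IR ROM
fit = sm.WLS(ht_deg, X, weights=weights).fit()
print(fit.params)      # slope is in degrees of HT per degree of IR ROM
print(fit.conf_int())  # 95% CIs for intercept and slope
```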


2019
Vol 25 (3)
pp. 202-206
Author(s):
Verena Nista-Piccolo
José Robertto Zaffalon Júnior
Mario Cesar Nascimento
Michelle Sartori
Kátia De Angelis

Introduction: Some studies suggest that playing tennis brings benefits for the anthropometric and metabolic profile of those who practice it, reducing the risk of mortality more significantly than other sports. In addition, changes in cardiovascular autonomic regulation have been highlighted as a common factor in the development of cardiometabolic disorders. Objective: To evaluate and compare hemodynamic parameters and cardiovascular autonomic modulation among former tennis players who still play the sport (ET), adults who play recreational tennis (TR), and adults classified as sedentary (S). Methods: Thirty-four men aged between 23 and 45 years participated in the study. They were divided into 3 groups: ET, TR, and S. Anthropometric parameters and blood pressure were evaluated, and the R-R interval was recorded to quantify cardiac autonomic modulation at rest. Results: Similar values were observed between groups for blood pressure, waist circumference, and body mass index. The amount of moderate and vigorous physical activity in the ET group was higher than in the TR group. The ET group presented resting bradycardia associated with increased pulse interval (PI) variance and high-frequency PI, and a reduction in low-frequency PI, compared to the other groups studied. Reduced cardiac sympathovagal balance was observed in the ET group (1.7 ± 0.1) and the TR group (2.5 ± 0.2) compared to the S group (3.2 ± 0.2); this reduction was more pronounced in the ET group than in the TR group. Conclusion: The results suggest that playing tennis induces beneficial changes in cardiac autonomic modulation that appear to be intensified as the volume of physical activity increases, suggesting that this practice is beneficial in the management of cardiovascular risk. Level of Evidence II; Diagnostic Studies - Investigating a Diagnostic Test.
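
The low-frequency (LF) and high-frequency (HF) pulse-interval measures and the sympathovagal balance mentioned above come from frequency-domain analysis of the R-R interval series. The sketch below illustrates that general workflow on synthetic data (resampling the irregular R-R series and integrating Welch spectral power over the standard LF and HF bands); the data, the 4 Hz resampling rate, and the library choices are assumptions, not the authors' analysis pipeline.

```python
# Synthetic R-R series for illustration; not data from the study.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
rr_ms = 900 + 50 * rng.standard_normal(300)   # ~300 synthetic R-R intervals (ms)
t_s = np.cumsum(rr_ms) / 1000.0               # beat times (s)

# Resample the irregularly spaced R-R series onto a uniform 4 Hz grid
fs = 4.0
t_uniform = np.arange(t_s[0], t_s[-1], 1.0 / fs)
rr_uniform = interp1d(t_s, rr_ms, kind="cubic")(t_uniform)

# Welch power spectrum, then integrate over the standard LF and HF bands
f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = trapezoid(psd[lf_band], f[lf_band])
hf = trapezoid(psd[hf_band], f[hf_band])
print(f"LF = {lf:.1f} ms^2, HF = {hf:.1f} ms^2, LF/HF = {lf / hf:.2f}")
```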


2020
pp. 205141582097039
Author(s):
Samuel Stephen Folkard
Paul Sturch
Tharani Mahesan
Stephen Garnett

Introduction: The coronavirus disease 2019 (COVID-19) pandemic is having significant effects on health services globally, including on urological surgery, for which the British Association of Urological Surgeons (BAUS) has provided national guidance. Kent, Surrey and Sussex (KSS) is one of the regions most affected by COVID-19 in the UK to date. Methods: An anonymous online survey of all KSS urology trainees was conducted. The primary outcome was to assess the effects on urology services, both malignant and benign, across the region in the acceleration phase and at the peak of the pandemic compared to standard care. The secondary outcome was to quantify the effects on urology training, especially regarding operative exposure. Results: There were significant decreases in urological services provided at the peak of the pandemic across KSS compared to standard care (p < 0.0001). Only 22% of urology units were able to continue operating for low-risk cancer and to continue cystoscopy for two-week-wait non-visible haematuria referrals in line with BAUS escalation guidelines. A third (33%) did not complete any prostate biopsies at the peak. The majority of urology units continued clinics by telephone. Urology trainees reported completing substantially fewer operating procedures and workplace-based assessments. A third (33%) had moved to consultant-only operating by the peak. Conclusions: The COVID-19 pandemic has caused significant changes to urological surgery services and training in KSS, with heterogeneity across the region. We suggest further work to quantify the effects nationally. Level of evidence: 4.


2016
Vol 96 (7)
pp. 995-1005
Author(s):
Nolan Auchstaetter
Juliana Luc
Stacey Lukye
Kaylea Lynd
Shelby Schemenauer
...

Background: Best practice guidelines for stroke rehabilitation recommend functional electrical stimulation (FES) to improve gait and upper extremity function. Whether these guidelines have been implemented in practice is unknown. Objective: The purposes of this study were: (1) to determine the frequency with which physical therapists use FES to address common therapeutic goals poststroke and (2) to identify the barriers to and facilitators of FES use. Design: This was a cross-sectional survey study. Methods: A valid and reliable online survey was sent to Canadian physical therapists. Questions about demographic characteristics, FES use, knowledge of the FES literature, and barriers and facilitators were posed. Closed-ended questions were analyzed with descriptive statistics and index scoring to produce summary scores. Pearson or point-biserial correlation coefficients correlated FES use with demographic variables. Open-ended questions about barriers and facilitators were analyzed by 3 researchers using a conventional content analysis. Results: Two hundred ninety-eight physical therapists responded. Use of FES for clients with stroke was low for all therapeutic goals queried (improve walking, arm function, muscle strength and endurance, and sensation; prevent shoulder subluxation; and decrease spasticity). However, 52.6% of the respondents stated that they would like to increase their use of FES. More than 40% of the respondents were unsure of the strength of the evidence supporting FES for stroke care. Physical therapists with postgraduate FES training were more likely to use FES (r = .471, P < .001). A lack of access to resources, such as time, equipment, and training, was the most frequently cited barrier to FES use. Limitations: As an observational study, cause-and-effect relationships for FES use cannot be identified. Conclusions: Functional electrical stimulation is not widely used by physical therapists in stroke rehabilitation. Improving access to resources, in particular continuing education, may facilitate the implementation of FES into clinical practice.
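
The r = .471 association between postgraduate FES training and FES use reflects a point-biserial correlation (a binary variable against a continuous score). A minimal sketch of that calculation on toy data is shown below; the variable names and values are illustrative, not survey responses.

```python
# Toy data for illustration; not the survey responses.
from scipy.stats import pointbiserialr

has_fes_training = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]             # binary: postgraduate FES training
fes_use_index    = [72, 35, 64, 80, 41, 30, 58, 38, 69, 45]   # FES-use index score (0-100)

r, p = pointbiserialr(has_fes_training, fes_use_index)
print(f"r = {r:.3f}, p = {p:.4f}")
```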


2019
Vol 11 (3)
pp. 280-285
Author(s):
Bruno Follmer
Rodolfo Andre Dellagrana
E. Paul Zehr

Background: Brain injury arising from head trauma is a major concern in mixed martial arts (MMA) because knockout (KO) and technical knockout (TKO) are frequent fight outcomes. Previous studies have shown a high incidence of matches ending due to strikes to the head but did not consider weight categories or female fights. This study aimed to analyze match stoppages in MMA and exposure to head trauma distinguished by sex and weight category. Hypothesis: The heavier the weight class, the greater the risk and incidence of head trauma, regardless of sex. Study Design: Descriptive epidemiology study. Level of Evidence: Level 3. Methods: Publicly available data from 1903 fights across 167 MMA events held between 2014 and 2017 were assessed, comprising 8 male and 2 female weight categories. Results: The combined KO/TKO rates per 100 athlete-exposures in the middleweight (19.53), light heavyweight (20.8), and heavyweight (26.09) divisions were greater than previously reported for MMA. While stoppage via KO/TKO occurred in 7.9% of fights in the female strawweight division, it occurred in 52.1% of male heavyweight fights. The male middleweight (P = 0.001), light heavyweight (P < 0.001), and heavyweight (P < 0.001) divisions had an increased risk of KO/TKO due to strikes to the head of 80%, 100%, and 206%, respectively; the risk in the flyweight division was 62% lower (P = 0.001). All categories were compared with the lightweight division. The female bantamweight category presented a 221% increased risk of matches ending due to KO/TKO compared with the strawweight division (P = 0.012). Punches to the head were the main technique used to end a bout via KO/TKO, regardless of sex and weight class. Conclusion: Head injury risk and incidence vary considerably according to sex and weight category in MMA. Clinical Relevance: Analysis of head trauma exposure in MMA athletes should be distinguished according to sex and weight category.
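
As a worked example of the rate metric used above, the sketch below computes a KO/TKO rate per 100 athlete-exposures, assuming the usual convention that each fight contributes two athlete-exposures; the fight and outcome counts are made up for illustration, not figures from the paper.

```python
# Assumed counts for illustration only; not figures reported in the paper.
fights_in_division = 96        # hypothetical number of fights in a division
ko_tko_stoppages = 50          # hypothetical fights ending via KO/TKO

athlete_exposures = 2 * fights_in_division        # two athletes exposed per fight
rate_per_100_ae = 100 * ko_tko_stoppages / athlete_exposures
print(f"{rate_per_100_ae:.2f} KO/TKO per 100 athlete-exposures")  # 26.04 here
```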

