The Effect of Thursday Night Games on In-Game Injury Rates in the National Football League

2020 ◽  
Vol 48 (8) ◽  
pp. 1999-2003
Author(s):  
Jose R. Perez ◽  
Jonathan Burke ◽  
Abdul K. Zalikha ◽  
Dhanur Damodar ◽  
Joseph S. Geller ◽  
...  

Background: Although claims of increased injury rates with Thursday night National Football League (NFL) games exist, a paucity of data exists substantiating these claims. Purpose: To evaluate the effect of rest between games on in-game injury rates as it pertains to overall injury incidence, location, and player position. Study Design: Descriptive epidemiologic study. Methods: Data were obtained from official NFL game books for regular season games from all 32 teams for the 2013-2016 seasons. All in-game injuries recorded in official game books were included. Rest periods between games were classified as short (4 days), regular (6-8 days), or long (≥10 days). Overall observed injury rates per team-game were analyzed in relation to different rest periods using negative binomial regression. For results with significant overall findings, pairwise comparisons were tested using the Wald chi-square test. Exploratory secondary analyses were performed in a similar fashion to assess differences in injury rates for the different rest periods when stratified by anatomic location and player position. Results: A total of 2846 injuries were identified throughout the 4 seasons. There was an overall significant difference in injuries per team-game between short, regular, and long rest (P = .01). With short rest, an observed mean of 1.26 injuries per game (95% CI, 1.06-1.49) was significantly different from the 1.53 observed injuries per game with regular rest (95% CI, 1.46-1.60; P = .03) but not from the 1.34 observed injuries per game with long rest (P = .56). For player position, only the tight end, linebacker, and fullback group demonstrated significant differences between the injury rates for different rest categories. Quarterback was the only position with more injuries during games played on Thursday compared with both regular and long rest, although this specific analysis was underpowered and the difference was not significant (P = .08). 
No differences were found regarding injury rates in correlation with differences in rest periods with different injury locations. Conclusion: A short rest period between games is not associated with increased rates of observed injuries reported in NFL game books; rather, our data suggest there are significantly fewer injuries for Thursday night games compared with games played on regular rest. Future research correlating rest and quarterback injury rates is warranted.

2020 ◽  
Vol 55 (2) ◽  
pp. 195-204 ◽  
Author(s):  
Matthew C. Hess ◽  
David I. Swedler ◽  
Christine S. Collins ◽  
Brent A. Ponce ◽  
Eugene W. Brabston

Context Injuries in professional ultimate Frisbee (ultimate) athletes have never been described. Objective To determine injury rates, profiles, and associated factors using the first injury-surveillance program for professional ultimate. Design Descriptive epidemiology study. Setting American Ultimate Disc League professional ultimate teams during the 2017 season. Patients or Other Participants Sixteen all-male teams. Main Outcome Measure(s) Injury incidence rates (IRs) were calculated as injuries per 1000 athlete-exposures (AEs). Incidence rate ratios with 95% confidence intervals were calculated to compare IRs between conditions. Results We observed 299 injuries over 8963 AEs, for a total IR of 33.36 per 1000 AEs. Most injuries affected the lower extremity (72%). The most common injuries were thigh-muscle strains (12.7%) and ankle-ligament sprains (11.4%). Running was the most frequent injury mechanism (32%). Twenty-nine percent of injuries involved collisions; however, the concussion rate was low (IR = 0.22 per 1000 AEs). Injuries were more likely to occur during competition and in the second half of games. An artificial turf playing surface did not affect overall injury rates (Mantel-Haenszel incidence rate ratio = 1.28; 95% confidence interval = 0.99, 1.67). Conclusions To our knowledge, this is the first epidemiologic study of professional ultimate injuries. Injury rates were comparable with those of similar collegiate- and professional-level sports.
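The rate arithmetic here is injuries divided by athlete-exposures, scaled to 1000. A minimal sketch using the abstract's numbers; the log-scale Wald CI for the rate ratio is our assumption about the method, not necessarily the one the authors used:

```python
import math

def injury_rate(injuries, exposures, per=1000):
    """Injuries per `per` athlete-exposures (AEs)."""
    return injuries / exposures * per

def irr_with_ci(a, n1, b, n2, z=1.96):
    """Incidence rate ratio (a/n1)/(b/n2) with a Wald CI on the log scale,
    assuming Poisson counts a and b."""
    irr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a + 1 / b)  # SE of log(IRR)
    return irr, math.exp(math.log(irr) - z * se), math.exp(math.log(irr) + z * se)

overall = injury_rate(299, 8963)  # ≈ 33.36 per 1000 AEs, matching the abstract
```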


2019 ◽  
Vol 105 (3) ◽  
pp. 282-287
Author(s):  
Amrita Bandyopadhyay ◽  
Karen Tingay ◽  
Ashley Akbari ◽  
Lucy Griffiths ◽  
Helen Bedford ◽  
...  

Objective To evaluate long-term associations between early childhood hyperactivity and conduct problems (CP), measured using the Strengths and Difficulties Questionnaire (SDQ), and risk of injury in early adolescence. Design Data linkage between a longitudinal birth cohort and routinely collected electronic health records. Setting Consenting Millennium Cohort Study (MCS) participants residing in Wales and Scotland. Patients 3119 children who participated in the age 5 MCS interview. Main outcome measures Children with parent-reported SDQ scores were linked with hospital admission and Accident & Emergency (A&E) department records for injuries between ages 9 and 14 years. Negative binomial regression models were fitted, adjusting for number of people in the household, lone parent, residential area, household poverty, maternal age and academic qualification, child sex, physical activity level and country of interview. Results 46% of children attended A&E or were admitted to hospital for injury, and 11% had high/abnormal scores for hyperactivity and CP. High/abnormal or borderline hyperactivity was not significantly associated with risk of injury: the incidence rate ratios (IRRs) for the high/abnormal and borderline groups were 0.92 (95% CI 0.74 to 1.14) and 1.16 (95% CI 0.88 to 1.52), respectively. Children with borderline CP had higher injury rates compared with those without CP (IRR 1.31, 95% CI 1.09 to 1.57). Conclusions Children with high/abnormal hyperactivity or CP scores were not at increased risk of injury; however, those with borderline CP had higher injury rates. Further research is needed to understand whether those with difficulties receive treatment and support, which may reduce the likelihood of injuries.


2019 ◽  
Vol 4 (4) ◽  
pp. 2473011419S0035
Author(s):  
Lauren V. Ready ◽  
Neill Y. Li ◽  
Samantha J. Worobey ◽  
Nicholas J. Lemme ◽  
JaeWon Yang ◽  
...  

Category: Sports, Trauma, Ankle, Achilles Introduction/Purpose: Injuries are an ever-present entity in the National Football League (NFL), with recent research highlighting American football as having the highest injury incidence among all major sports. A torn Achilles can sideline a player for six to twelve months and reduce their power rankings by over fifty percent. Within Achilles tears, we focused on comparing rookie tear rates with those of the rest of the players, examining tear rates under different game conditions, and studying the day of the week on which the injury occurred. Given the impact of the injury and the limited research, we sought to examine Achilles tears in the NFL from 2009-2016 to identify trends correlating tears with game and player demographics. Methods: NFL players with a diagnosed Achilles tear between 2009 and 2016 were selected as the study population for this retrospective analysis. NFL injury data were collected from an established database, previously compiled from publicly available athlete information. NFL player profiles were then used to determine position, team, and game statistics at the time of injury. Injury rates were calculated as a percentage of total league games on Thursdays and Sundays. The proportion of rookies in the NFL was approximated by summing the number of draft picks and the number of signed, undrafted free agents, measured against the total number of roster spots before the commencement of the season. Game surface at the time of injury was discerned by consulting a timeline of the field surfaces and cross-referencing the date of the game. Game conditions, such as weather and temperature, were discerned from the game logs published on the NFL website. Results: There were 101 documented Achilles tears. Sixty-four percent (65/101) occurred before the official season, in training or pre-season games. Only 1% (1/101) of tears occurred during post-season play-offs. 
Twenty-nine percent (19/65) of the pre-season tears occurred in rookies, and 97% (35/36) of the in-season game tears affected non-rookies. Thirty-six percent (36/101) of all documented tears occurred in undrafted free agents. Of players with an Achilles tear, 58.41% (59/101) returned to play in the NFL after injury. Despite an average age of 26.7 years, the tear distribution was bimodal, with players aged 24 and 36 exhibiting the highest tear rates. With regard to tears during games, 43.18% occurred on grass and 56.82% occurred on turf; these values mirror the surfaces' representation in games. The average game temperature was 67.04 degrees Fahrenheit with wide stratification (range: 1-91 degrees). When examining the rate of tears during away versus home games, there was no significant difference: of the 45 in-game tears, 21 (46.67%) occurred in home games and 24 (53.33%) during away games. Conclusion: In our focused analysis of the Achilles in NFL athletes, we show no significant difference in tear rates when comparing grass and artificial turf surfaces or when comparing Thursday and Sunday games. When reviewing experience level, a large percentage of the tears occurred in rookie players, especially during the pre-season, despite these players making up less than a quarter of the athletes. We also show that tears were not restricted to certain weather conditions. When analyzing career length post-tear, most players who returned to play continued to perform at a high level. This challenges the perception of Achilles tendon (AT) tear as a career-ending injury.
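The home/away comparison (21 vs 24 of the 45 in-game tears) can be checked with a simple two-proportion test against an even split; this particular test is our choice for illustration, not necessarily the authors' method:

```python
import math

def p_vs_even_split(k, n):
    """Two-sided normal-approximation p-value for k/n successes vs. H0: p = 0.5."""
    z = (k / n - 0.5) / math.sqrt(0.25 / n)  # 0.25/n is Var(p_hat) under H0
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p_home_away = p_vs_even_split(21, 45)  # well above 0.05: no home/away difference
```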


2020 ◽  
Vol 8 (4_suppl3) ◽  
pp. 2325967120S0015
Author(s):  
Aaron J. Zynda ◽  
Jie Liu ◽  
Meagan J. Sabatino ◽  
Jane S. Chung ◽  
Shane M. Miller ◽  
...  

Background: There is limited epidemiologic data on pediatric basketball injuries and the correlation of these injuries with sex-based differences pre- and post-adolescence. Purpose: To describe sex- and age-based injury rates associated with common pediatric basketball injuries. Methods: A descriptive epidemiology study was conducted utilizing publicly available injury data from the National Electronic Injury Surveillance System (NEISS) and participation data from the National Sporting Goods Association (NSGA). Data on pediatric basketball injuries from January 2012 – December 2018 in patients ages 7-17 years were extracted and used to calculate national injury incidence rates with 95% confidence intervals. Results: Over 7 years, 9,582 basketball injuries in pediatric patients 7-17 years old were reported annually in the NEISS, which corresponds to an annual national estimate of 294,920 visits. The 5 most common diagnoses were ankle strain/sprain (17.7%), finger strain/sprain and finger fracture (12.1%), concussion/head injury (9.4%), knee strain/sprain (4.5%), and facial laceration (3.3%). There was a notable increase in injury rate in adolescents compared with younger children: the 7- to 11-year-old category accounted for 19.1% of estimated injuries (56,242 injuries per year) and the 12- to 17-year-old category accounted for 80.9% (238,678 injuries per year). While boys accounted for the majority of injuries in both age groups [72.6% of all injuries (40,824 injuries per year) in the 7- to 11-year-old category and 74.4% of all injuries (177,572 injuries per year) in the 12- to 17-year-old category], overall there was no significant difference in injury rate between boys and girls (boys: 91 injuries per 100,000 athlete days, 95% CI = 73-109; girls: 110 injuries per 100,000 athlete days, 95% CI = 92-128; p = 0.140). Overall injury rates across the two age groups are reported in Table 1. 
Head injuries/concussions were a frequent cause of presentation (second only to finger injuries) in 7- to 11-year-olds and occurred at a similar rate in girls and boys. In adolescents, ankle injuries were the most common injury overall, but girls' head and knee injury rates increased most notably compared with boys' within these ages (Table 1). Conclusions: Ankle injuries remain the most common pediatric basketball injury. However, the disproportionate rates of girls' head and knee injuries during adolescent basketball suggest that style-of-play and knee injury prevention programs should target girls participating in youth basketball. [Table: see text]


2019 ◽  
Vol 7 (7_suppl5) ◽  
pp. 2325967119S0034
Author(s):  
Jose Raul Perez ◽  
Jonathan Burke ◽  
Abdul Zalikha ◽  
Nicholas Schiller ◽  
Andrew NL Buskard ◽  
...  

Objectives: The objective of this study was to evaluate the impact rest time between games may have on injury rates as it pertains to overall incidence, injury location, and player position. Methods: For this descriptive epidemiological study, data were obtained from official NFL gamebooks. In-game injuries were queried for all regular season games from all 32 teams over the course of four seasons (2013 to 2016). Only injuries which resulted in a stoppage of time during gameplay were included. Player position and injured body part were taken from the following week's injury report. Rest periods between games were classified as short (4 days), regular (6-8 days), or long (10+ days) rest. Positions were categorized into Quarterback, Skill (wide receiver, running back, and defensive backs), Lineman, and Other (fullback, linebacker, and tight end). Overall observed injury rates, as well as injury rates specific to anatomic location and player position, were analyzed in relation to the different rest periods. Statistical significance was determined using the ANOVA procedure on observed mean injuries per game. Pairwise analysis, through two-sample t tests, was conducted to assess statistical significance between short, regular, and long rest. Results: A total of 2,846 injuries were identified throughout the four seasons. ANOVA testing of all 3 cohorts taken together demonstrated a statistically significant difference in injuries/game between short, regular, and long rest (p = 0.012). With short rest, a mean of 1.26 injuries/game was observed (95% CI 1.046, 1.470), which was significantly different from the 1.53 observed injuries/game with regular rest (95% CI 1.463, 1.601; p = 0.029). Games with short rest were not significantly different from the 1.34 observed injuries/game associated with long rest (95% CI 1.186, 1.486; p = 0.555). 
Regarding player positions, only the Other cohort showed significantly fewer observed injuries/game for games played on Thursday compared with regular (p = 0.0002) and long rest (p = 0.026). The quarterback position was the only position which sustained more injuries than expected in games played on Thursday compared with both regular and long rest; however, these results did not reach statistical significance (p = 0.09). No statistical difference was found for injury location in relation to differences in rest periods. Conclusion: Our data suggest that there is a significant association between the amount of rest between games and observed injuries in the NFL. Interestingly, Thursday night games were found to have fewer injuries per game than games played on regular rest. Subgroup analysis revealed fewer observed injuries with short rest for linebackers, fullbacks, and tight ends. Although quarterbacks were observed to have more injuries than expected on short rest, this did not reach statistical significance. The results of this study do not support the claim that the reduced rest associated with Thursday night games leads to higher injury rates; however, quarterback injury rates may be impacted by shortened rest. [Table: see text]
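The analysis pipeline above (one-way ANOVA, then pairwise two-sample t tests) can be sketched on simulated per-game counts; the group sizes and the Poisson simulation are our assumptions, with means taken from the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated injuries per game at the rates reported in the abstract
short = rng.poisson(1.26, size=120)      # Thursday (short rest) games
regular = rng.poisson(1.53, size=1500)   # regular rest games
long_rest = rng.poisson(1.34, size=300)  # long rest games

# Omnibus one-way ANOVA across the three rest cohorts
f_stat, p_overall = stats.f_oneway(short, regular, long_rest)
# Pairwise follow-up (in the study, pursued when the omnibus test was significant);
# Welch's unequal-variance variant is our choice here
t_sr, p_short_vs_regular = stats.ttest_ind(short, regular, equal_var=False)
```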


2019 ◽  
Vol 126 (3) ◽  
pp. 546-558
Author(s):  
P. Ibbott ◽  
N. Ball ◽  
M. Welvaert ◽  
K. G. Thompson

We investigated the variability of strength-trained athletes' self-selected rest periods between sets of heavy squat training. Sixteen strength-trained male athletes (mean age = 23 years, SD = 3) completed two squat training sessions 48 hours apart. Each training session consisted of five sets of 5RM squats, interspersed with self-selected interset rest periods. A GymAware linear optical encoder collected kinetic data for each squat and temporal data for each interset rest period. The participants' subjective ratings of the experience were taken before (Readiness to Lift [RTL]) and after (Rating of Perceived Effort [RPE]) each set. Mean total rest time and mean power output differed significantly between sessions. For both sessions, interset rest period increased, and power output decreased, in Sets 3, 4, and 5 (95% CI range [−101, −17]) compared with Set 1. In both sessions, RPE increased significantly in Set 3 compared with Set 1 (95% CI range [0.68, 2.19]), while RTL decreased significantly from Set 3 (95% CI range [−2.99, −0.58]) compared with Set 1. Interset rest period and power output demonstrated fair reliability between sessions (mean intraclass correlation coefficient = 0.55), while RPE and RTL demonstrated good and excellent reliability, respectively (mean intraclass correlation coefficients = 0.63 and 0.80). In conclusion, highly trained strength athletes demonstrated a significant difference in their between-session power output and total rest time when using self-selected interset rest periods, despite stability in their subjective ratings of fatigue and effort. Interset rest periods can be self-selected reliably to complete strength training in a heavy squat protocol; however, power output may decline across sets.
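The between-session reliability figures above are intraclass correlation coefficients, which can be computed from a subjects-by-sessions matrix. A sketch of the two-way random-effects ICC(2,1) from the standard ANOVA decomposition; the abstract does not state which ICC form the authors used, so this variant is an assumption:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data: (n_subjects, k_sessions) array-like."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

Perfectly reproducible sessions give an ICC of 1; values around 0.55, 0.63, and 0.80 correspond to the fair, good, and excellent reliability labels in the abstract.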


2021 ◽  
Vol 56 (7) ◽  
pp. 727-733
Author(s):  
Jacob R. Powell ◽  
Adrian J. Boltz ◽  
Hannah J. Robison ◽  
Sarah N. Morris ◽  
Christy L. Collins ◽  
...  

Context The first men's wrestling National Collegiate Athletic Association (NCAA) Championship was sponsored in 1928; since then, participation has increased. Background Continued study of wrestling injury data is essential to identify areas for intervention based on emerging trends. Methods Exposure and injury data collected in the NCAA Injury Surveillance Program during 2014–2015 through 2018–2019 were analyzed. Injury counts, rates, and proportions were used to describe injury characteristics, and injury rate ratios (IRRs) were used to examine differential injury rates. Results The overall injury rate was 8.82 per 1000 athlete-exposures. The competition injury rate was significantly higher than the practice injury rate (IRR = 4.11; 95% CI = 3.72, 4.55). The most commonly injured body parts were the knee (21.4%), shoulder (13.4%), and head/face (13.3%), and the most prevalent specific injury reported was concussion. Summary These findings provide the most current update to injury incidence and outcomes in NCAA men's wrestling. We identify notable trends that warrant consideration in future research.


2021 ◽  
pp. jech-2020-215039 ◽  
Author(s):  
Anders Malthe Bach-Mortensen ◽  
Michelle Degli Esposti

Introduction The COVID-19 pandemic has disproportionately impacted care homes and vulnerable populations, exacerbating existing health inequalities. However, the role of area deprivation in shaping the impacts of COVID-19 in care homes is poorly understood. We examine whether area deprivation is linked to higher rates of COVID-19 outbreaks and deaths among care home residents across upper tier local authorities in England (n = 149). Methods We constructed a novel dataset from publicly available data. Using negative binomial regression models, we analysed the associations between area deprivation (Income Deprivation Affecting Older People Index (IDAOPI) and Index of Multiple Deprivation (IMD) extent) as the exposure and COVID-19 outbreaks, COVID-19-related deaths and all-cause deaths among care home residents as three separate outcomes, adjusting for population characteristics (size, age composition, ethnicity). Results COVID-19 outbreaks in care homes did not vary by area deprivation. However, COVID-19-related deaths were more common in the most deprived quartiles of IDAOPI (incidence rate ratio (IRR): 1.23, 95% CI 1.04 to 1.47) and IMD extent (IRR: 1.16, 95% CI 1.00 to 1.34), compared with the least deprived quartiles. Discussion These findings suggest that area deprivation is a key risk factor in COVID-19 deaths among care home residents. Future research should look to replicate these results when more complete data become available.


CNS Spectrums ◽  
2021 ◽  
pp. 1-9
Author(s):  
Nina M. Lutz ◽  
Samuel R. Chamberlain ◽  
Ian M. Goodyer ◽  
Anupam Bhardwaj ◽  
Barbara J. Sahakian ◽  
...  

Abstract Background Nonsuicidal self-injury (NSSI) is prevalent among adolescents and research is needed to clarify the mechanisms which contribute to the behavior. Here, the authors relate behavioral neurocognitive measures of impulsivity and compulsivity to repetitive and sporadic NSSI in a community sample of adolescents. Methods Computerized laboratory tasks (Affective Go/No-Go, Cambridge Gambling Task, and Probabilistic Reversal Task) were used to evaluate cognitive performance. Participants were adolescents aged 15 to 17 with (n = 50) and without (n = 190) NSSI history, sampled from the ROOTS project which recruited adolescents from secondary schools in Cambridgeshire, UK. NSSI was categorized as sporadic (1-3 instances per year) or repetitive (4 or more instances per year). Analyses were carried out in a series of linear and negative binomial regressions, controlling for age, gender, intelligence, and recent depressive symptoms. Results Adolescents with lifetime NSSI, and repetitive NSSI specifically, made significantly more perseverative errors on the Probabilistic Reversal Task and exhibited significantly lower quality of decision making on the Cambridge Gambling Task compared to no-NSSI controls. Those with sporadic NSSI did not significantly differ from no-NSSI controls on task performance. NSSI was not associated with behavioral measures of impulsivity. Conclusions Repetitive NSSI is associated with increased behavioral compulsivity and disadvantageous decision making, but not with behavioral impulsivity. Future research should continue to investigate how neurocognitive phenotypes contribute to the onset and maintenance of NSSI, and determine whether compulsivity and addictive features of NSSI are potential targets for treatment.


2021 ◽  
pp. 089976402110014
Author(s):  
Anders M. Bach-Mortensen ◽  
Ani Movsisyan

Social care services are increasingly provisioned in quasi-markets in which for-profit, public, and third sector providers compete for contracts. Existing research has investigated the implications of this development by analyzing ownership variation in latent outcomes such as quality, but little is known about whether ownership predicts variation in more concrete outcomes, such as violation types. To address this research gap, we coded publicly available inspection reports of social care providers regulated by the Care Inspectorate in Scotland and created a novel data set enabling analysis of ownership variation in violations of (a) regulations and (b) national care standards over an entire inspection year (n = 4,178). Using negative binomial and logistic regression models, we find that for-profit providers are more likely to violate non-enforceable outcomes (national care standards) relative to other ownership types. We did not identify a statistically significant difference between for-profit and third sector providers with regard to enforceable outcomes (regulations).

