Elevations in MicroRNA Biomarkers in Serum Are Associated with Measures of Concussion, Neurocognitive Function, and Subconcussive Trauma over a Single National Collegiate Athletic Association Division I Season in Collegiate Football Players

2019 ◽  
Vol 36 (8) ◽  
pp. 1343-1351 ◽  
Author(s):  
Linda Papa ◽  
Semyon M. Slobounov ◽  
Hans C. Breiter ◽  
Alexa Walter ◽  
Tim Bream ◽  
...  
2010 ◽  
Vol 45 (4) ◽  
pp. 327-332 ◽  
Author(s):  
James T. Eckner ◽  
Jeffrey S. Kutcher ◽  
James K. Richardson

Abstract Context: Evidence suggests that concussion prolongs reaction time (RT). We have developed a simple, reliable clinical tool for measuring reaction time that may be of value in the assessment of concussion in athletes. Objective: To compare baseline values of clinical RT (RTclin) obtained using the new clinical reaction time apparatus with computerized RT (RTcomp) obtained using a validated computerized neuropsychological test battery. Design: Cross-sectional study. Setting: Data were collected during a National Collegiate Athletic Association Division I collegiate football team's preparticipation physical examination session. Patients or Other Participants: Ninety-four Division I collegiate football players. Main Outcome Measure(s): The RTclin was measured using a 1.3-m measuring stick embedded in a weighted rubber disk that was released and caught as quickly as possible. The RTcomp was measured using the simple RT component of CogState Sport. Results: For the 68 athletes whose CogState Sport tests passed the program's integrity check, RTclin and RTcomp were correlated (r  =  0.445, P < .001). Overall, mean RTclin was shorter and less variable than mean RTcomp (203 ± 20 milliseconds versus 268 ± 44 milliseconds; P < .001). When RTclin and RTcomp were compared between those athletes with (n  =  68) and those without (n  =  26) valid CogState Sport test sessions, mean RTclin was similar (202 ± 19 milliseconds versus 207 ± 23 milliseconds; P  =  .390), but mean RTcomp was different (258 ± 35 milliseconds versus 290 ± 55 milliseconds; P  =  .009). Conclusions: The RTclin was positively correlated with RTcomp and yielded more consistent reaction time values during baseline testing. Given that RTclin is easy to measure using simple, inexpensive equipment, further prospective study is warranted to determine its clinical utility in the assessment of concussion in athletes.
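The abstract does not spell out how the drop-stick apparatus converts catch distance to reaction time, but devices of this kind conventionally rely on the free-fall relation d = ½gt². A minimal sketch of that conversion, assuming standard gravity and a frictionless release (the function name and example distance are illustrative, not from the study):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def rt_from_catch_distance(distance_m: float) -> float:
    """Convert the distance a released stick falls before being caught
    into a reaction time in milliseconds, via d = 1/2 * g * t^2."""
    if distance_m < 0:
        raise ValueError("catch distance must be non-negative")
    return math.sqrt(2 * distance_m / G) * 1000.0

# A catch after about 0.20 m of free fall corresponds to roughly 202 ms,
# on the order of the mean RTclin reported in this study.
```

Note that the 1.3-m stick length bounds the measurable range: a fall of the full 1.3 m would correspond to a reaction time of roughly 515 ms.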


2012 ◽  
Vol 47 (3) ◽  
pp. 257-263 ◽  
Author(s):  
Jonathan M. Oliver ◽  
Brad S. Lambert ◽  
Steven E. Martin ◽  
John S. Green ◽  
Stephen F. Crouse

Context: The recent increase in athlete size, particularly in football athletes of all levels, coupled with the increased health risk associated with obesity warrants continued monitoring of body composition from a health perspective in this population. Equations developed to predict percentage of body fat (%Fat) have been shown to be population specific and might not be accurate for football athletes. Objective: To develop multiple regression equations using standard anthropometric measurements to estimate dual-energy x-ray absorptiometry %Fat (DEXA%Fat) in collegiate football players. Design: Controlled laboratory study. Patients or Other Participants: One hundred fifty-seven National Collegiate Athletic Association Division IA football athletes (age  =  20 ± 1 years, height  =  185.6 ± 6.5 cm, mass  =  103.1 ± 20.4 kg, DEXA%Fat  =  19.5 ± 9.1%) participated. Main Outcome Measure(s): Participants underwent the following measures: (1) body composition testing with dual-energy x-ray absorptiometry; (2) skinfold measurements in millimeters, including chest, triceps, subscapular, midaxillary, suprailiac, abdominal (SFAB), and thigh; and (3) standard circumference measurements in centimeters, including ankle, calf, thigh, hip (AHIP), waist, umbilical (AUMB), chest, wrist, forearm, arm, and neck. Regression analysis and fit statistics were used to determine the relationship between DEXA%Fat and each skinfold thickness, sum of all skinfold measures (SFSUM), and individual circumference measures. Results: Statistical analysis resulted in the development of 3 equations to predict DEXA%Fat: model 1, (0.178 • AHIP) + (0.097 • AUMB) + (0.089 • SFSUM) − 19.641; model 2, (0.193 • AHIP) + (0.133 • AUMB) + (0.371 • SFAB) − 23.0523; and model 3, (0.132 • SFSUM) + 3.530. The R2 values were 0.94 for model 1, 0.93 for model 2, and 0.91 for model 3 (for all, P < .001). 
Conclusions: The equations developed provide an accurate way to assess DEXA%Fat in collegiate football players using standard anthropometric measures so athletic trainers and coaches can monitor these athletes at increased health risk due to increased size.
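The three regression models above can be transcribed directly into code. A sketch, using the coefficients exactly as reported in the abstract (circumferences in centimeters, skinfolds in millimeters; the example inputs are hypothetical, not study data):

```python
def dexa_pct_fat_model1(ahip_cm: float, aumb_cm: float, sf_sum_mm: float) -> float:
    # Model 1: hip (AHIP) and umbilical (AUMB) circumferences plus sum of 7 skinfolds
    return 0.178 * ahip_cm + 0.097 * aumb_cm + 0.089 * sf_sum_mm - 19.641

def dexa_pct_fat_model2(ahip_cm: float, aumb_cm: float, sf_ab_mm: float) -> float:
    # Model 2: hip and umbilical circumferences plus abdominal skinfold (SFAB)
    return 0.193 * ahip_cm + 0.133 * aumb_cm + 0.371 * sf_ab_mm - 23.0523

def dexa_pct_fat_model3(sf_sum_mm: float) -> float:
    # Model 3: sum of 7 skinfolds only (lowest R2 of the three, 0.91)
    return 0.132 * sf_sum_mm + 3.530
```

Model 3 trades a little accuracy for needing only a skinfold caliper, which may suit field settings where circumference measures are impractical.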


2015 ◽  
Vol 32 (5) ◽  
pp. 314-326 ◽  
Author(s):  
Christine M. Baugh ◽  
Patrick T. Kiernan ◽  
Emily Kroshus ◽  
Daniel H. Daneshvar ◽  
Philip H. Montenigro ◽  
...  

1989 ◽  
Vol 3 (2) ◽  
pp. 79-89 ◽  
Author(s):  
L. Marlene Mawson ◽  
William T. Bowler

The 1984 Supreme Court ruling in the antitrust suit between the Universities of Oklahoma and Georgia, representing the College Football Association (CFA), and the National Collegiate Athletic Association (NCAA) provided that individual institutions had proper authority to sell television rights to their football games. The NCAA had controlled television appearances of collegiate football teams with the rationale of preventing erosion of game attendance due to televised home football games. Records of home games televised, television revenues from football games, and attendance at televised football games were gathered from 57% of NCAA Division I institutions and compared between a 3-year period prior to the 1984 ruling and a 3-year period following the ruling. Four sets of t tests between mean data for the pre- and posttime periods showed that although the number of games scheduled per season remained the same, the number of televised football games increased significantly, television revenues from football remained constant, and attendance at televised home football games decreased significantly after the 1984 ruling.


2010 ◽  
Vol 45 (2) ◽  
pp. 128-135 ◽  
Author(s):  
Sandra Fowkes Godek ◽  
Arthur R. Bartolozzi ◽  
Chris Peduzzi ◽  
Scott Heinerichs ◽  
Eugene Garvin ◽  
...  

Abstract Context: Considerable controversy regarding fluid replacement during exercise currently exists. Objective: To compare fluid turnover between National Football League (NFL) players who have constant fluid access and collegiate football players who replace fluids during water breaks in practices. Design: Observational study. Setting: Respective preseason training camps of 1 National Collegiate Athletic Association Division II (DII) football team and 1 NFL football team. Both morning and afternoon practices for DII players were 2.25 hours in length, and NFL players practiced for 2.25 hours in the morning and 1 hour in the afternoon. Environmental conditions did not differ. Patients or Other Participants: Eight NFL players (4 linemen, 4 backs) and 8 physically matched DII players (4 linemen, 4 backs) participated. Intervention(s): All players drank fluids only from their predetermined individual containers. The NFL players could consume both water and sports drinks, and the DII players could only consume water. Main Outcome Measure(s): We measured fluid consumption, sweat rate, total sweat loss, and percentage of sweat loss replaced. Sweat rate was calculated as change in mass adjusted for fluids consumed and urine produced. Results: Mean sweat rate was not different between NFL (2.1 ± 0.25 L/h) and DII (1.8 ± 0.15 L/h) players (F(1,12)  =  2, P  =  .18) but was different between linemen (2.3 ± 0.2 L/h) and backs (1.6 ± 0.2 L/h) (t(14)  =  3.14, P  =  .007). We found no differences between NFL and DII players in terms of percentage of weight loss (t(7)  =  −0.03, P  =  .98) or rate of fluid consumption (t(7)  =  −0.76, P  =  .47). Daily sweat loss was greater in DII (8.0 ± 2.0 L) than in NFL (6.4 ± 2.1 L) players (t(7)  =  −3, P  =  .02), and fluid consumed was also greater in DII (5.0 ± 1.5 L) than in NFL (4.0 ± 1.1 L) players (t(7)  =  −2.8, P  =  .026). We found a correlation between sweat loss and fluids consumed (r  =  0.79, P < .001). 
Conclusions: During preseason practices, the DII players drinking water at water breaks replaced the same volume of fluid (66% of weight lost) as NFL players with constant access to both water and sports drinks.
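The abstract describes sweat rate as change in mass adjusted for fluids consumed and urine produced. A minimal sketch of that calculation, assuming 1 kg of body-mass change approximates 1 L of fluid (the function name and example values are illustrative, not study data):

```python
def sweat_rate_l_per_h(pre_mass_kg: float, post_mass_kg: float,
                       fluid_intake_l: float, urine_output_l: float,
                       duration_h: float) -> float:
    """Sweat loss = (pre - post body mass) + fluid consumed - urine produced,
    treating 1 kg of mass change as ~1 L of fluid; divided by practice duration."""
    sweat_loss_l = (pre_mass_kg - post_mass_kg) + fluid_intake_l - urine_output_l
    return sweat_loss_l / duration_h

# E.g., a lineman who drops from 110.0 to 108.5 kg over a 2.25-h practice
# while drinking 3.0 L and voiding 0.3 L sweats at (1.5 + 3.0 - 0.3) / 2.25 L/h.
```

Percentage of sweat loss replaced (the 66% figure above) then follows as fluid consumed divided by total sweat loss.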


2017 ◽  
Vol 45 (4) ◽  
pp. 458-462 ◽  
Author(s):  
Nikolas Sarac ◽  
William Haynes ◽  
Angela Pedroza ◽  
Christopher Kaeding ◽  
James Borchers

2020 ◽  
Vol 8 (8) ◽  
pp. 232596712094249 ◽  
Author(s):  
Barry P. Boden ◽  
Ken M. Fine ◽  
Ilan Breit ◽  
Wendee Lentz ◽  
Scott A. Anderson

Background: Football has the highest number of nontraumatic fatalities of any sport in the United States. Purpose: To compare the incidence of nontraumatic fatalities with that of traumatic fatalities, describe the epidemiology of nontraumatic fatalities in high school (HS) and college football players, and determine the effectiveness of National Collegiate Athletic Association (NCAA) policies to reduce exertional heat stroke (EHS) and exertional sickling (ES) with sickle cell trait (SCT) fatalities in athletes. Study Design: Descriptive epidemiology study. Methods: We retrospectively reviewed 20 academic years (1998-2018) of HS and college nontraumatic fatalities in football players using the National Registry of Catastrophic Sports Injuries (NRCSI). EHS and ES with SCT fatality rates were compared before and after the implementation of the NCAA football out-of-season model (bylaw 17.10.2.4 [2003]) and NCAA Division I SCT screening (bylaw 17.1.5.1 [2010]), respectively. Additionally, we compiled incidence trends for HS and college traumatic and nontraumatic fatalities in football players for the years 1960 through 2018 based on NRCSI data and previously published reports. Results: The risk (odds ratio) of traumatic fatalities in football players in the 2010s was 0.19 (95% CI, 0.13-0.26; P < .0001) lower in HS and 0.29 (95% CI, 0.29-0.72; P = .0078) lower in college compared with that in the 1960s. In contrast, the risk of nontraumatic fatalities in football players in the 2010s was 0.7 (95% CI, 0.50-0.98; P = .0353) in HS and 0.9 (95% CI, 0.46-1.72; P = .7413) in college compared with that in the 1960s. Since 2000, the risk of nontraumatic fatalities has been 1.89 (95% CI, 1.42-2.51; P < .001) and 4.22 (95% CI, 2.04-8.73; P < .001) higher than the risk of traumatic fatalities at the HS and college levels, respectively. During the 20 years studied, there were 187 nontraumatic fatalities (average, 9.4 per year). 
The causes of death were sudden cardiac arrest (57.7%), EHS (23.6%), ES with SCT (12.1%), asthma (4.9%), and hyponatremia (1.6%). The risk of a nontraumatic fatality was 4.1 (95% CI, 2.8-5.9; P < .0001) higher in NCAA compared with HS athletes. There was no difference in the risk of an EHS fatality in NCAA athletes (0.86 [95% CI, 0.17-4.25]; P = .85) after implementation in 2003 of the NCAA football out-of-season model. The risk of an ES with SCT fatality in Division I athletes was significantly lower after the 2010 NCAA SCT screening bylaw was implemented (0.12 [95% CI, 0.02-0.95]; P = .04). Conclusion: Since the 1960s, the risk of nontraumatic fatalities has declined minimally compared with the reduction in the risk of traumatic fatalities. Current HS and college nontraumatic fatality rates are significantly higher than rates of traumatic fatalities. The 2003 NCAA out-of-season model has failed to significantly reduce EHS fatalities. The 2010 NCAA SCT screening bylaw has effectively prevented ES with SCT fatalities in NCAA Division I football.
