Shift Length and Shift Length Preference Among Acute Care Surgeons

2021 ◽  
pp. 000313482110257
Author(s):  
John Kepros ◽  
Susan Haag ◽  
Karen Lewandowski ◽  
Frank Bauer ◽  
Hirra Ali ◽  
...  

Background Work hour restrictions have been imposed by the Accreditation Council for Graduate Medical Education since 2003 for medical trainees. Many acute care surgeons currently work longer shifts, but their preferred shift length is not known. Methods The purpose of this study was to characterize the distribution of current shift lengths among trauma and acute care surgeons and to identify the surgeons’ preferred shift length. Data were collected via a nationally administered questionnaire. Frequencies and percentages are reported for categorical variables, and medians and means with SDs are reported for continuous variables. A chi-square test of independence was performed to examine the relation between call shift choice and trauma center level (level I and level II), age, and gender. Results Data from 301 surgeons in 42 states, including high-level trauma centers, were analyzed. Assuming the number of trauma surgeons in the United States is 4129, a sample of 301 gives the survey a 5% margin of error. The median age was 43 years (M = 46, SD = 9.44) and 33% were female. Currently, only 23.3% of acute care surgeons work a 12-hour shift, although 72% prefer the shorter shift. The preference for shorter shifts was statistically significant. There was no significant difference between call shift length preference and trauma center level, age, or gender. Conclusion Most surgeons currently work shifts longer than 12 hours. Yet there was a preference for 12-hour shifts, indicating a gap between current and preferred shift length. These findings have the potential to substantially impact staffing models.
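The stated 5% margin of error for a sample of 301 from a population of 4129 can be reproduced with the standard formula for a proportion, including the finite population correction. The sketch below is illustrative only (it assumes the worst-case p = 0.5 and z = 1.96; it is not the study's code):

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Margin of error for a survey proportion with finite population correction.
    n: sample size, N: population size, p: assumed proportion, z: critical value."""
    se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))       # finite population correction
    return z * se * fpc

moe = margin_of_error(n=301, N=4129)
print(round(moe * 100, 1))  # ≈ 5.4%, consistent with the reported ~5%
```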

Circulation ◽  
2020 ◽  
Vol 141 (Suppl_1) ◽  
Author(s):  
Gloria Kim ◽  
Arati A Gangadharan ◽  
Matthew A Corriere

Introduction: Some approaches to frailty screening use diagnostic or laboratory data that may be incomplete. Grip strength can identify weakness, a component of phenotype-based frailty assessment. We compared grip strength as a reductionist, phenotype-based approach to frailty screening with comorbidity and laboratory-based alternatives. Hypothesis: Grip strength and categorical weakness are correlated with the modified frailty index-5 (mFI-5) and lab values associated with frailty. Methods: Weakness based on grip, BMI, and gender was compared with mFI-5 comorbidities and lab values. Patients with at least 3/5 mFI-5 comorbidities were considered frail. Lab data collected within 6 months of grip measurement were assessed. Associations were evaluated using multivariable models and kappa. Results: 2,597 patients had grip strength measured over 5 months. Mean age was 64.4±14.6 years, mean BMI was 29.5±6.9; 46% were women, and 87% were white. Prevalent comorbidities included hypertension (28%), CHF (22%), diabetes (29%), and COPD (26%); 9% were functionally dependent. 34% were weak, but only 13% were frail based on mFI-5. Hemoglobin, creatinine, and CRP differed significantly based on weakness (Table). Laboratory data were missing for 36%-95% of patients. Multivariable models identified significant associations between weakness, hemoglobin, and all mFI-5 comorbidities. Categorical agreement between weakness and frailty was limited (kappa = 0.09; 95% CI 0.064-0.123). Conclusion: Weakness based on grip strength provides a practical, inexpensive approach to risk assessment, especially when incomplete data exclude other approaches. Comorbidity-based assessment categorizes many weak patients as non-frail. Table. Demographic, laboratory values, and comorbidities by categorical weakness based on grip strength below the 20th percentile.
Mean values for continuous variables by weakness, adjusted for gender and BMI; p-values from t-tests. Frequencies and total percentages for categorical variables; p-values from chi-square tests.
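The limited categorical agreement quoted above (kappa = 0.09) is Cohen's kappa on a 2x2 weak-vs-frail table. A minimal sketch (our illustration, with made-up counts chosen so that roughly 34% are weak and 13% frail in n = 2,597, giving a kappa near the reported value):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table.
    a: both classifications positive, b: first only, c: second only, d: both negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                                      # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts (NOT the study's data): weak & frail, weak only,
# frail only, neither.
k = cohens_kappa(150, 733, 188, 1526)
print(round(k, 2))  # ≈ 0.07 with these made-up counts
```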


Objective: Our study aimed to determine whether six-month mortality among hepatitis B and C patients undergoing cardiac surgery varied by gender post-operatively. Secondarily, we highlighted the significant differences between the two genders in their pre-operative, operative, and post-operative characteristics and identified significant predictors of mortality. Methods: We obtained approval from the Institutional Review Board of the Dow University of Health Sciences and conducted a retrospective study targeting hepatitis B and C patients who had undergone cardiac surgery between January 2013 and October 2018 at the Ruth Pfau Civil Hospital, Karachi, Pakistan. The data were analysed using the Statistical Package for Social Sciences (Version 20.0). The population was divided into two groups based on gender. The chi-squared test was used to compare categorical variables, and odds ratios with 95% confidence intervals were also computed. Differences in continuous variables were assessed using the independent t-test or Mann-Whitney U test. Results: There was no significant difference in six-month mortality between the genders, with 22.5% mortality in males and 20.0% mortality in females. Post-operatively, males had higher creatinine levels (p=0.003) but females tended to have a longer ward stay (p=0.032). On multivariate logistic regression, duration of intubation (aOR=1.131, 95% CI: 1.002-1.275), cardiopulmonary bypass time (aOR=1.030, 95% CI: 1.002-1.059) and duration of ward stay (aOR=1.100, 95% CI: 1.031-1.175) were found to be significant predictors of mortality. Conclusion: There is no association between six-month mortality and gender among hepatitis B and C patients undergoing cardiac surgery. Additionally, duration of intubation, cardiopulmonary bypass time and duration of ward stay are significant predictors of six-month mortality.
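Adjusted odds ratios like those reported (e.g., aOR = 1.131, 95% CI 1.002-1.275 for duration of intubation) come from exponentiating a logistic-regression coefficient and its confidence bounds. The coefficient and standard error below are illustrative back-derivations from the published interval, not study output:

```python
import math

def or_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds) and its standard
    error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# beta = ln(1.131) and se reconstructed from the reported CI width (assumption).
aor, lo, hi = or_with_ci(beta=0.1231, se=0.0615)
print(round(aor, 3), round(lo, 3), round(hi, 3))  # ≈ 1.131 1.003 1.276
```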


2021 ◽  
Vol 43 (2) ◽  
pp. 50-61
Author(s):  
A. Yakubu ◽  
M. M. Achapu

Goat farming is a veritable source of livelihood for many rural families in Africa. This study aimed at determining prevailing production systems and breeding objectives of rural goat producers in north central Nigeria. A total of 180 rural goat keepers, corresponding to 60 per State (Nasarawa, Benue and Plateau), were randomly sampled. Primary data (socioeconomics of respondents, reasons for keeping goats, flock structure, management system, productivity and breeding practices) were collected through individual structured questionnaire administration. Cross tabulations and chi-square (χ²) statistics were used to compare categorical variables, while rank means, arithmetic means and standard deviations were calculated for within- and between-state comparisons of the continuous variables. While more goat producers were involved in crop farming in Benue State (43.6%), only 34.5% and 21.8% engaged in farming in Plateau and Nasarawa States, respectively. Goats were kept for income generation, milk, meat and cultural/religious functions by about 61.1%, 12.8%, 15.0% and 6.1% of the producers, while the relative importance respondents gave to the different objectives varied significantly (chi-square = 6.62; P < 0.05) across the States. The average flock sizes of goats for Nasarawa (9.68±5.63), Benue (8.25±4.73) and Plateau (8.80±3.98) were not significantly (P > 0.05) different. The semi-intensive system predominated (P < 0.01). Productivity indices showed no significant difference (P > 0.05) in age at parturition, number of kids per Sahel doe, or lifespan of goats. Among all the breeding traits across the three States, only disease resistance varied (P < 0.01). Disease resistance, survival, fertility, number of offspring and body size were similarly preferred as production traits (P > 0.05). However, growth (83.52-97.68 mean ranks) (Plateau State) and cultural importance (75.28-104.70 mean ranks) (Benue State) varied across the States (P < 0.05 and P < 0.01, respectively).
The present information will be useful in understanding the farmers' production objectives, management and breeding practices as a first step in designing a sustainable breeding programme for rural farmers in the study areas.
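The between-state comparisons above rely on the Pearson chi-square statistic for a contingency table of observed counts. A minimal, self-contained sketch (ours, not the authors' analysis code):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat
```

The statistic is then compared against a χ² distribution with (r − 1)(c − 1) degrees of freedom to obtain the p-value.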


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S672-S673
Author(s):  
John B McCoury ◽  
Randolph V Fugit ◽  
Mary T Bessesen

Abstract Background Randomized controlled trials have shown that procalcitonin (PCT)-based algorithms for antibacterial therapy reduce antimicrobial use and improve survival. Translation of PCT algorithms to clinical settings has often been unsuccessful. Methods We implemented a PCT algorithm, supported by focus groups prior to introduction of the PCT test in April 2016 and clinician training on the PCT algorithm for testing and antimicrobial management after test roll-out. The standard PCT algorithm period (SPAP) was defined as October 1, 2017 to March 31, 2018. The antimicrobial stewardship team (AST) initiated an AST-supported PCT algorithm (ASPA) in August 2018. The AST prospectively evaluated patients admitted to the ICU for sepsis and ordered PCT per algorithm if the primary medical team had not ordered it. The ASPA period was defined as October 1, 2018 to March 31, 2019. The AST conducted concurrent review and feedback for all antibiotic orders during both periods, using the PCT result when available. We compared patient characteristics and outcomes between the two periods. The primary outcome was adherence to the PCT algorithm, with subcomponents of appropriate PCT orders and antimicrobial discontinuation. Secondary outcomes were total antibiotic days, excess antibiotic days avoided, ICU and hospital length of stay (LOS), 30-day readmission and mortality. Continuous variables were analyzed with Student's t-test. Categorical variables were analyzed with the chi-square or Mann–Whitney test, as appropriate. Results There were 35 cases in the SPAP cohort and 57 cases in the ASPA cohort. There were no differences in demographics or infection site (Table 1). Baseline PCT was ordered in 57% of the SPAP cohort and 90% of the ASPA cohort (P = 0.0006) (Table 2). Follow-up PCT was performed in 23% of SPAP and 76% of ASPA (P < 0.0001). Antibiotics were discontinued per algorithm in 2/35 (7%) in the SPAP cohort and 25/57 (44%) in the ASPA cohort (P < 0.0001).
Median total antibiotic days were 7 (IQR 4–10) in the SPAP cohort and 5 (IQR 2–7) in the ASPA cohort (P = 0.02). There was no significant difference in LOS, ICU LOS, 30-day readmission, or mortality (Table 4). Conclusion A PCT algorithm successfully implemented by an AST was associated with a significant decrease in total antibiotic days. There were no differences in mortality or LOS. Disclosures All authors: No reported disclosures.
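The key adherence comparison (antibiotics discontinued per algorithm in 2/35 vs 25/57) can be checked with a standard two-proportion z-test; this is our illustrative sketch, not the authors' code, and it uses the normal approximation rather than the exact chi-square machinery:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z-test for proportions with pooled standard error.
    Returns the z statistic and the two-sided p-value (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_val = math.erfc(abs(z) / math.sqrt(2))   # two-sided tail probability
    return z, p_val

z, p_val = two_proportion_z(2, 35, 25, 57)
# z ≈ -3.9, p < 0.0001, consistent with the reported significance
```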


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
David Rosenbaum-Halevi ◽  
Sujan T Reddy ◽  
Alyssa D Trevino ◽  
Muhammad Bilal Tariq ◽  
Mahan Shahrivari ◽  
...  

Introduction: Telemedicine (TM) is increasingly implemented in community hospitals for acute ischemic stroke (AIS). The efficiency of TM in facilitating thrombectomy (IAT) is unknown. We addressed this question by studying our spoke hospitals, which are staffed by both in-person (IP) consultation (Day: 8am-5pm) and TM (Night: 5pm-8am), to analyze differences between TM and IP, comparing to our university hub, which has IP staffing day and night. Methods: We performed a retrospective analysis from 3/2016 to 3/2019 of all IAT cases directly admitted to 4 IAT-capable centers (1 hub + 3 spokes) in our system. Demographic, clinical, and time metrics were analyzed. The primary outcome was door-to-groin (DTG) time. Continuous variables were analyzed with the Wilcoxon rank sum test, and categorical variables with the chi-square or Fisher’s exact test. Results: Table 1 summarizes the cohort. Eval-to-tPA (ETPA) time was faster at spokes vs hub (p < 0.0001), with no significant difference in DTG between spoke and hub (p = 0.444). At spokes, while door-to-tPA (DTPA) times were no different between IP and TM, IP achieved faster DTG times (p < 0.0001) (Fig. 1A). DTG was equal during day vs. night at the hub. At the spokes, day (IP) DTG times were faster than night (TM) at some but not all spokes (Fig. 1B). tPA administration did not delay DTG at either the hub or the spokes (Fig. 1C). At spokes, TM-tPA cases were associated with faster DTG than TM-no-tPA (Fig. 1D). Conclusions: While no difference is noted between TM and IP in rapid tPA treatment, our data show delayed DTG at spokes during the TM (night) service. While DTG in TM was prolonged, differences in spoke metrics imply that availability of staff and resources plays a significant role. Further analysis is needed to identify factors that prolong DTG at a site-specific level.
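The time-metric comparisons above use the Wilcoxon rank sum (Mann-Whitney U) test, which compares two groups without assuming normality. A minimal sketch of the U statistic via pairwise comparisons (our illustration; real analyses would use a library routine that also computes the p-value):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: number of (x_i, y_j) pairs with x_i > y_j,
    counting ties as 0.5. U near len(x)*len(y)/2 suggests similar distributions."""
    u = 0.0
    for xi in x:
        for yi in y:
            if xi > yi:
                u += 1
            elif xi == yi:
                u += 0.5
    return u
```

For example, completely separated samples give U = 0 or U = len(x) * len(y), the two extremes of the statistic.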


Author(s):  
Alexander C Fanaroff ◽  
Shuang Li ◽  
Vincent Miller ◽  
Laura Webb ◽  
Ann Marie Navar ◽  
...  

Background: Low patient participation in clinical research undermines the generalizability of findings. Conducting informed consent by video rather than a traditional text format may enhance the appeal of research and break down barriers to participation. Methods: The Patient and Provider Assessment of Lipid Management (PALM) Registry enrolled patients at U.S. cardiology, endocrinology, and primary care clinics to evaluate cholesterol management practices. PALM investigators developed an iPad-based video informed consent tool that included video segments totaling 8 minutes, which patients navigated through a “gamified” interface. At sites whose IRB did not approve the video tool, participants read a 6-page text consent form on the iPad. Characteristics of sites and site activation times were compared between sites that did and did not use the video consent tool using Pearson’s chi-square test for categorical variables and Wilcoxon’s signed rank test for continuous variables. Results: Of 140 sites that enrolled 7904 patients in PALM, 60 (42.9%) used the video informed consent tool. Compared with sites using text consent, sites using the video consent tool were more often rural (16.7 vs. 3.8%, p = 0.01) and more often used a central IRB (91.7 vs. 80.0%, p = 0.06). Sites using video consent enrolled a greater proportion of patients who were ≥ 75 years old (27.5 vs. 23.6%, p < 0.001) or non-white (17.7 vs. 14.2%, p < 0.001). Sites using video consent had shorter times from site approach to first patient enrollment (Figure). Median (IQR) enrollment was 33 (12, 98) patients at sites using video consent versus 24 (12, 86) at sites using text consent only (p = 0.54); there was also no significant difference in median weekly enrollment rate (2.9 [1.1, 7.5] vs. 2.8 [1.3, 6.6], p = 0.73).
Conclusions: In this early experience with video consent in a multicenter registry, availability of video informed consent was associated with greater enrollment of older and non-white patients, faster speed to first patient enrolled, and numerically but not significantly more rapid enrollment compared with text informed consent.


2019 ◽  
Vol 3 (4) ◽  
pp. 154-158
Author(s):  
Simran Kaur ◽  
Aseem Singh ◽  
Rahul Singh

INTRODUCTION: The presence of Oral Mucosal Lesions (OMLs) in the oral mucosa can lead to unwanted consequences and is mostly due to tobacco use. AIM: To document the prevalence of OMLs among patients of Delhi NCR and provide health education counselling to those under the grip of this evil practice. MATERIAL AND METHODS: We retrospectively analyzed the data of a total of 402 subjects visiting various screening camps in Delhi NCR and recalled a total of 174 patients, of whom 161 reported back to us for further diagnosis and screening of OMLs. The examination of patients in the camp was an ADA Type III examination. All patients were given health education, while tobacco users were also given specialized one-on-one health education regarding tobacco and its ill effects as well as techniques for cessation. A descriptive analysis of the sample was first performed using means (±standard deviation (SD)) for continuous variables and frequencies (proportions) for categorical variables. The chi-square test was used for statistical analysis and to find significant differences, if any. RESULTS: Among the 402 subjects screened, the mean age was 33.24±6.74 years and most of the study population belonged to the age group of 25-60 years [178 (44.3%)]. 301 (74.8%) of the study population were males. The main chief complaint was a periodontal problem [187 (46.5%)], while 15 patients (3.73%) came for a regular check-up. The prevalence of leukoplakia was found to be 8.70% and that of OSMF was found to be 6.21%. A significant difference was seen for gingivitis with respect to age and gender (p<.05), while leukoplakia and frictional keratosis were significantly associated with gender. In all significant cases, males were more prone to these OMLs than females. CONCLUSION: It is advised that regular oral health drives and counselling sessions be arranged for the people of Delhi NCR to reduce the burden of OMLs.
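Prevalence figures like the 8.70% for leukoplakia are simple proportions, but small samples benefit from an interval estimate; the Wilson score interval behaves well even near 0%. The sketch below is ours (the count 14/161 ≈ 8.70% is a reconstruction consistent with the reported prevalence; the interval is not from the study):

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score confidence interval (95% by default) for a binomial proportion."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(14, 161)  # 14/161 ≈ 8.70%, matching the reported prevalence
```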


2021 ◽  
Vol 30 (03) ◽  
pp. 147-151
Author(s):  
Alia Ahmed ◽  
Usman Anwer Bhatti

OBJECTIVE: The objective of this study was to compare the visuospatial and psychomotor skills of second-year pre-clinical dental students with final-year dental students using an exercise in dentinal pin placement. METHODOLOGY: A total of 120 BDS undergraduate students who had completed the second- or final-year Operative Dentistry rotation were included. Students who had not consented to participate, had missed the practical demonstration, or whose dentinal pins were misplaced after becoming loose from the tooth were excluded. Participating students placed the dentinal pins, following which Adobe Photoshop (version CC 2014) was used to analyze photographs of the radiographs taken, in two dimensions. Parameters assessed were pulpal perforations, periodontal perforations and pin angulation. The independent sample t-test was used to compare continuous variables, while the chi-square test was used to test associations for categorical variables. RESULTS: Final-year students fared better in all categories of pin placement except periodontal perforation, which was the same for both years. Statistically significant differences in pin placement angulation were observed between the two student groups in the mesiodistal direction (p = 0.001) and in the buccolingual direction (p < 0.001). CONCLUSION: There is a significant difference in the psychomotor and visuospatial skills of second-year pre-clinical students when compared with final-year clinical undergraduate students. KEYWORDS: curriculum, dental, learning, operative, students.
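The angulation comparison uses an independent-samples t-test. A minimal sketch of the Welch variant (which does not assume equal variances; our illustration from summary statistics, not the study's code):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom for two independent samples,
    given each group's mean, standard deviation, and size."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df
```

The t statistic is then referred to a t distribution with the computed (non-integer) degrees of freedom to obtain the p-value.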


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Kam Kalantar-Zadeh ◽  
Christine Baker ◽  
J Brian Copley ◽  
Daniel Levy ◽  
Stephen Berasi ◽  
...  

Abstract Background and Aims The burden of disease associated with FSGS has not been well characterized, especially with regard to health care resource utilization (HCRU) and related costs. The aim of this study was to evaluate all-cause HCRU and estimate associated costs in patients with FSGS compared with a matched non-FSGS cohort; a secondary aim was to evaluate the impact of nephrotic range proteinuria on these outcomes. Method Data were from the Optum Clinformatics® Data Mart Database. Patients with ≥ 1 claim (1st claim = index event) for FSGS between April 2016 and December 2018 were identified based on ICD-10-CM codes and matched 1:2 (FSGS:controls) on index date, age, sex, and race to non-FSGS controls; continuous enrollment 6 months pre- and 12 months post-index was required. FSGS nephrotic range (either UPCR >3000 mg/g or ACR >2000 mg/g) and non-nephrotic subpopulations were also identified. Quan-Charlson Comorbidity Index (CCI) and individual comorbidities at baseline, and 12-month post-index all-cause HCRU and associated costs (per patient per year [PPPY]) as well as medication prescriptions related to FSGS treatment were compared between the matched cohorts and between the FSGS subpopulations; t-tests were used for continuous variables and chi-square tests for categorical variables. Results 844 patients with FSGS were matched with 1688 non-FSGS controls; 57.4% were male, 56.9% white, with a mean (SD) age of 54.7 (18.4) years. Mean (SD) CCI was higher in the FSGS cohort relative to matched controls (2.72 [2.12] vs 0.55 [1.29]; P < .0001), with the prevalence of most individual comorbidities higher in the FSGS cohort. Only 308 FSGS patients (36.5%) had UPCR or ACR tests with available results during the review period; 112 (36.4%) were in the nephrotic range and 196 (63.6%) were non-nephrotic.
The FSGS cohort was characterized by higher rates of all-cause HCRU across resource categories (all P < .0001) (Table 1); outpatient visits were the most frequently used category (99.1% vs 69.0%), followed by prescription medications. Among patients who used these resources, units of use were significantly higher in FSGS vs matched controls except for length of stay (Table 1). Readmission rates following the 1st post-index hospitalization were higher in the FSGS cohort vs matched controls at 30 days (16.1% vs 6.0%; P < .05) and 365 days (39.1% vs 22.9%; P < .05). Glucocorticoids were the most frequently prescribed FSGS-related medication in both cohorts, with a higher rate in FSGS vs matched controls (50.6% vs 23.3%; P < .0001); other FSGS-related medications were infrequently prescribed (< 14%). Inpatient, outpatient, and prescription costs were higher in the FSGS cohort vs matched controls (all P < .0001), resulting in mean total annual medical costs of $59,753 vs $8,431 PPPY (P < .0001) that were driven by outpatient costs (Fig. 1A). Nephrotic range proteinuria was associated with higher all-cause inpatient, outpatient, and prescription costs vs non-nephrotic patients (all P < .0001; Fig. 1B), resulting in higher total costs ($70,481 vs $36,099 PPPY; P < .0001). A higher proportion of nephrotic range patients were prescribed FSGS-modifying medications (73.2% vs 54.1%; P = 0.001), with glucocorticoids the most frequent medication. However, 26.8% of nephrotic range patients were not prescribed any FSGS-related medications. Conclusion FSGS is associated with significant clinical and economic burdens, with total annual medical costs > 7-fold higher than matched controls, driven by outpatient costs. The presence of nephrotic range proteinuria substantially and significantly increased the economic burden.
New treatment modalities leading to lower rates of proteinuria may help improve patient outcomes while reducing HCRU and their associated costs.
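The ">7-fold" claim follows directly from the reported PPPY means; a quick arithmetic check (values copied from the abstract):

```python
# Mean total annual medical costs, per patient per year, from the abstract.
costs = {"FSGS": 59_753, "control": 8_431}
fold = costs["FSGS"] / costs["control"]
print(round(fold, 1))  # ≈ 7.1, consistent with the reported ">7-fold"
```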


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S198-S198
Author(s):  
Michael Henry ◽  
Milan Kapadia ◽  
Joseph Nguyen ◽  
Barry Brause ◽  
Andy O Miller

Abstract Background There is conflicting evidence characterizing the difference in pathogens that cause hip and knee prosthetic joint infection (PJI). A possible difference in microbiology may inform the choice of antibiotics for prophylaxis and empiric treatment. We sought to analyze a large cohort of PJIs to determine whether there was a significant difference in pathogens between joints. Methods A retrospective cohort of hip and knee PJIs from 2008 to 2016 was identified by ICD and surgical codes. The PJI pathogen was identified from synovial or intra-articular tissue cultures. Student’s t-test was used to compare continuous variables. Chi-square tests were used to compare categorical variables by joint. Results 807 PJI cases were identified, including 444 knees and 363 hips. There were no significant differences between hip and knee PJIs in age, sex, history of PJI, rheumatoid arthritis, Charlson comorbidity index or laterality. There was a higher frequency of diabetes in knee PJIs (25.3%) compared with hip PJIs (15.7%), P < 0.001. No significant difference was found in the prevalence of fungal, staphylococcal (including Staphylococcus aureus), streptococcal, or enterococcal pathogens between hip and knee PJIs. Conclusion In this single-center cohort, hip and knee PJIs are infected with similar pathogens. Multi-site studies are needed to characterize the microbiology of PJIs at a larger scale. Disclosures All authors: No reported disclosures.

