The clinical utility of serum leukocyte counts for initial antibiotic use in acute mastitis: the experience from China

Author(s):  
Li Jiang ◽  
Jianjiang Fang ◽  
Jinhua Ding

Abstract Objectives: This study aimed to evaluate the clinical utility of serum leukocyte counts (SLCs) for initial antibiotic use in Chinese women with mastitis who presented to the emergency department. Materials and Methods: Electronic medical records of breastfeeding women with mastitis were reviewed. Patients were divided into two groups according to the level of SLCs: slightly elevated leukocyte counts (SELC, 1.0-1.5*10^9 cells/L) and markedly elevated leukocyte counts (MELC, >1.5*10^9 cells/L). Treatment outcomes, including rates of treatment failure and breast abscess formation, were compared. Results: The rates of treatment failure and breast abscess were 12.7% and 7.1% in the overall population and 7.7% and 6.4% in the MELC group, respectively. In the SELC group, treatment failure was observed in 29.7% of patients without antibiotics and 6.4% of patients with antibiotics, a significant difference (OR=4.207, 95% CI 1.318-13.424); breast abscess was observed in 12.5% and 2.1% of patients, respectively, a non-significant difference (OR=6.571, 95% CI 0.793-54.481). Mean time to normal appearance of the breast and to normal temperature was shorter in patients given antibiotics (2.5 ± 1.1 and 3.0 ± 1.3 days) than in patients without antibiotics (3.8 ± 1.7 and 4.3 ± 2.1 days), p<0.001. Conclusion: Patients with SELC or MELC had better clinical outcomes when antibiotics were administered initially than those not given antibiotics, which indicates that the SLC is an easy and practical reference index for guiding antibiotic use, and patients with an elevated SLC should be treated with antibiotics. Key Words: mastitis, antibiotic, serum leukocyte count
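
As a rough illustration of the group comparisons reported above, the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2×2 table of treatment failures. The counts and function name are hypothetical placeholders, not the study data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = failures without antibiotics, b = non-failures without antibiotics,
    c = failures with antibiotics,    d = non-failures with antibiotics."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not the study data)
print(odds_ratio_ci(a=11, b=26, c=3, d=44))
```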

Antibiotics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 606
Author(s):  
Fauna Herawati ◽  
Rika Yulia ◽  
Bustanul Arifin ◽  
Ikhwan Frasetyo ◽  
Setiasih ◽  
...  

The inappropriate use or misuse of antibiotics, particularly by outpatients, increases antibiotic resistance. A lack of public knowledge about “responsible use of antibiotics” and “how to obtain antibiotics” is a major cause of this. This study aimed to assess the effectiveness of an educational video about antibiotics and antibiotic use, shown in two public hospitals in East Java, Indonesia, in increasing outpatients’ knowledge. A quasi-experimental design with a one-group pre-test/post-test was used, carried out from November 2018 to January 2019. The study population consisted of outpatients to whom antibiotics were prescribed. Participants were selected using a purposive sampling technique; 98 outpatients at MZ General Hospital in the S regency and 96 at SG General Hospital in the L regency were included. A questionnaire was used to measure the respondents’ knowledge and consisted of five domains: the definition of infections and antibiotics, obtaining antibiotics, directions for use, storage instructions, and antibiotic resistance. The knowledge test score was the total score on a Guttman scale (dichotomous “yes”/“no” answers). To determine the significance of the difference in knowledge before and after the educational video, and of the difference in knowledge scores between hospitals, the (paired) Student’s t-test was applied. The educational video significantly improved outpatients’ knowledge, which increased by 41% at MZ General Hospital and by 42% at SG General Hospital. It was concluded that an educational video is a useful method to improve outpatients’ knowledge regarding antibiotics.
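
The abstract reports a paired Student’s t-test on pre- and post-video Guttman-scale knowledge scores. A minimal sketch of that comparison is below, using entirely hypothetical scores; it is not the study’s data or analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post knowledge totals (Guttman scale), for illustration only
pre = np.array([8, 10, 7, 9, 12, 6, 11, 9, 8, 10])
post = np.array([12, 14, 10, 13, 15, 11, 14, 12, 13, 14])

# Paired t-test: does watching the video change the mean knowledge score?
t_stat, p_value = stats.ttest_rel(pre, post)
pct_increase = 100 * (post.mean() - pre.mean()) / pre.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean increase = {pct_increase:.0f}%")
```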


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S86-S86
Author(s):  
Ann F Chou ◽  
Yue Zhang ◽  
Makoto M Jones ◽  
Christopher J Graber ◽  
Matthew B Goetz ◽  
...  

Abstract Background About 30–50% of inpatient antimicrobial therapy is sub-optimal. Health care facilities have used various antimicrobial stewardship (AS) strategies to optimize appropriate antimicrobial use, improve health outcomes, and promote patient safety. However, little evidence exists on the relationships between AS strategies and antimicrobial use. This study examined the impact of changes in AS strategies on antimicrobial use over time. Methods This study used data from the Veterans Affairs (VA) Healthcare Analysis & Informatics Group (HAIG) AS survey, administered at 130 VA facilities in 2012 and 2015, and antimicrobial utilization data from the VA Corporate Data Warehouse. Four AS strategies were examined: having an AS team, a feedback mechanism on antimicrobial use, infectious diseases (ID) attending physicians, and clinical pharmacists on wards. The change in each AS strategy was computed as the difference in the presence of that strategy in a facility between 2012 and 2015. The outcome was the difference between antimicrobial use per 1000 patient-days in 2012–2013 and in 2015–2016. Using multiple regression analysis, the change in antimicrobial use was estimated as a function of changes in AS strategies, controlling for ID human resources and organizational complexity. Results Of the 4 strategies, only the change in availability of AS teams had an impact on antimicrobial use. Compared to facilities with no AS team at either time point, antibiotic use decreased by 63.9 uses per 1000 patient-days in facilities that did not have an AS team in 2012 but implemented one by 2015 (p=0.0183). Facilities that had an AS team at both time points decreased use by 62.2 per 1000 patient-days (p=0.0324). Conclusion The findings showed that AS teams reduced inpatient antibiotic use over time. While changes in having feedback on antimicrobial use and clinical pharmacists on wards were associated with reduced antimicrobial use between 2012 and 2015, the differences were not statistically significant. These strategies may already be part of a comprehensive AS program and employed by AS teams. In further development of stewardship programs within healthcare organizations, the association between AS teams and antibiotic use should inform program design and implementation. Disclosures All Authors: No reported disclosures
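
The methods describe regressing the facility-level change in antimicrobial use on the change in each AS strategy, with controls. A minimal sketch of such a model follows, assuming a simple categorical coding of the AS-team transition and hypothetical facility data; all variable names and values are illustrative only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical facility-level data for illustration only.
# as_team_change codes the 2012 -> 2015 transition in having an AS team.
df = pd.DataFrame({
    # change in antimicrobial use per 1000 patient-days between the two periods
    "use_change": [-60.2, -55.0, 4.1, -8.3, 10.5, -48.7, -52.3, 2.0, -5.5, 12.0, -58.1, -3.3],
    "as_team_change": ["added", "kept", "never", "dropped", "never", "added",
                       "kept", "never", "dropped", "never", "added", "dropped"],
    "id_fte":     [1.0, 2.5, 0.5, 1.5, 0.8, 2.0, 2.2, 0.6, 1.2, 0.7, 1.8, 1.1],  # ID staffing (control)
    "complexity": [3, 5, 1, 2, 1, 4, 5, 1, 2, 1, 4, 2],                          # facility complexity (control)
})

# OLS of change in use on change in AS-team availability, controlling for ID staffing
# and facility complexity; "never" (no team at either time point) is the reference level.
model = smf.ols(
    'use_change ~ C(as_team_change, Treatment(reference="never")) + id_fte + complexity',
    data=df,
).fit()
print(model.params)
```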


2010 ◽  
Vol 55 (3) ◽  
pp. 1114-1119 ◽  
Author(s):  
Jia Liu ◽  
Michael D. Miller ◽  
Robert M. Danovich ◽  
Nathan Vandergrift ◽  
Fangping Cai ◽  
...  

ABSTRACT Raltegravir is highly efficacious in the treatment of HIV-1 infection. The prevalence of low-frequency resistance mutations among HIV-1-infected patients not previously treated with raltegravir, and their impact on virologic outcome, have not been fully established. Samples from HIV treatment-experienced patients entering a clinical trial of raltegravir treatment were analyzed using a parallel allele-specific sequencing (PASS) assay that assessed six primary and six secondary integrase mutations. Patients who achieved and sustained virologic suppression (success patients, n = 36) and those who experienced virologic rebound (failure patients, n = 35) were compared. Patients who experienced treatment failure had twice as many raltegravir-associated resistance mutations prior to initiating treatment as those who achieved sustained virologic success, but the difference was not statistically significant. The frequency of nearly all detected resistance mutations was less than 1% of the viral population, and mutation frequencies were similar between the success and failure groups. Expansion of pre-existing mutations (one primary and five secondary) was observed in 16 treatment-failure patients in whom minority resistance mutations were detected at baseline, suggesting that they might play a role in the development of drug resistance. Two or more mutations were found in 13 patients (18.3%), but multiple mutations were not present in any single viral genome by linkage analysis. Our study demonstrates that low-frequency primary raltegravir-resistance mutations were uncommon, while minority secondary raltegravir-resistance mutations were more frequently detected in patients naïve to raltegravir. Additional studies in larger populations are warranted to fully understand the clinical implications of these mutations.


2017 ◽  
Vol 11 (1) ◽  
pp. 1041-1048 ◽  
Author(s):  
Mehmet Bekir Unal ◽  
Kemal Gokkus ◽  
Evrim Sirin ◽  
Eren Cansü

Objective: The main objective of this study was to evaluate the suitability of lateral antebrachial cutaneous nerve (LACN) autografts for acute or delayed repair of segmental digital nerve injuries. Patients and Methods: Thirteen digital nerve defects in 11 patients, treated with interposition of an LACN graft harvested from the ipsilateral extremity, were included in the study. The mean follow-up period was 35.7 months. The mean time from injury to grafting was 53.3 days. The mean two-point discrimination test (2PDT) and Semmes-Weinstein monofilament test (SWMT) values of the injured and uninjured fingers at the end of the follow-up period were compared with the paired t-test. The correlation between defect length and the difference in 2PDT and SWMT values between the uninjured and injured fingers at the end of the follow-up period was evaluated with Pearson correlation analysis. Results: The mean 2PDT and SWMT values were approximately 5.92 and 3.52, respectively, which can be interpreted as between normal and diminished light touch. Defect length and the percentage difference in SWMT values were positively and statistically significantly correlated. The mean length of the interposed nerve grafts was 18.5 mm. Patient age was not statistically correlated with the mean 2PDT and SWMT values or with the percentage differences in 2PDT and SWMT. Conclusion: Based on the sensory recovery at the recipient site and the negligible sensory deficit at the harvest site, we suggest that the lateral antebrachial cutaneous nerve may be a valuable graft option for digital nerve defects.
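
A minimal sketch of the Pearson correlation analysis described above (defect length versus the percentage difference in SWMT between injured and uninjured fingers) is given below, with entirely hypothetical values.

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration only: defect/graft length (mm) and the
# percentage difference in SWMT between injured and uninjured fingers at final follow-up.
defect_length_mm = np.array([12, 15, 18, 20, 22, 25, 15, 17, 21, 24, 19, 14, 23])
swmt_diff_pct    = np.array([ 5,  8, 10, 14, 15, 20,  7,  9, 16, 18, 12,  6, 17])

r, p = stats.pearsonr(defect_length_mm, swmt_diff_pct)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```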


Circulation ◽  
2018 ◽  
Vol 137 (suppl_1) ◽  
Author(s):  
Yao Jie Xie ◽  
Stanley Sai-chuen Hui ◽  
Suzanne C. Ho ◽  
Lorna Kwai Ping Suen

Background: Tai Chi is a body-mind exercise. Its prophylactic efficacy against migraine attacks remains largely unknown. The purpose of this study was to examine the effect of 12 weeks of Tai Chi training on migraine attack days per month, body composition, and blood pressure (BP) in a sample of Chinese women with episodic migraine. Method: A two-arm randomized controlled trial was designed. Eighty-two local women aged 18 to 65 years and diagnosed with episodic migraine were randomized to the Tai Chi group or a waiting-list control group. A modified 32-short-form Yang-style Tai Chi training, 1 hour per day, 5 days per week for 12 weeks, was adopted as the intervention. An additional 12-week follow-up was conducted. The control group received “delayed” Tai Chi training at the end of the trial. The differences in migraine days between 1 month before baseline, the 3rd month (12th week), and the 6th month (24th week) after randomization were examined. The changes in weight, body fat, and BP before and after the intervention were also analyzed. Results: Of 189 women screened, 82 eligible women completed the baseline assessment. After randomization, 9 women withdrew immediately; 40 in the Tai Chi group and 33 in the control group were included in the analysis. On average, women in the Tai Chi group had a reduction of 3.6 migraine attack days (95% CI: -4.7 to -2.5, P<0.01). Compared with the control group, the difference was statistically significant (P<0.001). The Tai Chi group also lost 0.6 kg of body weight and 0.6% of body fat by the 3rd month, and systolic BP decreased by 10.8 mmHg by the 6th month (all p<0.001). The between-group difference in systolic BP was -6.9 mmHg (95% CI: -11.6 to -2.1 mmHg, p<0.05), whereas no significant between-group differences were observed in weight or body fat at the 3rd month (all p>0.05). Within the Tai Chi group, the change in systolic BP was significantly correlated with the change in migraine days (P<0.05). Conclusion: The 12-week Tai Chi training significantly decreased the frequency of migraine attacks and improved systolic BP. The association between migraine attack reduction and BP improvement needs further investigation.


2017 ◽  
Vol 33 (S1) ◽  
pp. 131-132
Author(s):  
Gabriele Vittoria ◽  
Antonio Fascì ◽  
Matteo Ferrario ◽  
Giovanni Giuliani

INTRODUCTION: Payment-by-result agreements have been quite widely used in Italy to provide access to high-cost oncologic drugs and minimize the uncertainty of real-life benefits (1). The aim of this analysis was to review the Roche experience with Payment by Result (Pbr) in oncology and to investigate the relation between the timing of the evaluation of treatment failure and the Time to Off Treatment (TTOT) observed in Phase III clinical trials (2). METHODS: A retrospective analysis of the Roche payment-by-result schemes in place in Italy was conducted. For each drug included in the analysis, the following were collected: (i) the negotiated timing to assess treatment failure for payment by result, (ii) the median time to off treatment observed in clinical trials for the experimental drug, and (iii) the median time to off treatment observed in clinical trials for the control arm. The mean ratios between the timing to assess treatment failure for payment by result and the median time to off treatment observed for the experimental drug or for the control arm were calculated to identify potential correlations. A high level of correlation was expected if the ratio was close to 1 (±0.2). RESULTS: Roche products or different indications of the same product were identified as candidates for the analysis from 2008 to 2016. The timing for the evaluation of treatment failure for Pbr varied between 2 and 9 months, depending on the type of tumor and line of therapy. The mean Time to Payment By Result (TTPbr) / control-arm Time To Off Treatment (cTTOT) ratio was 1.16 (±0.37), while the mean TTPbr / experimental-arm Time To Off Treatment (eTTOT) ratio was 0.71 (±0.13). Analysis by time period shows that the mean TTPbr/cTTOT ratio for drugs negotiated from 2008 to 2015 and in 2016 was 1.07 and 1.39, respectively, whereas the mean TTPbr/eTTOT ratio was 0.63 and 1, respectively. CONCLUSIONS: A good level of correlation between TTPbr and cTTOT was found. This finding is in line with the methodology used by the Italian Medicines Agency so far, which leverages the cTTOT as the most appropriate proxy for assessing any incremental effect of a new drug compared to the previous standard of care. The analysis of TTPbr over time shows that in the first years of payment-by-result negotiation TTPbr was more closely related to the cTTOT, whereas in recent years it has moved closer to the experimental arm.
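
The calculation described in the methods reduces to taking, for each scheme, the ratio of the negotiated failure-assessment timing to the median TTOT of each arm, then averaging those ratios. A small sketch with hypothetical scheme data follows; the numbers are illustrative, not the Roche data.

```python
import statistics

# Hypothetical per-indication records, for illustration only:
# (timing to assess treatment failure for Pbr, median TTOT of control arm,
#  median TTOT of experimental arm), all in months.
schemes = [
    (6.0, 5.5, 9.0),
    (3.0, 2.5, 4.0),
    (9.0, 7.0, 12.0),
    (2.0, 1.8, 3.1),
]

ratios_control = [ttpbr / cttot for ttpbr, cttot, _ in schemes]
ratios_experimental = [ttpbr / ettot for ttpbr, _, ettot in schemes]

print("mean TTPbr/cTTOT:", round(statistics.mean(ratios_control), 2),
      "+/-", round(statistics.stdev(ratios_control), 2))
print("mean TTPbr/eTTOT:", round(statistics.mean(ratios_experimental), 2),
      "+/-", round(statistics.stdev(ratios_experimental), 2))
```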


2019 ◽  
Vol 30 (6) ◽  
pp. 1349-1355 ◽  
Author(s):  
Mercedes Molero-Senosiaín ◽  
Laura Morales-Fernández ◽  
Federico Saenz-Francés ◽  
Julian García-Feijoo ◽  
Jose María Martínez-de-la-Casa

Objectives: To analyze the reproducibility of the new iC100 rebound tonometer, to compare its results with applanation tonometry and the iCare PRO, and to evaluate preference between them. Materials and methods: For the reproducibility study, 15 eyes of 15 healthy Caucasian subjects were included. Three measurements were taken each day in three separate sessions. For the comparative study, 150 eyes of 150 Caucasian subjects were included (75 normal subjects and 75 patients with glaucoma). Three consecutive measurements were collected with each tonometer, with the order of use randomized. The discomfort caused by each tonometer was evaluated using a visual analogue scale. Results: No statistically significant differences were detected between sessions. In the comparison between tonometers, the measurements with the iC100 were statistically lower than those with Perkins (−1.35 ± 0.417, p = 0.004) and with the iCare PRO (−1.41 ± 0.417, p = 0.002). The difference between PRO and Perkins was not statistically significant (p = 0.990). The mean measurement time (in seconds) with the iC100 was significantly shorter than with Perkins (6.74 ± 1.46 vs 15.53 ± 2.01, p < 0.001) and with PRO (6.74 ± 1.46 vs 11.53 ± 1.85, p < 0.001). The visual analogue scale score with the iC100 was lower than with Perkins (1.33 ± 0.99 vs 1.73 ± 1.10, p < 0.05). In total, 61.7% of subjects preferred the iC100 over Perkins. Conclusion: The reproducibility of this instrument was shown to be good. The iC100 underestimates intraocular pressure compared to applanation tonometry at normal values and tends to overestimate it at high intraocular pressure values. Most subjects preferred the iC100 tonometer.


2017 ◽  
Vol 2 (3) ◽  
pp. 2473011417S0002
Author(s):  
Jun-Beom Kim ◽  
Chi Ahn ◽  
Byeong-Seop Park

Category: Trauma Introduction/Purpose: The aim of this study was to evaluate and compare the clinical and radiological results of internal fixation with a headless cannulated screw versus a locking compression distal ulna hook plate for fractures at the base of the fifth metatarsal bone (Zone 1). Methods: From April 2012 to April 2015, thirty cases (29 patients) were evaluated retrospectively. The mean follow-up period was 13 months. Cases were divided into two groups based on the use of the screw (group A, n=15) or the plate (group B, n=15). We measured the displacement to diastasis of the fracture on foot oblique radiographs taken pre- and post-operatively in each group, and recorded the time to bone union and the reduction distance in each group. Clinical results were evaluated using the American Orthopedic Foot and Ankle Society (AOFAS) midfoot score at 12 months postoperatively. Results: In group A, the mean time to union was 54.2±9.3 days, the mean displacement to diastasis improved to 0.3±0.4 mm postoperatively (p<0.001), and the mean reduction distance was 2.9±1.0 mm. In group B, the mean time to union was 41.5±7.0 days, the mean displacement to diastasis improved to 0.06±0.2 mm postoperatively (p<0.001), and the mean reduction distance was 4.1±1.6 mm. The AOFAS score was 97.7±3.4 in group A and 98.2±3.2 in group B. The time to union was significantly different between groups A and B (p=0.01). There were no complications. Conclusion: We suggest that the plate is the more effective method, with a shorter time to union, in the surgical treatment of fifth metatarsal base fractures.


2021 ◽  
Vol 99 (3) ◽  
Author(s):  
Antonio Reverter ◽  
Brad C Hine ◽  
Laercio Porto-Neto ◽  
Yutao Li ◽  
Christian J Duff ◽  
...  

Abstract In animal breeding and genetics, the ability to cope with disease, here defined as immune competence (IC), with minimal detriment to growth and fertility is a desired objective which addresses both animal production and welfare considerations. However, defining and objectively measuring IC phenotypes using testing methods which are practical to apply on-farm has been challenging. Based on previously described protocols, we measured both cell-mediated immune response (Cell-IR) and antibody-mediated immune response (Ab-IR) and combined these measures to determine an animal’s IC. Using a population of 2,853 Australian Angus steers and heifers, we compared 2 alternative methods to combine both metrics into a single phenotype to be used as a tool for the genetic improvement of IC. The first method, named ZMEAN, is obtained by taking the average of the individual metrics after subjecting each to a Z-score standardization. The second, ImmuneDEX (IDEX), is a weighted average that considers the correlation between Cell-IR and Ab-IR, as well as the difference in ranking of individuals by each metric, and uses these as weights in the averaging. Both simulation and real data were used to understand the behavior of ZMEAN and IDEX. To further ascertain the relationship between IDEX and other traits of economic importance, we evaluated a range of traits related to growth, feedlot performance, and carcass characteristics. We report estimates of heritability of 0.31 ± 0.06 for Cell-IR, 0.42 ± 0.06 for Ab-IR, 0.42 ± 0.06 for ZMEAN, and 0.37 ± 0.06 for IDEX, as well as a unity genetic correlation (rg) between ZMEAN and IDEX. While a moderately positive rg was estimated between Cell-IR and Ab-IR (rg = 0.33 ± 0.12), strongly positive estimates were obtained between IDEX and Cell-IR (rg = 0.80 ± 0.05) and between IDEX and Ab-IR (rg = 0.85 ± 0.04). We obtained moderately negative rg estimates between IC traits and growth, including rg = −0.38 ± 0.14 between IDEX and weaning weight, and negligible correlations with carcass fat measurements, including rg = −0.03 ± 0.12 between IDEX and marbling. Given that breeding with a sole focus on production might inadvertently increase susceptibility to disease and associated antibiotic use, our analyses suggest that ImmuneDEX will provide a basis to breed animals that are both highly productive and have an enhanced ability to resist disease.
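
ZMEAN is fully specified by the abstract (the average of the Z-standardized Cell-IR and Ab-IR), whereas the exact ImmuneDEX weighting is not. The sketch below computes ZMEAN as described and an IDEX-like weighted average under stated assumptions about how the Cell-IR/Ab-IR correlation and the rank disagreement enter as weights; the data and the IDEX weighting are illustrative, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Cell-IR and Ab-IR phenotypes for illustration only.
cell_ir = rng.normal(50, 10, size=200)
ab_ir = 0.3 * cell_ir + rng.normal(30, 8, size=200)

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

z_cell, z_ab = zscore(cell_ir), zscore(ab_ir)

# ZMEAN: simple average of the Z-standardized metrics, as described in the abstract.
zmean = (z_cell + z_ab) / 2

# IDEX-like index (assumption): down-weight each animal's score by how much the two
# metrics disagree in ranking, scaled by the overall Cell-IR/Ab-IR correlation.
# The exact ImmuneDEX weighting is not given in the abstract; this is only a sketch.
r = np.corrcoef(cell_ir, ab_ir)[0, 1]
rank_gap = np.abs(np.argsort(np.argsort(cell_ir)) - np.argsort(np.argsort(ab_ir))) / len(cell_ir)
idex = (1 - (1 - r) * rank_gap) * zmean

print("corr(ZMEAN, IDEX-like) =", round(np.corrcoef(zmean, idex)[0, 1], 3))
```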


2017 ◽  
Vol 28 (2) ◽  
pp. 285-299
Author(s):  
Travis P. Mountain ◽  
Michael S. Gutter ◽  
Jorge Ruiz-Menjivar ◽  
Zeynep Çopur

The purpose of this study was to determine whether using a financial disclosure form in a controlled setting can influence consumers’ mortgage selection. This study used a 2 × 2 experimental design in which participants were randomly assigned to a control or treatment group. Treatment group participants received a Federal Reserve Board document explaining the difference between an adjustable-rate mortgage (ARM) and a fixed-rate mortgage (FRM). All participants were presented with two distinct scenarios and were asked to determine the most appropriate mortgage for each. Logistic regression results suggested that receiving the Federal Reserve Board document does make a difference in consumers’ mortgage choice in hypothetical scenarios. Financial knowledge and Truth in Lending Act knowledge were also important predictors.
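
A minimal sketch of the logistic regression described above, fit to simulated participant-level data with the treatment indicator and the two knowledge measures as predictors, is shown below; the variable names and data-generating values are arbitrary assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Hypothetical participant-level data for illustration only.
treated = rng.integers(0, 2, size=n)        # received the Federal Reserve Board document
fin_knowledge = rng.integers(1, 6, size=n)  # financial knowledge score
tila_knowledge = rng.integers(1, 6, size=n) # Truth in Lending Act knowledge score

# Simulated outcome: more likely to pick the appropriate mortgage with the document
# and with higher knowledge (coefficients are arbitrary for the sketch).
logit_p = -1.0 + 0.8 * treated + 0.3 * fin_knowledge + 0.2 * tila_knowledge
chose_appropriate = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({
    "chose_appropriate": chose_appropriate,
    "treated": treated,
    "fin_knowledge": fin_knowledge,
    "tila_knowledge": tila_knowledge,
})

# Logistic regression: does the disclosure document predict the mortgage choice,
# controlling for financial and Truth in Lending Act knowledge?
model = smf.logit("chose_appropriate ~ treated + fin_knowledge + tila_knowledge", data=df).fit()
print(model.summary())
```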

