Estimating creatinine clearance for Malaysian critically ill patients with unstable kidney function and impact on dosage adjustment

2020 ◽  
Vol 11 (3) ◽  
pp. 2825-2837
Author(s):  
Yen Ping Ng ◽  
Angel Wei Ling Goh ◽  
Chee Ping Chong

Estimating the glomerular filtration rate is essential when adjusting drug doses for critically ill patients with unstable kidney function. Previous studies showed that the Cockcroft-Gault equation is not appropriate for assessing unstable kidney function. However, equations specifically designed for fluctuating kidney function have received little evaluation. This study aimed to evaluate the differences between creatinine clearances estimated by the Cockcroft-Gault, Jelliffe, Brater, and Chiou equations, and the impact on dose adjustment of renally excreted drugs in critically ill patients with unstable kidney function. A retrospective observational study was conducted among 103 patients with unstable kidney function admitted to the intensive care unit of Taiping Hospital, Malaysia. Serum creatinine levels from days 1 to 7 of admission were collected. The median differences in estimated creatinine clearance across the four equations were analysed with the Friedman test. During periods of fluctuating kidney function, the median estimated creatinine clearance was 35.69 ml/min (IQR: 22.57–53.97) by Cockcroft-Gault and 22.64 ml/min (IQR: 10.46–38.49) by the Jelliffe equation, while the Brater and Chiou equations gave 35.88 ml/min (IQR: 19.46–56.04) and 30.10 ml/min (IQR: 16.55–46.82), respectively. The Jelliffe and Chiou equations yielded estimated creatinine clearances 36.56% and 15.66% lower, respectively, than Cockcroft-Gault (p < 0.001). Meanwhile, there was no significant difference between the Brater and Cockcroft-Gault equations. The Jelliffe equation produced the lowest estimated creatinine clearance, implying more aggressive dose adjustment of regimens involving renally excreted drugs. In conclusion, there were clinically significant variations in the creatinine clearances estimated by the different equations.
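As a minimal illustration (not part of the study), the Cockcroft-Gault estimate that the other equations are compared against can be computed as follows; the example patient is hypothetical:

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault equation.

    Assumes serum creatinine in mg/dL and body weight in kg; the result is
    multiplied by 0.85 for female patients.
    """
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: 60-year-old, 70 kg male with SCr 1.5 mg/dL
print(round(cockcroft_gault(60, 70, 1.5), 1))  # ~51.9 mL/min
```

The Jelliffe, Brater, and Chiou equations studied here additionally incorporate the change in serum creatinine between measurements, which is why they can diverge sharply from Cockcroft-Gault when kidney function is unstable.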

2020 ◽  
Vol 8 (1) ◽  
Author(s):  
Yukari Aoyagi ◽  
Takuo Yoshida ◽  
Shigehiko Uchino ◽  
Masanori Takinami ◽  
Shoichi Uezono

Abstract Background The choice of intravenous infusion products for critically ill patients has been studied extensively because it can affect prognosis. However, there has been little research on drug diluents in this context. The purpose of this study is to evaluate the impact of diluent choice (saline or 5% dextrose in water [D5W]) on electrolyte abnormalities, blood glucose control, incidence of acute kidney injury (AKI), and mortality. Methods This before-after, two-group comparative, retrospective study enrolled adult patients who stayed for more than 48 h in a general intensive care unit from July 2015 to December 2018. We changed the default diluent for intermittent drug sets in our electronic ordering system from D5W to saline at the end of 2016. Results We included 844 patients: 365 in the D5W period and 479 in the saline period. Drug diluents accounted for 21.4% of the total infusion volume. The incidences of hypernatremia and hyperchloremia were significantly greater in the saline group than in the D5W group (hypernatremia 27.3% vs. 14.6%, p < 0.001; hyperchloremia 36.9% vs. 20.4%, p < 0.001). Multivariate analyses confirmed these effects (hypernatremia adjusted odds ratio (OR), 2.43; 95% confidence interval (CI), 1.54–3.82; hyperchloremia adjusted OR, 2.09; 95% CI, 1.31–3.34). There was no significant difference in the incidences of hyperglycemia, AKI, or mortality between the two groups. Conclusions Changing the diluent default from D5W to saline had no effect on blood glucose control and increased the incidences of hypernatremia and hyperchloremia.


2014 ◽  
Vol 58 (7) ◽  
pp. 4094-4102 ◽  
Author(s):  
T. W. Felton ◽  
J. A. Roberts ◽  
T. P. Lodise ◽  
M. Van Guilder ◽  
E. Boselli ◽  
...  

ABSTRACT Piperacillin-tazobactam is frequently used for empirical and targeted therapy of infections in critically ill patients, in whom considerable pharmacokinetic (PK) variability is observed. By estimating an individual's PK, Bayesian dose-optimization techniques can be used to calculate the piperacillin regimen needed to achieve desired drug exposure targets. The aim of this study was to establish a population PK model for piperacillin in critically ill patients and then analyze the performance of the model in the dose optimization software program BestDose. Linear (with estimated creatinine clearance and weight as covariates), Michaelis-Menten (MM), and parallel linear/MM structural models were fitted to data from 146 critically ill patients with nosocomial infection. Piperacillin concentrations measured in the first dosing interval from each of 8 additional individuals, combined with the population model, were embedded into the dose optimization software. The impact of the number of observations was assessed. Precision was assessed by (i) the predicted piperacillin dosage and (ii) linear regression of the observed-versus-predicted piperacillin concentrations from the second 24 h of treatment. We found that a linear clearance model, with creatinine clearance and weight as covariates for drug clearance and volume of distribution, respectively, best described the observed data. When there were at least two observed piperacillin concentrations, the dose optimization software predicted a mean piperacillin dosage of 4.02 g in the 8 patients administered piperacillin doses of 4.00 g. Linear regression of the observed-versus-predicted piperacillin concentrations for the 8 individuals after 24 h of piperacillin dosing demonstrated an r² of >0.89. In conclusion, for most critically ill patients, individualized piperacillin regimens delivering a target serum piperacillin concentration are achievable. Further validation of the dosage optimization software in a clinical trial is required.
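A minimal sketch of the kind of linear-clearance covariate model the study describes might look like the following one-compartment calculation; the theta parameters and example patient are invented for illustration and are not the study's population estimates:

```python
import math

def piperacillin_conc(t_h, dose_mg, crcl_ml_min, weight_kg,
                      theta_cl=0.1, theta_v=0.3):
    """Illustrative one-compartment IV-bolus model.

    Clearance CL (L/h) scales with creatinine clearance and volume of
    distribution V (L) scales with weight, mirroring the covariate structure
    described in the abstract. theta_cl and theta_v are made-up values,
    not the study's estimates.
    """
    cl = theta_cl * crcl_ml_min   # L/h, hypothetical covariate relationship
    v = theta_v * weight_kg       # L, hypothetical covariate relationship
    return (dose_mg / v) * math.exp(-(cl / v) * t_h)  # mg/L

# Hypothetical patient: 4 g dose, CrCl 100 mL/min, 70 kg; concentration at 6 h
print(round(piperacillin_conc(6, 4000, 100, 70), 1))
```

In a real Bayesian framework such as the one evaluated here, these individual parameters would be updated from the patient's measured concentrations rather than fixed a priori.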


2021 ◽  
Vol 8 (Supplement_1) ◽  
pp. S371-S372
Author(s):  
Karthik Gunasekaran ◽  
Jisha S John ◽  
Hanna Alexander ◽  
Naveena Gracelin ◽  
Prasanna Samuel ◽  
...  

Abstract Background Remdesivir (RDV) has been included in the treatment of mild to moderate COVID-19 at our institution since July 2020, following the initial results of the ACTT-1 interim analysis. With the adoption of RDV, anecdotal evidence of efficacy emerged: early fever defervescence and quicker recovery on oxygen, with decreased need for ventilation and ICU care. We aimed to study the impact of RDV on clinical outcomes among patients with moderate to severe COVID-19. Methods A nested case-control study was conducted within a cohort of consecutive patients with moderate to severe COVID-19. Cases were patients initiated on RDV; age- and sex-matched controls who did not receive RDV were included. The primary outcome was in-hospital mortality. Secondary outcomes were duration of hospital stay, need for ICU, duration of oxygen therapy, and need for ventilation. Results A total of 926 consecutive patients with COVID-19 were included: 411 cases and 515 controls. The mean age of the cohort was 57.05 ± 13.5 years, with male preponderance (75.92%). The overall in-hospital mortality was 22.46% (n = 208). On comparison between cases and controls, there was no statistically significant difference in the primary outcome [22.54% vs. 20.78% (p = 0.17)]. Progression to non-invasive ventilation (NIV) was higher among the controls [24.09% vs. 40.78% (p < 0.001)]. Progression to invasive ventilation was also higher among the controls [5.35% vs. 9.71% (p = 0.014)]. In subgroup analysis of critically ill patients, RDV use was associated with decreased mortality (OR 0.32; 95% CI 0.13–0.75; p = 0.009). Conclusion RDV did not decrease in-hospital mortality among patients with moderate to severe COVID-19. However, there was a significant reduction in mortality among critically ill patients. Disclosures All Authors: No reported disclosures
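Subgroup odds ratios with confidence intervals like the one reported above are conventionally derived from a 2×2 table using the Woolf (log-OR) normal approximation. A sketch with hypothetical counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/50 deaths with treatment, 25/55 without
or_, lo, hi = odds_ratio_ci(10, 40, 25, 30)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR below 1 with an upper confidence limit below 1 indicates a statistically significant reduction in the odds of the outcome in the exposed group.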


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christina Scharf ◽  
Ines Schroeder ◽  
Michael Paal ◽  
Martin Winkels ◽  
Michael Irlbeck ◽  
...  

Abstract Background A cytokine storm is life-threatening for critically ill patients and is mainly caused by sepsis or severe trauma. In combination with supportive therapy, the cytokine adsorber Cytosorb® (CS) is increasingly used for the treatment of cytokine storm. However, it is questionable whether its use is actually beneficial in these patients. Methods Patients with an interleukin-6 (IL-6) > 10,000 pg/ml were retrospectively included between October 2014 and May 2020 and were divided into two groups (group 1: CS therapy; group 2: no CS therapy). Inclusion criteria were a regularly measured IL-6 and, for patients allocated to group 1, CS therapy for at least 90 min. A propensity score (PS) matching analysis with significant baseline differences as predictors (Simplified Acute Physiology Score (SAPS) II, extracorporeal membrane oxygenation, renal replacement therapy, IL-6, lactate, and norepinephrine demand) was performed to compare both groups (adjustment tolerance: < 0.05; standardization tolerance: < 10%). The Mann-Whitney U test and Fisher's exact test were used for independent samples, and the Wilcoxon test for dependent samples. Results In total, 143 patients were included in the initial evaluation (group 1: 38; group 2: 105). Nineteen comparable pairings could be formed (mean initial IL-6: 58,385 vs. 59,812 pg/ml; mean SAPS II: 77 vs. 75). There was a significant reduction in IL-6 in patients with (p < 0.001) and without CS treatment (p = 0.005). However, there was no significant difference (p = 0.708) in the median relative reduction between the groups (89% vs. 80%). Furthermore, there was no significant difference in the relative change in C-reactive protein, lactate, or norepinephrine demand in either group, and in-hospital mortality was similar between groups (73.7%). Conclusion Our study showed no difference in IL-6 reduction, hemodynamic stabilization, or mortality in patients with Cytosorb® treatment compared to a matched patient population.
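Propensity score matching of the kind used above can be sketched as greedy 1:1 nearest-neighbor matching with a caliper; the patients and scores below are hypothetical, and the study's exact matching algorithm and tolerances may differ:

```python
def greedy_caliper_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on propensity scores.

    treated, controls: lists of (patient_id, propensity_score).
    Each control is used at most once; candidate pairs whose score
    difference exceeds the caliper are dropped.
    """
    available = dict(controls)
    pairs = []
    for pid, ps in treated:
        if not available:
            break
        # nearest remaining control by absolute score difference
        cid = min(available, key=lambda c: abs(available[c] - ps))
        if abs(available[cid] - ps) <= caliper:
            pairs.append((pid, cid))
            del available[cid]
    return pairs

treated = [("t1", 0.62), ("t2", 0.35)]
controls = [("c1", 0.60), ("c2", 0.34), ("c3", 0.90)]
print(greedy_caliper_match(treated, controls))
```

Because unmatched treated patients are simply dropped, the matched sample can be much smaller than the original cohort, as seen here where 143 patients yielded only 19 pairs.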


Author(s):  
Răzvan Bologheanu ◽  
Mathias Maleczek ◽  
Daniel Laxar ◽  
Oliver Kimberger

Summary Background Coronavirus disease 2019 (COVID-19) disrupts routine care and alters treatment pathways in every medical specialty, including intensive care medicine, which has been at the core of the pandemic response. The impact of the pandemic is inevitably not limited to patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and their outcomes; however, its impact on non-COVID-19 intensive care patients has not yet been analyzed. Methods The objective of this propensity score-matched study was to compare the clinical outcomes of non-COVID-19 critically ill patients with the outcomes of prepandemic patients. Critically ill, non-COVID-19 patients admitted to the intensive care unit (ICU) during the first wave of the pandemic were matched with patients admitted in the previous year. Mortality, length of stay, and rate of readmission were compared between the two groups after matching. Results A total of 211 critically ill SARS-CoV‑2 negative patients admitted between 13 March 2020 and 16 May 2020 were matched to 211 controls, selected from a matching pool of 1421 eligible patients admitted to the ICU in 2019. After matching, the outcomes were not significantly different between the two groups: ICU mortality was 5.2% in 2019 and 8.5% in 2020, p = 0.248, while intrahospital mortality was 10.9% in 2019 and 14.2% in 2020, p = 0.378. The median ICU length of stay was similar in 2019 (4 days, IQR 2–6) and 2020 (4 days, IQR 2–7), p = 0.196. The rate of ICU readmission was 15.6% in 2019 and 10.9% in 2020, p = 0.344. Conclusion In this retrospective single-center study, mortality, ICU length of stay, and rate of ICU readmission did not differ significantly between patients admitted to the ICU during the implementation of hospital-wide COVID-19 contingency planning and patients admitted to the ICU before the pandemic.


Antibiotics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 557
Author(s):  
Matthias Gijsen ◽  
Erwin Dreesen ◽  
Ruth Van Daele ◽  
Pieter Annaert ◽  
Yves Debaveye ◽  
...  

The impact of ceftriaxone pharmacokinetic alterations on protein binding and PK/PD target attainment remains unclear. We evaluated pharmacokinetic/pharmacodynamic (PK/PD) target attainment of unbound ceftriaxone in critically ill patients with severe community-acquired pneumonia (CAP). In addition, we evaluated the accuracy of predicted versus measured unbound ceftriaxone concentrations and its impact on PK/PD target attainment. A prospective observational cohort study was carried out in adult patients admitted to the intensive care unit with severe CAP. Ceftriaxone 2 g q24h by intermittent infusion was administered to all patients. Successful PK/PD target attainment was defined as unbound trough concentrations above 1 or 4 mg/L throughout the whole dosing interval. Acceptable overall PK/PD target attainment was defined as successful target attainment in ≥90% of all dosing intervals. Measured unbound ceftriaxone concentrations (CEFu) were compared to unbound concentrations predicted from total concentrations (CEFt) using various protein binding models. Thirty-one patients were included. The 1 mg/L and 4 mg/L targets were reached in 26/32 (81%) and 15/32 (47%) trough samples, respectively. Increased renal function was associated with failure to attain both PK/PD targets. Unbound ceftriaxone concentrations predicted by the protein binding model developed in the present study showed acceptable bias and precision and had no major impact on PK/PD target attainment. We showed suboptimal (i.e., <90%) unbound ceftriaxone PK/PD target attainment when using a standard 2 g q24h dosing regimen in critically ill patients with severe CAP. Renal function was the major driver of failure to attain the predefined targets, in accordance with results found in general and septic ICU patients. Interestingly, CEFu was reliably predicted from CEFt without major impact on clinical decisions regarding PK/PD target attainment. This suggests that, with a carefully selected protein binding model, CEFu does not need to be measured. As a result, the turnaround time and cost for ceftriaxone quantification can be substantially reduced.
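The per-interval attainment criterion described above can be sketched as follows; the trough values are illustrative, not the study's measurements:

```python
def target_attainment(trough_concs_mg_l, target_mg_l, required_fraction=0.90):
    """Check PK/PD target attainment across dosing intervals.

    A dosing interval is successful if the unbound trough concentration
    exceeds the target; overall attainment is acceptable if at least
    required_fraction of intervals succeed.
    """
    successes = sum(c > target_mg_l for c in trough_concs_mg_l)
    fraction = successes / len(trough_concs_mg_l)
    return fraction, fraction >= required_fraction

# Illustrative unbound trough concentrations (mg/L)
troughs = [2.1, 5.0, 0.8, 3.2, 6.4, 1.5, 4.8, 0.9, 2.7, 5.5]
print(target_attainment(troughs, 1.0))  # 1 mg/L target
print(target_attainment(troughs, 4.0))  # 4 mg/L target
```

With a simple linear protein binding model, each unbound trough would itself be predicted as an unbound fraction times the measured total concentration, which is what allows CEFt to stand in for CEFu.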


2017 ◽  
Vol 3 (1) ◽  
pp. 24-28
Author(s):  
Claudiu Puiac ◽  
Janos Szederjesi ◽  
Alexandra Lazăr ◽  
Codruța Bad ◽  
Lucian Pușcașiu

Abstract Introduction: Elevated intraabdominal pressure (IAP) is known to affect renal function through the pressure transmitted from the abdominal cavity to the vasculature responsible for renal blood flow. Elevated intraabdominal pressure is frequent in intensive care patients and is also a predictor of mortality. Intra-abdominal hypertension can have a serious impact on patients admitted to intensive care; studies have concluded that if the condition progresses to abdominal compartment syndrome, mortality is as high as 80%. Aim: The aim of this study was to determine whether a link exists between increased intraabdominal pressure and changes in renal function (NGAL, creatinine clearance). Material and Method: The study enrolled 30 critically ill patients admitted to the Intensive Care Unit of SCJU Tîrgu Mures between November 2015 and August 2016. Enrolled patients were adult and hemodynamically stable (defined by a normal blood pressure maintained without any vasopressor or inotropic support), with invasive monitoring using a PiCCO device and abdominal pressure monitoring. Results: The measurements were divided into two groups based on the intraabdominal pressure values: a normal intraabdominal pressure group (52 values) and an increased intraabdominal pressure group (35 values). We compared the groups with respect to NGAL values, 24-hour diuresis, GFR, and creatinine clearance. The groups differed significantly in NGAL and GFR values. We obtained a statistically significant correlation between NGAL values and 24-hour diuresis. No other significant correlations were found between the studied items. Conclusions: NGAL values are increased in patients with high intraabdominal pressure, which may suggest its utility as a cut-off marker in patients with increased intraabdominal pressure. GFR is significantly decreased in patients with elevated intraabdominal pressure, an observation which can help in the early detection of renal injury due to high intraabdominal pressure. No correlation was found between creatinine clearance and increased intraabdominal pressure.


Author(s):  
Alexandra Jayne Nelson ◽  
Brian W Johnston ◽  
Alicia Achiaa Charlotte Waite ◽  
Gedeon Lemma ◽  
Ingeborg Dorothea Welters

Background. Atrial fibrillation (AF) is the most common cardiac arrhythmia in critically ill patients. There is a paucity of data assessing the impact of anticoagulation strategies on clinical outcomes for general critical care patients with AF. Our aim was to assess the existing literature to evaluate the effectiveness of anticoagulation strategies used for AF in critical care. Methodology. A systematic literature search was conducted using the MEDLINE, EMBASE, CENTRAL, and PubMed databases. Studies reporting anticoagulation strategies for AF in adults admitted to a general critical care setting were assessed for inclusion. Results. Four studies were selected for data extraction. A total of 44,087 patients with AF were identified, of whom 17.8-49.4% received anticoagulation. The reported incidence of thromboembolic events was 0-1.4% in anticoagulated patients and 0-1.3% in non-anticoagulated patients. Major bleeding events were reported in three studies and occurred in 7.2-8.6% of anticoagulated patients and in up to 7.1% of non-anticoagulated patients. Conclusions. There was an increased incidence of major bleeding events in anticoagulated patients with AF in critical care compared to non-anticoagulated patients. Within studies, there was no significant difference in the incidence of reported thromboembolic events between patients who did and did not receive anticoagulation. However, the outcomes reported within studies were not standardised; therefore, the generalisability of our results to the general critical care population remains unclear. Further data are required to facilitate an evidence-based assessment of the risks and benefits of anticoagulation for critically ill patients with AF.

