Treatment in patients with cancer-associated thrombosis (CAT) receiving LMWH: Learnings from PREDICARE, a non-interventional prospective cohort study.

2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e24154-e24154
Author(s):  
Florian Scotte ◽  
Silvy Laporte ◽  
Céline Chapelle ◽  
Anne Lamblin ◽  
Guy Meyer

Background: Venous thromboembolism (VTE) is a difficult-to-treat condition in cancer patients, with a persisting risk of recurrent VTE (rVTE). We conducted a prospective multicenter observational cohort study in cancer patients with VTE treated with tinzaparin for 6 months in order to validate the Ottawa score. The Ottawa score comprises 5 items: female sex (+1), lung cancer (+1), breast cancer (-1), cancer stage I (-2) and previous VTE (+1). Methods: Adult cancer patients with a recent diagnosis of documented VTE treated with tinzaparin for 6 months were included. The primary endpoint was rVTE within the first 6 months. Other endpoints were symptomatic rVTE and major bleeding (MB). All events were adjudicated by a Central Adjudication Committee. To validate the Ottawa score, the area under the curve (AUC) and its 95% CI were calculated by receiver operating characteristic curve analysis. Results: 409 patients were included; median age: 68 years; pulmonary embolism: 60.4%; lung cancer: 31.3%; digestive cancer: 18.3%; stage IV cancers: 67.0%. According to the Ottawa score, 58% were classified at high clinical probability of recurrence (score ≥ 1). Among all patients, during the 6-month treatment period, 7.3% and 3.7% had rVTE and MB, respectively. rVTE occurred in 9.1% of patients with a score ≥ 1 compared with 5% of patients with a score < 1. The AUC of the Ottawa score was 0.60 (95% CI 0.55-0.65). Conclusions: In this prospective cohort of patients with cancer receiving tinzaparin for VTE, the Ottawa score did not accurately predict rVTE. Clinical trial information: NCT03099031.
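The five-item Ottawa score described above is a simple additive clinical score. A minimal sketch, using the item weights and the ≥ 1 high-risk threshold quoted in the abstract (function and argument names are illustrative, not from the study protocol):

```python
def ottawa_score(female, lung_cancer, breast_cancer, stage_i, previous_vte):
    """Ottawa score for recurrent-VTE risk, per the five items in the
    abstract: female sex (+1), lung cancer (+1), breast cancer (-1),
    cancer stage I (-2), previous VTE (+1)."""
    score = 0
    if female:
        score += 1
    if lung_cancer:
        score += 1
    if breast_cancer:
        score -= 1
    if stage_i:
        score -= 2
    if previous_vte:
        score += 1
    return score

def high_clinical_probability(score):
    # The abstract classifies score >= 1 as high clinical probability
    # of recurrence (58% of the cohort).
    return score >= 1

# Example: a female patient with lung cancer and no other items scores +2.
print(ottawa_score(True, True, False, False, False))  # → 2
```

The possible range is -3 (breast cancer, stage I, no other items) to +3, which is why a single-point threshold splits the cohort so coarsely and, per the reported AUC of 0.60, discriminates poorly.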

Author(s):  
Aya Isumi ◽  
Kunihiko Takahashi ◽  
Takeo Fujiwara

Identifying risk factors from pregnancy onward is essential for preventing child maltreatment. However, few studies have explored prenatal risk factors assessed at pregnancy registration. This study aimed to identify prenatal risk factors for child maltreatment during the first three years of life using population-level survey data from pregnancy notification forms. This prospective cohort study targeted all mothers and their infants enrolled for a 3- to 4-month-old health check between October 2013 and February 2014 in five municipalities in Aichi Prefecture, Japan, and followed them until the child turned 3 years old. Administrative records of registration with Regional Councils for Children Requiring Care (RCCRC), which is suggestive of child maltreatment, were linked with survey data from pregnancy notification forms registered at municipalities (n = 893). Exact logistic regression was used for the analysis. A total of 11 children (1.2%) were registered with an RCCRC by 3 years of age. Unmarried status, a history of induced abortion, and smoking during pregnancy were significantly associated with child maltreatment. Prenatal risk scores, calculated as the sum of these prenatal risk factors and ranging from 0 to 7, showed high predictive power (area under the receiver operating characteristic curve 0.805; 95% confidence interval (CI), 0.660–0.950) at a cut-off score of 2 (sensitivity = 72.7%, specificity = 83.2%). These findings suggest that variables from pregnancy notification forms may predict the risk of child maltreatment by the age of three.
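The screening rule above is a sum of risk-factor indicators compared against a cut-off. A minimal sketch, assuming binary indicators summed into a score (the abstract names three significant factors but reports a 0–7 range, so the full factor list and weights are not given here; the factor names below are illustrative):

```python
def prenatal_risk_score(factors):
    """Sum binary prenatal risk-factor indicators from a pregnancy
    notification form. `factors` maps factor name -> 0/1. The study's
    score ranged 0-7, so additional factors beyond the three named
    in the abstract are assumed to exist."""
    return sum(factors.values())

def flag_high_risk(score, cutoff=2):
    # The reported cut-off of 2 gave sensitivity 72.7% and
    # specificity 83.2% in the study cohort.
    return score >= cutoff

# Example: unmarried status plus smoking during pregnancy reaches the cut-off.
form = {"unmarried": 1, "induced_abortion_history": 0, "smoking": 1}
print(flag_high_risk(prenatal_risk_score(form)))  # → True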


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhi-Yong Zeng ◽  
Shao-Dan Feng ◽  
Gong-Ping Chen ◽  
Jiang-Nan Wu

Abstract Background Early identification of patients at high risk of poor clinical outcomes is of great importance for saving the lives of patients with novel coronavirus disease 2019 (COVID-19) in the context of limited medical resources. Objective To evaluate the value of the neutrophil-to-lymphocyte ratio (NLR), calculated at hospital admission and in isolation, for predicting subsequent disease progression and serious clinical outcomes (e.g., shock, death). Methods We designed a prospective cohort study of 352 hospitalized patients with COVID-19 between January 9 and February 26, 2020, in Yichang City, Hubei Province. Patients with an NLR equal to or higher than the cutoff value derived by the receiver operating characteristic curve method were classified as the exposed group. The primary outcome was disease deterioration, defined as an increase in the clinical disease severity classification during hospitalization (e.g., moderate to severe/critical; severe to critical). The secondary outcomes were shock and death during treatment. Results During the follow-up period, the conditions of 51 patients (14.5%) deteriorated, 15 patients (4.3%) developed septic shock, and 15 patients (4.3%) died. The NLR was higher in patients with deterioration than in those without (median: 5.33 vs. 2.14, P < 0.001), and higher in patients with serious clinical outcomes than in those without (shock vs. no shock: 6.19 vs. 2.25, P < 0.001; death vs. survival: 7.19 vs. 2.25, P < 0.001). The NLR measured at hospital admission had high value in predicting subsequent disease deterioration, shock and death (all areas under the curve > 0.80). The sensitivity of an NLR ≥ 2.6937 for predicting subsequent disease deterioration, shock and death was 82.0% (95% confidence interval, 69.0 to 91.0), 93.3% (68.0 to 100), and 92.9% (66.0 to 100), respectively, and the corresponding negative predictive values were 95.7% (93.0 to 99.2), 99.5% (98.6 to 100) and 99.5% (98.6 to 100). Conclusions The NLR measured at admission and in isolation can effectively predict subsequent disease deterioration and serious clinical outcomes in patients with COVID-19.
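The NLR itself is a simple ratio of two blood-count values, and the exposure classification is a threshold test against the ROC-derived cutoff. A minimal sketch using the 2.6937 cutoff reported in the abstract (function names are illustrative):

```python
NLR_CUTOFF = 2.6937  # ROC-derived cutoff reported in the abstract

def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from an admission blood count.
    Both counts must be in the same units (e.g., 10^9 cells/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

def is_exposed(neutrophils, lymphocytes, cutoff=NLR_CUTOFF):
    # Patients at or above the cutoff were classified as the exposed
    # (high-risk) group in the study design.
    return nlr(neutrophils, lymphocytes) >= cutoff

# Example: a patient near the deterioration-group median NLR of 5.33
# is flagged, while one near the non-deterioration median of 2.14 is not.
print(is_exposed(5.33, 1.0))  # → True
print(is_exposed(2.14, 1.0))  # → False
```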


2016 ◽  
Vol 116 (11) ◽  
pp. 1926-1934 ◽  
Author(s):  
Raquel Revuelta Iniesta ◽  
Ilenia Paciarotti ◽  
Isobel Davidson ◽  
Jane M. McKenzie ◽  
Celia Brand ◽  
...  

Abstract Children with cancer are potentially at a high risk of plasma 25-hydroxyvitamin D (25(OH)D) inadequacy, and despite UK vitamin D supplementation guidelines their implementation remains inconsistent. Thus, we aimed to investigate 25(OH)D concentration and factors contributing to 25(OH)D inadequacy in paediatric cancer patients. A prospective cohort study of Scottish children aged <18 years diagnosed with, and treated for, cancer (patients) between August 2010 and January 2014 was performed, with control data from Scottish healthy children (controls). Clinical and nutritional data were collected at defined periods up to 24 months. 25(OH)D status was defined by the Royal College of Paediatrics and Child Health as inadequacy (<50 nmol/l: deficiency (<25 nmol/l), insufficiency (25–50 nmol/l)), sufficiency (51–75 nmol/l) and optimal (>75 nmol/l). In all, eighty-two patients (median age 3·9, interquartile ranges (IQR) 1·9–8·8; 56 % males) and thirty-five controls (median age 6·2, IQR 4·8–9·1; 49 % males) were recruited. 25(OH)D inadequacy was highly prevalent in the controls (63 %; 22/35) and in the patients (64 %; 42/65) at both baseline and during treatment (33–50 %). Non-supplemented children had the highest prevalence of 25(OH)D inadequacy at every stage with 25(OH)D median ranging from 32·0 (IQR 21·0–46·5) to 45·0 (28·0–64·5) nmol/l. Older age at baseline (R −0·46; P<0·001), overnutrition (BMI≥85th centile) at 3 months (P=0·005; relative risk=3·1) and not being supplemented at 6 months (P=0·04; relative risk=4·3) may have contributed to lower plasma 25(OH)D. Paediatric cancer patients are not at a higher risk of 25(OH)D inadequacy than healthy children at diagnosis; however, prevalence of 25(OH)D inadequacy is still high and non-supplemented children have a higher risk. Appropriate monitoring and therapeutic supplementation should be implemented.
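The status bands quoted above are fixed concentration thresholds, so classification is a simple cascade of comparisons. A minimal sketch of the quoted RCPCH bands (the abstract's ranges overlap at exactly 50 nmol/l, so a value of 50 is treated as insufficient here; function names are illustrative):

```python
def vitamin_d_status(conc_nmol_l):
    """Classify plasma 25(OH)D concentration (nmol/l) per the bands
    quoted in the abstract: deficiency <25, insufficiency 25-50,
    sufficiency 51-75, optimal >75. A value of exactly 50 is
    treated as insufficient (an assumption; the quoted bands
    overlap at that point)."""
    if conc_nmol_l < 25:
        return "deficiency"
    if conc_nmol_l <= 50:
        return "insufficiency"
    if conc_nmol_l <= 75:
        return "sufficiency"
    return "optimal"

def is_inadequate(conc_nmol_l):
    # Inadequacy = deficiency or insufficiency, per the abstract.
    return vitamin_d_status(conc_nmol_l) in ("deficiency", "insufficiency")

# Example: the non-supplemented group's median of 32.0 nmol/l is inadequate.
print(vitamin_d_status(32.0))  # → "insufficiency"
```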


2017 ◽  
Vol 24 (4) ◽  
pp. 891-897
Author(s):  
Dominika Kozak ◽  
Iwona Głowacka-Mrotek ◽  
Tomasz Nowikiewicz ◽  
Zygmunt Siedlecki ◽  
Wojciech Hagner ◽  
...  

2017 ◽  
Vol 210 (6) ◽  
pp. 429-436 ◽  
Author(s):  
Leah Quinlivan ◽  
Jayne Cooper ◽  
Declan Meehan ◽  
Damien Longson ◽  
John Potokar ◽  
...  

Background Scales are widely used in psychiatric assessments following self-harm. Robust evidence for their diagnostic use is lacking. Aims To evaluate the performance of risk scales (Manchester Self-Harm Rule, ReACT Self-Harm Rule, SAD PERSONS scale, Modified SAD PERSONS scale, Barratt Impulsiveness Scale); and patient and clinician estimates of risk in identifying patients who repeat self-harm within 6 months. Method A multisite prospective cohort study was conducted of adults aged 18 years and over referred to liaison psychiatry services following self-harm. Scale a priori cut-offs were evaluated using diagnostic accuracy statistics. The area under the curve (AUC) was used to determine optimal cut-offs and compare global accuracy. Results In total, 483 episodes of self-harm were included in the study. The episode-based 6-month repetition rate was 30% (n = 145). Sensitivity ranged from 1% (95% CI 0–5) for the SAD PERSONS scale, to 97% (95% CI 93–99) for the Manchester Self-Harm Rule. Positive predictive values ranged from 13% (95% CI 2–47) for the Modified SAD PERSONS Scale to 47% (95% CI 41–53) for the clinician assessment of risk. The AUC ranged from 0.55 (95% CI 0.50–0.61) for the SAD PERSONS scale to 0.74 (95% CI 0.69–0.79) for the clinician global scale. The remaining scales performed significantly worse than clinician and patient estimates of risk (P < 0.001). Conclusions Risk scales following self-harm have limited clinical utility and may waste valuable resources. Most scales performed no better than clinician or patient ratings of risk. Some performed considerably worse. Positive predictive values were modest. In line with national guidelines, risk scales should not be used to determine patient management or predict self-harm.
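The diagnostic accuracy statistics reported above (sensitivity, positive and negative predictive values) all derive from the same 2×2 confusion matrix of scale flag versus observed repetition. A minimal sketch of those definitions (the counts in the example are illustrative, not the study's data):

```python
def sensitivity(tp, fn):
    """Proportion of repeat episodes the scale correctly flagged high risk."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-repeat episodes correctly flagged low risk."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value: proportion of high-risk flags
    actually followed by repetition within 6 months."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Negative predictive value: proportion of low-risk flags
    not followed by repetition."""
    return tn / (tn + fn)

# Illustrative example only: a scale that flags nearly everyone (like the
# Manchester Self-Harm Rule's 97% sensitivity) buys sensitivity at the
# cost of specificity and a modest PPV.
tp, fn, fp, tn = 140, 5, 300, 38
print(round(sensitivity(tp, fn), 2))  # → 0.97
print(round(ppv(tp, fp), 2))          # → 0.32
```

With a 30% base rate of repetition, even a highly sensitive scale yields many false positives, which is why the abstract reports modest positive predictive values across all instruments.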

