Lymphocyte levels following COVID-19 vaccine BNT162b2

2021 ◽  
Author(s):  
Oren Miron ◽  
Nadav Davidovitch

Introduction: The BNT162b2 vaccine has been shown to be effective in reducing the incidence, severity, and mortality of Coronavirus Disease 2019 (COVID-19). The clinical trial report of BNT162b2 suggested that its mechanism includes lymphocyte migration from the blood to the lymph nodes, and that this relates to the trial finding of decreased blood lymphocytes in the 3 days following dose-1. A decrease in blood lymphocytes was also shown on the second day after dose-1 in another study, and in studies of BNT162b1 and other mRNA vaccines. The BNT162b2 clinical trial also found that lymphocytes were normal in the 6-8 days following dose-1 and dose-2, but it did not test lymphocytes in the 3 days following dose-2. Our study aims to estimate lymphocyte levels in the 3 days following dose-2 using existing electronic health records, to help improve the understanding of the BNT162b2 dose-2 mechanism. Methods: We extracted lymphocyte blood test values and BNT162b2 immunization records from electronic health record data, including diagnoses, procedures, labs, vitals, medications, and histories, sourced from participating members of the Healthjump network in the United States. Absolute lymphocyte counts were expressed in 10^3/mm3 (thousands per cubic millimeter), and vaccinations were extracted from December 10th 2020 to September 30th 2021 based on the CVX code 208, the CPT code 91300, and the vaccine description, from a database of 900 thousand vaccinations. We included BNT162b2 first-dose administrations and second-dose administrations given 21 days after the first dose, to resemble the clinical trial protocol (excluding dose-2 given before or after day 21).
We calculated the median lymphocyte value on each of the 14 days before and after vaccination to determine the lowest median value, and we compared the lymphocyte values in the 3 days before and after administration using the Wilcoxon rank-sum test (significance at p<0.05). Results: We extracted 13,329 records, with a mean age of 59 years. The median lymphocyte count in the 14 days before the 1st and 2nd dose was 1.85 (10^3/mm3). For the first dose, the lowest median value was 1.75 (10^3/mm3), reached 3 days after administration, while for the second dose the lowest median value was 1.35 (10^3/mm3), reached 2 days after administration. The lymphocyte values in the 3 days after dose-2 administration were significantly lower than those in the 3 days before vaccination (p<0.01). Discussion: Our analysis suggests that blood lymphocyte levels decrease temporarily in the 3 days following the second dose of BNT162b2, which resembles the decrease reported after the first dose in the clinical trial. This could relate to the sharp increase in antibodies often found following the second dose. The main limitation of the study is that the lymphocyte tests were done for a medical reason, such as an annual exam or to diagnose a disease, unlike the clinical trial, which tested each participant. Future vaccine studies could examine blood lymphocytes in the 3 days following the second dose to verify this finding and further examine the dose-2 mechanism and its effect.
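As a rough illustration of the comparison described above, lymphocyte values from the 3-day windows before and after a dose can be compared with a Wilcoxon rank-sum test (SciPy's `mannwhitneyu`). The values below are synthetic, seeded to resemble the reported medians, and are not the study's data.

```python
# Illustrative sketch only: synthetic lymphocyte values (10^3/mm3) stand in
# for the study's EHR-derived measurements.
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum test

rng = np.random.default_rng(0)
before = rng.normal(1.85, 0.4, 200)  # 3 days before dose-2
after = rng.normal(1.35, 0.4, 200)   # 3 days after dose-2

# Group medians, mirroring the study's summary statistic
print(np.median(before), np.median(after))

# Two-sided rank-sum test at the study's significance threshold
stat, p = mannwhitneyu(before, after)
print("significant" if p < 0.05 else "not significant")
```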

2021 ◽  
Vol 12 (04) ◽  
pp. 816-825
Author(s):  
Yingcheng Sun ◽  
Alex Butler ◽  
Ibrahim Diallo ◽  
Jae Hyun Kim ◽  
Casey Ta ◽  
...  

Abstract Background Clinical trials are the gold standard for generating robust medical evidence, but clinical trial results often raise generalizability concerns, which can be attributed to a lack of population representativeness. Electronic health record (EHR) data are useful for estimating the population representativeness of clinical trial study populations. Objectives This research aims to systematically estimate the population representativeness of clinical trials using EHR data during the early design stage. Methods We present an end-to-end analytical framework for transforming free-text clinical trial eligibility criteria into executable database queries conformant with the Observational Medical Outcomes Partnership Common Data Model and for systematically quantifying the population representativeness of each clinical trial. Results Using this framework, we calculated the population representativeness of 782 novel coronavirus disease 2019 (COVID-19) trials and 3,827 type 2 diabetes mellitus (T2DM) trials in the United States. Owing to the use of overly restrictive eligibility criteria, 85.7% of the COVID-19 trials and 30.1% of the T2DM trials had poor population representativeness. Conclusion This research demonstrates the potential of using EHR data to assess clinical trial population representativeness, providing data-driven metrics to inform the selection and optimization of eligibility criteria.
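A minimal sketch of the underlying representativeness metric: the fraction of a condition cohort that satisfies a trial's eligibility criteria. The patient records, field names, and T2DM-style thresholds below are illustrative assumptions, not the OMOP CDM queries the framework actually generates.

```python
# Toy EHR-like cohort; each record is one patient (all values hypothetical)
patients = [
    {"age": 45, "hba1c": 8.1, "egfr": 75},
    {"age": 72, "hba1c": 7.2, "egfr": 40},
    {"age": 60, "hba1c": 9.5, "egfr": 90},
    {"age": 35, "hba1c": 6.8, "egfr": 88},
]

# Hypothetical trial criteria: 18-65 years, HbA1c 7-10%, eGFR >= 60
def eligible(p):
    return 18 <= p["age"] <= 65 and 7.0 <= p["hba1c"] <= 10.0 and p["egfr"] >= 60

# Representativeness: share of the real-world cohort the trial could enroll
representativeness = sum(eligible(p) for p in patients) / len(patients)
print(representativeness)  # 2 of 4 toy patients qualify -> 0.5
```

A trial whose criteria exclude most of the cohort (a low fraction) would be flagged as having poor population representativeness.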


2020 ◽  
Vol 15 (11) ◽  
pp. 1557-1565 ◽  
Author(s):  
Kumardeep Chaudhary ◽  
Akhil Vaid ◽  
Áine Duffy ◽  
Ishan Paranjpe ◽  
Suraj Jaladanki ◽  
...  

Background and objectives Sepsis-associated AKI is a heterogeneous clinical entity. We aimed to agnostically identify sepsis-associated AKI subphenotypes using deep learning on routinely collected data in electronic health records. Design, setting, participants, & measurements We used the Medical Information Mart for Intensive Care III database, which consists of electronic health record data from intensive care units in a tertiary care hospital in the United States. We included patients ≥18 years with sepsis who developed AKI within 48 hours of intensive care unit admission. We then used deep learning on all available vital signs, laboratory measurements, and comorbidities to identify subphenotypes. Outcomes were mortality 28 days after AKI and dialysis requirement. Results We identified 4001 patients with sepsis-associated AKI. We used 2546 combined features for K-means clustering, identifying three subphenotypes. Subphenotype 1 had 1443 patients, subphenotype 2 had 1898 patients, and subphenotype 3 had 660 patients. Subphenotype 1 had the lowest proportion of liver disease and the lowest Simplified Acute Physiology Score II scores compared with subphenotypes 2 and 3. The proportions of patients with CKD were similar between subphenotypes 1 and 3 (15%) but highest in subphenotype 2 (21%). Subphenotype 1 had lower median bilirubin, aspartate aminotransferase, and alanine aminotransferase levels compared with subphenotypes 2 and 3. Patients in subphenotype 1 also had lower median lactate, lactate dehydrogenase, and white blood cell counts than patients in subphenotypes 2 and 3. Subphenotype 1 also had lower creatinine and BUN than subphenotypes 2 and 3. Dialysis requirement was lowest in subphenotype 1 (4% versus 7% [subphenotype 2] versus 26% [subphenotype 3]). Mortality 28 days after AKI was lowest in subphenotype 1 (23% versus 35% [subphenotype 2] versus 49% [subphenotype 3]). After adjustment, the adjusted odds ratio for mortality for subphenotype 3, with subphenotype 1 as reference, was 1.9 (95% confidence interval, 1.5 to 2.4). Conclusions Using routinely collected laboratory variables, vital signs, and comorbidities, we were able to identify three distinct subphenotypes of sepsis-associated AKI with differing outcomes.
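The clustering step described above can be sketched as K-means (k=3) on standardized features. The synthetic matrix below stands in for the study's 2,546 combined EHR features; cluster counts and separations here are artifacts of the simulation, not the reported subphenotype sizes.

```python
# Illustrative subphenotyping sketch: scale features, then K-means with k=3
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# 300 synthetic "patients" x 20 features (standing in for vitals, labs,
# comorbidities); three well-separated groups are planted deliberately
X = np.vstack([rng.normal(loc, 1.0, size=(100, 20)) for loc in (0.0, 2.0, 4.0)])

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))  # patients per subphenotype
```

In practice the cluster assignments would then be compared against outcomes (28-day mortality, dialysis requirement), as the study does.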


2012 ◽  
Vol 2012 ◽  
pp. 1-5 ◽  
Author(s):  
Mansor Shakiba ◽  
Hoshang Sanadgol ◽  
Hamid Reza Azmoude ◽  
Mohamad Ali Mashhadi ◽  
Hassan Sharifi

Background. Although uremic pruritus is a common and distressing problem in chronic kidney disease, there is no approved treatment for it. This study was undertaken to evaluate the efficacy of sertraline as a possible treatment for uremic pruritus. Methods. 19 ESRD patients on hemodialysis with severe chronic pruritus were randomly selected to participate in this before-after clinical trial. Before and after starting treatment with sertraline, a detailed pruritus history was obtained and pruritus was graded with a 30-item pruritus inventory, on the basis of which patients were allocated to three severity classes. Subjects were treated with sertraline 50 mg orally daily for four months, with monthly assessments of pruritus symptoms. Results. Before treatment with sertraline, the grade of pruritus was moderate in 9 (47.4%) patients and severe in 10 (52.6%) patients. After treatment, pruritus was weak in 11 (57.9%) patients, moderate in 6 (31.6%), and severe in only 2 (10.5%). Of the 10 patients with severe pruritus, 5 (50%) experienced weak pruritus and 4 (40%) moderate pruritus after treatment. Based on the Wilcoxon signed-rank test, the difference between the grades of pruritus before and after treatment with sertraline was significant (P=0.001). Conclusions. Although no definitive recommendation can be made regarding treatment of uremic pruritus, we found an antipruritic effect of sertraline in ESRD patients.
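The paired before-after comparison above can be sketched with SciPy's Wilcoxon signed-rank test on ordinal severity grades. The grade vectors below are hypothetical stand-ins consistent with the reported class counts, not the trial's actual patient records.

```python
# Illustrative paired ordinal comparison (1 = weak, 2 = moderate, 3 = severe)
from scipy.stats import wilcoxon

# 19 hypothetical patients: severity grade before and after treatment
before = [3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 3, 3, 3, 2, 2, 3, 3, 2, 2]
after  = [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 1, 1, 2, 3, 1, 2]

# Zero differences (no change) are dropped by the default zero_method
stat, p = wilcoxon(before, after)
print("significant" if p < 0.05 else "not significant")
```

Because the grades are ordinal and the measurements are paired (same patient before and after), a signed-rank test is the natural choice over a t-test here.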


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256428
Author(s):  
Aixia Guo ◽  
Nikhilesh R. Mazumder ◽  
Daniela P. Ladner ◽  
Randi E. Foraker

Objective Liver cirrhosis is a leading cause of death and affects millions of people in the United States. Early mortality prediction among patients with cirrhosis might give healthcare providers more opportunity to effectively treat the condition. We hypothesized that laboratory test results and related diagnoses would be associated with mortality in this population. We further hypothesized that a deep learning model could outperform the current Model for End-Stage Liver Disease (MELD) score in predicting mortality. Materials and methods We utilized electronic health record data from 34,575 patients with a diagnosis of cirrhosis from a large medical center to study associations with mortality. Three time windows of mortality (365 days, 180 days, and 90 days) and two feature sets (all 41 available variables, and the 4 MELD-Na variables) were studied. Missing values were imputed using multiple imputation for continuous variables and the mode for categorical variables. Deep learning and machine learning algorithms, i.e., deep neural networks (DNN), random forest (RF), and logistic regression (LR), were employed to study the associations between baseline features such as laboratory measurements and diagnoses for each time window, using 5-fold cross-validation. Metrics such as area under the receiver operating characteristic curve (AUC), overall accuracy, sensitivity, and specificity were used to evaluate the models. Results Models comprising all variables outperformed those with the 4 MELD-Na variables for all prediction cases, and the DNN model outperformed the LR and RF models. For example, the DNN model achieved AUCs of 0.88, 0.86, and 0.85 for 90-, 180-, and 365-day mortality, respectively, as compared to the MELD score, which yielded corresponding AUCs of 0.81, 0.79, and 0.76 for the same instances. The DNN and LR models had significantly better F1 scores compared to MELD at all time points examined.
Conclusion Beyond the 4 MELD-Na variables, other variables such as alkaline phosphatase, alanine aminotransferase, and hemoglobin were among the top informative features. Machine learning and deep learning models outperformed the current standard of risk prediction among patients with cirrhosis. Advanced informatics techniques show promise for risk prediction in patients with cirrhosis.
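The evaluation design above (5-fold cross-validated AUC on all features vs. a 4-variable subset) can be sketched as follows. The data are synthetic (`make_classification`) and the model is a plain logistic regression rather than the study's DNN, so the resulting AUCs are not the paper's numbers.

```python
# Illustrative sketch of the 5-fold CV AUC comparison between feature sets
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# shuffle=False keeps informative columns first, so X[:, :4] is a meaningful
# 4-variable subset (standing in for the MELD-Na variables)
X, y = make_classification(n_samples=1000, n_features=41, n_informative=10,
                           shuffle=False, random_state=0)

auc_all = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
auc_sub = cross_val_score(LogisticRegression(max_iter=1000), X[:, :4], y,
                          cv=5, scoring="roc_auc").mean()
print(auc_all, auc_sub)  # AUC on all 41 features vs. 4-feature subset
```

The same scaffold extends to the paper's setup by swapping in the DNN/RF models and the three mortality time windows as alternative label vectors.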


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Rishi J. Desai ◽  
Michael E. Matheny ◽  
Kevin Johnson ◽  
Keith Marsolo ◽  
Lesley H. Curtis ◽  
...  

Abstract The Sentinel System is a major component of the United States Food and Drug Administration’s (FDA) approach to active medical product safety surveillance. While Sentinel has historically relied on large quantities of health insurance claims data, leveraging longitudinal electronic health records (EHRs) that contain more detailed clinical information, as structured and unstructured features, may address some of the current gaps in capabilities. We identify key challenges when using EHR data to investigate medical product safety in a scalable and accelerated way, outline potential solutions, and describe the Sentinel Innovation Center’s initiatives to put solutions into practice by expanding and strengthening the existing system with a query-ready, large-scale data infrastructure of linked EHR and claims data. We describe our initiatives in four strategic priority areas: (1) data infrastructure, (2) feature engineering, (3) causal inference, and (4) detection analytics, with the goal of incorporating emerging data science innovations to maximize the utility of EHR data for medical product safety surveillance.


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 2411-2411
Author(s):  
Maureen Watt ◽  
Scott Milligan

Introduction: The safety and efficacy of rurioctocog alfa pegol (BAX 855, SHP-660, TAK-660; Adynovate®; Baxalta US Inc., a Takeda company, Lexington, MA, USA) in patients with severe hemophilia A has been reported previously (Konkle BA et al., Blood 2015, 126:1078-85; Brand B et al., Haemophilia 2016, 22:e251-8; Mullins ES et al., Haemophilia 2017, 23:238-46); however, research describing patient experience with extended half-life (EHL) recombinant factor VIII (FVIII) products outside clinical trials is limited. The objective of this study was to assess real-world utilization of TAK-660 in patients with hemophilia A and describe their clinical profiles before and after switching to TAK-660. Factor consumption and bleed outcomes stratified by age (<18 and ≥18 years) are reported herein. Methods: This was a retrospective, observational database study of patient data from US specialty pharmacies. Pharmacy data sources included patient records, prescriptions, and patient-reported bleed logs. Informed consent was obtained for all analyzed patient data. Eligible patients with hemophilia A were treated with prophylactic TAK-660 with on-label dosing from November 2015 to September 2018, and had received ≥90 days of FVIII (standard half-life [SHL] or EHL) therapy before switching to TAK-660. Main exclusion criteria were participation in a TAK-660 clinical trial before/during this study, only on-demand treatment before switching to TAK-660, or presence of active FVIII inhibitor requiring treatment and/or use of immune tolerance induction during the study period. Assessments included prior hemophilia therapy, FVIII administration frequency and consumption, and annualized bleeding rate (ABR) before and after switching to TAK-660. 
Results: Data were collected from 82 patients (from 61 providers in 44 practices across 25 states in the United States): 44% of the patients (36/82) were <18 years old; 56% (46/82) were ≥18 years old (none were ≥60 years old); 83% (68/82) had severe hemophilia A; and 88% (72/82) had received prior SHL-FVIII treatment. The SHL antihemophilic factor (recombinant) (Advate®; Baxalta US Inc., a Takeda company, Lexington, MA, USA) was used by 67% (55/82) of patients overall, of whom 47% (26/55) were <18 years old and 53% (29/55) were ≥18 years old. Compared with any prior FVIII therapy, switching to TAK-660 increased the FVIII dose per administration in patients <18 and ≥18 years old (+39.5% and +22.9%, respectively), while their weekly administration frequency decreased (-21.4% and -28.1%, respectively; Table 1). Weekly FVIII consumption increased in patients aged <18 years (+11.2%) and decreased in those aged ≥18 years (-12.8%). FVIII administration frequency and consumption by prior SHL- or EHL-FVIII therapy are reported in Table 1. ABR data before and after switching were available in 47 of 82 patients. Compared with any prior FVIII therapy, mean ABR decreased in patients aged <18 years (-39.5%; 2.8 to 1.7) and ≥18 years (-50.3%; 3.4 to 1.7) with TAK-660 treatment (Table 2). Changes in mean ABR by prior FVIII therapy and disease severity are reported in Table 2. The small number of patients who received prior EHL-FVIII was a limiting factor in the comparison of patients who received prior SHL- and EHL-FVIII therapy. Conclusions: In patients with hemophilia A previously treated with SHL- or EHL-FVIII products, switching to TAK-660 prophylaxis resulted in a significant decrease in ABR of 40-50% in both age groups analyzed. The adult population (i.e., ≥18 years old) showed a tendency toward reduced weekly FVIII consumption. These findings from real-world data are in agreement with TAK-660 clinical trial results.
The observed differences in FVIII consumption between patients <18 and ≥18 years old may have been in part a result of age-related changes in bleeding patterns, growth, and other factors. Disclosures Watt: Shire International GmbH, a Takeda company: Employment, Other: a Takeda stock owner. Milligan: Sanofi: Research Funding; Merck: Research Funding; Gilead: Research Funding; Amgen: Research Funding; AbbVie: Research Funding; Trio Health: Employment; Viiv: Research Funding.
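The reported ABR reductions can be approximately reproduced from the rounded group means given above; the small discrepancy with the abstract's -39.5% for the younger group reflects rounding of the underlying means.

```python
# Sketch of the ABR comparison: percent change in mean annualized bleeding
# rate after switching to TAK-660, using the rounded means reported above.
# (The abstract's -39.5% and -50.3% come from unrounded underlying values.)
def pct_change(before, after):
    return round((after - before) / before * 100, 1)

print(pct_change(2.8, 1.7))  # patients <18 years: -39.3 from rounded means
print(pct_change(3.4, 1.7))  # patients >=18 years: -50.0 from rounded means
```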


2014 ◽  
Vol 05 (02) ◽  
pp. 445-462 ◽  
Author(s):  
K. H. Bowles ◽  
M. C. Adelsberger ◽  
J. L. Chittams ◽  
C. Liao ◽  
P. S. Sockolow

Summary Background: Homecare is an important and effective way of managing chronic illnesses using skilled nursing care in the home. Unlike hospitals and ambulatory settings, clinicians visit patients at home at different times, independent of each other. Twenty-nine percent of the 10,000 homecare agencies in the United States have adopted point-of-care EHRs. Yet relatively little is known about the growing use of homecare EHRs. Objective: Researchers compared workflow, financial billing, and patient outcomes before and after implementation to evaluate the impact of a homecare point-of-care EHR. Methods: The design was a pre/post observational study embedded in a mixed-methods study. The setting was a Philadelphia-based homecare agency with 137 clinicians. Data sources included: (1) clinician EHR documentation completion; (2) EHR usage data; (3) Medicare billing data; (4) an EHR Nurse Satisfaction survey; (5) clinician observations; (6) clinician interviews; and (7) patient outcomes. Results: Clinicians were satisfied with documentation timeliness and team communication. Following EHR implementation, 90% of notes were completed within the 1-day compliance interval (n = 56,702), compared with 30% of notes completed within the 7-day compliance interval in the pre-implementation period (n = 14,563; OR 19, p < .001). Productivity in the number of clinical notes documented post-implementation increased almost 10-fold compared to pre-implementation. Days to Medicare claims fell from 100 days pre-implementation to 30 days post-implementation, while the census rose. EHR implementation impact on patient outcomes was limited to some behavioral outcomes. Discussion: Findings from this homecare EHR study indicated that clinician EHR use enabled a sustained increase in productivity of note completion, as well as timeliness of documentation and billing for reimbursement, with limited impact on improving patient outcomes.
As EHR adoption increases to better meet the needs of the growing population of older people with chronic health conditions, these results can inform homecare EHR development and implementation.Citation: Sockolow PS, Bowles KH, Adelsberger MC, Chittams JL, Liao C. Impact of homecare electronic health record on timeliness of clinical documentation, reimbursement, and patient outcomes. Appl Clin Inf 2014; 5: 445–462 http://dx.doi.org/10.4338/ACI-2013-12-RA-0106
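The timeliness odds ratio in the abstract above can be approximated from the rounded completion proportions; the paper's OR of 19 is computed from the raw note counts, so the proportion-based figure differs slightly.

```python
# Sketch of an odds ratio from two completion proportions (post vs. pre).
# Uses the rounded 90% and 30% figures reported above, not the raw counts.
def odds_ratio(p1, p2):
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

print(round(odds_ratio(0.90, 0.30)))  # ~21 from rounded proportions
```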


Trials ◽  
2014 ◽  
Vol 15 (1) ◽  
pp. 18 ◽  
Author(s):  
Justin Doods ◽  
Florence Botteri ◽  
Martin Dugas ◽  
Fleur Fritz ◽  
