cohort group
Recently Published Documents


TOTAL DOCUMENTS: 72 (FIVE YEARS: 25)

H-INDEX: 9 (FIVE YEARS: 1)

Author(s):  
David Cawthorpe

Objective: The study objective was to examine the relationship between dental caries diagnosed before the age of four and ICD diseases over a 16-year period. Methods: The sample of approximately 33,531 individuals (48% female), having a total of 2,864,790 physician diagnoses over 16 years, comprised a cohort of two groups: one with dental caries (2.7% of the sample; the dependent variable) and one without, all under the age of four years in the first two years of the sample data. Categories of dental caries and associated gingivitis and periodontal disease were based on International Classification of Disease (ICD Version 9) diagnostic codes 521-523. The sample was described, and odds ratios comparing the main ICD classes between those with and without dental caries were calculated. Additionally, the ratio of each ICD diagnosis frequency between the cohort groups was calculated, representing the diagnoses assigned over the first 15 physician visits. Results: Males had proportionally more dental caries diagnosed. Diagnoses were made predominantly by general practitioners. Within the dental caries cohort, associated ICD diagnoses were over-represented both in odds ratios and within individual ICD diagnoses, at the first diagnosis and over the first 15 diagnoses in time. Conclusion: Dental caries diagnosed before the age of four are associated with multi-morbidity over subsequent years. Sex differences and patterns of associated morbidity may contribute to a better understanding of early-life vulnerability to dental caries and their sequelae.
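The odds-ratio comparison described above reduces to simple 2x2-table arithmetic. A minimal sketch in Python; the counts below are hypothetical, not the study's data.

```python
# Sketch: odds ratio from a 2x2 table, as used to compare ICD disease
# classes between the caries and caries-free groups.

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """OR = (a/b) / (c/d) for a 2x2 contingency table."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical counts: 120 of 900 children with caries vs 2,400 of
# 32,600 caries-free children received a given ICD-class diagnosis.
or_example = odds_ratio(120, 900 - 120, 2400, 32600 - 2400)
print(round(or_example, 2))
```

An odds ratio above 1 indicates the diagnosis is over-represented in the caries group, which is the pattern the study reports.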


2021 ◽  
Author(s):  
Sang Hun Eum ◽  
Hanbi Lee ◽  
Eun Jeong Ko ◽  
Hyuk Jin Cho ◽  
Chul Woo Yang ◽  
...  

Abstract Computed tomography (CT) and nuclear renography are used to determine which kidney to procure from living kidney donors (LKDs). The present study investigated which modality better predicts kidney function after donation. The study included 835 LKDs, who were divided into two subgroups based on whether the left-right dominance of kidney volume was concordant with kidney function (concordant group) or not (discordant group). The predictive value of the two imaging modalities for post-donation kidney function was compared at 1 month, 6 months, and > 1 year in the total cohort and in the concordant and discordant subgroups. Split kidney function (SKF) measured by the two modalities showed significant correlation at baseline. Pre-donation SKF of the remaining kidney measured by each modality correlated significantly with post-donation estimated glomerular filtration rate (eGFR) in the total cohort and in both subgroups. CT volumetry was superior to nuclear renography for predicting post-donation kidney function in the total cohort and both subgroups. In the discordant subgroup, a higher tendency of kidney function recovery was observed when kidney procurement was determined based on CT volumetry. In conclusion, CT volumetry is preferred when determining the procurement strategy, especially when discordance is found between the two imaging modalities.
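The modality comparison rests on how strongly pre-donation split kidney function correlates with post-donation eGFR. A minimal Pearson-correlation sketch, using made-up values rather than the study's measurements:

```python
import math

# Sketch: Pearson correlation between pre-donation split kidney function
# (SKF, %) of the remaining kidney and post-donation eGFR. All values
# below are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

skf_ct = [48.0, 52.5, 50.1, 55.2, 47.3, 51.8]    # hypothetical CT-volumetry SKF
egfr_post = [58.0, 66.0, 61.5, 70.2, 57.1, 64.0]  # hypothetical eGFR at 6 months
print(round(pearson_r(skf_ct, egfr_post), 2))
```

Computing this coefficient separately for CT volumetry and nuclear renography, against the same observed eGFR, is one way to compare the modalities' predictive value.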


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259486
Author(s):  
Tarun Reddy Katapally ◽  
Nour Hammami ◽  
Luan Manh Chu

Background: This study aims to understand how participants' compliance and response rates to both traditional validated surveys and ecological momentary assessments (EMAs) vary across 4 cohorts who participated in the same mHealth study and received the same surveys and EMAs on their smartphones, albeit with cohort-specific time-triggers. Methods: As part of the Smart Platform, adult citizen scientists residing in Regina and Saskatoon, Canada, were randomly assigned to 4 cohorts in 2018. Citizen scientists provided a complex series of subjective and objective data during 8 consecutive days using a custom-built smartphone application. All citizen scientists responded to both validated surveys and EMAs that captured physical activity. However, using the Smart Platform, we varied the burden of responding to validated surveys and EMAs across cohorts by using different time-triggered push notifications. Participants in Cohort 1 (n = 10) received the full baseline 209-item validated survey on day 1 of the study, whereas participants in Cohorts 2 (n = 26), 3 (n = 10), and 4 (n = 25) received the same survey in multiple sections over a period of 4 days. We used weighted one-way analysis of variance (ANOVA) tests and weighted linear regression models to assess differences in compliance rates across the cohorts, controlling for age, gender, and household income. Results: Compliance with EMAs that captured prospective physical activity varied across Cohorts 1 to 4: 50.0% (95% Confidence Interval [C.I.] = 31.4, 68.6), 63.0% (95% C.I. = 50.7, 75.2), 37.5% (95% C.I. = 18.9, 56.1), and 61.2% (95% C.I. = 47.4, 75.0), respectively. The highest completion rate of physical activity validated surveys was observed in Cohort 4 (mean = 97.9%, 95% C.I. = 95.5, 100.0). This was also true after controlling for age, gender, and household income. The regression analyses showed that citizen scientists in Cohorts 2, 3, and 4 had significantly higher compliance with completing the physical activity validated surveys than citizen scientists in Cohort 1, who completed the full survey on the first day. Conclusions and significance: The findings show that maximizing the compliance rates of research participants for digital epidemiological and mHealth studies requires a balance between rigour of data collection, minimization of survey burden, and adjustment of time- and user-triggered notifications based on citizen or patient input.
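The weighted linear regression described here has a closed-form solution. The sketch below fits it on synthetic data (cohort indicator variables plus one covariate), since the study's individual-level data are not available; all effect sizes, covariates, and weights are illustrative.

```python
import numpy as np

# Sketch of a weighted linear regression of compliance on cohort
# membership, with Cohort 1 as the reference category. Data are
# synthetic, not the study's.

rng = np.random.default_rng(0)
n = 200
cohort = rng.integers(1, 5, n)          # cohort labels 1..4
age = rng.normal(40, 12, n)             # illustrative covariate
X = np.column_stack([
    np.ones(n),                          # intercept
    cohort == 2, cohort == 3, cohort == 4,  # dummies; Cohort 1 = reference
    age,
])
# Synthetic compliance: Cohorts 2-4 higher than Cohort 1, plus noise
y = (0.5 + 0.1 * (cohort == 2) + 0.05 * (cohort == 3)
     + 0.12 * (cohort == 4) + rng.normal(0, 0.05, n))
w = rng.uniform(0.5, 1.5, n)            # illustrative survey weights

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta[1:4])  # estimated cohort-2..4 effects relative to Cohort 1
```

The positive dummy coefficients correspond to the reported finding that Cohorts 2-4 had higher compliance than Cohort 1.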


2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
A Gendia ◽  
A Tam ◽  
W Faux

Abstract Aim: To compare the proportions of malignancy between two modelled cohorts referred to and investigated through our colorectal two-week-wait (2WW) referral pathway. Methods: Two modelled cohorts were analysed from our prospectively maintained colorectal 2WW referrals database from August 2018 to July 2019. One cohort (group A) included patients without anemia, rectal mass, or overt rectal bleeding; the other (group B) included the remaining referrals. Data collected and analysed in each group included the total number of referrals, the number of investigated referrals, and the proportion of malignancy. A one-tailed Z test was used to assess statistical difference. Results: 4240 referrals were made to our colorectal 2WW pathway during the given period; 1333 (31%) were in group A and 2907 (69%) in group B. A total of 1227 patients were investigated in group A, of whom only 34 (2.8%) had colorectal cancer and 18 (1.5%) had extracolonic cancer. On the other hand, 2705 patients were investigated in group B; colorectal malignancy was found in 142 (5.3%) patients and extracolonic malignancy in 33 (1.2%). There was a significant difference (p < 0.05) in the total number of malignancies between group A (53; 4.3%) and group B (175; 6.5%). Conclusion: While the two-week-wait referral pathway plays an important role in rapid testing and identification of colorectal cancer, there was a difference in malignancy distribution between the referral groups. This difference does not reflect clinical significance, but it can serve as a useful stratification tool.
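The one-tailed Z test for the difference in malignancy proportions can be reproduced from the counts reported above (53 of 1227 investigated patients in group A vs 175 of 2705 in group B). A minimal sketch:

```python
import math

# Sketch of a one-tailed two-proportion Z test using the abstract's
# reported counts of total malignancies among investigated patients.

def one_tailed_z(x1, n1, x2, n2):
    """Z statistic for H1: p2 > p1, using a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = one_tailed_z(53, 1227, 175, 2705)
print(round(z, 2))  # exceeds the one-tailed 5% critical value of 1.645
```

Since z is well above 1.645, the difference is significant at p < 0.05, matching the abstract's result.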


2021 ◽  
Author(s):  
Masataka Sawaki ◽  
Naruto Taira ◽  
Yukari Uemura ◽  
Tsuyoshi Saito ◽  
Shinichi Baba ◽  
...  

Abstract Purpose: To gauge the effects of treatment practices on prognosis for all older patients with HER2-positive breast cancer, particularly to determine whether adjuvant trastuzumab alone can offer benefit over no adjuvant therapy. This report accompanies the RESPECT study, a randomized controlled trial (RCT) comparing trastuzumab monotherapy with trastuzumab-plus-chemotherapy for early HER2-positive breast cancer. Patients and methods: Patients who declined the RCT were treated at the physician's discretion. We studied the (1) trastuzumab-plus-chemotherapy group, (2) trastuzumab-monotherapy group, and (3) non-trastuzumab group (no therapy or anticancer therapy without trastuzumab). The primary endpoint was disease-free survival (DFS), which was compared using the propensity-score method. Results: We enrolled 398 eligible patients, aged over 70 years, with HER2-positive invasive breast cancer, of whom 275 (69%) were in the RCT and 123 (31%) in this cohort group. The median age was 74.5 years. Within the cohort group, treatment categories were as follows: (1) trastuzumab-plus-chemotherapy (n = 36, 30%), (2) trastuzumab monotherapy (n = 52, 43%), and (3) non-trastuzumab (n = 32, 27%). A total of 73% of patients received trastuzumab-containing regimens, with or without chemotherapy. The 3-year DFS was 92.3% in the trastuzumab-plus-chemotherapy group, 89.2% in the trastuzumab-monotherapy group, and 82.5% in the non-trastuzumab group. DFS in the non-trastuzumab group was lower than in the trastuzumab-plus-chemotherapy and trastuzumab-monotherapy groups (propensity-adjusted HR: 3.29; 95% CI: 1.15-9.39; P = 0.026). Relapse-free survival in the non-trastuzumab group was likewise lower (propensity-adjusted HR = 7.80; 95% CI: 2.32-26.2; P < 0.0001). Chemotherapy with trastuzumab or trastuzumab monotherapy did not affect health-related quality of life (HRQoL) at 36 months. Conclusions: Trastuzumab-treated patients had better prognoses than patients not treated with trastuzumab, without deterioration of HRQoL. Thus, trastuzumab monotherapy can be considered for patients who decline chemotherapy. Trial registration number: The protocol was registered on the website of the University Hospital Medical Information Network (UMIN), Japan (protocol ID: UMIN 000028476).


Author(s):  
Jose Irazuzta ◽  
Nicolas Chiriboga Salazar

A misguided auto-reactive injury is responsible for diverse types of central nervous system (CNS) conditions. We suspect that, in some of these conditions, the adaptive immune system has a common cellular immune pathogenesis, driven predominantly by T cells, despite variability in the phenotypical clinical presentation. Aim: The main goal of this study was to characterize a portion of the adaptive immune response (AIR) in patients presenting with clinical symptoms compatible with monophasic acute neuroimmune disorders (NID), including psychotic disorders (PD). Methodology: Flow cytometry with deep immunophenotyping of T effector (Teff) and T regulatory (Treg) cells was performed on peripheral blood obtained during the acute clinical phase and compared with that of an age-matched cohort group (Co). Results: Our preliminary findings point toward the presence of a common “immunosignature” in individuals affected by NID or PD. We also found a shared dysregulation of immune-related neurogenes in NID and PD that was not present in normal cohorts. Conclusions: This preliminary report gives some insights into the underlying shared pathobiology. If we can improve our capacity for early, accurate diagnosis and meaningful disease monitoring of pathogenic T cell subsets, we will both expedite disease detection and help guide the administration of effective immunotherapeutic agents.


2021 ◽  
Vol 29 (02) ◽  
pp. 115-118
Author(s):  
Uzma Urooj ◽  
Sumaira Khan ◽  
Rabia Imran

Objective: To evaluate the efficacy of intra-operative wound irrigation with normal saline in reducing surgical site infections (SSIs) in gynaecological surgeries. Methods: This prospective cohort study was carried out at the Obstetrics and Gynaecology Department, Pak-Emirates Military Hospital, Rawalpindi, from 1st November 2019 to 30th April 2020. A total of 400 patients undergoing abdominal surgery for gynaecological reasons were recruited by consecutive non-probability sampling. Patients with known comorbidities were excluded. Participants were allocated to cohort and control groups at the end of surgery, after closure of the abdominal fascia. In the cohort group, the subcutaneous soft tissue was irrigated with 1000 ml of normal saline solution before skin closure and sterile dressing; no intra-operative wound irrigation was performed in the control group. The primary and secondary endpoint measures (SSI up to the 10th post-operative day and SSI up to the 30th post-operative day, respectively) were assessed clinically. Results: The study included 400 patients, 200 in the cohort group and 200 in the control group, with a mean age (± SD) of 33.6 ± 8.1 years. The majority of patients had a pre-operative haemoglobin of > 11 g/dl (216; 54%). The most common surgeries were Caesarean section (324; 81%) and hysterectomy (40; 10%). Most surgeries took between 30-30 min (312; 78%), with a mean hospital stay (± SD) of 2.9 ± 0.5 days. Analysis showed that intra-operative wound irrigation with normal saline yielded a lower rate of postoperative SSIs than no irrigation at both the primary outcome measure, SSI at the 10th post-operative day (POD) (RR = 0.417, 95% CI [0.15; 1.161]), and the secondary outcome measure, SSI at the 30th POD (RR = 0.286, 95% CI [0.060; 1.359]). Conclusion: Intra-operative wound irrigation with normal saline decreases the risk of SSI by 58.3% at the 10th POD and by 71.4% at the 30th POD (relative risk reductions) in otherwise healthy women with no comorbidities.
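The headline percentages in the conclusion follow directly from the reported relative risks, as a relative risk reduction of 1 - RR. A small sketch of the arithmetic:

```python
# Sketch: relative risk reduction from the relative risks (RR)
# reported in the abstract for SSI at the 10th and 30th POD.

def relative_risk_reduction(rr):
    """Relative risk reduction as a fraction: 1 - RR."""
    return 1 - rr

print(round(relative_risk_reduction(0.417) * 100, 1))  # 58.3 (day 10)
print(round(relative_risk_reduction(0.286) * 100, 1))  # 71.4 (day 30)
```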


2021 ◽  
Vol 11 (8) ◽  
pp. 3371
Author(s):  
Bartosz Bielecki-Kowalski ◽  
Marcin Kozakiewicz

Open reduction and internal fixation (ORIF) is becoming increasingly common in the treatment of condylar process fractures, including mandibular head fractures. This approach significantly improves the results in terms of anatomical reduction of bone fragments and shortens the treatment time, allowing for early functional recovery. The success of ORIF is largely determined by the stability of the osteosynthesis, which depends on the type and length of the plate used in addition to the diameter and length of the screws. The aim of this study was to determine the longest screw length that can be used in ORIF of the mandibular condyle, considering the variable bone thickness. A total of 500 condyles were examined using computed tomography (CT)-based 3D models in Caucasian patients. For all models, three measurements were made in the frontal projection at places typical for the stabilization of osteosynthesis plates in fractures of the condylar process: the base, the top, and the sigmoid notch. In addition, one measurement of the mandibular head was made at the place of its greatest width. The results showed that 8 mm screws should be used in the region of the condylar base as the longest anatomically justified screws, whereas in the area of the sigmoid notch only 1.5-2 mm screws should be used. Measurements in the area of the neck top revealed statistically significant differences between the sexes, with average differences below 1 mm (p < 0.05). In this area, the maximal screw length was found to be 10 mm. In mandibular head fractures, the use of long screws is extremely important due to the desired effect of fragment compression. Statistically significant differences were found in the measurement results between women and men: the maximal screw length for bicortical fixation was 22 mm in men and 20 mm in women.
In post-traumatic patients, the ability to obtain a clear measurement is often limited by a deformed anatomy. Taking into account the fact that the fracture stability is influenced by both the plate length and the length of the fixation screws, an assessment of the standard measurement values in a cohort group will improve the quality of the surgical fixations of the fractures.


2021 ◽  
Vol 9 (4) ◽  
pp. 232596712199380
Author(s):  
Hongzhi Liu ◽  
Xinqiu Song ◽  
Pei Liu ◽  
Huachen Yu ◽  
Qidong Zhang ◽  
...  

Background: Controversy exists concerning whether tenotomy or tenodesis is the optimal surgical treatment option for proximal biceps tendon lesions. Purpose: To evaluate the clinical outcomes after arthroscopic tenodesis and tenotomy in the treatment of long head of the biceps tendon (LHBT) lesions. Study Design: Systematic review; Level of evidence, 4. Methods: A systematic review was performed by searching PubMed, the Cochrane Library, Web of Science, and Embase to identify randomized controlled trials (RCTs) and cohort studies that compared the clinical efficacy of tenotomy with that of tenodesis for LHBT lesions. A standardized data extraction form was predesigned to obtain bibliographic information of the study as well as patient, intervention, comparison, and outcome data. A random-effects model was used to pool quantitative data from the primary outcomes. Results: A total of 21 eligible studies were separated into 3 methodological groups: (1) 4 RCTs with level 1 evidence, (2) 3 RCTs and 4 prospective cohort studies with level 2 evidence, and (3) 10 retrospective cohort studies with level 3 to 4 evidence. Analysis of the 3 groups demonstrated a significantly higher risk of the Popeye sign after tenotomy versus tenodesis (group 1: risk ratio [RR], 3.29 [95% CI, 1.92-5.49]; group 2: RR, 2.35 [95% CI, 1.43-3.85]; and group 3: RR, 2.57 [95% CI, 1.33-4.98]). Arm cramping pain remained significantly higher after tenotomy only in the retrospective cohort group (RR, 2.17 [95% CI, 1.20-3.95]). The Constant score for tenotomy was significantly worse than that for tenodesis in the prospective cohort group (standardized mean difference [SMD], –0.47 [95% CI, –0.73 to –0.21]), as were the forearm supination strength index (SMD, –0.75 [95% CI, –1.28 to –0.21]) and the Simple Shoulder Test (SST) score (SMD, –0.60 [95% CI, –0.94 to –0.27]). 
Conclusion: The results demonstrated that compared with tenodesis, tenotomy had a higher risk of a Popeye deformity in all 3 study groups; worse functional outcomes in terms of the Constant score, forearm supination strength index, and SST score according to prospective cohort studies; and a higher incidence of arm cramping pain according to retrospective cohort studies.
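A random-effects model as used in this review pools study-level effects while allowing for between-study heterogeneity. The sketch below implements the DerSimonian-Laird moment estimator on log risk ratios; the (RR, SE) pairs are hypothetical, not the review's study data.

```python
import math

# Sketch: random-effects pooling of log risk ratios via the
# DerSimonian-Laird moment estimator. Study data are hypothetical.

studies = [(1.5, 0.2), (3.5, 0.25), (2.0, 0.3)]  # (risk ratio, SE of log RR)

y = [math.log(rr) for rr, _ in studies]          # log risk ratios
v = [se ** 2 for _, se in studies]               # within-study variances
w = [1 / vi for vi in v]                          # fixed-effect weights

# Fixed-effect pooled estimate and Cochran's Q heterogeneity statistic
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))

# Between-study variance tau^2 (moment estimator, truncated at zero)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights and pooled risk ratio
w_re = [1 / (vi + tau2) for vi in v]
pooled_rr = math.exp(sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re))
print(round(pooled_rr, 2))
```

With heterogeneous inputs, tau^2 is positive and the random-effects weights are more equal across studies than the fixed-effect weights, which is why larger studies dominate less under this model.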


2021 ◽  
Vol 1 ◽  
Author(s):  
Hope Witmer

The Covid-19 pandemic pushes organizations to innovate, adapt, and be responsive to new conditions. These demands are exacerbated as organizations respond to the triple sustainability challenge of social and environmental issues alongside economic recovery. These combined factors highlight the need for an inclusive definition of organizational resilience: the increased agility to adapt, learn, and transform in response to rapidly shifting external and internal conditions. This paper explores a gendered perspective of organizational resilience and the implications of degendering the concept to incorporate masculine and feminine constructs as equally valuable to the theory and practice of organizational resilience during times of crisis. Viewing the organizational demands of crisis and the expectations of the millennial workforce through a degendering lens elucidates conceptualizations of gender construction and power that limit inclusive practices and processes of organizational resilience. Data were drawn from focus groups of men and women between the ages of 21 and 35 (millennials) who have experience in the workplace and a shared knowledge of sustainability, including social aspects of gender equity and inclusion. The Degendering Organizational Resilience (DOR) model was used for analysis to reveal barriers to inclusive, resilient organizational practices. The data were organized according to the three aspects of the DOR: power structures, gendering practices, and language. A unique contribution of this study is that it explores a cross-cultural gender perspective of organizational resilience focused on a specific cohort group, the millennials. Based on the findings, three organizational recommendations for practice were identified. These include recommendations for policies and practices that deconstruct inequitable practices and co-create more agile structures, practices, and narratives for sustainable and resilient organizations.

