Detecting Relationships Between Categorical Variables Observed Over Time: A Problem of Deflating a Chi-Squared Statistic

Author(s):  
Patricia M. E. Altham


Author(s):  
E. Maguire ◽  
K. Glynn ◽  
C. McGrath ◽  
P. Byrne

Abstract Objectives: A review of the literature demonstrates that relatively little is known about acute psychiatric presentations in children (0–12 years) compared with adolescents or young adults (12 years+). This study aims to review psychiatric presentations of children to a CAMHS Liaison Service at Children's Health Ireland (CHI) at Tallaght University Hospital over a 10-year period. Methods: A retrospective study was undertaken of the case notes of all children aged 12 years and under who were referred to the CAMHS Liaison Service between January 2009 and December 2018 (n = 318). Data were anonymised and entered into SPSS v25 for analysis. Relationships between presentations and methods of self-harm over time were measured using Pearson's correlation. Associations between categorical variables were analysed using chi-squared tests. Results: There was a significant increase in presentations of under-12s over the 10-year period (r(8) = 0.66, p = 0.02). There was also a significant increase in children presenting with a disturbance of conduct and/or emotions over time (r(8) = 0.79, p < 0.001). There was a significant association between female gender and ingestion (χ2 = 12.73, df = 1, p < 0.05) and between male gender and ligature as a method of self-harm (χ2 = 5.54, df = 1, p < 0.05). Over half (53%) of children presented with suicidal thoughts and 22% presented with suicidal behaviours. The reported use of ligature as a method of self-harm emerged only from 2012 among the cases studied. Conclusions: Children aged 12 years and under are presenting in increasing numbers with acute mental health difficulties, including suicidal thoughts and behaviours. There is a worrying trend in methods of self-harm, particularly in high-lethality behaviours such as attempted strangulation.
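The gender-by-method associations above are chi-squared tests of independence on 2×2 tables. A minimal sketch of such a test (the cell counts below are hypothetical, since the abstract does not report them):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = gender (female, male),
# columns = self-harm method (ingestion, other).
# Illustrative counts only; not the study's actual data.
table = [[40, 20],
         [15, 35]]

chi2, p, df, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.4f}")  # chi2 = 14.67, df = 1
```

As in the abstract, a result with df = 1 and p < 0.05 rejects the null hypothesis that gender and method of self-harm are independent.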


2021 ◽  
pp. 155335062110080
Author(s):  
Lara Blanco Terés ◽  
Carlos Cerdán Santacruz ◽  
Javier García Septiem ◽  
Rocío Maqueda González ◽  
José María Lopesino González ◽  
...  

Introduction: The pandemic produced by SARS-CoV-2 has obliged us to set up tele-assistance to offer continuity of care. This is an innovation, and patients' degree of satisfaction with it is unknown. Methods: A telephone survey of all candidate patients seen consecutively in the Coloproctology Unit was conducted using the Telehealth Usability Questionnaire (TUQ), validated in Spanish, with items rated from 1 to 7. We recorded demographic variables, education level, job status, diagnosis and consultation type. A descriptive study was done. The relationship between the preferred future consultation model (telemedicine vs traditional) and the categorical variables was analysed with the chi-squared test. Results: A total of 115 patients were included. The average age was 59.9 years, and 60% were women. The average score was higher than 6 for all survey items but one. Of the surveyed patients, 26.1% reported being willing to use tele-assistance in the future. The only factors related to greater willingness to use tele-assistance were male gender (37% vs 18.8%; P = .03) and a higher education level, favouring higher technical studies (35.9%) and university studies (32.4%) as opposed to the rest (P = .043). The remaining variables studied (job status, labour regimen, diagnostic group and consultation type) showed no relationship. Conclusions: A vast majority of patients answered favourably to almost all the items of the survey. However, only 26.1% of them would choose a tele-assistance model without restrictions.


Circulation ◽  
2019 ◽  
Vol 140 (Suppl_2) ◽  
Author(s):  
David G Buckler ◽  
Megan Barnes ◽  
Tyler D Alexander ◽  
Marissa Lang ◽  
Alexis M Zebrowski ◽  
...  

Introduction: State-level legislation requiring CPR education prior to high school graduation (CPR Legislation) is associated with an increased likelihood of community-level CPR training. CPR Legislation has also been shown to be associated with increased bystander CPR. We hypothesized that states with recent CPR Legislation would have higher survival in older adults following out-of-hospital cardiac arrest (OHCA). Methods: Utilizing 2014 Medicare claims data for emergency department (ED) visits and inpatient stays, we identified OHCA via ICD-9-CM code. CPR Legislation data were collected through online statute review. Exposure to CPR Legislation was assessed using the patient state of residence reported on the first claim. Patient dispositions were coded as home, SNF, death/hospice, rehab or other. All categories were considered survival to discharge except for death/hospice. Associations between categorical variables were assessed by chi-squared test. Multiple logistic regression was used to calculate the odds ratio for OHCA survival associated with CPR Legislation, controlling for patient age and sex. Results: In 2014, 256,277 OHCAs were identified. Mean age was 79 ± 8 years, 48% were female, 23% were non-white, and survival to discharge was 22%. Prior to 2013, 4 states had passed CPR Legislation and 6 others passed legislation in 2013. These states account for 12% of OHCA for the study year. States that passed CPR Legislation in 2013 had the highest survival compared to states with earlier passage or no CPR Legislation (22.2% vs. 20.6% vs. 21.8%, respectively, p < 0.001). Among those who survived to discharge, more patients were discharged home from states with 2013 CPR Legislation than with earlier or no legislation (50.8% vs. 41.3% vs. 42.8%, p < 0.001). 
Results of the multiple logistic regression showed that CPR Legislation passed in 2013 was associated with a 12% increase in the odds of survival to discharge compared to states with CPR Legislation prior to 2013 (OR: 1.12, p < 0.001). Conclusion: States with CPR Legislation passed in 2013 were associated with higher survival to discharge and discharge to home, compared to earlier adopters and states with no legislation. Further work is needed to assess the mechanisms underlying this relationship.
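The adjusted odds ratio above comes from a multiple logistic regression of survival on legislation status, age, and sex. The sketch below shows how such an estimate is obtained, using synthetic data (not the Medicare claims) and a hand-rolled Newton-Raphson maximum-likelihood fit; the covariates and the simulated OR of 1.12 mirror the abstract, but everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Synthetic cohort, illustrative only (not the Medicare claims data)
age = rng.normal(79, 8, n)
female = rng.integers(0, 2, n)
law2013 = rng.integers(0, 2, n)

# Simulate survival with a true adjusted OR of 1.12 for the legislation term
lin = -1.3 + np.log(1.12) * law2013 - 0.02 * (age - 79) + 0.1 * female
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Fit logistic regression by Newton-Raphson
X = np.column_stack([np.ones(n), law2013, age - 79, female])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

print("adjusted OR for 2013 legislation:", np.exp(beta[1]))
```

Exponentiating the legislation coefficient gives the adjusted odds ratio; with this simulated cohort the estimate lands near the true value of 1.12.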


2019 ◽  
Vol 9 (3) ◽  
pp. 204589401882456 ◽  
Author(s):  
Jacob Schultz ◽  
Nicholas Giordano ◽  
Hui Zheng ◽  
Blair A. Parry ◽  
Geoffrey D. Barnes ◽  
...  

Background We provide the first multicenter analysis of patients cared for by eight Pulmonary Embolism Response Teams (PERTs) in the United States (US); describing the frequency of team activation, patient characteristics, pulmonary embolism (PE) severity, treatments delivered, and outcomes. Methods We enrolled patients from the National PERT Consortium™ multicenter registry with a PERT activation between 18 October 2016 and 17 October 2017. Data are presented combined and by PERT institution. Differences between institutions were analyzed using chi-squared test or Fisher's exact test for categorical variables, and ANOVA or Kruskal-Wallis test for continuous variables, with a two-sided P value < 0.05 considered statistically significant. Results There were 475 unique PERT activations across the Consortium, with acute PE confirmed in 416 (88%). The number of activations at each institution ranged from 3 to 13 activations/month/1000 beds, with the majority originating from the emergency department (281/475; 59.3%). The largest percentage of patients were at intermediate–low (141/416, 34%) and intermediate–high (146/416, 35%) risk of early mortality, while fewer were at high risk (51/416, 12%) and low risk (78/416, 19%). The distribution of risk groups varied significantly between institutions (P = 0.002). Anticoagulation alone was the most common therapy, delivered to 289/416 (70%) patients with confirmed PE. The proportion of patients receiving any advanced therapy varied between institutions (P = 0.0003), ranging from 16% to 46%. The 30-day mortality was 16% (53/338), ranging from 9% to 44%. Conclusions The frequency of team activation, PE severity, treatments delivered, and 30-day mortality varies between US PERTs. Further research should investigate the sources of this variability.
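The registry analysis above chooses between the chi-squared test and Fisher's exact test for each categorical comparison. A common convention (assumed here for illustration; the Consortium's exact rule is not stated in the abstract) is to fall back to Fisher's exact test when any expected cell count is below 5:

```python
from scipy.stats import chi2_contingency, fisher_exact

def compare_proportions(table):
    """Pick chi-squared or Fisher's exact for a 2x2 table using the
    usual expected-count rule of thumb (a convention, not the
    registry's published protocol)."""
    _, _, _, expected = chi2_contingency(table)
    if expected.min() < 5:
        _, p = fisher_exact(table)
        return "fisher", p
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    return "chi2", p

# Hypothetical counts of advanced therapy (yes/no) at two institutions
print(compare_proportions([[8, 42], [23, 27]]))  # ample expected counts -> chi2
print(compare_proportions([[1, 9], [6, 4]]))     # sparse table -> Fisher's exact
```

The same branching logic extends to ANOVA versus Kruskal-Wallis for continuous variables, where the switch is driven by normality rather than cell counts.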


2016 ◽  
Vol 82 (10) ◽  
pp. 885-889 ◽  
Author(s):  
Mohammed Al-Temimi ◽  
Charles Trujillo ◽  
John Agapian ◽  
Hanna Park ◽  
Ahmad Dehal ◽  
...  

Incidental appendectomy (IA) could potentially increase the risk of morbidity after abdominal procedures; however, such an effect is not clearly established. The aim of our study was to test the association of IA with morbidity after abdominal procedures. We identified 743 (0.37%) IA among 199,233 abdominal procedures in the National Surgical Quality Improvement Program database (2005–2009). Cases with and without IA were matched on the index current procedural terminology code. Patient characteristics were compared using the chi-squared test for categorical variables and Student's t test for continuous variables. Multivariate logistic regression analysis was performed. Emergency and open surgeries were associated with performing IA. Multivariate analysis showed no association of IA with mortality [odds ratio (OR) = 0.51, 95% confidence interval (CI) = 0.26–1.02], overall morbidity (OR = 1.16, 95% CI = 0.92–1.47), or major morbidity (OR = 1.20, 95% CI = 0.99–1.48). However, IA increased overall morbidity among patients undergoing elective surgery (OR = 1.31, 95% CI = 1.03–1.68) or those ≥30 years old (OR = 1.23, 95% CI = 1.00–1.51). IA was also associated with higher wound complications (OR = 1.46, 95% CI = 1.05–2.03). In conclusion, IA is an uncommonly performed procedure that is associated with increased risk of postoperative wound complications and increased risk of overall morbidity in a selected patient population.


2020 ◽  
Vol 10 (01) ◽  
pp. e32-e36
Author(s):  
Nasim C. Sobhani ◽  
Rachel Shulman ◽  
Erin E. Tran ◽  
Juan M. Gonzalez

Abstract Objective Although preterm delivery (PTD) before 34 weeks for severe hypertensive disease is a diagnostic criterion for antiphospholipid syndrome (APS), there is no consensus regarding testing for antiphospholipid antibodies (aPL) in this setting. We aim to describe the frequency of and the characteristics associated with inpatient aPL testing in this population. Study Design In this retrospective study of PTD before 34 weeks for severe hypertensive disease, charts were reviewed for aPL testing, gestational age at delivery, fetal complications, and severity of maternal disease. Wilcoxon rank-sum, Fisher's exact, and chi-squared tests were used for analyses of continuous and categorical variables, and multivariate logistic regression for adjusted odds ratios. Results Among 133 cases, 14.3% had APS screening via aPL testing. Screened patients delivered earlier than unscreened patients (28.9 vs. 31.7 weeks, p < 0.001). Each additional week of gestation was associated with a 39% decrease in the odds of screening (95% confidence interval: 0.43–0.85). There were no other differences between the groups. Conclusion APS screening after PTD for severe hypertensive disease is uncommon but more likely with earlier PTD. Despite conflicting recommendations from professional organizations, prior studies demonstrate contraceptive, obstetrical, and long-term risks associated with APS, suggesting that we should increase our screening efforts.


2012 ◽  
Vol 30 (15_suppl) ◽  
pp. e13122-e13122
Author(s):  
Julia Wilkerson ◽  
Laleh Amiri-Kordestani ◽  
Ravi A. Madan ◽  
Bamidele Adesunloye ◽  
Sen Hong Zhuang ◽  
...  

Background: The response of tumors to chemotherapy is monitored using imaging data or tumor markers, and these quantitative data provide a rich source for objective response assessment and treatment decisions. Responses are usually assessed as categorical variables based on percentage increase or decrease in tumor size. Methods: We have developed mathematical equations that describe efficacy as a continuous variable, enabling the extraction of the appropriate rate constants for tumor growth and regression (decay), designated g and d, respectively. These describe the rates of growth for the fraction of tumor that is growing despite treatment and of regression for the fraction dying as a result of therapy. Results: Using data from randomized phase III trials in kidney and breast cancer, multiple myeloma, and medullary thyroid carcinoma, as well as phase II trials in prostate cancer, we have shown that: (1) values of g, but not those of d, are strongly (negatively) correlated with patient survival; (2) g can be discerned early in treatment, before growth is demonstrated clinically, providing an early efficacy measure; (3) g typically does not change over time, even over years, suggesting resistance is intrinsic and predictable and does not worsen over time; (4) effective therapies both increase d and reduce g; and (5) in every cancer studied, the evidence suggests tumor growth reverts to its pre-treatment rate when chemotherapy is discontinued. Conclusions: The observation that g remains stable allows one to predict the most likely outcome of continued therapy. The evidence indicates that the increase in g occurring after treatment discontinuation is due to a resumption of the pre-treatment growth rate and not a change in biology. 
Our hypothesis is that if a favorable growth rate that slows tumor growth can be identified, survival might be improved if therapies that achieve this favorable growth rate are continued despite crossing conventional disease progression boundaries. We plan a prospective test of this model to provide a more informed decision and better survival outcome by maximizing the benefit obtained from approved therapies.
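One common form of such a regression-growth model expresses tumor burden relative to baseline as the sum of a fraction regressing exponentially at rate d and a fraction growing exponentially at rate g. The sketch below fits that form to synthetic measurements with SciPy; the authors' exact equations and data may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def regression_growth(t, g, d):
    """Tumor burden relative to baseline: a treatment-sensitive fraction
    decaying at rate d plus a resistant fraction growing at rate g.
    (One common form of such models; illustrative, not necessarily the
    authors' exact formulation.)"""
    return np.exp(-d * t) + np.exp(g * t) - 1.0

# Synthetic, noise-free measurements at follow-up times in days
t = np.array([0, 30, 60, 90, 120, 180, 240, 300], dtype=float)
true_g, true_d = 0.004, 0.02
y = regression_growth(t, true_g, true_d)

# Recover g and d from the measurements
(g_hat, d_hat), _ = curve_fit(regression_growth, t, y, p0=[0.001, 0.01])
print(f"g = {g_hat:.4f}/day, d = {d_hat:.4f}/day")
```

The fitted g is the quantity the abstract links to survival: a small, stable g implies slow regrowth of the resistant fraction even while total burden is still shrinking.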


2016 ◽  
Vol 41 (6) ◽  
pp. E9 ◽  
Author(s):  
Varun R. Kshettry ◽  
Hyunwoo Do ◽  
Khaled Elshazly ◽  
Christopher J. Farrell ◽  
Gurston Nyquist ◽  
...  

OBJECTIVE There is a paucity of literature regarding the learning curve associated with performing endoscopic endonasal cranial base surgery. The purpose of this study was to determine to what extent a learning curve might exist for endoscopic endonasal resection in cases of craniopharyngiomas. METHODS A retrospective review was performed for all endoscopic endonasal craniopharyngioma resections performed at Thomas Jefferson University from 2005 to 2015. To assess for a learning curve effect, patients were divided into an early cohort (2005–2009, n = 20) and a late cohort (2010–2015, n = 23). Preoperative demographics, clinical presentation, imaging characteristics, extent of resection, complications, tumor control, and visual and endocrine outcomes were obtained. Categorical variables and continuous variables were compared using a 2-sided Fisher's exact test and t-test, respectively. RESULTS Only the index operation performed at the authors' institution was included. There were no statistically significant differences between early and late cohorts in terms of patient age, sex, presenting symptoms, history of surgical or radiation treatment, tumor size or consistency, hypothalamic involvement, or histological subtype. The rate of gross-total resection (GTR) increased over time from 20% to 65% (p = 0.005), and the rate of subtotal resection decreased over time from 40% to 13% (p = 0.078). Major neurological complications, including new hydrocephalus, meningitis, carotid artery injury, or stroke, occurred in 6 patients (15%) (8 complications) in the early cohort compared with only 1 (4%) in the late cohort (p = 0.037). CSF leak decreased from 40% to 4% (p = 0.007). Discharge to home increased from 64% to 95% (p = 0.024). Visual improvement was high in both cohorts (88% [early cohort] and 81% [late cohort]). 
Rates of postoperative panhypopituitarism and permanent diabetes insipidus increased from 50% to 91% (p = 0.005) and from 32% to 78% (p = 0.004), respectively, which correlated with a significant increase in intentional stalk sacrifice in the late cohort (from 0% to 70%, p < 0.001). CONCLUSIONS High rates of near- or total resection and visual improvement can be achieved using an endoscopic endonasal approach for craniopharyngiomas. However, the authors did find evidence for a learning curve. After 20 cases, they found a significant decrease in major neurological complications and significant increases in the rates of GTR and discharge to home. Although there was a large decrease in the rate of postoperative CSF leak over time, this was largely attributable to the inclusion of very early cases prior to the routine use of vascularized nasoseptal flaps. There was a significant increase in new panhypopituitarism and diabetes insipidus, which is attributable to increased rates of intentional stalk sacrifice.


2021 ◽  
Author(s):  
Gabriella Gatt ◽  
Nikolai Attard

Abstract Background: Despite increasing prevalence, age-specific risk prediction models for erosive tooth wear in preschool-age children have not been developed. Identification of at-risk groups and the timely introduction of behavioural change or treatment will stop the progression of erosive wear into the permanent dentition. The aim of this study was to identify age-specific risk factors for erosive wear. Distinct risk prediction models for three-year-old and five-year-old children were developed. Methods: A prospective cohort study included clinical examinations and parent-administered questionnaires for three- and five-year-old children. Chi-squared tests explored categorical demographic variables, Spearman rank-order correlation tests examined changes in BEWE scores with changes in food frequencies, and Wilcoxon signed-rank tests evaluated the temporal effect of frequencies of consumption of dietary items. Mann-Whitney U tests compared changes in BEWE scores over time for the twenty-six bivariate categorical variables, and Kruskal-Wallis tests compared changes in BEWE scores over time across the remaining 55 categorical variables representing demographic factors, oral hygiene habits and dietary habits. Change in BEWE scores for continuous variables was investigated using the Spearman rho correlation coefficient. Variables showing a significant association with the difference in BEWE cumulative score over time were used to develop two risk prediction models. The models were evaluated by receiver operating characteristic (ROC) analysis. Results: Risk factors for the three-year-old cohort included the presence of erosive wear (χ2(1, 92) = 12.829, p < 0.001), district (χ2(5, 92) = 17.032, p = 0.004) and family size (χ2(1, 92) = 4.547, p = 0.033). 
Risk factors for the five-year-old cohort also included erosive wear (χ2(1, 144) = 4.768, p = 0.029), gender (χ2(1, 144) = 19.399, p < 0.001), consumption of iced tea (χ2(1, 144) = 8.872, p = 0.003) and dry mouth (χ2(1, 144) = 9.598, p = 0.002). Conclusions: Predictive risk factors for three-year-old children are based on demographic factors and are distinct from those for the five-year-old cohort, which are based on biological and behavioural factors. The presence of erosive wear is a risk factor for further wear in both age cohorts.
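The ROC evaluation above connects directly to the Mann-Whitney U test also used in the study: the area under the ROC curve equals the probability that a randomly chosen case outscores a randomly chosen non-case, which is U rescaled by the number of case/non-case pairs. A sketch with hypothetical risk scores (not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical model risk scores, illustrative only
scores_wear = [0.8, 0.7, 0.9, 0.6, 0.75]        # children with erosive wear
scores_none = [0.3, 0.5, 0.4, 0.65, 0.2, 0.35]  # children without

# U counts the case/non-case pairs where the case scores higher;
# dividing by the number of pairs gives the ROC AUC
u, p = mannwhitneyu(scores_wear, scores_none, alternative="greater")
auc = u / (len(scores_wear) * len(scores_none))
print(f"AUC = {auc:.2f}")  # AUC = 0.97
```

An AUC of 0.5 means the model discriminates no better than chance; values approaching 1.0 indicate the risk model separates the two cohorts well.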


2016 ◽  
Vol 102 (4) ◽  
pp. 7-16 ◽  
Author(s):  
Susan H. Allen ◽  
Robert L. Marier ◽  
Cecilia Mouton ◽  
Arti Shankar

Currently, the majority of medical boards require only one year of post-graduate training (PGT) for full and unrestricted licensure. This study analyzes the association between years of PGT, board certification and the risk of being disciplined by the Louisiana State Board of Medical Examiners (LSBME) to assess whether training requirements for physician licensure in Louisiana should be revised. A total of 624 physicians who were sanctioned between 1990 and 2010 were compared to a random sample of 6,552 physicians who were not disciplined during the study period. Statistical methods included chi-squared tests of independence and logistic regression analysis. After controlling for demographics, specialty, years of training, board certification status and changing training requirements over time, physicians who had completed more than one year but less than three years of PGT were more than twice as likely to be disciplined (OR 2.24, p < .005), while non-board-certified physicians were more than four times as likely to be disciplined (OR 4.64, p < .0001). Of all physicians sanctioned for findings of substandard practices/medical incompetency, 21% had fewer than three years of PGT, and 46% of physicians with less than three years of training were sanctioned for this reason. Our study indicates that physicians who do not complete a minimum of three years of post-graduate training are more likely to be the subject of a disciplinary action, and that these physicians are more likely to be sanctioned for competency/standards-related issues. Because medical knowledge and training expectations have increased over time, licensing authorities may want to delay full licensure status until applicants have had a minimum of three years of PGT in an ACGME- or AOA-accredited training program.

