Case-control studies in neurosurgery

2014 ◽  
Vol 121 (2) ◽  
pp. 285-296 ◽  
Author(s):  
Cody L. Nesvick ◽  
Clinton J. Thompson ◽  
Frederick A. Boop ◽  
Paul Klimo

Object: Observational studies, such as cohort and case-control studies, are valuable instruments in evidence-based medicine. Case-control studies in particular are becoming increasingly popular in the neurosurgical literature owing to their low cost and relative ease of execution; however, no one has yet systematically assessed these studies for quality of methodology and reporting.

Methods: The authors performed a literature search using PubMed/MEDLINE to identify all studies that explicitly identified themselves as “case-control” and were published in the JNS Publishing Group journals (Journal of Neurosurgery, Journal of Neurosurgery: Pediatrics, Journal of Neurosurgery: Spine, and Neurosurgical Focus) or Neurosurgery. Each paper was evaluated for 22 descriptive variables and then categorized as having either met or missed the basic definition of a case-control study; all studies that evaluated risk factors for a well-defined outcome were considered true case-control studies. The authors sought to identify key features or phrases that were or were not predictive of a true case-control study. Papers that satisfied the definition were further evaluated using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist.

Results: The search identified 67 papers that met the inclusion criteria, of which 32 (48%) were true case-control studies. The frequency of true case-control studies has not changed over time. Use of odds ratios (ORs) and use of logistic regression (LR) analysis were strong positive predictors of true case-control studies (for odds ratios, OR 15.33, 95% CI 4.52–51.97; for logistic regression analysis, OR 8.77, 95% CI 2.69–28.56). Conversely, negative predictors included a focus on a procedure/intervention (OR 0.35, 95% CI 0.13–0.998) and use of the word “outcome” in the Results section (OR 0.23, 95% CI 0.082–0.65). After exclusion of nested case-control studies, the negative association between a focus on a procedure/intervention and true case-control studies was strengthened (OR 0.053, 95% CI 0.0064–0.44). There was a trend toward a negative association between the use of survival analysis or Kaplan-Meier curves and true case-control studies (OR 0.13, 95% CI 0.015–1.12). True case-control studies were no more likely than their counterparts to have involved a potential study design “expert” (OR 1.50, 95% CI 0.57–3.95). The overall average STROBE score was 72% (range 50–86%); examples of reporting deficiencies were reporting of bias (28%), missing data (55%), and funding (44%).

Conclusions: The majority of studies in the neurosurgical literature that identify themselves as “case-control” studies are in fact labeled incorrectly. Positive and negative predictors of true case-control studies were identified. The authors provide several recommendations that may reverse the incorrect and inappropriate use of the term “case-control” and improve the quality of design and reporting of true case-control studies in neurosurgery.
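
As a point of reference for the odds ratios quoted above, the following minimal sketch (Python, with made-up counts rather than the study's data) shows the standard 2×2-table calculation of an odds ratio with a Wald 95% confidence interval, the same form of estimate reported throughout this analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed cases,    b = unexposed cases,
         c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Illustrative counts only (not the paper's data):
# "exposure" = the paper reported ORs; "case" = true case-control study.
print(odds_ratio_ci(a=25, b=7, c=12, d=23))
```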

Author(s):  
Jeremy A Labrecque ◽  
Myriam M G Hunink ◽  
M Arfan Ikram ◽  
M Kamran Ikram

Abstract: Case-control studies are an important part of the epidemiologic literature, yet confusion remains about how to interpret estimates from different case-control study designs. We demonstrate that not all case-control study designs estimate odds ratios; nevertheless, case-control studies in the literature often report odds ratios as their main parameter even when using designs that do not estimate odds ratios. Only studies using specific case-control designs should report odds ratios, whereas case-cohort and incidence-density sampled case-control studies must report risk ratios and incidence rate ratios, respectively. This also applies to case-control studies conducted in open cohorts, which often estimate incidence rate ratios. We also demonstrate the misinterpretation of case-control study estimates in a small sample of highly cited case-control studies in general epidemiologic and medical journals. We therefore suggest that greater care be taken when considering which parameter should be reported from a case-control study.
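
The distinction the authors draw can be made concrete with a toy numerical example. The sketch below uses invented full-cohort counts (nothing from the paper) to compute the risk ratio, incidence rate ratio, and odds ratio from the same cohort; each is the parameter that a particular case-control sampling scheme is able to recover.

```python
# Toy full-cohort counts, purely to illustrate why the sampling scheme
# determines which parameter a case-control study estimates.
cases_e, n_e, pt_e = 300, 1000, 8200.0   # exposed: cases, cohort size, person-years
cases_u, n_u, pt_u = 150, 1000, 9100.0   # unexposed

# Case-cohort sampling (controls drawn from the whole cohort at baseline)
# approximates the risk ratio:
risk_ratio = (cases_e / n_e) / (cases_u / n_u)

# Incidence-density (risk-set) sampling approximates the incidence rate ratio:
rate_ratio = (cases_e / pt_e) / (cases_u / pt_u)

# "Exclusive" (cumulative) sampling of non-cases at the end of follow-up
# approximates the odds ratio:
odds_ratio = (cases_e / (n_e - cases_e)) / (cases_u / (n_u - cases_u))

print(f"RR={risk_ratio:.2f}  IRR={rate_ratio:.2f}  OR={odds_ratio:.2f}")
# With a common outcome the three diverge, so labelling every case-control
# estimate an "odds ratio" can misstate what was actually estimated.
```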


2019 ◽  
Vol 48 (6) ◽  
pp. 1981-1991 ◽  
Author(s):  
Yin Bun Cheung ◽  
Xiangmei Ma ◽  
K F Lam ◽  
Jialiang Li ◽  
Paul Milligan

Background: Previous simulation studies of the case–control study design using incidence density sampling, which requires individual matching for time, showed biased estimates of association from conditional logistic regression (CLR) analysis; however, the reason for this was unknown. Separately, in the analysis of case–control studies using the exclusive sampling design, it has been shown that unconditional logistic regression (ULR) with adjustment for an individually matched binary factor can give unbiased estimates. The validity of this analytic approach under incidence density sampling needs evaluation.

Methods: In extensive simulations using incidence density sampling, we evaluated various analytic methods: CLR with and without a bias-reduction method, ULR with adjustment for time in quintiles (and for residual time within quintiles), and ULR with adjustment for matched sets and bias reduction. We re-analysed a case–control study of Haemophilus influenzae type b vaccine using these methods.

Results: We found that the bias in the CLR analysis from previous studies was due to sparse-data bias. It can be controlled by the bias-reduction method for CLR or by increasing the number of cases and/or controls. ULR with adjustment for time in quintiles usually gave results highly comparable to CLR, despite breaking the matches. Further adjustment for residual time trends was needed in the case of time-varying effects. ULR with adjustment for matched sets tended to perform poorly despite bias reduction.

Conclusions: Studies using incidence density sampling may be analysed by either ULR with adjustment for time or CLR, possibly with bias reduction.
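
A minimal sketch of the unconditional-logistic-regression approach described above, assuming a data set with one row per subject and columns `case`, `exposure` and `time` (the failure time of the matched risk set). The data here are simulated and the column names are illustrative; this is not the authors' simulation code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy data standing in for an incidence-density-sampled case-control set:
# 'time' is the failure time shared within a matched set, 'exposure' is binary.
n = 2000
df = pd.DataFrame({
    "time": rng.uniform(0, 5, n),
    "exposure": rng.binomial(1, 0.3, n),
})
logit = -1.0 + 0.7 * df["exposure"] + 0.1 * df["time"]
p = 1 / (1 + np.exp(-logit))
df["case"] = rng.binomial(1, p.to_numpy())

# Unconditional logistic regression, adjusting for time in quintiles
# (breaking the individual matching, as evaluated in the paper).
df["time_q"] = pd.qcut(df["time"], 5, labels=False)
X = pd.get_dummies(df["time_q"], prefix="q", drop_first=True).astype(float)
X["exposure"] = df["exposure"]
X = sm.add_constant(X)

fit = sm.Logit(df["case"], X).fit(disp=0)
print("OR for exposure:", np.exp(fit.params["exposure"]))
```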


Author(s):  
Noam Karni ◽  
Hadar Klein ◽  
Kim Asseo ◽  
Yuval Benjamini ◽  
Sarah Israel ◽  
...  

Background: Clinical diagnosis of COVID-19 poses an enormous challenge to early detection and prevention, which are of crucial importance for pandemic containment. Cases of COVID-19 may be hard to distinguish clinically from other acute viral diseases, resulting in an overwhelming load of laboratory screening. Sudden onset of taste and smell loss has emerged as a hallmark of COVID-19, and the optimal way to include these symptoms in the screening of suspected COVID-19 patients has yet to be established.

Methods: We performed a case-control study on patients who were PCR-tested for COVID-19 (112 positive and 112 negative participants), recruited during the first wave (March–May 2020) of the COVID-19 pandemic in Israel. Patients were interviewed by phone regarding their symptoms and medical history and were asked to rate their olfactory and gustatory ability before and during their illness on a 1–10 scale. Prevalence and degrees of symptoms were calculated, and odds ratios were estimated. Symptom-based logistic-regression classifiers were constructed and evaluated on a hold-out set.

Results: Changes in smell and taste occurred in 68% (95% CI 60%–76%) and 72% (64%–80%) of positive patients, with odds ratios of 24 (range 11–53) and 12 (range 6–23), respectively. The self-rated ability to smell decreased by 0.5 ± 1.5 points in negative participants and by 4.5 ± 3.6 in positive participants; the ability to taste decreased by 0.4 ± 1.5 and 4.9 ± 3.8, respectively (mean ± SD). A penalized logistic regression classifier based on five symptoms (degree of smell change, muscle ache, lack of appetite, fever, and a negatively contributing sore throat) had 66% sensitivity, 97% specificity, and an area under the ROC curve (AUC) of 0.83 on a hold-out set. A classifier based on the degree of smell change alone performed almost as well, with 66% sensitivity, 97% specificity, and an AUC of 0.81. Under the assumption of 8% positives among those tested, the positive predictive value (PPV) of this classifier is 0.68 and the negative predictive value (NPV) is 0.97.

Conclusions: Self-reported quantitative olfactory changes, either alone or combined with other symptoms, provide a specific and powerful tool for clinical diagnosis of COVID-19. The applicability of this tool for prioritizing COVID-19 laboratory testing is facilitated by a simple calculator presented here.
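
The reported PPV and NPV follow from Bayes' rule applied to the classifier's sensitivity and specificity under the assumed 8% positivity rate among those tested. A small sketch using the rounded values quoted in the abstract (so the result is close to, but not exactly, the published 0.68 and 0.97):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence            # true positives per tested person
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Rounded values from the abstract; the exact published figures differ slightly.
ppv, npv = ppv_npv(sensitivity=0.66, specificity=0.97, prevalence=0.08)
print(f"PPV={ppv:.2f}  NPV={npv:.2f}")   # roughly 0.66 and 0.97 with these inputs
```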


Author(s):  
Zoran Z. Sarcevic ◽  
Andreja P. Tepavcevic

BACKGROUND: Subacromial pain (SAP) is a common complaint of young athletes, independent of the sport practised. The prevalence of SAP in some sports is up to 50%.

OBJECTIVE: To investigate new factors possibly associated with subacromial pain in young athletes: the grade of tightness of the clavicular portion of the pectoralis major, dysfunction of the sternoclavicular joint, and serratus anterior and lower trapezius strength.

METHODS: This case-control study included 82 young athletes aged 9–15 years, 41 with symptoms of SAP and 41 controls. All participants self-reported whether they had subacromial pain. In addition, the Hawkins–Kennedy test was performed on all participants to evaluate subacromial pressure. The main outcome measures were the grade of tightness of the clavicular portion of the pectoralis major, dysfunction of the sternoclavicular joint, and serratus anterior and lower trapezius strength. The grade of tightness of the clavicular portion of the pectoralis major and the dysfunction of the sternoclavicular joint were measured with an inclinometer; serratus anterior and lower trapezius strength were measured with a handheld dynamometer with external belt fixation. The data were analyzed using the t-test for independent samples, the Mann-Whitney U test, contingency coefficients, and stepwise binary logistic regression.

RESULTS: A statistically significant difference between cases and controls was observed in the grade of tightness of the clavicular portion of the pectoralis major and in the variable representing physiological functioning of the sternoclavicular joint. There was no significant difference in serratus anterior and lower trapezius strength between cases and controls. Logistic regression analysis showed that the variable representing physiological functioning of the sternoclavicular joint and the grade of shortening of the clavicular portion of the pectoralis major were good predictors of the presence of SAP.

CONCLUSIONS: A strong association was found between subacromial pain in young athletes, tightness of the clavicular portion of the pectoralis major, and dysfunction of the sternoclavicular joint.
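
As an illustration of the group comparison described in the Methods, the sketch below runs a Mann-Whitney U test on simulated inclinometer-style measurements for 41 cases and 41 controls; the numbers are invented and stand in for, rather than reproduce, the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Illustrative measurements only (degrees of pectoralis-major tightness,
# standing in for the inclinometer readings described in the abstract).
cases = rng.normal(25, 6, 41)
controls = rng.normal(18, 6, 41)

u_stat, p_value = mannwhitneyu(cases, controls, alternative="two-sided")
print(f"U={u_stat:.1f}, p={p_value:.4f}")
```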


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Bernard Kianu Phanzu ◽  
Aliocha Nkodila Natuhoyila ◽  
Eleuthère Kintoki Vita ◽  
Jean-René M’Buyamba Kabangu ◽  
Benjamin Longo-Mbenza

Background: Conflicting information exists regarding the association between insulin resistance (IR) and left ventricular hypertrophy (LVH). We described the associations between obesity, fasting insulinemia, homeostasis model assessment of insulin resistance (HOMA-IR), and LVH in Black patients with essential hypertension.

Methods: A case–control study was conducted at the Centre Médical de Kinshasa (CMK), Democratic Republic of the Congo, between January and December 2019. Cases and controls were hypertensive patients with and without LVH, respectively. The relationships between obesity indices, physical inactivity, glucose metabolism and lipid disorder parameters, and LVH were assessed using linear and logistic regression analyses in simple and univariate exploratory analyses, respectively. When differences were observed between LVH and the independent variables, the effects of potential confounders were studied using multiple linear regression and conditional logistic regression in multivariate analyses. Coefficients of determination (R²), adjusted odds ratios (aORs), and their 95% confidence intervals (95% CIs) were calculated to determine associations between LVH and the independent variables.

Results: Eighty-eight LVH cases (52 men) were compared with 132 controls (81 men). Variation in left ventricular mass (LVM) could be predicted by the following variables: age (19%), duration of hypertension (31.3%), body mass index (BMI, 44.4%), waist circumference (WC, 42.5%), glycemia (20%), insulinemia (44.8%), and HOMA-IR (43.7%). Hypertension duration, BMI, insulinemia, and HOMA-IR together explained 68.3% of LVM variability in the multiple linear regression analysis. In the logistic regression model, obesity increased the risk of LVH approximately threefold [aOR 2.8; 95% CI 1.06–7.4; p = 0.038], and IR increased the risk of LVH approximately eightfold [aOR 8.4; 95% CI 3.7–15.7; p < 0.001].

Conclusion: Obesity and IR appear to be the primary predictors of LVH in Black sub-Saharan African hypertensive patients. The comprehensive management of cardiovascular risk factors should be emphasized, with particular attention paid to obesity and IR. A prospective population-based study of Black sub-Saharan individuals that includes serial imaging remains essential to better understand subclinical LV deterioration over time and to confirm the role played by IR in Black sub-Saharan individuals with hypertension.
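
The percentages quoted for individual predictors and the 68.3% for the combined model are coefficients of determination (R²) from simple and multiple linear regressions of left ventricular mass on the predictors. A sketch of that calculation on synthetic data (no values are taken from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic stand-ins for predictors of left ventricular mass (LVM);
# none of these numbers come from the study.
n = 220
df = pd.DataFrame({
    "bmi": rng.normal(28, 4, n),
    "homa_ir": rng.normal(2.5, 1.0, n),
    "age": rng.normal(55, 10, n),
})
df["lvm"] = 60 + 2.5 * df["bmi"] + 8 * df["homa_ir"] + 0.4 * df["age"] + rng.normal(0, 15, n)

# R^2 from a simple regression (one predictor at a time) ...
r2_bmi = sm.OLS(df["lvm"], sm.add_constant(df[["bmi"]])).fit().rsquared

# ... versus the multiple regression combining predictors, analogous to the
# 68.3% of LVM variability reported in the abstract.
r2_full = sm.OLS(df["lvm"], sm.add_constant(df[["bmi", "homa_ir", "age"]])).fit().rsquared
print(f"R2 (BMI only)={r2_bmi:.2f}   R2 (combined)={r2_full:.2f}")
```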


2021 ◽  
Vol 49 (6) ◽  
pp. 030006052110229
Author(s):  
Ying Li ◽  
Qing-rong Ouyang ◽  
Juan Li ◽  
Xiao-rong Chen ◽  
Lin-lin Li ◽  
...  

Objective: To determine the associations between matrix metalloproteinase-2 (MMP-2, encoded by the MMP2 gene) 1306C/T and 735C/T polymorphisms and first and recurrent ischemic stroke in a Chinese population.

Methods: Patients with first and recurrent ischemic stroke were included. Serum MMP-2 was measured, and the MMP2 1306C/T and 735C/T polymorphisms were genotyped. The associations between the MMP2 1306C/T and 735C/T polymorphisms and first and recurrent ischemic stroke were analyzed.

Results: Serum MMP-2 was significantly higher in patients with first and recurrent ischemic stroke than in controls, and patients with recurrent ischemic stroke had higher MMP-2 than those with first ischemic stroke. The frequency of the CC genotype and C allele of MMP2 735C/T was highest in patients with recurrent ischemic stroke, followed by patients with first ischemic stroke and then controls. In contrast, the genotype and allele frequencies of MMP2 1306C/T did not differ significantly between groups. The CC genotype of MMP2 735C/T was independently associated with first and recurrent ischemic stroke (odds ratios = 1.45 and 1.64, respectively), as was the C allele of MMP2 735C/T (odds ratios = 1.68 and 1.77, respectively).

Conclusions: The CC genotype and C allele of MMP2 735C/T were associated with first and recurrent ischemic stroke in a Chinese population.
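
Allele-level odds ratios such as those reported for the C allele are obtained by counting two alleles per participant from the genotype table. A sketch with purely illustrative genotype counts (the abstract does not give the underlying counts):

```python
import math

def allele_or(case_cc, case_ct, case_tt, ctrl_cc, ctrl_ct, ctrl_tt):
    """Odds ratio (with Wald 95% CI) for the C allele from genotype counts;
    each CC carrier contributes two C alleles, each CT carrier one."""
    case_c = 2 * case_cc + case_ct
    case_t = 2 * case_tt + case_ct
    ctrl_c = 2 * ctrl_cc + ctrl_ct
    ctrl_t = 2 * ctrl_tt + ctrl_ct
    or_ = (case_c * ctrl_t) / (case_t * ctrl_c)
    se = math.sqrt(1/case_c + 1/case_t + 1/ctrl_c + 1/ctrl_t)
    return or_, math.exp(math.log(or_) - 1.96 * se), math.exp(math.log(or_) + 1.96 * se)

# Illustrative genotype counts only; not the study's data.
print(allele_or(case_cc=70, case_ct=40, case_tt=10, ctrl_cc=55, ctrl_ct=50, ctrl_tt=15))
```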


2020 ◽  
Vol 22 (1) ◽  
pp. 6-14
Author(s):  
Matthew I Hardman ◽  
S Chandralekha Kruthiventi ◽  
Michelle R Schmugge ◽  
Alexandre N Cavalcante ◽  
...  

OBJECTIVE: To determine patient and perioperative characteristics associated with unexpected postoperative clinical deterioration, as defined by the need for postoperative emergency response team (ERT) activation.

DESIGN: Retrospective case–control study.

SETTING: Tertiary academic hospital.

PARTICIPANTS: Patients who underwent general anaesthesia, were discharged to regular wards between 1 January 2013 and 31 December 2015, and required ERT activation within 48 postoperative hours. Controls were matched on age, sex and procedure.

MAIN OUTCOME MEASURES: Baseline patient and perioperative characteristics were abstracted to develop a multiple logistic regression model assessing potential associations with increased risk of postoperative ERT activation.

RESULTS: Among 105 345 patients, 797 had ERT calls, a rate of 7.6 (95% CI, 7.1–8.1) calls per 1000 anaesthetics (0.76%). Multiple logistic regression analysis identified the following risk factors for postoperative ERT activation: cardiovascular disease (odds ratio [OR], 1.61; 95% CI, 1.18–2.18), neurological disease (OR, 1.57; 95% CI, 1.11–2.22), preoperative gabapentin (OR, 1.60; 95% CI, 1.17–2.20), longer surgical duration (OR, 1.06; 95% CI, 1.02–1.11, per 30 min), emergency procedure (OR, 1.54; 95% CI, 1.09–2.18), and intraoperative use of colloids (OR, 1.50; 95% CI, 1.17–1.92). Compared with controls, ERT patients had a longer hospital stay, a higher rate of admission to critical care (55.5%), more postoperative complications, and a higher 30-day mortality rate (OR, 3.36; 95% CI, 1.73–6.54).

CONCLUSION: We identified several patient and procedural characteristics associated with an increased likelihood of postoperative ERT activation. ERT intervention is a marker for increased rates of postoperative complications and death.
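
The odds ratio of 1.06 per 30 minutes of surgery is the usual rescaling of a logistic-regression coefficient estimated on a per-minute variable. A one-line illustration, with an assumed coefficient chosen only to land near an OR of 1.06 (not a value from the study):

```python
import math

# If the model is fitted with surgical duration in minutes, the OR for a
# 30-minute increase is exp(30 * beta). The beta below is illustrative.
beta_per_minute = 0.0019
or_per_30_min = math.exp(30 * beta_per_minute)
print(f"OR per 30 min of surgery: {or_per_30_min:.2f}")
```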


Blood ◽  
1993 ◽  
Vol 82 (9) ◽  
pp. 2714-2718 ◽  
Author(s):  
DW Kaufman ◽  
JP Kelly ◽  
CB Johannes ◽  
A Sandler ◽  
D Harmon ◽  
...  

Abstract The relation of acute thrombocytopenic purpura (TP) to the use of drugs was investigated in a case-control study conducted in eastern Massachusetts, Rhode Island, and the Philadelphia region; 62 cases over the age of 16 years with acute onset and with a rapid recovery were compared with 2,625 hospital controls. After control for confounding by multiple logistic regression, use of the following drugs in the week before the onset of symptoms was significantly associated: trimethoprim/sulfamethoxazole (relative risk [RR] estimate, 124), quinidine/quinine (101), dipyridamole (14), sulfonylureas (4.8), and salicylates (2.6). The overall annual incidence of acute TP was estimated to be 18 cases per million population. The excess risks for the associated drugs were estimated to be 38 cases per million users of trimethoprim/sulfamethoxazole per week, 26 per million for quinidine/quinine, 3.9 per million for dipyridamole, 1.2 per million for sulfonylureas, and 0.4 per million for salicylates. Associations with sulfonamides, quinidine/quinine, sulfonylureas, and salicylates have been previously reported, but the present study has provided the first quantitative measures of the risk. The association with dipyridamole was unexpected. In general, despite large RRs, the incidence rates attributable to the drugs at issue (excess risks) were low, suggesting that TP is not an important consideration in the use of the various drugs.
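
The excess-risk figures have the form baseline incidence × (relative risk − 1), expressed per million users per week. A rough reconstruction using the overall incidence and RR quoted in the abstract; the published 38 per million was derived from the study's own data (incidence among non-users rather than the overall incidence), so this crude version does not match it exactly:

```python
# Back-of-envelope reconstruction of an "excess risk" figure; the published
# number need not match this rough calculation exactly.
annual_incidence_per_million = 18            # overall incidence of acute TP
weekly_incidence_per_million = annual_incidence_per_million / 52
rr_tmp_smx = 124                             # reported RR for trimethoprim/sulfamethoxazole

# Excess risk among exposed users in a one-week window:
excess_per_million_users_per_week = weekly_incidence_per_million * (rr_tmp_smx - 1)
print(f"{excess_per_million_users_per_week:.0f} per million users per week")
# ~43 per million here versus the 38 per million reported, likely because the
# authors used the (lower) incidence among non-users as the baseline.
```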


2020 ◽  
Vol 11 ◽  
Author(s):  
Eberhard A. Deisenhammer ◽  
Elisa-Marie Behrndt-Bauer ◽  
Georg Kemmler ◽  
Christian Haring ◽  
Carl Miller

Objective: Psychiatric inpatients constitute a population at considerably increased risk for suicide. Identifying those at imminent risk is still a challenging task for hospital staff. This retrospective case–control study focused on clinical risk factors related to the course of the hospital stay.

Method: Inpatient suicide cases were identified by linking the Tyrol Suicide Register with the registers of three psychiatric hospitals in the state. Control subjects were patients who had also been hospitalized in the respective psychiatric unit but had not died by suicide. Matching variables included sex, age, hospital, diagnosis, and admission date. The study period comprised 7 years. Data were analyzed by the appropriate two-sample tests and by logistic regression.

Results: A total of 30 inpatient suicide cases and 54 control patients were included. A number of factors differentiated cases from controls; after correction for multiple testing, the following retained significance: history of aborted suicide, history of attempted suicide, history of any suicidal behavior/threats, suicidal ideation continuing during hospitalization, no development of prospective plans, no improvement of mood during the hospital stay, and leaving the ward without giving notice. Logistic regression identified the latter three variables and history of attempted suicide as highly significant predictors of inpatient suicide.

Conclusions: Preventive measures during hospitalization include thorough assessment of suicidal features, an emphasis on the development of future perspectives, and a review of hospital regulations for patients who want to leave the ward.
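
The abstract's correction for multiple testing can be carried out with any standard procedure; the sketch below applies the Holm method (one common choice, not necessarily the one the authors used) to a set of invented p-values.

```python
from statsmodels.stats.multitest import multipletests

# Illustrative p-values for candidate risk factors; not the study's values.
p_values = [0.001, 0.004, 0.012, 0.030, 0.045, 0.20, 0.55]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p, p_adj, keep in zip(p_values, p_adjusted, reject):
    print(f"raw p={p:.3f}  adjusted p={p_adj:.3f}  significant after correction: {keep}")
```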


2019 ◽  
Vol 96 (4) ◽  
pp. 306-311 ◽  
Author(s):  
Stephen J Jordan ◽  
Evelyn Toh ◽  
James A Williams ◽  
Lora Fortenberry ◽  
Michelle L LaPradd ◽  
...  

Objectives: Chlamydia trachomatis (CT) and Mycoplasma genitalium (MG) cause the majority of non-gonococcal urethritis (NGU). The role of Ureaplasma urealyticum (UU) in NGU is unclear. Prior case–control studies that examined the association of UU and NGU may have been confounded by mixed infections and less stringent criteria for controls. The objective of this case–control study was to determine the prevalence and aetiology of mixed infections in men and to assess whether UU monoinfection is associated with NGU.

Methods: We identified 155 men with NGU and 103 controls. Behavioural and clinical information was obtained, and men were tested for Neisseria gonorrhoeae, CT, MG, UU and Trichomonas vaginalis (TV). Men who were five-pathogen negative were classified as having idiopathic urethritis (IU).

Results: Twelve per cent of NGU cases in which a pathogen was identified had mixed infections, mostly UU coinfections with MG or CT; 27% had IU. In monoinfected NGU cases, 34% had CT, 17% had MG, 11% had UU and 2% had TV. In controls, pathogens were rarely identified, except for UU, which was present in 20%. Comparing cases and controls, NGU was associated with CT and MG monoinfections and with mixed infections. UU monoinfection was not associated with NGU and was almost twice as prevalent in controls. Men in both the case and control groups who were younger and who reported no prior NGU diagnosis were more likely to have UU (OR 0.97 per year of age, 95% CI 0.94 to 0.998 and OR 6.3, 95% CI 1.4 to 28.5, respectively).

Conclusions: Mixed infections are common in men with NGU, and most are UU coinfections with other pathogens that are well-established causes of NGU. UU monoinfections are not associated with NGU and are common in younger men and men who have never previously had NGU. Almost half of NGU cases are idiopathic.

