MRI in the diagnosis of fetal developmental brain abnormalities: the MERIDIAN diagnostic accuracy study

2019 · Vol 23 (49) · pp. 1-144
Author(s): Paul D Griffiths, Michael Bradburn, Michael J Campbell, Cindy L Cooper, Nicholas Embleton, et al.

Background: Ultrasonography has been the mainstay of antenatal screening programmes in the UK for many years. Technical factors and physical limitations may result in suboptimal images that can lead to incorrect diagnoses and inaccurate counselling and prognostic information being given to parents. Previous studies suggest that the addition of in utero magnetic resonance imaging (iuMRI) may improve diagnostic accuracy for fetal brain abnormalities. These studies have limitations, including a lack of an outcome reference diagnosis (ORD), which means that improvements could not be assessed accurately.
Objectives: To assess the diagnostic impact, acceptability and cost consequence of iuMRI among fetuses with a suspected fetal brain abnormality.
Design: A pragmatic, prospective, multicentre, cohort study with a health economics analysis and a sociological substudy.
Setting: Sixteen UK fetal medicine centres.
Participants: Pregnant women aged ≥ 16 years carrying a fetus (at least 18 weeks’ gestation) with a suspected brain abnormality detected on ultrasonography.
Interventions: Participants underwent iuMRI and the findings were reported to their referring fetal medicine clinician.
Main outcome measures: Pregnancy outcome was followed up and an ORD from postnatal imaging or postmortem autopsy/imaging was collected when available. Developmental data from the Bayley Scales of Infant Development and questionnaires were collected from the surviving infants aged 2–3 years. Data on the management of the pregnancy before and after the iuMRI were collected to inform the economic evaluation. Two surveys collected data on patient acceptability of iuMRI, and qualitative interviews with participants and health professionals were undertaken.
Results: The primary analysis consisted of 570 fetuses. The absolute diagnostic accuracies of ultrasonography and iuMRI were 68% and 93%, respectively [a difference of 25%, 95% confidence interval (CI) 21% to 29%]. The difference between ultrasonography and iuMRI increased with gestational age. In the 18–23 weeks group, the figures were 70% for ultrasonography and 92% for iuMRI (difference of 23%, 95% CI 18% to 27%); in the ≥ 24 weeks group, the figures were 65% for ultrasonography and 94% for iuMRI (difference of 29%, 95% CI 23% to 36%). Patient acceptability was high, with at least 95% of respondents stating that they would have iuMRI again in a similar situation. Health professional interviews suggested that iuMRI was acceptable to clinicians and useful as an adjunct to ultrasonography, but not as a replacement. Across a range of scenarios, iuMRI resulted in additional costs compared with ultrasonography alone. The additional cost was consistently < £600 per patient and the cost per management decision appropriately changed was always < £3000.
Limitations: There is potential for reporting bias from the referring clinicians on the diagnostic and prognostic outcomes. Lower than anticipated follow-up rates at 3 years of age were observed.
Conclusions: iuMRI as an adjunct to ultrasonography significantly improves the diagnostic accuracy and confidence for the detection of fetal brain abnormalities. An evaluation of the use of iuMRI for cases of isolated microcephaly and the diagnosis of fetal spine abnormalities is recommended. Longer-term follow-up studies of children diagnosed with fetal brain abnormalities are required to fully assess the functional significance of the diagnoses.
Trial registration: Current Controlled Trials ISRCTN27626961.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 49. See the NIHR Journals Library website for further project information.
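As a back-of-the-envelope check on the headline result, the reported interval can be approximated from the published proportions alone. The sketch below (Python) uses an unpaired Wald interval with n = 570 in each comparison; the trial's actual analysis was paired (both tests were read on the same fetuses), so this illustrates the arithmetic rather than reproducing the trial's method.

```python
import math

def wald_diff_ci(p1, p2, n1, n2, z=1.96):
    """Approximate 95% CI for a difference in two proportions (unpaired Wald)."""
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Reported MERIDIAN figures: 68% accuracy for ultrasonography, 93% for iuMRI, 570 fetuses.
diff, lo, hi = wald_diff_ci(0.68, 0.93, 570, 570)
print(f"difference = {diff:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
# -> difference = 25%, 95% CI 21% to 29%, matching the reported interval
```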

2019 · Vol 23 (62) · pp. 1-94
Author(s): Mark T Drayson, Stella Bowcock, Tim Planche, Gulnaz Iqbal, Guy Pratt, et al.

Background: Myeloma causes profound immunodeficiency and recurrent serious infections. There are approximately 5500 new UK cases of myeloma per annum, and one-quarter of patients will have a serious infection within 3 months of diagnosis. Newly diagnosed patients may benefit from antibiotic prophylaxis to prevent infection. However, the use of prophylaxis has not been established in myeloma and may be associated with health-care-associated infections (HCAIs), such as Clostridium difficile. There is a need to assess the benefits and cost-effectiveness of antibacterial prophylaxis against any risks in a double-blind, placebo-controlled, randomised clinical trial.
Objectives: To assess the risks, benefits and cost-effectiveness of prophylactic levofloxacin in newly diagnosed symptomatic myeloma patients.
Design: Multicentre, randomised, double-blind, placebo-controlled trial. A central telephone randomisation service used a minimisation computer algorithm to allocate treatments in a 1 : 1 ratio.
Setting: A total of 93 NHS hospitals throughout England, Northern Ireland and Wales.
Participants: A total of 977 patients with newly diagnosed symptomatic myeloma.
Intervention: Patients were randomised to receive levofloxacin or placebo tablets for 12 weeks at the start of antimyeloma treatment. Treatment allocation was blinded and balanced by centre, estimated glomerular filtration rate and intention to give high-dose chemotherapy with autologous stem cell transplantation. Follow-up was at 4-week intervals up to 16 weeks, with a further follow-up at 1 year.
Main outcome measures: The primary outcome was the number of febrile episodes (or deaths) in the first 12 weeks from randomisation. Secondary outcomes included the number of deaths and infection-related deaths, days in hospital, carriage and invasive infections, response to antimyeloma treatment and its relation to infection, quality of life and overall survival within the first 12 weeks and beyond.
Results: In total, 977 patients were randomised (levofloxacin, n = 489; placebo, n = 488). A total of 134 (27%) events (febrile episodes, n = 119; deaths, n = 15) occurred in the placebo arm and 95 (19%) events (febrile episodes, n = 91; deaths, n = 4) occurred in the levofloxacin arm; the hazard ratio for time to first event (febrile episode or death) within the first 12 weeks was 0.66 (95% confidence interval 0.51 to 0.86; p = 0.002). Levofloxacin also reduced other infections (144 infections from 116 patients) compared with placebo (179 infections from 133 patients; p-trend of 0.06). There was no difference in new acquisitions of C. difficile, methicillin-resistant Staphylococcus aureus and extended-spectrum beta-lactamase Gram-negative organisms when assessed up to 16 weeks. Levofloxacin produced slightly higher quality-adjusted life-year gains over 16 weeks, but had associated higher costs for health resource use. With a median follow-up of 52 weeks, there was no significant difference in overall survival (p = 0.94).
Limitations: The short duration of antibiotic prophylaxis and of the cost-effectiveness assessment.
Conclusions: During the 12 weeks from new diagnosis, the addition of prophylactic levofloxacin to active myeloma treatment significantly reduced febrile episodes and deaths without increasing HCAIs or carriage.
Future work: Future work should aim to establish the optimal duration of antibiotic prophylaxis and should involve the laboratory investigation of immunity, inflammation and disease activity on stored samples, funded by the TEAMM (Tackling Early Morbidity and Mortality in Myeloma) National Institute for Health Research Efficacy and Mechanism Evaluation grant (reference number 14/24/04).
Trial registration: Current Controlled Trials ISRCTN51731976.
Funding: This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 62. See the NIHR Journals Library website for further project information.
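For intuition about the scale of the treatment effect, the event counts above can be turned into a crude rate ratio. The sketch below assumes equal follow-up time in both arms, which the trial's Cox model does not; it therefore approximates, but does not reproduce, the reported hazard ratio of 0.66.

```python
import math

def rate_ratio(e1, n1, e2, n2, z=1.96):
    """Crude event-rate ratio with a log-scale CI; assumes equal follow-up per arm."""
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 + 1 / e2)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# 95 events among 489 levofloxacin patients vs 134 among 488 placebo patients.
rr, lo, hi = rate_ratio(95, 489, 134, 488)
print(f"rate ratio = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> rate ratio = 0.71 (95% CI 0.54 to 0.92); coarser than the reported Cox HR of 0.66
```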


2020 · Vol 7 · pp. 237428952096822
Author(s): Erik J. Landaas, Ashley M. Eckel, Jonathan L. Wright, Geoffrey S. Baird, Ryan N. Hansen, et al.

We describe the methods and decision from a health technology assessment of a new molecular test for bladder cancer (Cxbladder), which urology providers had proposed for addition to our send-out test menu. The Cxbladder health technology assessment report contained mixed evidence; the predominant concerns related to the test’s low specificity and high cost. The low specificity implied a high false-positive rate, which our laboratory formulary committee concluded would result in unnecessary confirmatory testing and follow-up. Our committee voted unanimously not to adopt the test system-wide for the initial diagnosis of bladder cancer, but supported a pilot study for bladder cancer recurrence surveillance. The pilot study used real-world data from patient management in the scenario in which a patient is evaluated for possible recurrent bladder cancer after a finding of atypical cytopathology in the urine. We evaluated the type, number and results of follow-up tests conducted, including urine cytopathology, imaging studies, repeat cystoscopy, biopsy and repeat Cxbladder testing. The pilot identified ordering challenges and suggested potential use cases in which the results of Cxbladder effected a change in management. Our health technology assessment provided an objective process to efficiently review test performance and guide new test adoption. Based on our pilot, real-world data indicated improved clinician decision-making among select patients who underwent Cxbladder testing.
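The committee's specificity concern can be made concrete with Bayes' rule: at the prevalence typical of an initial-diagnosis population, a low-specificity test yields mostly false positives. The numbers below are hypothetical, chosen only to illustrate the mechanism; the report's actual performance figures are not quoted in this abstract.

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' rule."""
    true_pos = sens * prev
    false_pos = (1.0 - spec) * (1.0 - prev)
    return true_pos / (true_pos + false_pos)

# Hypothetical inputs for illustration; not the Cxbladder report's figures.
print(f"PPV = {ppv(sens=0.90, spec=0.40, prev=0.05):.0%}")
# -> PPV = 7%: at 5% prevalence, roughly 13 false positives per true positive,
#    each a candidate for unnecessary confirmatory work-up.
```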


2019 · Vol 23 (69) · pp. 1-144
Author(s): Khalida Ismail, Daniel Stahl, Adam Bayley, Katherine Twist, Kurtis Stewart, et al.

Background: Motivational interviewing (MI) enhanced with behaviour change techniques (BCTs) and deployed by health trainers targeting multiple risk factors for cardiovascular disease (CVD) may be more effective than interventions targeting a single risk factor.
Objectives: To test the clinical effectiveness and cost-effectiveness of an enhanced lifestyle motivational interviewing intervention for patients at high risk of CVD, delivered in group settings versus individual settings versus usual care (UC), in reducing weight and increasing physical activity (PA).
Design: A three-arm, single-blind, parallel randomised controlled trial.
Setting: A total of 135 general practices across all 12 South London Clinical Commissioning Groups were recruited.
Participants: A total of 1742 participants aged 40–74 years with a ≥ 20.0% risk of a CVD event in the following 10 years were randomised.
Interventions: The intervention was designed to integrate MI and cognitive–behavioural therapy (CBT), delivered by trained healthy lifestyle facilitators in 10 sessions over 1 year, in group or individual format. The control group received UC.
Randomisation: Simple randomisation was used with computer-generated randomisation blocks. In each block, 10 participants were randomised to the group, individual or UC arm in a 4 : 3 : 3 ratio (a minimal sketch of this scheme follows the abstract). Researchers were blind to the allocation.
Main outcome measures: The primary outcomes were change in weight (kg) from baseline and change in PA (average number of steps per day over 1 week) from baseline at the 24-month follow-up, with an interim follow-up at 12 months. An economic evaluation estimated the relative cost-effectiveness of each intervention. Secondary outcomes included changes in low-density lipoprotein cholesterol and CVD risk score.
Results: The mean age of participants was 69.75 years (standard deviation 4.11 years), 85.5% were male and 89.4% were white. At the 24-month follow-up, the group and individual intervention arms were not more effective than UC in increasing PA [mean –70.05 steps, 95% confidence interval (CI) –288 to 147.9 steps, and mean 7.24 steps, 95% CI –224.01 to 238.5 steps, respectively] or in reducing weight (mean –0.03 kg, 95% CI –0.49 to 0.44 kg, and mean –0.42 kg, 95% CI –0.93 to 0.09 kg, respectively). At the 12-month follow-up, the group and individual intervention arms were not more effective than UC in increasing PA (mean 131.1 steps, 95% CI –85.28 to 347.48 steps, and mean 210.22 steps, 95% CI –19.46 to 439.91 steps, respectively), but there were reductions in weight for the group and individual intervention arms compared with UC (mean –0.52 kg, 95% CI –0.90 to –0.13 kg, and mean –0.55 kg, 95% CI –0.95 to –0.14 kg, respectively). The group intervention arm was not more effective than the individual intervention arm in improving outcomes at either follow-up point. The group and individual interventions were not cost-effective.
Conclusions: Enhanced MI, in group or individual formats, targeted at members of the general population with high CVD risk is not effective in reducing weight or increasing PA compared with UC.
Future work: Future work should focus on ensuring objective evidence of high competency in BCTs, identifying those with modifiable factors for CVD risk and improving engagement of patients and primary care.
Trial registration: Current Controlled Trials ISRCTN84864870.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 69. See the NIHR Journals Library website for further project information. This research was part-funded by the NIHR Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London.
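The 4 : 3 : 3 allocation within blocks of 10 described above is a standard permuted-block scheme. The following is a minimal sketch of how such blocks can be generated; the trial's actual computer algorithm is not published in the abstract, so this is illustrative only.

```python
import random

def permuted_blocks(seed=2024):
    """Yield allocations in shuffled blocks of 10 (4 group : 3 individual : 3 usual care)."""
    rng = random.Random(seed)
    block = ["group"] * 4 + ["individual"] * 3 + ["usual care"] * 3
    while True:
        rng.shuffle(block)   # each pass emits one complete, randomly ordered block
        yield from block

alloc = permuted_blocks()
print([next(alloc) for _ in range(10)])  # exactly 4 group, 3 individual, 3 UC per block
```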


2020 · Vol 41 (1) · pp. 37-50
Author(s): Daniel Gallacher, Peter Kimani, Nigel Stallard

Extrapolations of parametric survival models fitted to censored data are routinely used in the assessment of health technologies to estimate mean survival, particularly in diseases that potentially reduce the life expectancy of patients. Akaike’s information criterion (AIC) and Bayesian information criterion (BIC) are commonly used in health technology assessment alongside an assessment of plausibility to determine which statistical model best fits the data and should be used for prediction of long-term treatment effects. We compare fit and estimates of restricted mean survival time (RMST) from 8 parametric models and contrast models preferred in terms of AIC, BIC, and log-likelihood, without considering model plausibility. We assess the methods’ suitability for selecting a parametric model through simulation of data replicating the follow-up of intervention arms for various time-to-event outcomes from 4 clinical trials. Follow-up was replicated through the consideration of recruitment duration and minimum and maximum follow-up times. Ten thousand simulations of each scenario were performed. We demonstrate that the different methods can result in disagreement over the best model and that it is inappropriate to base model selection solely on goodness-of-fit statistics without consideration of hazard behavior and plausibility of extrapolations. We show that typical trial follow-up can be unsuitable for extrapolation, resulting in unreliable estimation of multiple parameter models, and infer that selecting survival models based only on goodness-of-fit statistics is unsuitable due to the high level of uncertainty in a cost-effectiveness analysis. This article demonstrates the potential problems of overreliance on goodness-of-fit statistics when selecting a model for extrapolation. When follow-up is more mature, BIC appears superior to the other selection methods, selecting models with the most accurate and least biased estimates of RMST.
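To make the model-selection machinery concrete, the sketch below fits exponential and Weibull models to simulated right-censored data by maximum likelihood, computes AIC and BIC, and estimates RMST by numerically integrating the fitted survival function. The data, horizon and parameter values are invented for illustration and are unrelated to the four trials analysed in the article.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated right-censored follow-up (times in months); illustrative only.
rng = np.random.default_rng(1)
t_event = 24.0 * rng.weibull(1.5, size=300)   # latent event times
t_cens = rng.uniform(6.0, 36.0, size=300)     # administrative censoring
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)     # 1 = observed event, 0 = censored

def weibull_nll(params):
    """Negative log-likelihood for right-censored Weibull (log-parameterised)."""
    k, lam = np.exp(params)                               # shape, scale > 0
    z = time / lam
    log_f = np.log(k / lam) + (k - 1.0) * np.log(z) - z**k   # density for events
    log_s = -(z**k)                                          # survival for censored
    return -np.sum(event * log_f + (1.0 - event) * log_s)

fit = minimize(weibull_nll, x0=[0.0, np.log(time.mean())], method="Nelder-Mead")

# Exponential model has a closed-form MLE: scale = total time / number of events.
d = event.sum()
lam_exp = time.sum() / d
nll_exp = d * np.log(lam_exp) + time.sum() / lam_exp

n = len(time)
for name, nll, p in [("exponential", nll_exp, 1), ("weibull", fit.fun, 2)]:
    print(f"{name}: AIC = {2*p + 2*nll:.1f}  BIC = {p*np.log(n) + 2*nll:.1f}")

# RMST over a 60-month horizon for the Weibull fit (trapezoidal integration).
k_hat, lam_hat = np.exp(fit.x)
grid = np.linspace(0.0, 60.0, 601)
surv = np.exp(-((grid / lam_hat) ** k_hat))
rmst = float(np.sum((surv[:-1] + surv[1:]) / 2.0 * np.diff(grid)))
print(f"Weibull RMST over 60 months: {rmst:.1f} months")
```

The point the article makes is visible here: AIC and BIC summarise fit within the observed follow-up only, while RMST depends on the extrapolated tail, so a model that wins on AIC/BIC can still extrapolate implausibly.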


2021 · Vol 25 (2) · pp. 1-32
Author(s): John N Primrose, Siân A Pugh, Gareth Thomas, Matthew Ellis, Karwan Moutasim, et al.

Background: Following surgical and adjuvant treatment of primary colorectal cancer, many patients are routinely followed up with axial imaging (most commonly computerised tomography) and blood carcinoembryonic antigen (a tumour marker) testing. Because fewer than one-fifth of patients will relapse, a large number of patients are followed up unnecessarily.
Objectives: To determine whether or not the intratumoural immune signature could identify a cohort of patients with a relapse rate so low that follow-up is unnecessary.
Design: An observational study based on a secondary tissue collection of the tumours from participants in the FACS (Follow-up After Colorectal Cancer Surgery) trial.
Setting and participants: Formalin-fixed, paraffin-embedded tumour tissue was obtained from 550 of the 1202 participants in the FACS trial. Tissue microarrays were constructed and stained for cluster of differentiation (CD)3+ and CD45RO+ T lymphocytes, alongside standard haematoxylin and eosin staining, with a view to manual and, subsequently, automated cell counting.
Results: The tissue microarrays were satisfactorily stained for the two immune markers. Manual cell counting proved possible on the arrays, but manually counting the cores for the entire study was not feasible; therefore, an attempt was made to use automated cell counting. Although it is clear that this approach is workable, there were both hardware and software problems, and reliable data could not be obtained within the time frame of the study.
Limitations: The main limitations were the inability to use machine counting because of problems with both hardware and software, and the loss of critical scientific staff. Findings from this research indicate that this approach will be able to count intratumoural immune cells in the long term, but whether or not the original aim of the project would have been achieved is not known.
Conclusions: The project did not achieve its aim because a reliable counting system could not be established.
Future work: Further work is needed to perfect immune cell machine counting and then complete the objectives of this study that are still relevant.
Trial registration: Current Controlled Trials ISRCTN41458548.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 25, No. 2. See the NIHR Journals Library website for further project information.


2018 · Vol 22 (36) · pp. 1-162
Author(s): Naiem Moiemen, Jonathan Mathers, Laura Jones, Jonathan Bishop, Philip Kinghorn, et al.

Background: Eleven million people worldwide suffer a fire-related injury every year, and 71% have significant scarring. Pressure garment therapy (PGT) is a standard part of burn scar management, but there is little evidence of its clinical effectiveness or cost-effectiveness.
Objective: To identify the barriers to, and facilitators of, conducting a randomised controlled trial (RCT) of burn scar management with and without PGT, and to test whether or not such a trial is feasible.
Design: Web-based surveys, semistructured individual interviews, and a pilot RCT including a health economic evaluation and an embedded process evaluation.
Setting: UK NHS burns services. Interviews and the pilot trial were run in seven burns services.
Participants: Thirty NHS burns services and 245 staff provided survey responses and 15 staff participated in individual interviews. Face-to-face interviews were held with 24 adult patients and 16 parents of paediatric patients who had undergone PGT. The pilot trial recruited 88 participants (57 adults and 31 children) who were at risk of hypertrophic scarring and were considered suitable for scar management therapy. Interviews were held with 34 participants soon after recruitment, with 23 participants at 12 months and with eight staff from six sites at the end of the trial.
Interventions: The intervention was standard care with pressure garments. The control was standard care comprising scar management techniques involving demonstration of, and recommendations to undertake, massage three or four times per day with moisturiser, silicone treatment, stretching and other exercises.
Main outcome measures: Feasibility was assessed by eligibility rates, consent rates, retention in allocated arms, adherence to treatment and follow-up, and completion of outcome assessments. The outcomes from the interview-based studies were core outcome domains and barriers to, and facilitators of, trial participation and delivery.
Results: NHS burns services treat 2845 patients per annum (1476 paediatric and 1369 adult) and use pressure garments for 6–18 months, costing £2,171,184. The majority of staff perceived a need for an RCT of PGT, but often lacked equipoise around the research question and PGT as a treatment. Strong views about the use of PGT have the potential to influence the conduct of a full-scale RCT. A range of outcome domains was identified as important via the qualitative research: perceptions of appearance, specific scar characteristics, function, pain and itch, broader psychosocial outcomes and treatment burden. The outcome tools evaluated in the pilot trial did not cover all of these domains. The planned 88 participants were recruited: the eligibility rate was 88% [95% confidence interval (CI) 83% to 92%] and the consent rate was 47% (95% CI 40% to 55%). Five (6%) participants withdrew, 14 (16%) were lost to follow-up and 8 (9%) crossed over. Adherence was as in clinical practice. Completion of outcomes was high for adult patients but poorer for parents of paediatric patients, particularly for quality of life. Sections on range of movement and willingness to pay were found to be challenging and were poorly completed.
Limitations: The Brisbane Burn Scar Impact Profile appears more suitable in terms of conceptual coverage than the outcome scales used in the trial, but it was not available at the time of the study.
Conclusions: A definitive RCT of PGT in burn scar management appears feasible. However, staff attitudes to the use of pressure garments may lead to biases, and the provision of training and support to sites and an ongoing assessment of trial processes are required.
Future work: We recommend that any future trial include an in-depth mixed-methods recruitment investigation and a process evaluation to account for this.
Trial registration: Current Controlled Trials ISRCTN34483199.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 36. See the NIHR Journals Library website for further project information.
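Feasibility outcomes of the kind reported above are simple binomial proportions, and their CIs can be reproduced with a Wilson score interval. The denominator below is hypothetical (the abstract reports only the rates), so the sketch illustrates the calculation rather than recovering the trial's exact numbers.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical: 88 consents from 187 approached (denominator invented for illustration).
lo, hi = wilson_ci(88, 187)
print(f"consent rate 47% (95% CI {lo:.0%} to {hi:.0%})")  # -> 40% to 54%
```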


2017 · Vol 33 (S1) · pp. 154-155
Author(s): Irene Eriksson, Björn Wettermark, Marie Persson, Morgan Edström, Brian Godman, et al.

INTRODUCTION: Over the past decades, early awareness and alert (EAA) activities and systems have gained importance and become a key early Health Technology Assessment (HTA) tool. While a pioneer in HTA, Sweden had no national-level EAA activities until recently. We describe the evolution and current status of the Swedish EAA System.
METHODS: This was a historical analysis based on the knowledge and experience of the authors, supplemented by a targeted review of published and grey literature as well as documents produced by or relating to the Swedish EAA System. Key milestones and a description of the current state of the Swedish EAA System are presented.
RESULTS: Initiatives to establish a system for the identification and assessment of emerging health technologies in Sweden date back to the 1980s. Since the 1990s, the Swedish Agency for Health Technology Assessment and Assessment of Social Services (SBU) supported the development of EuroScan and was one of its founding members. In the mid-2000s, an independent regional initiative, driven by the Stockholm Drug and Therapeutics Committee, resulted in the establishment of a regional horizon-scanning unit. By 2009, this work had expanded to a collaboration between the four biggest regions in Sweden. The following year it was further expanded to the national level. Today, the Swedish EAA System carries out identification, filtration and prioritization of new drugs, early assessment of the prioritized drugs, and dissemination of the information. Its outputs are used to select new drugs for inclusion in the Swedish national process for managed introduction and follow-up.
CONCLUSIONS: The Swedish EAA System started as a regional initiative and rapidly grew to become a national-level activity. An important feature of the system today is its complete integration into the national process for managed introduction and follow-up of new drugs. The system will continue to evolve in response both to the changing landscape of health innovations and to new policy initiatives at the regional, national and international levels.


Author(s): Peter J. Neumann, Michael F. Drummond, Bengt Jönsson, Bryan R. Luce, et al.

Previously, our group—the International Working Group for HTA Advancement—proposed a set of fifteen Key Principles that could be applied to health technology assessment (HTA) programs in different jurisdictions and across a range of organizations and perspectives. In this commentary, we investigate the extent to which these principles are supported and used by fourteen selected HTA organizations worldwide. We find that some principles are broadly supported: examples include being explicit about HTA goals and scope; considering a wide range of evidence and outcomes; and being unbiased and transparent. Other principles receive less widespread support: examples are addressing issues of generalizability and transferability; being transparent on the link between HTA findings and decision-making processes; considering a full societal perspective; and monitoring the implementation of HTA findings. The analysis also suggests a lack of consensus in the field about some principles—for example, considering a societal perspective. Our study highlights differences in the uptake of key principles for HTA and indicates considerable room for improvement for HTA organizations to adopt principles identified to reflect good HTA practices. Most HTA organizations espouse certain general concepts of good practice—for example, assessments should be unbiased and transparent. However, principles that require more intensive follow-up—for example, monitoring the implementation of HTA findings—have received little support and execution.


2019 · Vol 23 (61) · pp. 1-128
Author(s): Alexis Llewellyn, Julie Jones-Diette, Jeannette Kraft, Colin Holton, Melissa Harden, et al.

Background: Osteomyelitis is an infection of the bone. Medical imaging tests, such as radiography, ultrasound, magnetic resonance imaging (MRI), single-photon emission computed tomography (SPECT) and positron emission tomography (PET), are often used to diagnose osteomyelitis.
Objectives: To systematically review the evidence on the diagnostic accuracy, inter-rater reliability and implementation of imaging tests to diagnose osteomyelitis.
Data sources: We conducted a systematic review of imaging tests to diagnose osteomyelitis. We searched MEDLINE and other databases from inception to July 2018.
Review methods: Risk of bias was assessed with QUADAS-2 [quality assessment of diagnostic accuracy studies (version 2)]. Diagnostic accuracy was assessed using bivariate regression models, and imaging tests were compared. Subgroup analyses were performed based on the location and nature of the suspected osteomyelitis. Studies of children, inter-rater reliability and implementation outcomes were synthesised narratively.
Results: Eighty-one studies were included (diagnostic accuracy: 77 studies; inter-rater reliability: 11 studies; implementation: one study; some studies were included in two reviews). One-quarter of the diagnostic accuracy studies were rated as being at high risk of bias. In adults, MRI had high diagnostic accuracy [95.6% sensitivity, 95% confidence interval (CI) 92.4% to 97.5%; 80.7% specificity, 95% CI 70.8% to 87.8%]. PET also had high accuracy (85.1% sensitivity, 95% CI 71.5% to 92.9%; 92.8% specificity, 95% CI 83.0% to 97.1%), as did SPECT (95.1% sensitivity, 95% CI 87.8% to 98.1%; 82.0% specificity, 95% CI 61.5% to 92.8%); diagnostic performance was similar across MRI, PET and SPECT. Scintigraphy (83.6% sensitivity, 95% CI 71.8% to 91.1%; 70.6% specificity, 95% CI 57.7% to 80.8%), computed tomography (69.7% sensitivity, 95% CI 40.1% to 88.7%; 90.2% specificity, 95% CI 57.6% to 98.4%) and radiography (70.4% sensitivity, 95% CI 61.6% to 77.8%; 81.5% specificity, 95% CI 69.6% to 89.5%) all had generally inferior diagnostic accuracy. Technetium-99m hexamethylpropyleneamine oxime white blood cell scintigraphy (87.3% sensitivity, 95% CI 75.1% to 94.0%; 94.7% specificity, 95% CI 84.9% to 98.3%) had higher diagnostic accuracy, similar to that of PET or MRI. There was no evidence that diagnostic accuracy varied by scan location or cause of osteomyelitis, although data on many scan locations were limited. Diagnostic accuracy in diabetic foot patients was similar to the overall results. Only three studies in children were identified; the results were too limited to draw any conclusions. Eleven studies evaluated inter-rater reliability; MRI had acceptable inter-rater reliability. We found only one study on test implementation and no evidence on patient preferences or the cost-effectiveness of imaging tests for osteomyelitis.
Limitations: Most studies included < 50 participants and were poorly reported. There was limited evidence for children, for ultrasonography and on clinical factors other than diagnostic accuracy.
Conclusions: Osteomyelitis is reliably diagnosed by MRI, PET and SPECT, and no clear reason to prefer one test over the others in terms of diagnostic accuracy was identified. The wider availability of MRI machines, and the fact that MRI does not expose patients to harmful ionising radiation, may mean that MRI is preferable in most cases. Diagnostic accuracy does not appear to vary with the potential cause of osteomyelitis or with the body part scanned. Considerable uncertainty remains over the diagnostic accuracy of imaging tests in children; studies of diagnostic accuracy in children, particularly using MRI and ultrasound, are needed.
Study registration: This study is registered as PROSPERO CRD42017068511.
Funding: This project was funded by the National Institute for Health Research Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 61. See the NIHR Journals Library website for further project information.
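One way to read the pooled MRI figures above is through likelihood ratios: they translate sensitivity and specificity into a post-test probability for an individual patient. The pre-test probability below is hypothetical; only the sensitivity and specificity come from the review.

```python
def post_test_probability(sens, spec, pre_test):
    """Post-test probability of disease after a positive result, via likelihood ratios."""
    lr_positive = sens / (1.0 - spec)
    pre_odds = pre_test / (1.0 - pre_test)
    post_odds = pre_odds * lr_positive
    return lr_positive, post_odds / (1.0 + post_odds)

# Pooled MRI estimates from the review; a 30% pre-test probability is hypothetical.
lr, post = post_test_probability(sens=0.956, spec=0.807, pre_test=0.30)
print(f"LR+ = {lr:.1f}; post-test probability after positive MRI = {post:.0%}")
# -> LR+ = 5.0; post-test probability = 68%
```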


2019 · Vol 23 (41) · pp. 1-30
Author(s): Hugh S Markus, Susanna C Larsson, John Dennis, Wilhelm Kuker, Ursula G Schulz, et al.

Background: Symptomatic vertebral artery (VA) stenosis has been associated with a markedly increased early risk of recurrent stroke. VA stenosis can be treated with stenting; however, there are few data from randomised controlled trials evaluating the efficacy of this treatment, and recent studies in intracranial stenosis have suggested that stenting may be associated with increased risk.
Objective: The Vertebral artery Ischaemia Stenting Trial (VIST) was established to compare the risks and benefits of vertebral angioplasty and stenting with best medical treatment (BMT) alone for recently symptomatic VA stenosis.
Design: VIST was a prospective, randomised, open, parallel, blinded end-point clinical trial.
Setting: The trial was performed in 14 hospitals in the UK.
Participants: Recruitment began on 23 October 2008 and follow-up ended on 1 March 2016, by which time every patient had been followed up for at least 1 year. Participants had to have symptomatic vertebral stenosis of at least 50% resulting from presumed atheromatous disease. Both patients and clinicians were aware of treatment allocation; however, an independent adjudication committee, masked to treatment allocation, assessed all primary and secondary end points.
Interventions: Participants were randomly assigned (1 : 1) to either vertebral angioplasty/stenting plus BMT (n = 91) or BMT alone (n = 88). A total of 182 patients were initially enrolled; however, three patients (two who withdrew after randomisation and one who did not attend after the initial randomisation visit) did not contribute any follow-up data and were excluded. None of these three patients had outcome events.
Main outcomes and measures: The primary end point was the occurrence of fatal or non-fatal stroke in any arterial territory during follow-up.
Results: The median follow-up was 3.5 (interquartile range 2.1–4.7) years. Of the 61 patients who were stented, 48 (78.7%) had extracranial stenosis and 13 (21.3%) had intracranial stenosis. No perioperative complications occurred with extracranial stenting; two strokes occurred during intracranial stenting. The primary end point occurred in five patients (including one fatal stroke) in the stent group and in 12 patients (including two fatal strokes) in the medical group (hazard ratio 0.40, 95% confidence interval 0.14 to 1.13; p = 0.08), with an absolute risk reduction of 25 strokes per 1000 person-years.
Limitations: The study was underpowered because it failed to reach its target recruitment. The high rate of non-confirmation of stenosis in the stented group was a second limitation.
Conclusions: The trial found no difference in the risk of the primary end point between the two groups.
Future work: Post hoc analysis suggested that stenting could be associated with a reduced risk of recurrent stroke in symptomatic VA stenosis. Further studies are now required to confirm these findings, particularly in extracranial VA stenosis, where complication rates with stenting were confirmed to be very low.
Trial registration: Current Controlled Trials ISRCTN95212240.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 41. See the NIHR Journals Library website for further project information. In addition, funding for the pilot phase was provided by the Stroke Association.
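The "per 1000 person-years" figure can be loosely reconstructed from the counts in the abstract. The sketch below uses median follow-up as a stand-in for total person-time, so it lands near, but not exactly on, the reported 25 strokes per 1000 person-years.

```python
# Approximate person-time as arm size x median follow-up (a crude stand-in
# for the actual accumulated person-years in each arm).
stent_rate = 5 / (91 * 3.5)      # primary end-point events per person-year
medical_rate = 12 / (88 * 3.5)
arr = (medical_rate - stent_rate) * 1000
print(f"absolute risk reduction ~ {arr:.0f} strokes per 1000 person-years")  # -> 23
```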

