Developing automated methods for disease subtyping in UK Biobank: an exemplar study on stroke

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Kristiina Rannikmäe ◽  
Honghan Wu ◽  
Steven Tominey ◽  
William Whiteley ◽  
Naomi Allen ◽  
...  

Abstract Background: Better phenotyping of routinely collected coded data would be useful for research and health improvement. For example, the precision of coded data for hemorrhagic stroke (intracerebral hemorrhage [ICH] and subarachnoid hemorrhage [SAH]) may be as poor as <50%. This work aimed to investigate the feasibility and added value of automated methods applied to clinical radiology reports to improve stroke subtyping. Methods: From a sub-population of 17,249 Scottish UK Biobank participants, we ascertained those with an incident stroke code in hospital, death record or primary care administrative data by September 2015, and ≥1 clinical brain scan report. We used a combination of natural language processing and clinical knowledge inference on brain scan reports to assign a stroke subtype (ischemic vs ICH vs SAH) for each participant and assessed performance by precision and recall at entity and patient levels. Results: Of 225 participants with an incident stroke code, 207 had a relevant brain scan report and were included in this study. Entity-level precision and recall ranged from 78% to 100%. Automated methods showed precision and recall at patient level that were very good for ICH (both 89%), good for SAH (both 82%), but, as expected, lower for ischemic stroke (73% and 64%, respectively), suggesting coded data remains the preferred method for identifying the latter stroke subtype. Conclusions: Our automated method applied to radiology reports provides a feasible, scalable and accurate solution to improve disease subtyping when used in conjunction with administrative coded health data. Future research should validate these findings in a different population setting.

Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Kristiina Rannikmae ◽  
Honghan Wu ◽  
Steven Tominey ◽  
William Whiteley ◽  
Naomi Allen ◽  
...  

Objective: In UK Biobank (UKB), a large population-based prospective study, cases of many diseases are ascertained through linkage to routinely collected, coded national health datasets. However, routinely collected coded data cannot always provide sufficient accuracy or granularity (i.e. sub-phenotypes) for research studies. For example, while ischemic stroke codes appear accurate, the precision for hemorrhagic stroke codes (intracerebral hemorrhage [ICH] and subarachnoid hemorrhage [SAH]) may be as poor as <50%. We investigated whether automated analysis of radiology reports could improve disease subtyping in UKB, using stroke as an exemplar disease. Methods: From a sub-population of 17,249 UKB participants, we ascertained those with an incident stroke code and ≥1 clinical brain scan report. We used automated methods (a combination of natural language processing and clinical knowledge inference) on brain scan reports to assign a stroke subtype (ischemic vs ICH vs SAH) for each participant and assessed performance by precision (positive predictive value) and recall (sensitivity) at both entity and patient levels. Results: Of 225 participants with an incident stroke code, 207 had a relevant brain scan report. Entity-level precision and recall ranged from 78% to 100%. Automated methods showed precision (positive predictive value) and recall (sensitivity) at patient level that were very good for ICH (both 89%), good for SAH (both 82%), but, as expected, lower for ischemic stroke (73% and 64%, respectively), suggesting coded data remains the preferred method for identifying the latter stroke subtype (Table 1). Discussion: Future research should validate these findings in another dataset. Conclusion: Our novel automated method applied to radiology reports provides a feasible, scalable and accurate solution to improve disease subtyping when used in conjunction with administrative coded health data.
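The patient-level precision (positive predictive value) and recall (sensitivity) reported above can be computed with a short sketch; the gold-standard and predicted labels below are hypothetical illustrations, not the study's data:

```python
def precision_recall(gold, pred, label):
    """Patient-level precision and recall for one stroke subtype."""
    tp = sum(1 for g, p in zip(gold, pred) if p == label and g == label)
    fp = sum(1 for g, p in zip(gold, pred) if p == label and g != label)
    fn = sum(1 for g, p in zip(gold, pred) if p != label and g == label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical gold-standard vs automatically assigned subtypes for ten patients
gold = ["ICH", "SAH", "ischemic", "ICH", "ischemic", "SAH", "ICH", "ischemic", "ICH", "SAH"]
pred = ["ICH", "SAH", "ICH",      "ICH", "ischemic", "SAH", "ICH", "ischemic", "SAH", "SAH"]

for label in ("ICH", "SAH", "ischemic"):
    p, r = precision_recall(gold, pred, label)
    print(f"{label}: precision={p:.2f} recall={r:.2f}")
```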


2019 ◽  
Vol 12 (3) ◽  
pp. 229-237 ◽  
Author(s):  
Alban Revy ◽  
François Hallouard ◽  
Sandrine Joyeux-Klamber ◽  
Andrea Skanjeti ◽  
Catherine Rioufol ◽  
...  

Objective: Recent gallium-68 labeled peptides are of increasing interest in PET imaging in nuclear medicine. Somakit TOC® is a radiopharmaceutical kit registered in the European Union for the preparation of [68Ga]Ga-DOTA-TOC, used for the diagnosis of neuroendocrine tumors. Development of a labeling process using a synthesizer is particularly interesting for the quality and reproducibility of the final product, although only manual processes are described in the Summary of Product Characteristics (SmPC) of the registered product. The aim of the present study was therefore to evaluate the feasibility and value of using an automated synthesizer for the preparation of [68Ga]Ga-DOTA-TOC according to the SmPC of the Somakit TOC®. Methods: Three methods of preparation were compared; each followed the SmPC of the Somakit TOC®. Preparation time, overcost, and operator radiation exposure were evaluated for each method. Results: Mean±SD preparation time was 26.2±0.3 minutes for the manual method, 28±0.5 minutes for the semi-automated, and 40.3±0.2 minutes for the automated method. Overcost of the semi-automated method is 0.25€ per preparation for consumables and from 0.58€ to 0.92€ for personnel costs according to the operator (respectively, technician or pharmacist). For the automated method, overcost is 70€ for consumables and from 4.06€ to 6.44€ for personnel. For the manual method, extremity exposure was 0.425mSv for the right finger and 0.350mSv for the left finger; for both the semi-automated and automated methods, extremity exposure was below the limit of quantification. Conclusion: The present study reports for the first time both the feasibility of using a [68Ga]-radiopharmaceutical kit with a synthesizer and the limits for the development of a fully automated process.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Oliver J. Kennedy ◽  
Jonathan A. Fallowfield ◽  
Robin Poole ◽  
Peter C. Hayes ◽  
Julie Parkes ◽  
...  

Abstract Background: Chronic liver disease (CLD) is a growing cause of morbidity and mortality worldwide, particularly in low- to middle-income countries with high disease burden and limited treatment availability. Coffee consumption has been linked with lower rates of CLD, but little is known about the effects of different coffee types, which vary in chemical composition. This study aimed to investigate associations of coffee consumption, including decaffeinated, instant and ground coffee, with chronic liver disease outcomes. Methods: A total of 494,585 UK Biobank participants with known coffee consumption and electronic linkage to hospital, death and cancer records were included in this study. Cox regression was used to estimate hazard ratios (HR) of incident CLD, incident CLD or steatosis, incident hepatocellular carcinoma (HCC) and death from CLD according to coffee consumption of any type as well as for decaffeinated, instant and ground coffee individually. Results: Among 384,818 coffee drinkers and 109,767 non-coffee drinkers, there were 3600 cases of CLD, 5439 cases of CLD or steatosis, 184 cases of HCC and 301 deaths from CLD during a median follow-up of 10.7 years. Compared to non-coffee drinkers, coffee drinkers had lower adjusted HRs of CLD (HR 0.79, 95% CI 0.72–0.86), CLD or steatosis (HR 0.80, 95% CI 0.75–0.86), death from CLD (HR 0.51, 95% CI 0.39–0.67) and HCC (HR 0.80, 95% CI 0.54–1.19). The associations for decaffeinated, instant and ground coffee individually were similar to all types combined. Conclusion: The finding that all types of coffee are protective against CLD is significant given the increasing incidence of CLD worldwide and the potential of coffee as an intervention to prevent CLD onset or progression.
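Hazard ratios are symmetric on the log scale, so a reported interval such as HR 0.79 (95% CI 0.72–0.86) can be sanity-checked by recovering the log-HR and its standard error from the CI endpoints; a minimal sketch (a consistency check, not the study's Cox model):

```python
import math

def hr_confidence_interval(ci_lower, ci_upper):
    """Recover the point estimate and standard error of a log hazard ratio
    from a reported 95% CI, then rebuild the interval as a consistency check."""
    beta = (math.log(ci_lower) + math.log(ci_upper)) / 2  # midpoint on log scale
    se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * 1.96)
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

# Reported association for incident CLD: HR 0.79 (95% CI 0.72-0.86)
hr, lo, hi = hr_confidence_interval(0.72, 0.86)
print(f"HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```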


2020 ◽  
Vol 12 (7) ◽  
pp. 1185 ◽  
Author(s):  
Roxane J. Francis ◽  
Mitchell B. Lyons ◽  
Richard T. Kingsford ◽  
Kate J. Brandis

Using drones to count wildlife saves time and resources and allows access to difficult or dangerous areas. We collected drone imagery of breeding waterbirds at colonies in the Okavango Delta (Botswana) and Lowbidgee floodplain (Australia). We developed a semi-automated counting method, using machine learning, and compared the effectiveness of freeware and payware in identifying and counting waterbird species (targets) in the Okavango Delta. We tested transferability to the Australian breeding colony. Our detection accuracy (targets), between the training and test data, was 91% for the Okavango Delta colony and 98% for the Lowbidgee floodplain colony. These estimates were within 1–5%, whether using freeware or payware for the different colonies. Our semi-automated method was 26% quicker than manual counting when development time was included, and 500% quicker once developed. Drone data of waterbird colonies can be collected quickly, allowing later counting with minimal disturbance. Our semi-automated methods efficiently provided accurate estimates of nesting species of waterbirds, even with complex backgrounds. This could be used to track breeding waterbird populations around the world, as indicators of river and wetland health, with general applicability for monitoring other taxa.


1977 ◽  
Vol 60 (1) ◽  
pp. 179-182
Author(s):  
H Latham Breunig ◽  
Robert E Scroggs ◽  
Lealon V Tonkinson ◽  
Henry Bikin

Abstract A turbidimetric microbiological assay method for monensin in chicken rations was submitted in a modified form to 8 collaborating laboratories along with randomized and coded samples. Three laboratories used the manual method and 5 used the automated method. Other factors in the experimental design were ration types (broiler starter, broiler finisher, and pullet grower), feed form (meal vs. pellets), and potency level (90 and 110 g/ton) for one ration. Average recoveries for the ration types over all laboratories and feed forms were 87.7–93.13% of label, while mean recoveries in 2 feed forms were 91.7% for meal and 87.6% for pellets. Average recoveries in the 8 laboratories ranged from 84.6 to 106.64% of label for 90 g/ton rations and 87.1 to 106.6% for 110 g/ton rations. There was no significant difference between the manual and the automated methods. The collaborators’ assays were uniform with respect to within-laboratory variation. Relative standard deviations ranged from 4.51 to 10.76% with a median of 6.04%. Agreement with the plate assay is quite good. The turbidimetric method for monensin has been adopted as official first action.
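The within-laboratory variation above is summarised as the relative standard deviation (RSD, the standard deviation as a percentage of the mean); a minimal sketch with hypothetical recovery values:

```python
import statistics

def relative_sd(values):
    """Relative standard deviation (coefficient of variation), in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical within-laboratory recoveries (% of label) for one ration type
recoveries = [88.0, 92.5, 90.1, 86.4, 93.2]
print(f"mean recovery: {statistics.mean(recoveries):.1f}%")
print(f"RSD: {relative_sd(recoveries):.2f}%")
```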


2020 ◽  
Vol 493 (3) ◽  
pp. 3854-3865
Author(s):  
Ian B Hewitt ◽  
Patrick Treuthardt

ABSTRACT The pitch angle (PA) of arms in spiral galaxies has been found to correlate with a number of important parameters that are normally time intensive and difficult to measure. Accurate PA measurements are therefore important in understanding the underlying physics of disc galaxies. We introduce a semi-automated method that improves upon a parallelized two-dimensional fast Fourier transform algorithm (p2dfft) to estimate PA. Rather than directly inputting deprojected, star subtracted, and galaxy centred images into p2dfft, our method (p2dfft:traced) takes visually traced spiral arms from deprojected galaxy images as input. The tracings do not require extensive expertise to complete. This procedure ignores foreground stars, bulge and/or bar structures, and allows for better discrimination between arm and interarm regions, all of which reduce noise in the results. We compare p2dfft:traced to other manual and automated methods of measuring PA using both simple barred and non-barred spiral galaxy models and a small sample of observed spiral galaxies with different representative morphologies. We find that p2dfft:traced produces results that, in general, are more accurate and precise than the other tested methods and it strikes a balance between total automation and time-consuming manual input to give reliable PA measurements.


2020 ◽  
Vol 21 (5) ◽  
pp. 202-205
Author(s):  
Elisabetta Kuczewski ◽  
Elodie Munier-Marion ◽  
Sélilah Amour ◽  
Thomas Bénet ◽  
Frédéric Rongieras ◽  
...  

Surgical site infection (SSI) surveillance methods are not standardised and are often time-consuming. We compared an active method, based on orthopaedic department staff reporting suspected SSI, with a semi-automated method, based on computerised extraction of surgical revisions, after total hip and knee arthroplasty. Both methods identified the same SSI cases. The semi-automated method achieved the same sensitivity but higher specificity, with a substantial time gain. This represents an added value for the organisation of effective SSI surveillance based on existing hospital databases.
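The sensitivity/specificity comparison can be sketched from confusion-matrix counts; the numbers below are invented for illustration, not the study's data:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity (true SSIs flagged) and specificity (non-SSIs correctly cleared)."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Hypothetical counts over 500 non-infected and 12 infected patients:
# both methods flag all true SSIs, but staff reporting raises more false alarms
active = sensitivity_specificity(tp=12, fp=30, tn=470, fn=0)
semi_automated = sensitivity_specificity(tp=12, fp=8, tn=492, fn=0)
print("active:", active)
print("semi-automated:", semi_automated)
```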


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Julie A. Fitzpatrick ◽  
Nicolas Basty ◽  
Madeleine Cule ◽  
Yi Liu ◽  
Jimmy D. Bell ◽  
...  

Abstract Psoas muscle measurements are frequently used as markers of sarcopenia and predictors of health. Manually measured cross-sectional areas are most commonly used, but there is a lack of consistency regarding the position of the measurement and manual annotations are not practical for large population studies. We have developed a fully automated method to measure iliopsoas muscle volume (comprised of the psoas and iliacus muscles) using a convolutional neural network. Magnetic resonance images were obtained from the UK Biobank for 5000 participants, balanced for age, gender and BMI. Ninety manual annotations were available for model training and validation. The model showed excellent performance against out-of-sample data (average Dice score coefficient of 0.9046 ± 0.0058 for six-fold cross-validation). Iliopsoas muscle volumes were successfully measured in all 5000 participants. Iliopsoas volume was greater in male compared with female subjects. There was a small but significant asymmetry between left and right iliopsoas muscle volumes. We also found that iliopsoas volume was significantly related to height, BMI and age, and that there was an acceleration in muscle volume decrease in men with age. Our method provides a robust technique for measuring iliopsoas muscle volume that can be applied to large cohorts.
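The out-of-sample validation above uses the Dice score, which measures overlap between two segmentations; a minimal numpy sketch on toy binary masks (not the study's data or model):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice score between two binary segmentation masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 "slices": manual annotation vs model prediction
manual = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
pred   = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(f"Dice: {dice_coefficient(manual, pred):.3f}")
```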


2012 ◽  
Vol 6 (6) ◽  
pp. 1507-1526 ◽  
Author(s):  
J. Karvonen ◽  
B. Cheng ◽  
T. Vihma ◽  
M. Arkett ◽  
T. Carrieres

Abstract. An analysis of ice thickness distribution is a challenge, particularly in a seasonal sea ice zone with a strongly dynamic ice motion field, such as the Gulf of St. Lawrence off Canada. We present a novel automated method for ice concentration and thickness analysis combining modeling of sea ice thermodynamics and detection of ice motion on the basis of space-borne Synthetic Aperture Radar (SAR) data. Thermodynamic evolution of sea ice thickness in the Gulf of St. Lawrence was simulated for two winters, 2002–2003 and 2008–2009. The basin-scale ice thickness was controlled by atmospheric forcing, but the spatial distribution of ice thickness and concentration could not be explained by thermodynamics only. SAR data were applied to detect ice motion and ice surface structure during these two winters. The SAR analysis is based on estimation of ice motion between SAR image pairs and analysis of the local SAR texture statistics. Including SAR data analysis brought a significant added value to the results based on thermodynamics only. Our novel method combining thermodynamic modeling and SAR yielded results that match well with the distribution of observations from the airborne Electromagnetic Induction (EM) method. Compared to the present operational method of producing ice charts for the Gulf of St. Lawrence, which is based on visual interpretation of SAR data, the new method reveals much more detailed and physically based information on the spatial distribution of ice thickness. The algorithms can be run automatically, and the final products can then be used by ice analysts for operational ice service. The method is globally applicable to all seas where SAR data are available.

