P4364 A direct comparison between 2D and 4D deformation imaging in hypertrophic hearts. An agreement of disagreement

2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
E Pagourelias ◽  
O Mirea ◽  
J Duchenne ◽  
S Unlu ◽  
J Van Cleemput ◽  
...  

Abstract Background Previous studies have directly compared 2-dimensional (2D) and 4-dimensional (4D) deformation imaging in normal and ischemic hearts, suggesting a moderate agreement that is prone to technical considerations. However, the level of agreement between 2D and 4D strain imaging has never been adequately addressed in hypertrophic hearts, nor has it been validated against a “ground truth”. Purpose We aimed to directly compare 4D and 2D global and regional deformation parameters and to determine which best reflects underlying segmental fibrosis in hypertrophic cardiomyopathy (HCM), as defined by late gadolinium enhancement (LGE) on cardiac magnetic resonance (CMR). Methods We included 40 HCM patients (54.1±14.3 years, 82.5% male, maximum wall thickness 19.3±4.8 mm) who consecutively underwent 2D- and 4D-speckle tracking echocardiography and CMR. Global and segmental circumferential (CS) and longitudinal (LS) strain were calculated from 2D acquisitions and from 4D full-volume data, from which radial (RS) and area (AS) strain were additionally extracted using an 18-segment left ventricle model. Segmental fibrosis was defined by LGE in the corresponding CMR slices. Results Deformation parameters (2D and 4D, global and regional) showed overall poor to moderate agreement (Figure A+B), with regional 4D_LS and 4D_CS values being consistently less negative than their 2D derivatives (−7.29±6.94% and −8.53±8.8%, respectively). In the regional analysis, 720 segments were evaluated, of which 134 (19.7%) were enhanced; 95 of these (68.8%) were also thickened (thickness >12 mm), and segments presenting both characteristics showed the greatest impairment in both 2D and 4D strain values. Among segmental deformation indices, 2D_SLS showed the best area under the curve [AUC=0.78, 95% CI (0.75–0.81), p<0.0005] for detecting segmental fibrosis, with 2D_SCS and all 4D deformation indices presenting significantly lower AUCs (Figure C). 
Conclusions In HCM, 2D and 4D deformation parameters are not interchangeable, showing only modest agreement. Wall thickness and the calculation assumptions of the tracking algorithms appear to drive this variability. Nevertheless, among HCM patients, 2D_SLS remains the best strain parameter for tissue characterization and fibrosis detection. Acknowledgement/Funding Supported with a scholarship by the Greek State Scholarship Foundation (IKY).

2020 ◽  
Vol 21 (11) ◽  
pp. 1262-1272 ◽  
Author(s):  
Efstathios D Pagourelias ◽  
Oana Mirea ◽  
Jürgen Duchenne ◽  
Serkan Unlu ◽  
Johan Van Cleemput ◽  
...  

Abstract Aims We aimed to directly compare three-dimensional (3D) and two-dimensional (2D) deformation parameters in hypertrophic hearts and to determine which best reflects underlying fibrosis in hypertrophic cardiomyopathy (HCM), defined by late gadolinium enhancement (LGE) on cardiac magnetic resonance (CMR). Methods and results We included 40 HCM patients [54.1 ± 14.3 years, 82.5% male, maximum wall thickness (MWT) 19.3 ± 4.8 mm] and 15 hypertensive (HTN) patients with myocardial hypertrophy (58.1 ± 15.6 years, 80% male, MWT 12.8 ± 1.4 mm) who consecutively underwent 2D- and 3D-speckle tracking echocardiography and LGE CMR. Deformation parameters (2D and 3D) showed overall poor to moderate correlations, with 3D longitudinal strain (LS) and 3D circumferential strain (CS) values being consistently higher than their 2D derivatives. By regression analysis, hypertrophy substrate (HCM vs. hypertension) and hypertrophy magnitude were the parameters influencing the 2D–3D LS and CS correlations (R2 = 0.66, P < 0.001 and R2 = 0.5, P = 0.001, respectively). Among segmental deformation indices, 2D_LS showed the best area under the curve [AUC = 0.78, 95% confidence interval (CI) (0.75–0.81), P < 0.0005] for detecting fibrosis, with 3D deformation parameters showing similar AUCs (0.65) and 3D_LS presenting the highest specificity [93.1%, 95% CI (90.6–95.1)]. Conclusions In hypertrophic hearts, 2D and 3D deformation parameters are not interchangeable, showing only modest correlations. Thickness, substrate, and the calculation assumptions of the tracking algorithms appear to drive this variability. Nevertheless, among HCM patients, 2D peak segmental longitudinal strain remains the best strain parameter for tissue characterization and fibrosis detection.


2020 ◽  
Vol 21 (Supplement_1) ◽  
Author(s):  
E Pagourelias ◽  
O Mirea ◽  
J Duchenne ◽  
S Unlu ◽  
J Van Cleemput ◽  
...  

Abstract Funding Acknowledgements Supported with a scholarship by the Greek State Scholarship Foundation (IKY). Background Previous studies have suggested that in normal and ischemic hearts three-dimensional (3D) and two-dimensional (2D) strain values present a moderate agreement which is prone to technical considerations. However, the level of agreement between 2D and 3D strain imaging has never been adequately addressed in hypertrophic hearts, nor has it been validated against a "ground truth". Especially in hypertrophic cardiomyopathy (HCM), the magnitude and eccentricity of hypertrophy pose additional challenges for the standardization and measurement of regional 3D deformation parameters. Purpose The aims of this study were (i) to investigate the consistency between 3D and 2D regional deformation parameters in HCM and (ii) to test their accuracy in identifying regional fibrosis as defined by late gadolinium enhancement (LGE) in cardiac magnetic resonance (CMR). Methods We included 40 HCM patients (54.1 ± 14.3 years, 82.5% male, maximum wall thickness 19.3 ± 4.8 mm) who consecutively underwent 2D- and 3D-speckle tracking echocardiography and CMR. Segmental circumferential (SCS) and longitudinal (SLS) strain were calculated from 2D acquisitions and from 3D full-volume data, from which radial (SRS) and area (SAS) strain were additionally extracted using an 18-segment left ventricle model. Segmental fibrosis was defined by LGE in the corresponding CMR slices. Results Out of 720 segments evaluated, 134 (19.7%) were enhanced and 95 (13.2%) thickened (thickness > 12 mm). Two-dimensional LS and CS analysis was feasible in 719 (99.9%) and 678 (94.2%) segments respectively, while 686 segments (95.3%) were appropriate for 3D tracking. 3D_SLS values differed from 2D_SLS values by −7.9 ± 6.8% (3D less negative) [limits of agreement (LOA): −21.1 to 5.4%], while the bias for SCS values was even larger at −8.5 ± 8.6% [LOA: −25.4 to 8.4%]. 
Absolute agreement between 2D and 3D deformation imaging was poor to moderate [intra-class correlation coefficient (ICC) = 0.46, 95% CI (0.15–0.68), p < 0.0005 for SLS and ICC = 0.19, 95% CI (0.07–0.38), p < 0.0005 for SCS] (Panel A). On regression analysis, regional thickness was the only segmental factor to influence the correlation between 3D and 2D_SLS [R2 = 0.504, B = 0.33, 95% CI (0.22–0.44), p < 0.0005], without, however, being a significant regressor for the other 2D vs. 3D correlations. Among deformation indices, 2D_SLS showed the best area under the curve [AUC = 0.78, 95% CI (0.75–0.81), p < 0.0005] for detecting segmental fibrosis identified by CMR LGE, with 3D_SLS, 3D_SAS, and 3D_SRS showing similar AUCs (0.65) and 3D_SLS presenting the highest specificity [93.1%, 95% CI (90.6–95.1)] (Panel B). Conclusions In HCM, 2D and 3D deformation parameters are not interchangeable, showing only modest agreement. Thickness and the calculation assumptions of the tracking algorithms appear to drive this inconsistency. Among HCM patients, 2D_SLS remains the most accurate strain parameter for detecting regional fibrosis. Abstract P984 Figure.
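The bias and limits-of-agreement figures reported above follow the standard Bland-Altman construction. As a rough illustration (with made-up strain values, not the study data), the computation can be sketched in Python:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement sets."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b                      # per-segment difference (e.g. 3D minus 2D strain)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Illustrative values only, not the study measurements
strain_2d = [-18.2, -15.4, -20.1, -12.7, -16.9]
strain_3d = [-10.5, -8.9, -13.0, -6.2, -9.4]
bias, (lo, hi) = bland_altman(strain_3d, strain_2d)
```

A positive bias here means the 3D values are less negative than their 2D counterparts, the direction of disagreement reported in the abstract.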


2018 ◽  
Vol 18 (5-6) ◽  
pp. 483-504 ◽  
Author(s):  
Marius Ötting ◽  
Roland Langrock ◽  
Christian Deutscher

Recent years have seen several match-fixing scandals in soccer. To detect match-fixing, the existing literature and fraud detection systems primarily focus on analysing the betting odds provided by bookmakers. In our work, we suggest analysing not only the odds but also the total volume placed on bets, thereby making use of more of the available information. As a case study for our method, we consider the second division of Italian soccer, Serie B, since for this league it has effectively been proven that some matches were fixed, so that to some extent we can ground-truth our approach. For the betting volume data, we use a flexible generalized additive model for location, scale and shape (GAMLSS) with log-normal response to account for the various complex patterns present in the data. For the betting odds, we use a GAMLSS with bivariate Poisson response to model the number of goals scored by both teams, and subsequently derive the corresponding odds. We then conduct outlier detection to flag suspicious matches. Our results indicate that monitoring both betting volumes and betting odds can lead to more reliable detection of suspicious matches.
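The volume-based screening idea can be illustrated with a much simplified stand-in for the paper's GAMLSS: fit a single log-normal to the betting volumes (no covariates, no location/scale regression) and flag matches whose log-volume is an extreme outlier. The threshold and data below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def flag_suspicious(volumes, z_thresh=3.0):
    """Flag matches whose betting volume is an outlier under a log-normal fit.

    Simplified stand-in for the paper's GAMLSS: a single log-normal with no
    covariates; matches whose log-volume lies more than z_thresh standard
    deviations above the mean are flagged.
    """
    log_v = np.log(np.asarray(volumes, dtype=float))
    mu, sigma = log_v.mean(), log_v.std(ddof=1)
    z = (log_v - mu) / sigma
    return np.where(z > z_thresh)[0]  # indices of suspiciously high volumes
```

In the actual model, the expected volume varies with match covariates, so the residual (not the raw volume) is what gets screened; the flagging logic is otherwise the same.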


2021 ◽  
Author(s):  
Melissa Min-Szu Yao ◽  
Hao Du ◽  
Mikael Hartman ◽  
Wing P. Chan ◽  
Mengling Feng

Purpose: To develop a novel artificial intelligence (AI) algorithm for the automatic detection and classification of patterns of calcification distribution in mammographic images using a graph convolution approach. Materials and methods: We collected images from 200 patients classified as Category 4 or 5 according to the American College of Radiology Breast Imaging Reporting and Data System, whose mammographic reports described calcifications and who had diagnosed breast cancers. The calcification distributions were classified as diffuse, segmental, regional, grouped, or linear. Excluded were mammograms with (1) breast cancer with a single or combined characterization such as a mass, asymmetry, or architectural distortion with or without calcifications; (2) hidden calcifications that were difficult to mark; or (3) incomplete medical records. Results: A graph convolutional network-based model was developed. 401 mammographic images from 200 cases of breast cancer were divided by calcification distribution pattern: diffuse (n = 24), regional (n = 111), grouped (n = 201), linear (n = 8), or segmental (n = 57). Classification performance was measured using precision, recall, F1 score, accuracy, and multi-class area under the receiver operating characteristic curve. The proposed model achieved a precision of 0.483 ± 0.015, sensitivity of 0.606 ± 0.030, specificity of 0.862 ± 0.018, F1 score of 0.527 ± 0.035, accuracy of 60.642% ± 3.040%, and area under the curve of 0.754 ± 0.019, outperforming all baseline models. The predicted linear and diffuse classifications were highly similar to the ground truth, and the predicted grouped and regional classifications were also superior to those of the baseline models. 
Conclusion: The proposed deep neural network framework is an AI solution for automatically detecting and classifying calcification distribution patterns in mammographic images highly suspicious for breast cancer. Further study of the AI model in an actual clinical setting, along with additional data collection, will improve its performance.
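The reported precision, sensitivity (recall), specificity, and F1 values are standard one-vs-rest quantities for a multi-class classifier. A minimal sketch of how such per-class metrics are derived from predictions (generic code, not the authors' pipeline):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """One-vs-rest precision, recall (sensitivity), specificity, and F1
    for a multi-class prediction, computed from the confusion counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tn = np.sum((y_pred != c) & (y_true != c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        specificity = tn / (tn + fp) if tn + fp else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[c] = dict(precision=precision, recall=recall,
                      specificity=specificity, f1=f1)
    return out
```

The paper's headline numbers would be averages of these per-class values over the five distribution patterns.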


Separations ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 44 ◽  
Author(s):  
Alyssa Allen ◽  
Mary Williams ◽  
Nicholas Thurn ◽  
Michael Sigman

Computational models for determining the strength of fire debris evidence, based on likelihood ratios (LR), were developed and validated against in silico-generated data sets derived from different distributions of ASTM E1618-14 designated ignitable liquid classes and substrate pyrolysis contributions. The models all perform well in cross-validation against the distributions used to generate them. However, a model generated from data that lacks representatives of some ASTM E1618-14 classes does not perform well when validated against data sets containing the missing classes. A quadratic discriminant model based on a balanced data set (ignitable liquid versus substrate pyrolysis), with a uniform distribution of the ASTM E1618-14 classes, performed well (receiver operating characteristic area under the curve of 0.836) when tested against laboratory-developed, casework-relevant samples of known ground truth.
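The likelihood-ratio idea can be sketched with class-conditional Gaussians, the model family underlying quadratic discriminant analysis: fit a mean and covariance per class and take the ratio of densities at a questioned sample. The two-feature data below are invented stand-ins, not ASTM E1618-14 measurements:

```python
import numpy as np

def fit_gaussian(X):
    """Mean and covariance of one class (the quadratic-discriminant model)."""
    X = np.asarray(X, dtype=float)
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(x, mu, cov):
    """Log density of a multivariate normal at x."""
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ inv @ diff)

def log_lr(x, class_il, class_sub):
    """Log likelihood ratio: evidence for ignitable liquid vs. substrate pyrolysis."""
    return log_likelihood(x, *class_il) - log_likelihood(x, *class_sub)

# Invented two-feature training data (stand-ins for chromatographic summary features)
il = np.array([[4.9, 5.1], [5.2, 4.8], [5.0, 5.3], [4.7, 4.9]])
sub = np.array([[0.1, -0.1], [-0.2, 0.2], [0.0, 0.1], [0.2, -0.2]])
g_il, g_sub = fit_gaussian(il), fit_gaussian(sub)
```

A positive log-LR supports the ignitable liquid proposition, a negative one supports substrate pyrolysis; the paper's finding is that this calibration only holds when the training distribution covers all classes.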


2020 ◽  
pp. 084653712090885
Author(s):  
Fatemeh Homayounieh ◽  
Subba R. Digumarthy ◽  
Jennifer A. Febbo ◽  
Sherief Garrana ◽  
Chayanin Nitiwarangkul ◽  
...  

Purpose: To assess and compare the detectability of pneumothorax on unprocessed baseline, single-energy, bone-subtracted, and enhanced frontal chest radiographs (chest X-ray, CXR). Method and Materials: Our retrospective institutional review board-approved study included 202 patients (mean age 53 ± 24 years; 132 men, 70 women) who underwent frontal CXR and had trace, moderate, large, or tension pneumothorax. All patients (except those with tension pneumothorax) had concurrent chest computed tomography (CT). Two radiologists reviewed the CXR and chest CT for pneumothorax on baseline CXR (ground truth). All baseline CXRs were processed to generate bone-subtracted and enhanced images (ClearRead X-ray). Four radiologists (R1-R4) assessed the baseline, bone-subtracted, and enhanced images and recorded the presence of pneumothorax (side, size, and confidence of detection) for each image type. The area under the curve (AUC) was calculated with receiver operating characteristic analyses to determine the accuracy of pneumothorax detection. Results: Bone-subtracted images (AUC: 0.89-0.97) had the lowest accuracy for detection of pneumothorax compared to the baseline (AUC: 0.94-0.97) and enhanced (AUC: 0.96-0.99) radiographs (P < .01). Most false-positive and false-negative pneumothoraces occurred on the bone-subtracted images and the fewest on the enhanced radiographs. The highest detection rates and confidence were noted for the enhanced images (empiric AUC for R1-R4: 0.96-0.99). Conclusion: Enhanced CXRs are superior to bone-subtracted and unprocessed radiographs for the detection of pneumothorax. Clinical Relevance/Application: Enhanced CXRs improve detection of pneumothorax over unprocessed images; bone-subtracted images must be reviewed cautiously to avoid false negatives.
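The per-reader AUCs above come from receiver operating characteristic analysis of the readers' confidence scores. The empirical AUC has a simple rank interpretation (the Mann-Whitney statistic: the probability that a random positive case scores higher than a random negative one), sketched here with illustrative scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """Empirical AUC: fraction of positive/negative pairs ranked correctly,
    counting ties as half (the Mann-Whitney formulation)."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

With reader confidence ratings as the scores and the CT-confirmed pneumothorax status as the labels, this is the quantity the study compares across image types.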


Author(s):  
Kelvin R. Santiago-Chaparro ◽  
David A. Noyce

The capabilities of radar-based vehicle detection (RVD) systems used at signalized intersections for stop bar and advanced detection are arguably underutilized. This underutilization arises because RVD systems can monitor the position and speed (i.e., trajectory) of multiple vehicles at the same time, yet these trajectories are only used to emulate the behavior of legacy detection systems such as inductive loop detectors. When the full vehicle trajectories tracked by an RVD system are collected, detailed traffic operations and safety performance measures can be calculated for signalized intersections. Unfortunately, trajectory datasets obtained from RVD systems often contain significant noise, which makes the computation of performance measures difficult. In this paper, we describe the type of trajectory datasets that can be obtained from RVD systems and characterize the noise expected in them. We also provide guidance on noise removal procedures that can be applied to these datasets; this guidance can be used with data from commercially available RVD systems to obtain advanced performance measures. To demonstrate the potential accuracy of the noise removal procedures, we applied them to trajectory data obtained from an existing intersection and extracted a basic performance measure (vehicle volume) from the dataset. Volume data derived from the de-noised trajectory dataset were compared with ground truth volume, yielding an absolute average difference of approximately one vehicle every 5 min, highlighting the potential accuracy of the noise removal procedures introduced.
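One common denoising step for such trajectory data, offered here only as a generic illustration rather than the paper's specific procedure, is a rolling-median filter that suppresses isolated spikes in a speed trace while leaving the underlying trajectory intact:

```python
import numpy as np

def median_denoise(values, window=5):
    """Rolling-median filter: suppresses isolated spike noise in a
    radar speed/position trace before computing performance measures."""
    v = np.asarray(values, dtype=float)
    half = window // 2
    padded = np.pad(v, half, mode="edge")   # repeat edges so output length matches input
    return np.array([np.median(padded[i:i + window]) for i in range(len(v))])
```

A single spurious radar return (e.g., a one-sample speed spike) falls outside the window median and is removed, whereas a sustained change in speed survives the filter.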


2020 ◽  
Vol 14 ◽  
Author(s):  
Daniel J. King ◽  
Jan Novak ◽  
Adam J. Shephard ◽  
Richard Beare ◽  
Vicki A. Anderson ◽  
...  

Structural segmentation of T1-weighted (T1w) MRI has shown morphometric differences, both compared to controls and longitudinally, following a traumatic brain injury (TBI). While many patients with TBI present with abnormalities on structural MRI images, most neuroimaging software packages have not been systematically evaluated for accuracy in the presence of these pathology-related MRI abnormalities. The current study aimed to assess whether acute MRI lesions (MRI acquired 7–71 days post-injury) cause error in the estimates of brain volume produced by the semi-automated segmentation tool, FreeSurfer. More specifically, to investigate whether this error was global, the presence of lesion-induced error was measured in the contralesional hemisphere, where no abnormal signal was present. A dataset of 176 simulated lesion cases was generated using actual lesions from 16 pediatric TBI (pTBI) cases recruited from the emergency department and 11 typically developing controls. Simulated lesion cases were compared to the “ground truth” of the non-lesion control-case T1w images. Using linear mixed-effects models, results showed that hemispheric measures of cortex volume were significantly lower in the contralesional hemisphere compared to the ground truth. Interestingly, however, cortex volume (and cerebral white matter volume) did not differ significantly in the lesioned hemisphere. Percent volume difference (PVD) between the simulated lesion and ground truth nevertheless showed that the magnitude of the difference in cortex volume in the contralesional hemisphere (mean PVD = 0.37%) was significantly smaller than in the lesioned hemisphere (mean PVD = 0.47%), suggesting a small but systematic lesion-induced error. Lesion characteristics that could explain variance in the PVD for each hemisphere were investigated. Taken together, these results suggest that the lesion-induced error caused by simulated lesions was not focal but globally distributed. 
Previous post-processing approaches to adjust for lesions in structural analyses address the focal region where the lesion is located; however, our results suggest that such focal correction approaches are insufficient to account for the global error in morphometric measures of the injured brain.
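The percent volume difference (PVD) used above is a simple relative error of a volume estimate against the ground-truth volume; a one-line sketch (illustrative numbers only):

```python
def percent_volume_difference(v_lesion, v_truth):
    """Percent volume difference between a segmentation of the simulated-lesion
    image and the ground-truth (lesion-free) segmentation, relative to truth."""
    return 100.0 * abs(v_lesion - v_truth) / v_truth
```

Comparing the mean PVD between hemispheres, as the authors do, then isolates how much of the segmentation error is attributable to the lesion's side versus a global effect.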


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2109
Author(s):  
Skandha S. Sanagala ◽  
Andrew Nicolaides ◽  
Suneet K. Gupta ◽  
Vijaya K. Koppula ◽  
Luca Saba ◽  
...  

Background and Purpose: Only 1–2% of asymptomatic internal carotid artery plaques are unstable as a result of >80% stenosis. Thus, unnecessary efforts can be saved if these plaques can be characterized and classified into symptomatic and asymptomatic using non-invasive B-mode ultrasound. Earlier plaque tissue characterization (PTC) methods were machine learning (ML)-based and used hand-crafted features, which yielded lower accuracy and reliability. The present study shows the role of transfer learning (TL)-based deep learning models for PTC. Methods: Since pretrained weights were used in the supercomputer framework, we hypothesized that transfer learning (TL) provides improved performance compared with deep learning alone. We applied 11 kinds of artificial intelligence (AI) models; 10 of them were augmented and optimized using TL approaches, a class of Atheromatic™ 2.0 TL (AtheroPoint™, Roseville, CA, USA), consisting of (i–ii) Visual Geometric Group-16, 19 (VGG16, 19); (iii) Inception V3 (IV3); (iv–v) DenseNet121, 169; (vi) XceptionNet; (vii) ResNet50; (viii) MobileNet; (ix) AlexNet; (x) SqueezeNet; and one DL-based model, (xi) SuriNet, derived from UNet. We benchmarked the 11 AI models against our earlier deep convolutional neural network (DCNN) model. Results: The best-performing TL model was MobileNet, with accuracy and area-under-the-curve (AUC) of 96.10 ± 3% and 0.961 (p < 0.0001), respectively. Among the DL models, DCNN was comparable to SuriNet, with accuracies of 95.66% and 92.7 ± 5.66% and AUCs of 0.956 (p < 0.0001) and 0.927 (p < 0.0001), respectively. We validated the performance of the AI architectures against established biomarkers such as greyscale median (GSM), fractal dimension (FD), higher-order spectra (HOS), and visual heatmaps. Benchmarked against the previously developed Atheromatic™ 1.0 ML, the new models showed an improvement of 12.9%. Conclusions: TL is a powerful AI tool for classifying plaques into symptomatic and asymptomatic.
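The core idea of transfer learning, reusing frozen pretrained features and training only a small task-specific head, can be sketched without any deep learning framework. The random projection below is merely a stand-in for a pretrained backbone such as MobileNet; nothing here reproduces the Atheromatic™ pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for a pretrained
# network such as MobileNet. In real TL these weights come from ImageNet.
W_backbone = rng.normal(size=(64, 16))

def features(x):
    """Frozen feature extractor (weights are never updated)."""
    return np.maximum(x @ W_backbone, 0.0)        # ReLU projection

def train_head(X, y, lr=0.1, epochs=200):
    """Train only a logistic-regression head on the frozen features."""
    F = features(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))    # sigmoid
        grad = p - y                               # gradient of log loss
        w -= lr * F.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    """Classify symptomatic (1) vs. asymptomatic (0) from the frozen features."""
    return (1.0 / (1.0 + np.exp(-(features(X) @ w + b))) > 0.5).astype(int)
```

Because only the head is trained, far fewer labeled plaques are needed than for training a full network, which is the practical argument for TL in the abstract.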

