Identifying Fatal Head Injuries on Postmortem Computed Tomography Using Convolutional Neural Network/Deep Learning: A Feasibility Study

2020 ◽  
Vol 65 (6) ◽  
pp. 2019-2022 ◽  
Author(s):  
Jack Garland ◽  
Benjamin Ondruschka ◽  
Simon Stables ◽  
Paul Morrow ◽  
Kilak Kesha ◽  
...  
Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine, on chest computed tomography (CT) images, whether a deep convolutional neural network algorithm can solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and receiver operating characteristic (ROC) curve analysis. Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm was not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Clinicians should be made aware of the imaging similarities between COVID-19 pneumonia and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can serve as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
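The Mann–Whitney U test and the ROC AUC used in this abstract are closely related: the AUC equals the U statistic normalized by the number of positive–negative score pairs, i.e., the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal pure-Python sketch of that equivalence (illustrative only, not the study's code):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """ROC AUC via its Mann-Whitney U equivalence: the fraction of
    (positive, negative) score pairs where the positive scores higher,
    counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, scores of [0.9, 0.8, 0.7] for positives against [0.6, 0.75] for negatives give 5 of 6 pairs correctly ordered, hence AUC ≈ 0.83.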


2019 ◽  
Vol 110 (5) ◽  
pp. 338-350
Author(s):  
Yanji Luo ◽  
Xin Chen ◽  
Jie Chen ◽  
Chenyu Song ◽  
Jingxian Shen ◽  
...  

Introduction: The pathological grading of pancreatic neuroendocrine neoplasms (pNENs) is an independent predictor of survival and an indicator for treatment. Deep learning (DL) with a convolutional neural network (CNN) may improve the preoperative prediction of pNEN grading. Methods: Ninety-three pNEN patients with preoperative contrast-enhanced computed tomography (CECT) from Hospital I were retrospectively enrolled. A CNN-based DL algorithm was applied to the CECT images to obtain 3 models (arterial, venous, and arterial/venous models), the performances of which were evaluated via an eightfold cross-validation technique. The CECT images of the optimal phase were used for comparing the DL and traditional machine learning (TML) models in predicting the pathological grading of pNENs. The performance of radiologists using qualitative and quantitative computed tomography findings was also evaluated. The best DL model from the eightfold cross-validation was evaluated on an independent testing set of 19 patients from Hospital II who were scanned on a different scanner. Kaplan-Meier (KM) analysis was employed for survival analysis. Results: The area under the curve (AUC; 0.81) of the arterial phase in the validation set was significantly higher than those of the venous (AUC 0.57, p = 0.03) and arterial/venous phases (AUC 0.70, p = 0.03) in predicting the pathological grading of pNENs. Compared with the TML models, the DL model gave a higher, although not significantly higher, AUC. The highest OR was achieved for the p ratio <0.9; the AUC and accuracy for diagnosing G3 pNENs were 0.80 and 79.1%, respectively. The DL algorithm achieved an AUC of 0.82 and an accuracy of 88.1% on the independent testing set. The KM analysis showed a statistically significant difference between the predicted G1/2 and G3 groups in progression-free survival (p = 0.001) and overall survival (p < 0.001).
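The eightfold cross-validation described above partitions the 93 patients into eight folds, training on seven and validating on the held-out fold in turn. A minimal stdlib sketch of such a split (the fold assignment and seed are assumptions; the study's exact partitioning is not given):

```python
import random

def kfold_indices(n, k=8, seed=0):
    """Partition sample indices 0..n-1 into k shuffled folds and yield
    (train, validation) index lists, one pair per fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        train = [j for f in folds if f is not folds[i] for j in f]
        yield train, folds[i]
```

Every patient appears in exactly one validation fold, so validation predictions cover the whole cohort without train/validation leakage within a fold.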
Conclusion: The CNN-based DL method showed a relatively robust performance in predicting pathological grading of pNENs from CECT images.


Author(s):  
Victor Ibanez ◽  
Samuel Gunz ◽  
Svenja Erne ◽  
Eric J. Rawdon ◽  
Garyfalia Ampanozi ◽  
...  

Imaging techniques are widely used for medical diagnostics. In some cases, a lack of medical practitioners who can manually analyze the images can lead to a bottleneck. Consequently, we developed a custom-made convolutional neural network (RiFNet = Rib Fracture Network) that can detect rib fractures in postmortem computed tomography. In a retrospective cohort study, we retrieved PMCT data from 195 postmortem cases with rib fractures from July 2017 to April 2018 from our database. The computed tomography data were prepared using a plugin in the commercial imaging software Syngo.via whereby the rib cage was unfolded on a single-in-plane image reformation. Out of the 195 cases, a total of 585 images were extracted and divided into two groups labeled “with” and “without” fractures. These two groups were subsequently divided into training, validation, and test datasets to assess the performance of RiFNet. In addition, we explored the possibility of applying transfer learning techniques on our dataset by choosing two independent noncommercial off-the-shelf convolutional neural network architectures (ResNet50 V2 and Inception V3) and compared the performances of those two with RiFNet. When using pre-trained convolutional neural networks, we achieved an F1 score of 0.64 with Inception V3 and an F1 score of 0.61 with ResNet50 V2. We obtained an average F1 score of 0.91 ± 0.04 with RiFNet. RiFNet is efficient in detecting rib fractures on postmortem computed tomography. Transfer learning techniques are not necessarily well adapted to make classifications in postmortem computed tomography.
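The F1 scores used to compare RiFNet against the pre-trained networks are the harmonic mean of precision and recall computed from the confusion matrix. A short sketch of the standard definition (not the study's evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp / (tp + fp)) and
    recall (tp / (tp + fn)); true negatives do not enter."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For instance, 8 true positives with 2 false positives and 2 false negatives yields precision = recall = 0.8 and hence F1 = 0.8.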


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of deep-learning convolutional neural network of atomic surface structures using atomic and Voronoi polyhedra-based neighbor information to predict adsorbate binding energies for the application in catalysis.
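The abstract does not detail how the Voronoi polyhedra-based neighbor information is encoded. As a loose, purely hypothetical stand-in, one can illustrate distance-cutoff neighbor featurization of an atomic structure, which captures the same idea of per-atom connectivity (the cutoff value and feature choice are assumptions, not the paper's method):

```python
import math

def coordination_features(positions, cutoff=1.5):
    """Toy per-atom featurization: count neighbors within a distance
    cutoff. A crude stand-in for Voronoi polyhedra-based connectivity."""
    feats = []
    for i, p in enumerate(positions):
        count = sum(
            1
            for j, q in enumerate(positions)
            if j != i and math.dist(p, q) <= cutoff
        )
        feats.append(count)
    return feats
```

Such per-atom features (here just a coordination count) would then be fed to a CNN or other regressor to predict adsorbate binding energies.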


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve performance of the convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios at 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than those of the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs at 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than those of the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated for frozen section datasets with limited numbers.
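The abstract reports both patch- and slide-level classification, but does not specify how patch scores are aggregated into a slide-level call. One common rule, sketched here purely as an assumption, is max-pooling over patch probabilities: a slide is flagged positive if any patch exceeds a threshold.

```python
def slide_prediction(patch_probs, threshold=0.5):
    """Aggregate patch-level metastasis probabilities to a slide-level
    call by max-pooling: positive if any patch reaches the threshold.
    (One common aggregation rule; others use top-k averages.)"""
    peak = max(patch_probs)
    return peak >= threshold, peak
```

Max-pooling is sensitive to a single confident patch, which suits metastasis detection where one tumor region suffices, but it also makes the slide call vulnerable to a single false-positive patch.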


2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

The available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with large baselines. This undesired result is due to the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine invariant Hessian regions using a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected via K nearest neighbors, and then generate deep learning-based descriptors with high discrimination, realized with our multiple hard network structure (MTHardNets). Following this step, the conjugate features are produced by using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through the deep learning transform-based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired by ground close-range and unmanned aerial vehicle (UAV) platforms verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms the state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) our proposed MTHardNets can generate high quality descriptors; and (ii) the IHesAffNet can produce substantial affine invariant corresponding features with reliable transform parameters.
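The Euclidean distance ratio mentioned as the matching metric is the classic nearest-to-second-nearest neighbor test: a descriptor match is kept only when its best match is clearly closer than the runner-up. A minimal stdlib sketch (the 0.8 ratio threshold and brute-force search are assumptions, not the paper's configuration):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(desc_left, desc_right, ratio=0.8):
    """Match each left descriptor to its nearest right descriptor, keeping
    the match only when the nearest/second-nearest distance ratio is
    below `ratio` (i.e., the best match is unambiguous)."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = sorted((euclidean(d, r), j) for j, r in enumerate(desc_right))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

Ambiguous descriptors, whose two closest candidates lie at similar distances, are discarded rather than matched, which trades match count for a lower false-positive rate.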

