DeepImageTranslator V2: analysis of multimodal medical images using semantic segmentation maps generated through deep learning

2021 ◽  
Author(s):  
En Zhou Ye ◽  
En Hui Ye ◽  
Run Zhou Ye

Introduction: Analysis of multimodal medical images often requires the selection of one or many anatomical regions of interest (ROIs) for the extraction of useful statistics. This task can prove laborious when a manual approach is used. We have previously developed a user-friendly software tool for image-to-image translation using deep learning. Therefore, we present herein an update to the DeepImageTranslator software with the addition of a tool for multimodal medical image segmentation analysis (hereafter referred to as the MMMISA). Methods: The MMMISA was implemented using the Tkinter library. Backend computations were implemented using the Pydicom, Numpy, and OpenCV libraries. We tested our software using 4188 whole-body axial 2-deoxy-2-[18F]-fluoro-D-glucose positron emission tomography/computed tomography ([18F]-FDG-PET/CT) slices of 10 patients from the ACRIN-HNSCC (American College of Radiology Imaging Network-Head and Neck Squamous Cell Carcinoma) database. Using the deep learning software DeepImageTranslator, a model was trained with 36 randomly selected CT slices and manually labelled semantic segmentation maps. Using the trained model, all the CT scans of the 10 HNSCC patients were segmented with high accuracy. The segmentation maps generated by the deep convolutional network were then used to measure organ-specific [18F]-FDG uptake. We also compared measurements performed using the MMMISA with those made with manually selected ROIs. Results: The MMMISA is a tool that allows the user to select ROIs based on deep learning-generated segmentation maps and to compute accurate statistics for these ROIs based on coregistered multimodal images. We found that organ-specific [18F]-FDG uptake measured using multiple manually selected ROIs was concordant with whole-tissue measurements made with segmentation maps using the MMMISA tool.
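
To make the described workflow concrete, the sketch below shows one way a deep-learning-generated segmentation map can be used to select an organ ROI on a coregistered PET volume and compute uptake statistics with NumPy. This is a minimal illustration under assumed conventions (array shapes, label values, and function names are hypothetical), not the MMMISA source code.

```python
# Hedged sketch: use a segmentation map to select an organ ROI on coregistered
# PET data and compute uptake statistics. Not the tool's actual code; all
# names, shapes, and label values are assumptions for illustration.
import numpy as np

def roi_statistics(pet_volume, segmentation_map, label):
    """Return mean, max, and voxel count of PET uptake inside one labelled ROI."""
    roi = pet_volume[segmentation_map == label]   # voxels belonging to the organ
    return {"mean": float(roi.mean()),
            "max": float(roi.max()),
            "voxels": int(roi.size)}

# Hypothetical example: label 3 marks one organ in the segmentation map
pet = np.random.rand(64, 128, 128)                # coregistered PET volume (SUV)
seg = np.random.randint(0, 5, size=pet.shape)     # deep-learning segmentation map
print(roi_statistics(pet, seg, label=3))
```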

PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0247388
Author(s):  
Jingfei Hu ◽  
Hua Wang ◽  
Jie Wang ◽  
Yunqi Wang ◽  
Fang He ◽  
...  

Semantic segmentation of medical images provides an important cornerstone for subsequent tasks of image analysis and understanding. With rapid advancements in deep learning methods, conventional U-Net segmentation networks have been applied in many fields. Based on exploratory experiments, features at multiple scales have been found to be of great importance for the segmentation of medical images. In this paper, we propose a scale-attention deep learning network (SA-Net), which extracts features at different scales in a residual module and uses an attention module to reinforce the scale-attention capability. SA-Net can better learn multi-scale features and achieve more accurate segmentation for different medical images. In addition, this work validates the proposed method across multiple datasets. The experimental results show that SA-Net achieves excellent performance in vessel detection in retinal images, lung segmentation, artery/vein (A/V) classification in retinal images, and blastocyst segmentation. To facilitate the use of SA-Net by the scientific community, the code implementation will be made publicly available.
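
As a concrete illustration of the scale-attention idea described above, the sketch below combines parallel convolutions with different receptive fields and a learned per-scale attention weighting inside a residual block. It is a minimal PyTorch sketch under assumed design choices (three branches, channel-pooled attention), not the SA-Net reference implementation.

```python
# Minimal sketch of a scale-attention residual block in the spirit of SA-Net.
# Branch counts, kernel sizes, and the attention design are assumptions for
# illustration; the paper's implementation may differ.
import torch
import torch.nn as nn

class ScaleAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Parallel branches extract features at different receptive-field scales.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        # Channel-pooled attention produces one weight per scale.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * 3, 3, kernel_size=1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]        # features at 3 scales
        weights = self.attention(torch.cat(feats, dim=1))      # (N, 3, 1, 1) scale weights
        fused = sum(weights[:, i:i + 1] * feats[i] for i in range(len(feats)))
        return x + fused                                       # residual connection

# Usage with a hypothetical feature map
block = ScaleAttentionBlock(16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # -> torch.Size([1, 16, 64, 64])
```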


10.29007/r6cd ◽  
2022 ◽  
Author(s):  
Hoang Nhut Huynh ◽  
My Duyen Nguyen ◽  
Thai Hong Truong ◽  
Quoc Tuan Nguyen Diep ◽  
Anh Tu Tran ◽  
...  

Segmentation is one of the most common methods for analyzing and processing medical images; it assists doctors in making accurate diagnoses by providing detailed information about the body part of interest. However, segmenting medical images presents a number of challenges, including the need for trained medical professionals and the fact that manual segmentation is time-consuming and prone to errors. As a result, an automated medical image segmentation system is needed. Deep learning algorithms have recently demonstrated superior performance on segmentation tasks, particularly semantic segmentation networks that provide a pixel-level understanding of images. U-Net is one of the modern networks most widely used for image segmentation in medical imaging, and several segmentation networks have been built on its foundation; the addition of recurrent residual convolutional units led to the recurrent residual convolutional neural network based on U-Net (R2U-Net). In this work, R2U-Net is used to perform tracheal and bronchial segmentation on a dataset of 36,000 images. Across a variety of experiments, the proposed segmentation achieved a Dice coefficient of 0.8394 on the test dataset. Finally, a number of research issues are raised, indicating the need for future improvements.
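
The reported accuracy metric is the Dice coefficient, which measures overlap between the predicted and reference masks. A minimal NumPy version for binary masks is sketched below; variable names are illustrative and not taken from the authors' code.

```python
# Minimal sketch of the Dice coefficient on binary masks with NumPy.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: identical masks give a Dice score of 1.0
mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))
```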


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Karim Armanious ◽  
Tobias Hepp ◽  
Thomas Küstner ◽  
Helmut Dittmann ◽  
Konstantin Nikolaou ◽  
...  

2007 ◽  
Vol 28 (5) ◽  
pp. 373-381 ◽  
Author(s):  
Trond V. Bogsrud ◽  
Dimitrios Karantanis ◽  
Mark A. Nathan ◽  
Brian P. Mullan ◽  
Gregory A. Wiseman ◽  
...  
Keyword(s):  
PET/CT ◽  

2021 ◽  
pp. 193229682110426
Author(s):  
Or Katz ◽  
Dan Presil ◽  
Liz Cohen ◽  
Roi Nachmani ◽  
Naomi Kirshner ◽  
...  

Background: Medical image segmentation is a well-studied subject within the field of image processing. The goal of this research is to create an AI retinal screening grading system that is both accurate and fast. We introduce a new segmentation network that achieves state-of-the-art results on semantic segmentation of color fundus photographs. By applying the network to identify anatomical markers of diabetic retinopathy (DR) and diabetic macular edema (DME), we collect sufficient information to classify patients by grades R0 versus R1 or above, and M0 versus M1. Methods: The AI grading system was trained on screening data to evaluate the presence of DR and DME. The core algorithm of the system is a deep learning network that segments relevant anatomical features in a retinal image. Patients were graded according to the standard NHS Diabetic Eye Screening Program feature-based grading protocol. Results: The algorithm's performance was evaluated on a series of 6,981 patient retinal images from routine diabetic eye screenings. It correctly predicted 98.9% of retinopathy events and 95.5% of maculopathy events. The prediction rate for non-disease events was 68.6% for retinopathy and 81.2% for maculopathy. Conclusion: This novel deep learning model, trained and tested on patient data from annual diabetic retinopathy screenings, can classify the DR and DME status of a person with diabetes with high accuracy. The system can be easily reconfigured according to any grading protocol without running a long AI training procedure. The incorporation of the AI grading system can increase graders' productivity and improve the final accuracy of the screening process.
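
The two kinds of rates reported above correspond to sensitivity (disease events correctly predicted) and specificity (non-disease events correctly predicted). The sketch below shows how such rates can be computed from binary grade labels; it is an illustrative example with hypothetical data, not the authors' evaluation code.

```python
# Illustrative sketch of per-grade detection rates: sensitivity for disease
# events and specificity for non-disease events. Input arrays are hypothetical.
import numpy as np

def screening_rates(y_true, y_pred):
    """y_true, y_pred: binary arrays, 1 = disease grade (e.g. R1+/M1), 0 = no disease."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    sensitivity = (y_pred & y_true).sum() / y_true.sum()        # disease events caught
    specificity = (~y_pred & ~y_true).sum() / (~y_true).sum()   # non-disease events caught
    return sensitivity, specificity

sens, spec = screening_rates([1, 1, 0, 0, 1], [1, 1, 1, 0, 1])
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```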


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background: The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for the straightforward setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software optimized for a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation: The aim of MIScnn is to provide an intuitive API allowing the fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g., cross-validation). In addition, high configurability and multiple open interfaces allow full pipeline customization. Results: Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions: With this experiment, we showed that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
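
The sketch below outlines what such a few-lines pipeline could look like. It is adapted from memory of the MIScnn README; exact module paths, class names, and argument names may differ between library versions, so it should be read as an assumption-laden illustration rather than verified usage.

```python
# Hedged MIScnn pipeline sketch (data I/O -> preprocessor -> model -> training).
# Module paths and signatures are recalled from the project's README and may
# differ between versions; the data path and parameters are hypothetical.
import miscnn
from miscnn.data_loading.interfaces import NIFTI_interface

# Data I/O: NIfTI volumes with 1 channel (CT) and 3 classes (background, kidney, tumor)
interface = NIFTI_interface(pattern="case_000[0-9]*", channels=1, classes=3)
data_io = miscnn.Data_IO(interface, "data/kits19_interpolated/")  # hypothetical path

# Preprocessor: patch-wise analysis with on-the-fly batching
pp = miscnn.Preprocessor(data_io, batch_size=2, analysis="patchwise-crop",
                         patch_shape=(80, 160, 160))

# Neural network wrapper; defaults to a standard U-Net architecture
model = miscnn.Neural_Network(preprocessor=pp)

# Train on all indexed samples, then predict on one case
samples = data_io.get_indiceslist()
model.train(samples, epochs=10)
model.predict(samples[:1])
```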


Biomedicines ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 159
Author(s):  
Brenda Huska ◽  
Sarah Niccoli ◽  
Christopher P. Phenix ◽  
Simon J. Lees

Significant depots of brown adipose tissue (BAT) have been identified in many adult humans through positron emission tomography (PET), with the amount of BAT being inversely correlated with obesity. As dietary activation of BAT has implications for whole-body glucose metabolism, leucine was used in the present study to determine its ability to promote BAT activation and thereby increase glucose uptake. To assess this, 2-deoxy-2-(fluorine-18)fluoro-D-glucose (18F-FDG) uptake in interscapular BAT (IBAT) was measured in C57BL/6 mice using microPET after treatment with leucine, glucose, or both. Pretreatment with propranolol (PRP) was used to determine the role of β-adrenergic activation in glucose- and leucine-mediated 18F-FDG uptake. Analysis of maximum standardized uptake values (SUVmax) determined that glucose administration increased 18F-FDG uptake in IBAT by 25.3%. While leucine did not promote 18F-FDG uptake alone, it did potentiate glucose-mediated 18F-FDG uptake, increasing 18F-FDG uptake in IBAT by 22.5% compared to glucose alone. Pretreatment with PRP prevented the increase in IBAT 18F-FDG uptake following the combined administration of glucose and leucine. These data suggest that leucine, in combination with glucose, is effective in promoting BAT 18F-FDG uptake through β-adrenergic activation.
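
For reference, SUVmax is derived from the decay-corrected tissue activity concentration normalized to the injected dose per body weight; the maximum value within the ROI is then taken. The sketch below illustrates that calculation with hypothetical numbers and is not the study's analysis code.

```python
# Illustrative SUVmax sketch: SUV = activity concentration / (injected dose / body weight).
# All variable names and values are hypothetical.
import numpy as np

def suv_max(roi_activity_kbq_per_ml, injected_dose_kbq, body_weight_g):
    """Return the maximum standardized uptake value within an ROI."""
    suv = roi_activity_kbq_per_ml / (injected_dose_kbq / body_weight_g)
    return float(np.max(suv))

# Hypothetical IBAT ROI of decay-corrected activity values
roi = np.array([55.0, 62.5, 48.3])        # kBq/mL
print(suv_max(roi, injected_dose_kbq=7400.0, body_weight_g=25.0))
```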


Author(s):  
Amal Ibrahim Ahmed Othman ◽  
Merhan Nasr ◽  
Moustafa Abdel-Kawi

Abstract Background: The purpose of this study was to compare contrast-enhanced computed tomography (CECT) and 18F-FDG PET/CT in the detection of extranodal involvement in lymphoma and to correlate the SUVmax of the extranodal lesions with that of the hottest lymph node (LN). One hundred patients with pathologically proven lymphoma underwent whole-body 18F-FDG PET/CT and CECT scans. Images were compared with regard to their ability to detect extranodal lymphomatous sites. The kappa statistic was applied to determine the degree of agreement between the two modalities, and Pearson's correlation was used to correlate the SUVmax of the extranodal lesions with that of the hottest LN. The degree of FDG uptake was also correlated with histopathological type. Results: There was poor agreement between PET/CT and CECT in the detection of extranodal sites (k = 0.32). There was a significant, moderate positive correlation between the SUVmax of the extranodal lesions and that of the hottest LN (r = 0.45). The PET/CT study resulted in upstaging of 10% and downstaging of 5% of cases. Conclusion: In lymphoma staging, FDG PET/CT enables the detection of more involved extranodal sites that show normal morphology on CECT. It also differentiates lymphomatous infiltration from benign causes of increased FDG uptake, allowing proper disease staging.
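
The two statistics reported above, Cohen's kappa for inter-modality agreement and Pearson's r for the SUVmax correlation, are standard library calls; the sketch below shows how they can be computed. It is an illustration with hypothetical input data, not the authors' analysis.

```python
# Illustrative sketch of the reported statistics using scikit-learn and SciPy.
# Input arrays are hypothetical.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Per-site detection calls by the two modalities (1 = extranodal site detected)
pet_ct = [1, 1, 0, 1, 1, 0, 1, 1]
cect   = [1, 0, 0, 1, 0, 0, 0, 1]
print("kappa =", cohen_kappa_score(pet_ct, cect))

# SUVmax of extranodal lesions vs. SUVmax of the hottest lymph node per patient
suv_lesion = [8.2, 12.5, 6.1, 15.3, 9.8]
suv_hot_ln = [10.4, 11.2, 7.5, 13.9, 8.6]
r, p = pearsonr(suv_lesion, suv_hot_ln)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```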


Author(s):  
Sumati Sundaraiya ◽  
Abubacker Sulaiman ◽  
Adhithyan Rajendran

Abstract: A young gentleman with suspected cardiac sarcoidosis and LV dysfunction, whose CMR revealed multifocal subepicardial to mid-myocardial linear enhancement in the left ventricular myocardium, underwent cardiac 18F-FDG PET imaging. The images revealed patchy regions of increased FDG uptake involving the apical to mid anterolateral and mid to basal anteroseptal/right ventricular segments, and mildly increased FDG uptake in the apical inferior segments of the LV myocardium, concordant with the CMR findings. Whole-body PET/CT imaging showed multiple hypermetabolic supra- and infradiaphragmatic lymph nodes, with no pulmonary lesion identified. Biopsy of a left para-aortic lymph node revealed necrotizing granulomatous inflammation consistent with tuberculosis. Given the similar imaging appearances of sarcoidosis and TB, a diagnosis of cardiac tuberculosis was made based on the histopathological findings of the lymph nodes. This case highlights that cardiac TB, although rare, should be included in the differential diagnosis in patients with suspected infiltrative cardiomyopathy, particularly in TB-endemic regions.

