A Comparison of Deep Learning and Pharmacokinetic Model Selection Methods in Segmentation of High-Grade Glioma

Author(s):  
Azimeh NV Dehkordi ◽  
Sedigheh Sina ◽  
Freshteh Khodadadi

Purpose: Glioma tumor segmentation is an essential step in clinical decision making. Recently, computer-aided methods have been widely used for rapid and accurate delineation of tumor regions. Methods based on image feature extraction are fast, while segmentation based on the physiology and pharmacokinetics of the tissues is more accurate. This study aims to compare the performance of tumor segmentation based on these two different approaches. Materials and Methods: Nested Model Selection (NMS) based on the Extended Tofts model was applied to 190 Dynamic Contrast-Enhanced MRI (DCE-MRI) slices acquired from 25 Glioblastoma Multiforme (GBM) patients at 70 time points. A model with three pharmacokinetic parameters, Model 3, is usually assigned to tumor voxels based on the contrast concentration-time signal. We utilized Deep-Net, a CNN based on DeepLabv3+ with pre-trained ResNet-18 layers, trained on 17,288 T1-contrast MRI slices with high-grade glioma (HGG) brain tumors, to predict the tumor region in our 190 DCE-MRI T1 images. The NMS-based physiological tumor segmentation was taken as the reference for evaluating the Deep-Net results. Dice, Jaccard, and overlay similarity coefficients were used to assess the accuracy and reliability of the deep tumor segmentation method. Results: The results showed relatively high similarity coefficients (Dice: 0.73±0.15, Jaccard: 0.66±0.17, and overlay: 0.71±0.15) between the deep learning segmentation and the tumor region identified by the NMS method. This indicates that deep learning methods may serve as accurate and robust tumor segmentation tools. Conclusion: Deep learning-based segmentation can play a significant role in increasing segmentation accuracy in clinical applications, provided the training process is fully automatic and independent of human error.
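The three similarity coefficients reported above have simple set-theoretic definitions. A minimal sketch in Python, treating each segmentation as a set of voxel coordinates; the "overlay" coefficient is assumed here to be the standard overlap coefficient |A∩B| / min(|A|, |B|), and the toy masks are illustrative:

```python
def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

def overlay(a, b):
    """Overlap coefficient: |A∩B| / min(|A|, |B|)."""
    return len(a & b) / min(len(a), len(b))

# Toy 2-D masks given as sets of (row, col) voxel coordinates.
ref  = {(0, 0), (0, 1), (1, 0)}   # e.g. the NMS-based reference region
pred = {(0, 1), (1, 0), (1, 1)}   # e.g. a Deep-Net prediction
```

On these toy masks the intersection has 2 voxels, so Dice = 2·2/6 ≈ 0.667, Jaccard = 2/4 = 0.5, and overlay = 2/3 ≈ 0.667.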

2020 ◽  
Vol 10 (20) ◽  
pp. 7201
Author(s):  
Xiao-Xia Yin ◽  
Lihua Yin ◽  
Sillas Hadjiloucas

Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multi-dimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease can be made possible. 
The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Siyu Xiong ◽  
Guoqing Wu ◽  
Xitian Fan ◽  
Xuan Feng ◽  
Zhongcheng Huang ◽  
...  

Abstract: Background: Brain tumor segmentation is a challenging problem in medical image processing and analysis, and a very time-consuming and error-prone task. To reduce the burden on physicians and improve segmentation accuracy, computer-aided detection (CAD) systems need to be developed. Owing to the powerful feature-learning ability of deep learning, many deep learning-based methods have been applied to brain tumor segmentation CAD systems and have achieved satisfactory accuracy. However, deep neural networks have high computational complexity, and the brain tumor segmentation process consumes significant time. Therefore, to achieve high segmentation accuracy and obtain results efficiently, it is highly desirable to speed up the segmentation process. Results: Compared with traditional computing platforms, the proposed FPGA accelerator greatly improves speed and power consumption. On the BraTS19 and BraTS20 datasets, our FPGA-based brain tumor segmentation accelerator is 5.21 and 44.47 times faster than the TITAN V GPU and the Xeon CPU, respectively. In addition, our design achieves 11.22 and 82.33 times the energy efficiency of the GPU and CPU, respectively. Conclusion: We quantize and retrain the neural network for brain tumor segmentation and merge batch-normalization layers to reduce the parameter size and computational complexity. The FPGA-based accelerator is designed to map the quantized neural network model. It increases segmentation speed and reduces power consumption while maintaining high accuracy, providing a new direction for the automatic segmentation and remote diagnosis of brain tumors.
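Merging a batch-normalization layer into the preceding layer, as described in the conclusion, amounts to rescaling the weights and shifting the bias so a single affine map replaces two. A minimal per-channel sketch with scalar weights for brevity; the numeric values are illustrative, not from the paper:

```python
import math

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
    into a single affine map y = w_f * x + b_f."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# One channel of a layer followed by batch norm (toy values):
w, b = 0.8, 0.1
gamma, beta, mean, var = 1.5, -0.2, 0.05, 0.25
x = 2.0

# Original two-step computation:
y_conv = w * x + b
y_bn = gamma * (y_conv - mean) / math.sqrt(var + 1e-5) + beta

# Folded single layer gives the same output with fewer operations:
w_f, b_f = fold_batchnorm(w, b, gamma, beta, mean, var)
y_folded = w_f * x + b_f
```

Because the fold is exact, the merged layer halves the parameter count for these layers without any accuracy loss, which is what makes it attractive before quantization.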


Author(s):  
Lei Zhang ◽  
Zhimeng Luo ◽  
Ruimei Chai ◽  
Dooman Arefan ◽  
Jules Sumkin ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yongchao Jiang ◽  
Mingquan Ye ◽  
Daobin Huang ◽  
Xiaojie Lu

Automatic and accurate segmentation of brain tumors plays an important role in their diagnosis and treatment. To improve the accuracy of brain tumor segmentation, an improved multimodal MRI brain tumor segmentation algorithm based on U-net is proposed in this paper. In the original U-net, the contracting path uses pooling layers to reduce the resolution of the feature image and increase the receptive field, and the expanding path uses upsampling to restore the size of the feature image. In this process, some image details are lost, leading to low segmentation accuracy. This paper proposes an improved convolutional neural network named AIU-net (Atrous-Inception U-net). In the encoder of U-net, an A-inception (atrous-inception) module is introduced to replace the original convolution block. The A-inception module is an inception structure with atrous convolution, which increases the depth and width of the network and can expand the receptive field without adding additional parameters. To capture multiscale features, an atrous spatial pyramid pooling (ASPP) module is also introduced. The experimental results on the BraTS (multimodal brain tumor segmentation challenge) dataset show that the Dice score obtained by this method is 0.93 for the enhancing tumor region, 0.86 for the whole tumor region, and 0.92 for the tumor core region, an improvement in segmentation accuracy.
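The receptive-field claim for atrous (dilated) convolution is easy to see in one dimension: a kernel of size k with dilation d spans (k−1)·d + 1 input samples while still holding only k weights. A minimal pure-Python sketch, with an illustrative signal and kernel:

```python
def dilated_conv1d(x, w, dilation=1):
    """Valid 1-D convolution (cross-correlation) with a dilated kernel."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    return [sum(w[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]

signal = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
kernel = [1.0, -1.0]                       # 2 parameters in both cases

dense  = dilated_conv1d(signal, kernel, dilation=1)  # span 2
atrous = dilated_conv1d(signal, kernel, dilation=3)  # span 4, same 2 weights
```

The dense pass differences adjacent samples, while the dilation-3 pass differences samples 3 apart, covering a wider context at identical parameter cost; this is the property the A-inception and ASPP modules exploit in 2-D.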


2021 ◽  
Author(s):  
Edson Damasceno Carvalho ◽  
Romuere Rodrigues Veloso Silva ◽  
Mano Joseph Mathew ◽  
Flavio Henrique Duarte Araujo ◽  
Antonio Oseas De Carvalho Filho

Author(s):  
G. Anand Kumar ◽  
P. V. Sridevi

The major goal of this paper is to isolate the tumor region from nontumor regions and to estimate the tumor volume. Accurate segmentation is not an easy task due to the varying size, shape, and location of the tumor, and after segmentation the tumor volume must be estimated accurately. By precisely estimating the volume of abnormal tissue, physicians can improve prognosis, clinical planning, and dosage estimation. This paper describes a new Euclidean Similarity Factor (ESF) based active contour model with deep learning for segmenting the tumor region into complete, core, and enhanced tumor portions. Initially, the ESF considers the spatial distances and intensity differences of the region automatically to detect the tumor region; it preserves image details while removing noise. Then, a 3D Convolutional Neural Network (3D CNN) segments the tumor by automatically extracting spatiotemporal features. Finally, the extended shoelace method estimates the volume of the tumor accurately for [Formula: see text]-sided polygons. The simulation results achieve a high accuracy of 92% and a Jaccard index of 0.912, and the method computes the tumor volume more effectively than existing approaches.
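The shoelace formula behind the volume step computes the area of a simple n-sided polygon from its vertex coordinates; summing the per-slice areas weighted by the slice spacing then yields a volume estimate. A minimal sketch of that idea (the contours and the unit spacing are illustrative, and this is the classic shoelace formula rather than the paper's specific extension):

```python
def shoelace_area(vertices):
    """Area of a simple polygon given as a list of (x, y) vertices."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1]
            for i in range(n))
    return abs(s) / 2.0

def stack_volume(contours, spacing):
    """Sum per-slice polygon areas, scaled by the inter-slice spacing."""
    return sum(shoelace_area(c) for c in contours) * spacing

square   = [(0, 0), (2, 0), (2, 2), (0, 2)]   # slice 1: area 4
triangle = [(0, 0), (3, 0), (0, 4)]           # slice 2: area 6
volume = stack_volume([square, triangle], spacing=1.0)
```

For the two toy slices above the volume evaluates to 10.0; with real data the spacing would come from the scan's slice thickness.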


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Steven A. Hicks ◽  
Jonas L. Isaksen ◽  
Vajira Thambawita ◽  
Jonas Ghouse ◽  
Gustav Ahlberg ◽  
...  

Deep learning-based tools may annotate and interpret medical data more quickly, consistently, and accurately than medical doctors. However, as medical doctors are ultimately responsible for clinical decision-making, any deep learning-based prediction should be accompanied by an explanation that a human can understand. We present an approach called electrocardiogram gradient class activation map (ECGradCAM), which is used to generate attention maps and explain the reasoning behind deep learning-based decision-making in ECG analysis. Attention maps may be used in the clinic to aid diagnosis, discover new medical knowledge, and identify novel features and characteristics of medical tests. In this paper, we showcase how ECGradCAM attention maps can unmask how a novel deep learning model measures both amplitudes and intervals in 12-lead electrocardiograms, and we show an example of how attention maps may be used to develop novel ECG features.
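The Grad-CAM computation that ECGradCAM builds on weights each feature map by the spatial average of the class score's gradient with respect to that map, then passes the weighted sum through a ReLU. A minimal sketch on toy 1-D feature maps; the maps and gradients are illustrative values, not outputs of a real ECG model:

```python
def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: per-channel lists of equal-length 1-D
    activations A_k and gradients dScore/dA_k."""
    length = len(feature_maps[0])
    # alpha_k: global average of the gradient over each map.
    alphas = [sum(g) / len(g) for g in gradients]
    # ReLU of the alpha-weighted sum of feature maps.
    return [max(0.0, sum(a * fm[i] for a, fm in zip(alphas, feature_maps)))
            for i in range(length)]

maps  = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0]]     # two channels, 3 samples
grads = [[0.5, 0.5, 0.5], [-1.0, -1.0, -1.0]]  # toy gradients per channel
attention = grad_cam(maps, grads)
```

Here the first channel receives a positive weight (0.5) and the second a negative one (−1.0), so only the middle sample survives the ReLU; in the paper's setting the resulting map is overlaid on the ECG trace to show which samples drove the prediction.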


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework typically requires a large number of parameters, which inevitably burdens the network with high computational cost. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, enabling the network to continuously distill and screen valuable channel information from the feature maps. In addition, an attention mechanism carries out information fusion between the distillation modules and feature channels. By fusing this information, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters, outperforming existing methods in model complexity.
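Channel-wise attention of the kind used in the fusion strategy can be sketched as a squeeze step (global average per channel) followed by a gating step that rescales each channel. The logistic gate below is a deliberate simplification of the learned layers in the paper, and the feature values are illustrative:

```python
import math

def channel_attention(feature_maps):
    """Rescale each channel by a sigmoid gate on its global average."""
    pooled = [sum(fm) / len(fm) for fm in feature_maps]    # squeeze
    gates = [1.0 / (1.0 + math.exp(-p)) for p in pooled]   # gate per channel
    return [[g * v for v in fm] for g, fm in zip(gates, feature_maps)]

maps = [[4.0, 4.0], [-4.0, -4.0]]    # a strong and a weak channel
weighted = channel_attention(maps)
```

The strong channel passes through nearly unchanged (gate ≈ 0.98) while the weak one is suppressed (gate ≈ 0.02), which is the screening behavior the distillation blocks aim for, here achieved with no learned parameters at all.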

