Portable framework to deploy deep learning segmentation models for medical images

2021 ◽  
Author(s):  
Aditi Iyer ◽  
Eve Locastro ◽  
Aditya Apte ◽  
Harini Veeraraghavan ◽  
Joseph O Deasy

Purpose: This work presents a framework for deploying deep learning image segmentation models for medical images across different operating systems and programming languages. Methods: The Computational Environment for Radiological Research (CERR) platform was extended to deploy deep learning-based segmentation models, leveraging CERR's existing functionality for radiological data import, transformation, management, and visualization. The framework is compatible with MATLAB as well as GNU Octave and Python for license-free use. Pre- and post-processing configurations, including parameters for pre-processing images, populating channels, and post-processing segmentations, were standardized using the JSON format. CPU and GPU implementations of pre-trained deep learning segmentation models were packaged using Singularity containers for use on Linux, and Conda environment archives for the Windows, macOS, and Linux operating systems. The framework accepts images in various formats, including DICOM and CERR's planC, and outputs segmentations in various formats, including DICOM RTSTRUCT and planC objects. Ready access to the results in planC format enables visualization as well as radiomics and dosimetric analysis. The framework can also be deployed in clinical software such as MIM via its extensions. Results: The open-source, GPL-licensed framework developed in this work has been successfully used to deploy deep learning-based segmentation models for five in-house developed and published models. These models span various treatment sites (H&N, lung, and prostate) and modalities (CT, MR). Documentation and a demo workflow are provided at https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. The framework has also been used in clinical workflows for segmenting images for treatment planning, and for segmenting publicly available large datasets for outcomes studies.
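The JSON-driven configuration the abstract describes might look like the following minimal sketch. All field names here are hypothetical illustrations of the idea (standardized pre-processing, channel population, and post-processing settings in one file), not CERR's actual schema:

```python
import json

# Hypothetical configuration text illustrating how pre-processing, channel
# population, and post-processing settings could be standardized in JSON.
# Every key below is an assumption for illustration, not CERR's real schema.
config_text = """
{
  "preprocess": {
    "resample_mm": [1.0, 1.0, 2.5],
    "intensity_window": [-1000, 1000]
  },
  "channels": {"count": 3, "slices": ["prev", "current", "next"]},
  "postprocess": {"largest_connected_component": true}
}
"""

config = json.loads(config_text)
# A deployment script would read these settings before running the model:
print(config["preprocess"]["resample_mm"])
print(config["postprocess"]["largest_connected_component"])
```

Keeping these settings in a declarative file, rather than in code, is what lets the same container be reused across models and operating systems.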
Conclusions: This work presented a comprehensive, open-source framework for deploying deep learning-based medical image segmentation models. The framework has been used to translate the developed models to the clinic and to enable reproducible, consistent image segmentation across institutions, facilitating multi-institutional outcomes modeling studies.

2020 ◽  
Vol 10 (19) ◽  
pp. 6838
Author(s):  
Chen Li ◽  
Wei Chen ◽  
Yusong Tan

Malignant lesions are a major threat to human health and have a high mortality rate. Locating the contours of organs is a preparatory step that helps doctors diagnose correctly, so there is an urgent clinical need for segmentation models specifically designed for medical imaging. However, most current medical image segmentation models are migrated directly from natural image segmentation models, ignoring characteristics of medical images such as false positives and the blurred-boundary problem in 3D volume data. Research on organ segmentation models for medical images therefore remains challenging and demanding. As a consequence, we redesigned a 3D convolutional neural network (CNN) based on 3D U-Net and adopted the rendering method from computer graphics for 3D medical image segmentation, named Render 3D U-Net. This network adopts a subdivision-based point-sampling method in place of the original upsampling method to render high-quality boundaries. In addition, Render 3D U-Net integrates the point-sampling method into the 3D ANU-Net architecture under deep supervision. Meanwhile, to reduce false positives in clinical diagnosis and achieve more accurate segmentation, Render 3D U-Net includes a specially designed module for screening out false positives. Finally, three public challenge datasets (MICCAI 2017 LiTS, MICCAI 2019 KiTS, and ISBI 2019 SegTHOR) were selected to evaluate performance on the target organs. Compared with other models, Render 3D U-Net improved performance on both whole-organ and boundary segmentation in CT image segmentation tasks, including the liver, kidney, and heart.
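The core of subdivision-based point sampling can be sketched in a few lines: instead of upsampling everywhere, refinement is concentrated on the voxels whose predicted foreground probability is most uncertain (closest to 0.5), which cluster along object boundaries. This toy sketch illustrates only that selection step, with a random array standing in for a real probability map; it is not the paper's actual implementation:

```python
import numpy as np

# Stand-in for a 3D foreground-probability map produced by a coarse network.
rng = np.random.default_rng(0)
prob = rng.random((8, 8, 8))

# Select the N most uncertain voxels: those with probability nearest 0.5.
n_points = 16
uncertainty = -np.abs(prob - 0.5)               # higher = closer to 0.5
flat_idx = np.argsort(uncertainty.ravel())[-n_points:]
points = np.unravel_index(flat_idx, prob.shape)

# `points` indexes the 16 most boundary-like voxels; a point head would
# re-predict labels only at these locations instead of upsampling globally.
print(np.abs(prob[points] - 0.5).max())
```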


2020 ◽  
Vol 64 (2) ◽  
pp. 20508-1-20508-12 ◽  
Author(s):  
Getao Du ◽  
Xu Cao ◽  
Jimin Liang ◽  
Xueli Chen ◽  
Yonghua Zhan

Abstract Medical image analysis solves clinical problems by analyzing images obtained from medical imaging systems; its purpose is to extract effective information and improve the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used, where a neural network automatically learns image features, in sharp contrast with traditional hand-crafted approaches. U-Net is one of the most important semantic segmentation frameworks based on a convolutional neural network (CNN). It is widely used in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can not only accurately segment the desired target and effectively process and objectively evaluate medical images but also help to improve diagnostic accuracy. Therefore, this article presents a literature review of medical image segmentation based on U-Net, focusing on successful applications of U-Net to different lesion regions across six medical imaging systems. Along with the latest advances in DL, the article covers methods that combine the original U-Net architecture with other deep learning techniques, as well as methods for improving the U-Net network itself.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Shaolei Lang ◽  
Yinxia Xu ◽  
Liang Li ◽  
Bin Wang ◽  
Yang Yang ◽  
...  

In recent years, the incidence of thyroid nodules has increased year by year, and they have become one of the important diseases endangering human health. Deep learning-based ultrasound imaging is widely used in clinical diagnosis because it is inexpensive and radiation-free. Using image processing technology to accurately segment the nodule area provides important auxiliary information for the doctor's diagnosis, which is of great value in guiding clinical treatment. The purpose of this article is to explore the application value of combined detection of abnormal sugar-chain glycoprotein (TAP) and carcinoembryonic antigen (CEA) in the risk estimation of thyroid cancer in patients with thyroid nodules of type IV and above, based on deep learning medical images. This paper takes ultrasound thyroid images as the research content, uses the active contour level set method as the basis of segmentation, and proposes a deep learning-based image segmentation algorithm, Fast-SegNet, which extends a network model mainly used for thyroid medical image segmentation to a wider range of segmentation tasks. From January 2019 to October 2020, 400 patients with thyroid nodules of type IV and above were selected for physical examination and screening at the Health Management Center of our hospital and were diagnosed with thyroid cancer by pathological examination of thyroid nodules under B-ultrasound positioning.
The detection rates of thyroid cancer in patients with thyroid nodules of type IV and above were compared; serum TAP and CEA levels were measured; RT-PCR was used to detect TTF-1, PTEN, and NIS expression; and the detection, missed-diagnosis, and misdiagnosis rates and the diagnostic efficiency of the three detection methods were compared. The thyroid nodule regions segmented from deep learning medical images were compared experimentally against the CV, LBF, and DRLSE models. The experimental results show that the segmentation overlap rate of this method reaches 98.4%, indicating that the proposed algorithm extracts the thyroid nodule area more accurately.


10.29007/r6cd ◽  
2022 ◽  
Author(s):  
Hoang Nhut Huynh ◽  
My Duyen Nguyen ◽  
Thai Hong Truong ◽  
Quoc Tuan Nguyen Diep ◽  
Anh Tu Tran ◽  
...  

Segmentation is one of the most common methods for analyzing and processing medical images, assisting doctors in making accurate diagnoses by providing detailed information about the body part of interest. However, segmenting medical images presents a number of challenges: it requires trained medical professionals, and it is time-consuming and error-prone. As a result, an automated medical image segmentation system is needed. Deep learning algorithms have recently demonstrated superior performance for segmentation tasks, particularly semantic segmentation networks that provide a pixel-level understanding of images. U-Net is one of the modern benchmark networks in the field of medical imaging, and several segmentation networks have been built on its foundation, including the recurrent residual convolutional neural network based on U-Net (R2U-Net), which incorporates recurrent residual convolutional units. R2U-Net is used here to perform trachea and bronchus segmentation on a dataset of 36,000 images. Across a variety of experiments, the proposed segmentation achieved a Dice coefficient of 0.8394 on the test dataset. Finally, a number of research issues are raised, indicating the need for future improvements.
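The Dice coefficient reported above (0.8394) is the standard overlap metric for comparing a predicted mask against the ground truth: twice the intersection divided by the sum of the two mask sizes. A minimal NumPy implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: prediction covers 4 pixels, ground truth covers 6,
# with 4 pixels overlapping.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*4 / (4+6) = 0.8
```

The small `eps` term keeps the metric defined when both masks are empty, a common convention in segmentation evaluation code.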


2019 ◽  
Author(s):  
Aditya P. Apte ◽  
Aditi Iyer ◽  
Maria Thor ◽  
Rutu Pandya ◽  
Rabia Haq ◽  
...  

An open-source library of implementations of deep learning-based image segmentation and outcomes models is presented in this work. As oncology treatment planning becomes increasingly driven by automation, such a library of model implementations is crucial to (i) validate existing models on datasets collected at different institutions, (ii) automate segmentation, (iii) create ensembles for improved performance, and (iv) incorporate validated models into the clinical workflow. The library was developed with the Computational Environment for Radiological Research (CERR) software platform. CERR is a natural choice for centralizing model implementations due to its comprehensiveness, popularity, and ease of use. CERR provides well-validated feature extraction for radiotherapy dosimetry and radiomics with fine control over the calculation settings, allowing users to select the feature calculation appropriate to the model's derivation. Models for automatic image segmentation are distributed via Singularity containers, with seamless I/O to and from CERR. Singularity containers allow segmentation models to be deployed on a variety of scientific computing architectures. Deployment of models is driven by a JSON configuration file, making it convenient to plug in models. Models from the library can be called programmatically for batch evaluation. The library includes implementations of popular radiotherapy models outlined in the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) effort and in recently published literature. Radiomics models include Image Biomarker Standardization Initiative features found to be important across multiple sites and image modalities. Deep learning-based image segmentation models include state-of-the-art networks such as DeepLab and other problem-specific architectures. The library is distributed as GPL-licensed software at https://www.github.com/cerr/CERR.


2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to some degree. In this work, we explore the feasibility of applying deep learning methods on a pilot dataset. We present a simple and practical approach to perform segmentation in a 2D, slice-by-slice manner based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance from regions of interest. We start from two popular segmentation networks: the preferred model for medical segmentation, U-Net, and a general-purpose model, DeepLabV3+. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits, testing our approach on two different datasets: the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach on the challenge. Our approach could serve as a building block of a computer-aided diagnostic system in a clinical setting.
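The simplest form of ensembling two segmentation networks is to average their per-pixel foreground probabilities before thresholding. The sketch below illustrates that idea with random arrays standing in for the two models' outputs; the paper's actual ensembling scheme may weight or combine the models differently:

```python
import numpy as np

# Stand-ins for per-pixel foreground probabilities from two architectures
# (in a real pipeline these would come from U-Net and DeepLabV3+ inference).
rng = np.random.default_rng(42)
p_unet = rng.random((128, 128))
p_deeplab = rng.random((128, 128))

# Probability-level ensembling: average the maps, then threshold once.
p_ensemble = (p_unet + p_deeplab) / 2.0
mask = p_ensemble > 0.5          # final binary segmentation

print(mask.shape, mask.dtype)
```

Averaging before thresholding (rather than voting on the two binary masks) preserves each model's confidence, so a pixel one model is very sure about can outweigh the other's weak disagreement.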


2006 ◽  
Vol 40 (3) ◽  
pp. 286-295 ◽  
Author(s):  
Andrew Buxton

Purpose: To review the variety of software solutions available for putting CDS/ISIS databases on the internet, and to help anyone considering which route to take. Design/methodology/approach: Briefly describes the characteristics, history, origin, and availability of each package; identifies the type of skills required to implement the package and the kind of application it is suited to. Covers the CDS/ISIS Unix version, JavaISIS, IsisWWW, WWWISIS Versions 3 and 5, Genisis, IAH, WWW-ISIS, and OpenIsis. Findings: There is no obvious single "best" solution. Several are free but may require more investment in acquiring the skills to install and configure them. The choice will depend on the user's experience with the CDS/ISIS formatting language, HTML, programming languages, operating systems, open source software, and so on. Originality/value: There is detailed documentation available for most of these packages, but little previous guidance to help potential users distinguish and choose between them.


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 268
Author(s):  
Yeganeh Jalali ◽  
Mansoor Fateh ◽  
Mohsen Rezvani ◽  
Vahid Abolghasemi ◽  
Mohammad Hossein Anisi

Lung CT image segmentation is a key step in many applications such as lung cancer detection. It is considered a challenging problem due to similar image densities in the pulmonary structures and differences among scanners and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore lack accuracy; another shortcoming is their high false-positive rate. In recent years, several deep learning-based approaches have been applied effectively to medical image segmentation, among which U-Net has seen great success. In this paper, we propose a deep neural network architecture for automatic lung CT image segmentation. In the proposed method, extensive preprocessing is applied to the raw CT images. Ground truths corresponding to these images are then extracted via morphological operations and manual refinement. Finally, the prepared images and their ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module, instead of simple concatenation, to merge the feature maps extracted from the corresponding contracting path into the previous expansion of the up-convolutional layer. A densely connected convolutional layer is also utilized in the contracting path. The results of our extensive experiments on lung CT images (the LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2107
Author(s):  
Xin Wei ◽  
Huan Wan ◽  
Fanghua Ye ◽  
Weidong Min

In recent years, medical image segmentation (MIS) has made huge breakthroughs due to the success of deep learning. However, existing MIS algorithms still suffer from two types of uncertainty: (1) uncertainty over the plausible segmentation hypotheses and (2) uncertainty over segmentation performance. Both affect the effectiveness of an MIS algorithm and, in turn, the reliability of medical diagnosis. Many studies have addressed the former but ignore the latter. We therefore propose the hierarchical predictable segmentation network (HPS-Net), which consists of a new network structure, a new loss function, and a cooperative training mode. To the best of our knowledge, HPS-Net is the first network in the MIS area that can generate both diverse segmentation hypotheses, addressing the uncertainty of the plausible hypotheses, and performance predictions for those hypotheses, addressing the uncertainty of segmentation performance. Extensive experiments were conducted on the LIDC-IDRI and ISIC2018 datasets. The results show that HPS-Net achieves the highest Dice score among the benchmark methods, i.e., the best segmentation performance. The results also confirm that the proposed HPS-Net can effectively predict TNR and TPR.
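The TNR and TPR that HPS-Net predicts are the standard specificity and sensitivity of a binary segmentation against its ground truth. A minimal NumPy computation of both, from confusion-matrix counts:

```python
import numpy as np

def tpr_tnr(pred, target):
    """Sensitivity (TPR) and specificity (TNR) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.sum(pred & target)     # foreground correctly predicted
    tn = np.sum(~pred & ~target)   # background correctly predicted
    fn = np.sum(~pred & target)    # foreground missed
    fp = np.sum(pred & ~target)    # background wrongly marked foreground
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: ground truth has 2 foreground pixels, prediction finds 1.
gt = np.array([[1, 1, 0, 0]])
pr = np.array([[1, 0, 0, 0]])
print(tpr_tnr(pr, gt))  # (0.5, 1.0)
```

Predicting these two rates per hypothesis is what lets a system like the one described flag segmentations whose expected quality is too low to trust.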

