Segmentation Trachea and Bronchial Branches in Chest Computed Tomography Image by Deep Learning - preliminary results -

10.29007/r6cd ◽  
2022 ◽  
Author(s):  
Hoang Nhut Huynh ◽  
My Duyen Nguyen ◽  
Thai Hong Truong ◽  
Quoc Tuan Nguyen Diep ◽  
Anh Tu Tran ◽  
...  

Segmentation is one of the most common methods for analyzing and processing medical images, assisting doctors in making accurate diagnoses by providing detailed information about the body part of interest. However, manual segmentation of medical images presents several challenges: it requires trained medical professionals, is time-consuming, and is prone to error. An automated medical image segmentation system is therefore desirable. Deep learning algorithms have recently demonstrated superior performance on segmentation tasks, particularly semantic segmentation networks that provide a pixel-level understanding of images. U-Net is one of the most influential networks in the field of medical imaging, and several segmentation networks have been built on its foundation, including the recurrent residual convolutional neural network based on U-Net (R2U-Net), which augments U-Net with recurrent residual convolutional units. In this work, R2U-Net is used to perform trachea and bronchial segmentation on a dataset of 36,000 images. Across a variety of experiments, the proposed segmentation achieved a Dice coefficient of 0.8394 on the test dataset. Finally, a number of open research issues are raised, indicating directions for future improvement.
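For reference, the reported Dice coefficient measures the overlap between a predicted mask and the ground-truth annotation. A minimal NumPy sketch, using hypothetical toy masks, follows:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: a predicted airway mask vs. a reference annotation
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
truth = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
print(dice_coefficient(pred, truth))  # ~0.667
```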

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background: The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionality for the straightforward setup of medical image segmentation pipelines. Already-implemented pipelines are commonly standalone software optimized on a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation: The aim of MIScnn is to provide an intuitive API that allows fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). Likewise, high configurability and multiple open interfaces allow full pipeline customization. Results: Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions: With this experiment, we show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline with just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
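The following sketch, adapted from the MIScnn quick-start documented in its repository, illustrates the few-lines-of-code claim; module paths, defaults, and the local data path are version-dependent assumptions:

```python
import miscnn
# NIfTI I/O interface for KiTS19-style CT volumes: 1 channel, 3 classes
from miscnn.data_loading.interfaces import NIFTI_interface

interface = NIFTI_interface(pattern="case_000[0-9]*", channels=1, classes=3)
data_io = miscnn.Data_IO(interface, "kits19/data/")  # hypothetical local path

# Preprocessor: patch-wise analysis with on-the-fly patch cropping
pp = miscnn.Preprocessor(data_io, batch_size=2, analysis="patchwise-crop",
                         patch_shape=(80, 160, 160))

# Neural network with the default standard 3D U-Net architecture
model = miscnn.Neural_Network(preprocessor=pp)

# Train on all available samples
sample_list = data_io.get_indiceslist()
model.train(sample_list, epochs=10)
```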


2021 ◽  
Vol 7 ◽  
pp. e607
Author(s):  
Ayat Abedalla ◽  
Malak Abdullah ◽  
Mahmoud Al-Ayyoub ◽  
Elhadj Benkhelifa

Medical imaging refers to visualization techniques that provide valuable information about the internal structures of the human body for clinical applications, diagnosis, treatment, and scientific research. Segmentation is one of the primary methods for analyzing and processing medical images, and it helps doctors diagnose accurately by providing detailed information on the body part of interest. However, segmenting medical images faces several challenges, such as requiring trained medical experts and being time-consuming and error-prone; an automatic medical image segmentation system therefore appears necessary. Deep learning algorithms have recently shown outstanding performance for segmentation tasks, especially semantic segmentation networks that provide pixel-level image understanding. Since the introduction of the first fully convolutional network (FCN) for semantic image segmentation, several segmentation networks have been proposed on its basis. One of the state-of-the-art convolutional networks in the medical imaging field is U-Net. This paper presents a novel end-to-end semantic segmentation model for medical images, named Ens4B-UNet, that ensembles four U-Net architectures with pre-trained backbone networks. Ens4B-UNet builds on U-Net's success with several significant improvements: it adapts powerful and robust convolutional neural networks (CNNs) as backbones for the U-Net encoders and uses nearest-neighbor up-sampling in the decoders. Ens4B-UNet is designed as a weighted-average ensemble of four encoder-decoder segmentation models. The backbone networks of all ensembled models are pre-trained on the ImageNet dataset to exploit the benefits of transfer learning. To improve our models, we apply several training and prediction techniques, including stochastic weight averaging (SWA), data augmentation, test-time augmentation (TTA), and different types of optimal thresholds. We evaluate and test our models on the 2019 Pneumothorax Challenge dataset, which contains 12,047 training images with 12,954 masks and 3,205 test images. Our proposed segmentation network achieves a 0.8608 mean Dice similarity coefficient (DSC) on the test set, which places it among the top 1% of systems in the Kaggle competition.
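Two of the listed techniques, weighted-average ensembling and test-time augmentation, reduce to a few lines of NumPy. The sketch below assumes Keras-style models with a predict method returning per-pixel probability maps; the model list, weights, and threshold are placeholders:

```python
import numpy as np

def tta_predict(model, image):
    """Average predictions over identity and horizontal flip (simple TTA)."""
    p = model.predict(image[None])[0]
    flipped = np.ascontiguousarray(image[:, ::-1])  # flip width axis
    p_flip = model.predict(flipped[None])[0][:, ::-1]  # flip prediction back
    return (p + p_flip) / 2.0

def ensemble_predict(models, weights, image, threshold=0.5):
    """Weighted average of per-model probability maps, then binarize."""
    w = np.asarray(weights, dtype=float)
    probs = np.stack([tta_predict(m, image) for m in models])
    fused = np.tensordot(w / w.sum(), probs, axes=1)  # weighted mean over models
    return (fused > threshold).astype(np.uint8)
```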


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0247388
Author(s):  
Jingfei Hu ◽  
Hua Wang ◽  
Jie Wang ◽  
Yunqi Wang ◽  
Fang He ◽  
...  

Semantic segmentation of medical images provides an important cornerstone for subsequent tasks of image analysis and understanding. With rapid advancements in deep learning methods, conventional U-Net segmentation networks have been applied in many fields. Based on exploratory experiments, features at multiple scales have been found to be of great importance for the segmentation of medical images. In this paper, we propose a scale-attention deep learning network (SA-Net), which extracts features at different scales in a residual module and uses an attention module to enforce the scale-attention capability. SA-Net can better learn multi-scale features and achieve more accurate segmentation for different medical images. In addition, this work validates the proposed method across multiple datasets. The experimental results show that SA-Net achieves excellent performance in vessel detection in retinal images, lung segmentation, artery/vein (A/V) classification in retinal images, and blastocyst segmentation. To facilitate SA-Net's use by the scientific community, the code implementation will be made publicly available.
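The abstract does not specify the exact SA-Net modules, but the core idea, multi-scale features from parallel convolutions reweighted by an attention mechanism inside a residual unit, can be sketched as follows (an illustrative PyTorch block, not the authors' implementation):

```python
import torch
import torch.nn as nn

class ScaleAttentionBlock(nn.Module):
    """Illustrative residual block: parallel multi-scale convolutions
    followed by squeeze-and-excitation-style channel attention."""
    def __init__(self, channels):
        super().__init__()
        # Branches with different receptive fields extract multi-scale features
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)])
        # Attention reweights the concatenated scale channels
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(3 * channels, 3 * channels, kernel_size=1),
            nn.Sigmoid())
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        weighted = multi * self.attn(multi)  # scale-attention reweighting
        return x + self.fuse(weighted)       # residual connection
```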


2020 ◽  
Vol 64 (2) ◽  
pp. 20508-1-20508-12 ◽  
Author(s):  
Getao Du ◽  
Xu Cao ◽  
Jimin Liang ◽  
Xueli Chen ◽  
Yonghua Zhan

Abstract Medical image analysis solves clinical problems by analyzing images obtained from medical imaging systems, with the purpose of extracting effective information and improving the level of clinical diagnosis. In recent years, automatic segmentation based on deep learning (DL) methods has been widely used: a neural network automatically learns image features, in sharp contrast with traditional hand-crafted approaches. U-Net is one of the most important convolutional neural network (CNN) frameworks for semantic segmentation. It is widely used in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. The advantage of this network framework is that it can not only accurately segment the desired feature targets and effectively process and objectively evaluate medical images, but also help to improve the accuracy of image-based diagnosis. Therefore, this article presents a literature review of medical image segmentation based on U-Net, focusing on the successful segmentation experience of U-Net for different lesion regions across six medical imaging systems. Along with the latest advances in DL, this article reviews methods that combine the original U-Net architecture with newer deep learning techniques, as well as methods for improving the U-Net network.
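The encoder-decoder-with-skip-connections pattern that the review centers on can be condensed to a toy two-level network; this is a generic sketch of the U-Net idea, not any specific variant from the surveyed literature:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Toy two-level U-Net showing the skip-connection pattern."""
    def __init__(self, in_ch=1, classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)   # 32 = 16 upsampled + 16 skipped
        self.head = nn.Conv2d(16, classes, 1)

    def forward(self, x):
        e = self.enc(x)                     # encoder features, kept for the skip
        m = self.mid(self.down(e))          # bottleneck at half resolution
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip concatenation
        return self.head(d)                 # per-pixel class logits
```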


2021 ◽  
Author(s):  
Aditi Iyer ◽  
Eve Locastro ◽  
Aditya Apte ◽  
Harini Veeraraghavan ◽  
Joseph O Deasy

Purpose: This work presents a framework for deploying deep learning image segmentation models for medical images across different operating systems and programming languages. Methods: The Computational Environment for Radiological Research (CERR) platform was extended for deploying deep learning-based segmentation models, leveraging CERR's existing functionality for radiological data import, transformation, management, and visualization. The framework is compatible with MATLAB as well as GNU Octave and Python for license-free use. Pre- and post-processing configurations, including parameters for pre-processing images, populating channels, and post-processing segmentations, were standardized using the JSON format. CPU and GPU implementations of pre-trained deep learning segmentation models were packaged using Singularity containers for use on Linux and as Conda environment archives for the Windows, macOS, and Linux operating systems. The framework accepts images in various formats, including DICOM and CERR's planC, and outputs segmentations in various formats, including DICOM RTSTRUCT and planC objects. The ability to access the results readily in planC format enables visualization as well as radiomics and dosimetric analysis. The framework can also be deployed in clinical software such as MIM via their extensions. Results: The open-source, GPL-licensed framework developed in this work has been successfully used to deploy five in-house developed and published deep learning segmentation models. These models span various treatment sites (H&N, lung, and prostate) and modalities (CT, MR). Documentation for their usage and a demo workflow are provided at https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. The framework has also been used in clinical workflows for segmenting images for treatment planning and for segmenting publicly available large datasets for outcomes studies. Conclusions: This work presented a comprehensive, open-source framework for deploying deep learning-based medical image segmentation models. The framework has been used to translate the developed models to the clinic and to enable reproducible and consistent image segmentation across institutions, facilitating multi-institutional outcomes-modeling studies.
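The abstract does not reproduce the JSON schema, so the snippet below is a hypothetical illustration of the kind of standardized pre/post-processing configuration described; CERR's actual key names are documented in the repository wiki and will differ:

```python
import json

# Hypothetical configuration: all key names below are assumptions, shown
# only to illustrate JSON-standardized segmentation pipeline settings.
config = {
    "preprocessing": {
        "resample_mm": [1.0, 1.0, 2.0],        # voxel resampling
        "intensity_window": [-1000, 1000],     # CT windowing
    },
    "channels": {"count": 1, "source": "CT"},  # channel population
    "postprocessing": {"largest_connected_component": True},
    "model": {"site": "Lung", "weights": "model_weights.h5"},
}

with open("segmentation_config.json", "w") as f:
    json.dump(config, f, indent=2)
```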


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Shaolei Lang ◽  
Yinxia Xu ◽  
Liang Li ◽  
Bin Wang ◽  
Yang Yang ◽  
...  

In recent years, the incidence of thyroid nodules has increased year by year, and they have become one of the important diseases endangering human health. Ultrasound imaging analyzed with deep learning is widely used in clinical diagnosis because it is inexpensive and radiation-free. Using image processing technology to accurately segment the nodule area provides important auxiliary information for the doctor's diagnosis, which is of great value in guiding clinical treatment. The purpose of this article is to explore the application value of the combined detection of abnormal sugar-chain glycoprotein (TAP) and carcinoembryonic antigen (CEA) in estimating the risk of thyroid cancer in patients with thyroid nodules of type IV and above, based on deep learning medical images. This paper takes ultrasound thyroid images as the research content, uses the active contour level set method as the basis of segmentation, and proposes Fast-SegNet, a deep learning image segmentation algorithm that extends a network model mainly used for thyroid medical image segmentation to a wider range of segmentation tasks. From January 2019 to October 2020, 400 patients with thyroid nodules of type IV and above were selected during physical examination and screening at the Health Management Center of our hospital and were diagnosed with thyroid cancer by pathological examination of thyroid nodules under B-ultrasound positioning. The detection rates of thyroid cancer in patients with thyroid nodules of type IV and above were compared; serum TAP and CEA levels were measured; RT-PCR was used to detect TTF-1, PTEN, and NIS expression; and the detection rate, missed-diagnosis rate, misdiagnosis rate, and diagnostic efficiency of the three detection methods were compared. The thyroid nodule regions segmented from deep learning medical images were compared experimentally against the CV, LBF, and DRLSE models. The experimental results show that the segmentation overlap rate of this method is as high as 98.4%, indicating that the proposed algorithm extracts the thyroid nodule area more accurately.
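For context, the CV baseline mentioned above is the classical Chan-Vese active-contour model; scikit-image ships a morphological implementation of it, sketched here on a placeholder image (the iteration count is passed positionally because its keyword name varies across scikit-image versions):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Placeholder for a preprocessed ultrasound B-mode slice
image = np.random.rand(128, 128)

# Evolve the level set for 50 iterations from a checkerboard initialization
mask = morphological_chan_vese(image, 50,
                               init_level_set="checkerboard", smoothing=2)
print(mask.sum(), "foreground pixels")  # binary segmentation result
```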


2021 ◽  
Author(s):  
En Zhou Ye ◽  
En Hui Ye ◽  
Run Zhou Ye

Introduction: Analysis of multimodal medical images often requires the selection of one or many anatomical regions of interest (ROIs) for the extraction of useful statistics. This task can prove laborious when a manual approach is used. We have previously developed a user-friendly software tool for image-to-image translation using deep learning. Therefore, we present herein an update to the DeepImageTranslator software with the addition of a tool for multimodal medical image segmentation analysis (hereby referred to as the MMMISA). Methods: The MMMISA was implemented using the Tkinter library; backend computations were implemented using the Pydicom, Numpy, and OpenCV libraries. We tested our software using 4,188 whole-body axial 2-deoxy-2-[18F]-fluoroglucose positron emission tomography/computed tomography ([18F]-FDG-PET/CT) slices of 10 patients from the ACRIN-HNSCC (American College of Radiology Imaging Network-Head and Neck Squamous Cell Carcinoma) database. Using the deep learning software DeepImageTranslator, a model was trained with 36 randomly selected CT slices and manually labelled semantic segmentation maps. The trained model was then used to segment all CT scans of the 10 HNSCC patients with high accuracy. Segmentation maps generated using the deep convolutional network were then used to measure organ-specific [18F]-FDG uptake. We also compared measurements made using the MMMISA with those made using manually selected ROIs. Results: The MMMISA is a tool that allows users to select ROIs based on deep learning-generated segmentation maps and to compute accurate statistics for these ROIs based on coregistered multimodal images. We found that organ-specific [18F]-FDG uptake measured using multiple manually selected ROIs is concordant with whole-tissue measurements made with segmentation maps using the MMMISA tool.
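The ROI-statistics idea reduces to masked array operations: apply the network's label map to the coregistered PET volume and average the voxels of each organ. A minimal NumPy sketch with hypothetical arrays and label values:

```python
import numpy as np

# Placeholder volumes: a PET uptake array and a coregistered label map
pet = np.random.rand(40, 128, 128)            # hypothetical SUV values
labels = np.random.randint(0, 3, pet.shape)   # 0=background, 1 and 2 = organs

def roi_mean_uptake(pet_volume, label_map, label):
    """Mean PET value over all voxels assigned to a segmentation label."""
    mask = label_map == label
    return float(pet_volume[mask].mean())

print(roi_mean_uptake(pet, labels, label=1))  # organ-specific mean uptake
```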

