Intelligent Algorithms-Based CT Image Segmentation in Patients with Cardiovascular Diseases and Realization of Visualization Algorithms

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xianhua Huang

The study focused on intelligent algorithms-based segmentation of computed tomography (CT) images of patients with cardiovascular diseases (CVD) and the realization of visualization algorithms. The first step was to design a precise segmentation method under the cylinder model, based on the coarsely segmented coronary volume data; the principles of different visualization algorithms were then discussed. The results showed that the precise segmentation method can effectively eliminate most of the branches and calcified lesions; curved planar reformation (CPR) and straightened CPR can display the entire blood vessel on one image; and spherical CPR can display the complete coronary artery tree on a single image, so that a problem with a particular blood vessel can be quickly found. In conclusion, the precise segmentation of CT images of CVD and the cylinder-model-based visualization algorithms have clinical significance in the diagnosis of CVD.
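As a rough illustration of the straightened CPR visualization mentioned above, the sketch below resamples a CT volume along a pre-extracted coronary centerline into a single 2D image; the centerline input, the fixed reference direction, and the sampling parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of straightened curved planar reformation (CPR), assuming a
# hypothetical pre-extracted coronary centerline; not the paper's actual code.
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_cpr(volume, centerline, half_width=20, spacing=1.0):
    """Resample a 3D CT volume along a centerline into a 2D straightened image.

    volume     : 3D numpy array (z, y, x) of CT intensities.
    centerline : (N, 3) array of ordered voxel coordinates along the vessel.
    half_width : number of samples taken on each side of the centerline.
    """
    centerline = np.asarray(centerline, dtype=float)
    # Tangent vectors along the centerline (finite differences).
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    up = np.array([0.0, 0.0, 1.0])  # fixed reference direction (simplification)
    rows = []
    for point, tangent in zip(centerline, tangents):
        # One in-plane direction perpendicular to the vessel tangent.
        normal = np.cross(tangent, up)
        if np.linalg.norm(normal) < 1e-6:            # tangent parallel to 'up'
            normal = np.cross(tangent, [0.0, 1.0, 0.0])
        normal /= np.linalg.norm(normal)

        offsets = np.arange(-half_width, half_width + 1) * spacing
        samples = point[None, :] + offsets[:, None] * normal[None, :]
        # map_coordinates expects coordinates as (3, M) in array index order.
        rows.append(map_coordinates(volume, samples.T, order=1))
    return np.stack(rows)  # shape: (N, 2 * half_width + 1)
```

Each row of the output corresponds to one centerline point, so the entire vessel appears on a single image, as described in the abstract.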

2018 ◽  
Vol 11 (06) ◽  
pp. 1850037
Author(s):  
Ling-ling Cui ◽  
Hui Zhang

In order to effectively improve the pathological diagnosis capability and feature resolution of 3D human brain CT images, a threshold segmentation method for multi-resolution 3D human brain CT images based on edge pixel grayscale feature decomposition is proposed in this paper. In this method, the original 3D human brain image information is first collected, CT image filtering is applied to the collected data through the gradient value decomposition method, and the edge contour features of the 3D human brain CT image are extracted. Then, the threshold segmentation method is adopted to segment the regional pixel feature blocks of the 3D human brain CT image, partitioning the image into block vectors with high-resolution feature points, and the 3D human brain CT image is reconstructed with the salient feature points as centers. Simulation results show that the proposed method achieves an accuracy of 100% when the signal-to-noise ratio is 0 and that the accuracy remains stable at 100% as the signal-to-noise ratio increases. Comparison results show that the proposed threshold segmentation method is significantly better than traditional methods in pathological feature estimation accuracy, and that it effectively improves rapid pathological diagnosis and positioning recognition for CT images.
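The sketch below illustrates the general idea of combining gradient-based edge features with threshold segmentation; the paper's multi-resolution decomposition and reconstruction steps are not reproduced, and the region-filtering heuristic is an assumption.

```python
# Minimal sketch of edge-guided threshold segmentation of a CT slice; the
# multi-resolution decomposition of the paper is not reproduced here.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def edge_guided_threshold_segmentation(ct_slice, smoothing_sigma=1.0):
    """Segment a 2D CT slice using gradient edges and a global threshold."""
    # Denoise, then compute the gradient magnitude to emphasize edge pixels.
    smoothed = ndimage.gaussian_filter(ct_slice.astype(float), smoothing_sigma)
    grad_mag = ndimage.gaussian_gradient_magnitude(smoothed, sigma=1.0)

    # Global intensity threshold, refined by keeping only regions whose
    # borders coincide with strong gradients (edge pixel features).
    intensity_mask = smoothed > threshold_otsu(smoothed)
    edge_mask = grad_mag > threshold_otsu(grad_mag)

    labels, n = ndimage.label(intensity_mask)
    keep = np.zeros(n + 1, dtype=bool)
    for region in range(1, n + 1):
        region_mask = labels == region
        border = ndimage.binary_dilation(region_mask) ^ region_mask
        keep[region] = edge_mask[border].mean() > 0.1  # heuristic cut-off
    return keep[labels]
```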


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. For the test dataset, the diagnostic performance in diagnosing COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers. RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models in the testing dataset. In the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
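As a rough sketch of the transfer-learning setup described above, the code below builds a frozen ResNet50 backbone with a three-class softmax head in Keras; the input size, dropout rate, optimizer, and learning rate are illustrative assumptions, not the FCONet training recipe.

```python
# Transfer-learning sketch: ImageNet-pretrained ResNet50 backbone with a
# three-class head (COVID-19 pneumonia / other pneumonia / non-pneumonia).
import tensorflow as tf

def build_classifier(input_shape=(256, 256, 3), num_classes=3):
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False  # freeze pre-trained weights for the first stage

    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.resnet50.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```

Swapping the backbone for VGG16, InceptionV3, or Xception only requires changing the application class and its matching preprocess_input call.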


2021 ◽  
Vol 17 (4) ◽  
pp. 1-16
Author(s):  
Xiaowei Xu ◽  
Jiawei Zhang ◽  
Jinglan Liu ◽  
Yukun Ding ◽  
Tianchen Wang ◽  
...  

As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases the cancer risk to patients. However, CT image quality is directly related to radiation dose, and thus it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising tries to obtain high-dose-like high-quality CT images (domain Y) from low-dose low-quality CT images (domain X), which can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistent loss without the need for paired training data, since paired data are hard to collect due to patients' interests and cardiac motion. However, out of concern for patients' privacy and data security, protocols typically require clinics to perform medical image processing tasks, including CT image denoising, locally (i.e., edge denoising). Therefore, the network models need to achieve high performance under various computation resource constraints, including memory and processing capability. Our detailed analysis of CCADN raises a number of interesting questions that point to potential ways to further improve its performance using the same or even fewer computation resources. For example, if the noise is large, leading to a significant difference between domain X and domain Y, can we bridge X and Y with an intermediate domain Z such that both the denoising process between X and Z and that between Z and Y are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency? Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, whereas the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms CCADN in terms of denoising quality with slightly lower computation resource consumption.
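The sketch below shows, in PyTorch, one way the local and global cycle-consistency terms over an intermediate domain Z could be written; the generators passed in are hypothetical placeholders, and MCCAN's actual architectures, adversarial losses, and loss weights are omitted.

```python
# Local and global cycle-consistency losses over an intermediate domain Z
# (X -> Z -> Y); generators are placeholders for the actual network modules.
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_losses(x, g_xz, g_zy, g_yz, g_zx):
    """x: batch of noisy (low-dose) images from domain X.
    g_xz, g_zy map forward (X -> Z, Z -> Y); g_yz, g_zx map backward."""
    z = g_xz(x)   # partially denoised intermediate image
    y = g_zy(z)   # fully denoised image

    # Local cycle-consistency: each pair of adjacent domains must reconstruct.
    local = l1(g_zx(z), x) + l1(g_yz(y), z)

    # Global cycle-consistency: the full chain X -> Z -> Y -> Z -> X must
    # reproduce the input, coupling all generators together.
    global_ = l1(g_zx(g_yz(y)), x)
    return local, global_
```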


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 268
Author(s):  
Yeganeh Jalali ◽  
Mansoor Fateh ◽  
Mohsen Rezvani ◽  
Vahid Abolghasemi ◽  
Mohammad Hossein Anisi

Lung CT image segmentation is a key process in many applications such as lung cancer detection. It is considered a challenging problem due to similar image densities in the pulmonary structures and differences between scanner types and scanning protocols. Most current semi-automatic segmentation methods rely on human intervention and therefore may suffer from a lack of accuracy. Another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been effectively applied in medical image segmentation. Among existing deep neural networks, the U-Net has provided great success in this field. In this paper, we propose a deep neural network architecture to perform automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Then, ground truths corresponding to these images are extracted via morphological operations and manual refinements. Finally, all the prepared images with their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module instead of simple traditional concatenation, merging the feature maps extracted from the corresponding contracting path with the output of the previous up-convolutional layer in the expansion path. Finally, a densely connected convolutional layer is utilized for the contracting path. The results of our extensive experiments on lung CT images (LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.
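For reference, the Dice coefficient reported above can be computed from a predicted mask and its ground truth as in the sketch below; thresholding soft predictions at 0.5 is an assumption.

```python
# Dice coefficient between a predicted segmentation mask and its ground truth.
import numpy as np

def dice_coefficient(pred, target, threshold=0.5, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction), B (truth)."""
    pred_bin = (np.asarray(pred) >= threshold).astype(np.float64)
    target_bin = (np.asarray(target) > 0).astype(np.float64)
    intersection = (pred_bin * target_bin).sum()
    return (2.0 * intersection + eps) / (pred_bin.sum() + target_bin.sum() + eps)
```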


2014 ◽  
Vol 721 ◽  
pp. 783-787
Author(s):  
Shao Hu Peng ◽  
Hyun Do Nam ◽  
Yan Fen Gan ◽  
Xiao Hu

Automatic segmentation of line-like regions plays a very important role in automatic recognition systems, such as automatic crack recognition in X-ray images and automatic vessel segmentation in CT images. In order to automatically segment line-like regions in X-ray/CT images, this paper presents a robust line filter based on local gray level variation and multiscale analysis. The proposed line filter makes use of the local gray level and its local variation to enhance line-like regions in the X-ray/CT image, which allows it to overcome the problems of image noise and non-uniform intensity. For detecting line-like regions of various sizes, an image pyramid is constructed based on different neighboring distances, which enables the proposed filter to analyze regions of different sizes independently. Experimental results showed that the proposed line filter can segment line-like regions of various sizes in X-ray/CT images that suffer from image noise and non-uniform intensity.
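A simplified sketch of such a filter is given below: each pixel's deviation from its local mean is normalized by the local variation, and the strongest response over several neighborhood sizes is kept; the exact filter response and the combination rule across scales are assumptions rather than the paper's formulation.

```python
# Simplified multiscale line-enhancement filter based on the local gray level
# and its local variation; parameters and combination rule are illustrative.
import numpy as np
from scipy import ndimage

def line_filter_response(image, neighborhood):
    """Response of pixels relative to their neighborhood mean and variation."""
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=neighborhood)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=neighborhood)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 1e-8))
    # Normalizing by the local variation makes the response robust to noise
    # and non-uniform intensity.
    return (img - local_mean) / local_std

def multiscale_line_filter(image, neighborhoods=(3, 7, 15)):
    """Keep the strongest response across several neighborhood distances so
    that line-like regions of different sizes are detected."""
    responses = [np.abs(line_filter_response(image, n)) for n in neighborhoods]
    return np.max(np.stack(responses), axis=0)
```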


2021 ◽  
Author(s):  
Weijun Chen ◽  
Cheng Wang ◽  
Wenming Zhan ◽  
Yongshi Jia ◽  
Fangfang Ruan ◽  
...  

Abstract Background: Radiotherapy requires the target area and the organs at risk (OAR) to be contoured on the CT images of the patient. When delineating the OARs of the chest and abdomen, the doctor needs to contour them on each CT slice, and the delineation of organs with large and varied shapes is time-consuming and laborious. This study aims to evaluate the results of two automatic contouring software packages on OAR delineation in CT images of lung cancer and rectal cancer patients. Methods: The CT images of 15 patients with rectal cancer and 15 patients with lung cancer were selected separately, and the organs at risk were outlined by the same experienced doctor as references; the same datasets were then contoured automatically with AiContour® (Linking MED, China) and Raystation® (Raysearch, Sweden), respectively. The overlap index (OI), Dice similarity coefficient (DSC), and volume difference (DV) were evaluated based on the auto-contours, and independent-sample t-tests were applied to the results. Results: The OI and DSC results of AiContour® were better than those of Raystation®, with a statistically significant difference. There was no significant difference in DV between the results of the two software packages. Conclusions: With AiContour®, the auto-contouring results for most organs in the chest and abdomen are good and, with slight modification, can meet the clinical requirements for planning. With Raystation®, the auto-contouring results for most OARs are not as good as those of AiContour®, and only the auto-contours of some organs can be used clinically after modification.
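The three metrics can be computed from binary masks roughly as in the sketch below; the exact formula used for the volume difference (signed versus absolute, absolute versus relative) is an assumption.

```python
# Overlap index (OI), Dice similarity coefficient (DSC) and volume difference
# (DV) between an auto-contoured mask and the reference contour.
import numpy as np

def contour_metrics(auto_mask, ref_mask, voxel_volume=1.0):
    auto = np.asarray(auto_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    intersection = np.logical_and(auto, ref).sum()

    oi = intersection / ref.sum()                        # overlap with reference
    dsc = 2.0 * intersection / (auto.sum() + ref.sum())  # Dice similarity
    dv = (auto.sum() - ref.sum()) * voxel_volume         # volume difference
    return oi, dsc, dv
```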


Author(s):  
H.-F. Lee ◽  
P.-C. Huang ◽  
C. Wietholt ◽  
C.-H. Hsu ◽  
K. M. Lin ◽  
...  

2019 ◽  
Vol 46 (11) ◽  
pp. 4970-4982 ◽  
Author(s):  
Azael M. Sousa ◽  
Samuel B. Martins ◽  
Alexandre X. Falcão ◽  
Fabiano Reis ◽  
Ericson Bagatin ◽  
...  
