Lung segmentation in chest radiographs using fully convolutional networks

Author(s):  
RAHUL HOODA ◽  
AJAY MITTAL ◽  
SANJEEV SOFAT

Automated segmentation of medical images, which aims at extracting anatomical boundaries, is a fundamental step in any computer-aided diagnosis (CAD) system. Chest radiographic CAD systems, which are used to detect pulmonary diseases, first segment the lung fields to precisely define the region of interest from which radiographic patterns are sought. In this paper, a deep learning-based method for segmenting lung fields from chest radiographs is proposed. Several modifications of the fully convolutional network, originally designed for segmenting natural images, are attempted and evaluated to evolve a network fine-tuned for segmenting lung fields. The testing accuracy and overlap of the evolved network are 98.75% and 96.10%, respectively, exceeding state-of-the-art results.
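The two metrics reported above can be sketched directly; this toy example assumes "overlap" denotes the Jaccard index (intersection over union) between the predicted and ground-truth lung masks, with masks flattened to 0/1 pixel lists.

```python
# Pixel accuracy and Jaccard overlap for binary segmentation masks,
# a minimal sketch of the evaluation metrics named in the abstract.

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    correct = sum(p == t for p, t in zip(pred, truth))
    return correct / len(truth)

def jaccard_overlap(pred, truth):
    """Intersection over union of the two binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# Tiny 4x4 example with one mislabelled pixel.
truth = [0, 1, 1, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0]
pred  = [0, 1, 1, 0,  0, 1, 1, 0,  0, 1, 0, 0,  0, 0, 0, 0]
acc = pixel_accuracy(pred, truth)   # 15/16 correct pixels
iou = jaccard_overlap(pred, truth)  # 5 shared pixels over 6 in the union
```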

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2915 ◽  
Author(s):  
Wenchao Kang ◽  
Yuming Xiang ◽  
Feng Wang ◽  
Ling Wan ◽  
Hongjian You

Emergency flood monitoring and rescue first require detecting flooded areas. This paper presents a fast and novel flood detection method and applies it to Gaofen-3 SAR images. A fully convolutional network (FCN) based on VGG16 is used for flood mapping. To meet the requirements of flood detection, we fine-tune the model to obtain more accurate results with shorter training time and fewer training samples. Compared with state-of-the-art methods, the proposed algorithm not only gives robust and accurate detection results but also significantly reduces detection time.
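The fine-tuning idea (keep pretrained early layers fixed, retrain only later layers, so fewer parameters are updated and fewer samples are needed) can be sketched as a parameter-count bookkeeping exercise. The layer names and parameter counts below are purely illustrative, not the real VGG16-FCN figures.

```python
# Sketch of fine-tuning as layer freezing: early, generic convolutional
# stages keep their pretrained weights (frozen=True); only the later,
# task-specific layers are trainable. Counts are illustrative.

layers = [
    ("conv1", 38_720,    True),   # frozen: weights kept from pretraining
    ("conv2", 221_440,   True),
    ("conv3", 1_475_328, True),
    ("conv4", 5_899_776, False),  # trainable during fine-tuning
    ("conv5", 7_079_424, False),
    ("score", 4_097,     False),
]

trainable = sum(n for _, n, frozen in layers if not frozen)
total = sum(n for _, n, _ in layers)
frac = trainable / total  # fraction of weights actually updated
```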


2021 ◽  
Vol 11 (15) ◽  
pp. 6975
Author(s):  
Tao Zhang ◽  
Lun He ◽  
Xudong Li ◽  
Guoqing Feng

Lipreading aims to recognize the sentences spoken by a talking face. In recent years, lipreading methods have achieved high accuracy on large datasets and made breakthrough progress. However, lipreading is still far from solved: existing methods tend to have high error rates on in-the-wild data and suffer from vanishing training gradients and slow convergence. To overcome these problems, we propose an efficient end-to-end sentence-level lipreading model that uses an encoder based on a 3D convolutional network, ResNet50, and a Temporal Convolutional Network (TCN), with a CTC objective function as the decoder. More importantly, the proposed architecture incorporates the TCN as a feature learner to decode features. This partly eliminates the vanishing-gradient and underperformance problems of RNNs (LSTM, GRU), yielding a notable performance improvement as well as faster convergence. Experiments show that training and convergence are 50% faster than the state-of-the-art method, with accuracy improved by 2.4% on the GRID dataset.
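The core building block of a TCN is a causal dilated 1-D convolution: each output depends only on present and past inputs, and stacking layers with growing dilation expands the receptive field exponentially. A minimal sketch (weights and dilation chosen arbitrarily for illustration):

```python
# Causal dilated 1-D convolution, the basic TCN operation:
# y[t] = sum_k w[k] * x[t - k*dilation], with implicit left zero-padding
# so the output has the same length as the input and never looks ahead.

def causal_dilated_conv1d(x, weights, dilation):
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:  # indices before the sequence start are zero-padded
                acc += w * x[idx]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = causal_dilated_conv1d(x, weights=[0.5, 0.5], dilation=2)
# each output averages the current sample with the one two steps back
```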


2021 ◽  
Vol 104 (3) ◽  
pp. 003685042110381
Author(s):  
Xue Bai ◽  
Ze Liu ◽  
Jie Zhang ◽  
Shengye Wang ◽  
Qing Hou ◽  
...  

Fully convolutional networks were developed to predict optimal dose distributions for patients with left-sided breast cancer, and prediction accuracy was compared between two-dimensional and three-dimensional networks. Sixty cases treated with volumetric modulated arc radiotherapy were analyzed. Among them, 50 cases were randomly chosen to form the training set, and the remaining 10 constituted the test set. Two U-Net fully convolutional networks, with two-dimensional and three-dimensional convolution kernels respectively, predicted the dose distributions. Computed tomography images, delineated regions of interest, or their combination were used as input data. The accuracy of the predicted results was evaluated against the clinical dose. Most types of input data yielded doses similar to the ground truth for organs at risk (p > 0.05). Overall, the two-dimensional model performed better than the three-dimensional model (p < 0.05). Moreover, the two-dimensional region-of-interest input provided the best prediction results for the planning target volume mean percentage difference (2.40 ± 0.18%), heart mean percentage difference (4.28 ± 2.02%), and the gamma index at 80% of the prescription dose with tolerances of 3 mm and 3% (0.85 ± 0.03), whereas the two-dimensional combined input provided the best prediction for ipsilateral lung mean percentage difference (4.16 ± 1.48%), lung mean percentage difference (2.41 ± 0.95%), spinal cord mean percentage difference (0.67 ± 0.40%), and the 80% Dice similarity coefficient (0.94 ± 0.01). Statistically, the two-dimensional combined input achieved higher prediction accuracy on the 80% Dice similarity coefficient than the two-dimensional region-of-interest input (0.94 ± 0.01 vs 0.92 ± 0.01, p < 0.05). The two-dimensional model thus achieves higher performance than its three-dimensional counterpart for dose prediction, especially when using region-of-interest and combined inputs.
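Two of the evaluation quantities above can be sketched concretely: the Dice similarity coefficient between binary dose regions, and a mean percentage difference of dose relative to the prescription. The example values are illustrative, not from the study.

```python
# Sketch of two dose-prediction evaluation metrics: Dice similarity of
# two binary region masks, and mean absolute dose difference expressed
# as a percentage of the prescription dose.

def dice(a, b):
    """Dice similarity coefficient of two binary region masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def mean_pct_diff(pred_doses, clinical_doses, prescription):
    """Mean absolute dose difference as a percentage of the prescription."""
    diffs = [abs(p - c) / prescription * 100.0
             for p, c in zip(pred_doses, clinical_doses)]
    return sum(diffs) / len(diffs)

region_pred = [1, 1, 1, 0, 0]
region_true = [1, 1, 0, 0, 0]
d = dice(region_pred, region_true)  # 2*2 overlap pixels / (3 + 2)
mpd = mean_pct_diff([49.0, 51.0], [50.0, 50.0], prescription=50.0)
```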


Author(s):  
Zhichao Huang ◽  
Xutao Li ◽  
Yunming Ye ◽  
Michael K. Ng

Graph Convolutional Networks (GCNs) have been extensively studied in recent years. Most existing GCN approaches are designed for homogeneous graphs with a single type of relation. However, heterogeneous graphs with multiple types of relations are also ubiquitous, and there is a lack of methodologies for such graphs. Some previous studies address the issue by performing a conventional GCN on each single relation and then blending the results. However, as the convolutional kernels neglect correlations across relations, this strategy is sub-optimal. In this paper, we propose the Multi-Relational Graph Convolutional Network (MR-GCN) framework by developing a novel convolution operator on multi-relational graphs. In particular, our multi-dimensional convolution operator extends graph spectral analysis to the eigen-decomposition of a Laplacian tensor, and the eigen-decomposition is formulated with a generalized tensor product, which can correspond to any unitary transform rather than being limited to the Fourier transform. We conduct comprehensive experiments on four real-world multi-relational graphs on the semi-supervised node classification task, and the results show the superiority of MR-GCN over state-of-the-art competitors.
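The per-relation baseline that the paper criticizes can be sketched in a few lines: run a plain GCN-style propagation (normalized adjacency applied to node features) on each relation separately, then blend by averaging. MR-GCN instead couples the relations through a Laplacian-tensor eigen-decomposition, which this toy example deliberately omits; the graphs and features below are illustrative.

```python
# Baseline multi-relational strategy: independent GCN-style propagation
# per relation, then averaging. The kernels never see cross-relation
# correlations, which is the sub-optimality MR-GCN addresses.

def propagate(adj, feats):
    """One GCN-style step: mean of neighbor features (with a self-loop)."""
    n = len(adj)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j] or i == j]  # self-loop
        out.append([sum(feats[j][d] for j in nbrs) / len(nbrs)
                    for d in range(len(feats[0]))])
    return out

# Two relations over the same 3 nodes, scalar features.
rel_a = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
rel_b = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
feats = [[1.0], [3.0], [5.0]]

per_relation = [propagate(r, feats) for r in (rel_a, rel_b)]
blended = [[sum(p[i][0] for p in per_relation) / 2] for i in range(3)]
```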


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3818
Author(s):  
Ye Zhang ◽  
Yi Hou ◽  
Shilin Zhou ◽  
Kewei Ouyang

Recent advances in time series classification (TSC) have exploited deep neural networks (DNNs) to improve performance. One promising approach encodes time series as recurrence plot (RP) images in order to leverage state-of-the-art DNNs, and has been shown to achieve impressive results, raising the community's interest. However, it remains unclear how to handle not only the variability in distinctive region scale and sequence length but also the tendency confusion problem. In this paper, we tackle the problem using Multi-scale Signed Recurrence Plots (MS-RP), an improvement of RP, and propose a novel method based on MS-RP images and Fully Convolutional Networks (FCNs) for TSC. The method first varies the phase space dimension and time delay embedding of RP to produce multi-scale RP images; then, with the use of an asymmetrical structure, the constructed RP images can represent very long sequences (>700 points). Next, MS-RP images are obtained by multiplying by designed sign masks in order to remove the tendency confusion. Finally, an FCN is trained on MS-RP images to perform classification. Experimental results on 45 benchmark datasets demonstrate that our method improves on the state of the art in terms of classification accuracy and visualization evaluation.
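The pipeline above can be sketched end to end: a time-delay embedding turns the series into phase-space vectors, a thresholded pairwise distance gives the recurrence plot, and a sign mask (the sign of the value difference) distinguishes rising from falling segments, i.e. removes the tendency confusion. All parameter choices below are illustrative.

```python
# Sketch of a signed recurrence plot: embedding, thresholded Chebyshev
# distance, and a +1/-1 sign mask in place of the usual 0/1 entries.

def embed(x, dim, delay):
    """Time-delay embedding: vectors (x[i], x[i+delay], ..., x[i+(dim-1)*delay])."""
    n = len(x) - (dim - 1) * delay
    return [[x[i + k * delay] for k in range(dim)] for i in range(n)]

def signed_recurrence_plot(x, dim=2, delay=1, eps=1.0):
    v = embed(x, dim, delay)
    n = len(v)
    rp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dist = max(abs(a - b) for a, b in zip(v[i], v[j]))
            if dist <= eps:
                # sign mask: +1 if v[i] starts at or above v[j], else -1
                rp[i][j] = 1 if v[i][0] >= v[j][0] else -1
    return rp

rp = signed_recurrence_plot([0.0, 0.5, 1.0, 0.5, 0.0], dim=2, delay=1, eps=0.6)
```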


2018 ◽  
Vol 7 (2.25) ◽  
pp. 133
Author(s):  
T R. Thamizhvani ◽  
Bincy Babu ◽  
A Josephin Arockia Dhivya ◽  
R J. Hemalatha ◽  
Josline Elsa Joseph ◽  
...  

Early detection of breast cancer is necessary because it is one of the most common causes of cancer death among women. Currently, the basic screening test for breast cancer detection is mammography, which contains various artifacts. These artifacts lead to wrong results in breast cancer detection. Therefore, a computer-aided diagnosis (CAD) system mainly focuses on artifact removal and mammogram quality enhancement. Through this procedure, the exact region of interest (ROI) can be obtained. This is challenging because detecting the pectoral muscle and the cancer region is difficult. Here, a comparative study of different preprocessing and enhancement techniques is conducted by testing the proposed system on the mini-MIAS mammogram database. The results obtained show that the suggested system is efficient for a CAD system.
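One standard enhancement step compared in studies like this is global histogram equalization, which stretches a narrow grey-level range across the full 8-bit scale. A minimal sketch on a tiny illustrative patch:

```python
# Global histogram equalization of an 8-bit image patch: map each grey
# level through the normalized cumulative histogram so the output uses
# the full 0-255 range.

def equalize(image, levels=256):
    pixels = [p for row in image for p in row]
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [[lut[p] for p in row] for row in image]

# A low-contrast patch (grey levels 100-104) stretched to 0-255.
patch = [[100, 100, 101], [101, 102, 103], [103, 103, 104]]
enhanced = equalize(patch)
```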


2019 ◽  
Vol 11 (6) ◽  
pp. 684 ◽  
Author(s):  
Maria Papadomanolaki ◽  
Maria Vakalopoulou ◽  
Konstantinos Karantzalos

Deep learning architectures have received much attention in recent years, demonstrating state-of-the-art performance in several segmentation, classification and other computer vision tasks. Most of these deep networks are based on either convolutional or fully convolutional architectures. In this paper, we propose a novel object-based deep-learning framework for semantic segmentation in very high-resolution satellite data. In particular, we exploit object-based priors integrated into a fully convolutional neural network by incorporating an anisotropic diffusion data preprocessing step and an additional loss term during the training process. Under this constrained framework, the goal is to enforce that pixels belonging to the same object are classified into the same semantic category. We thoroughly compared the novel object-based framework with the currently dominant convolutional and fully convolutional deep networks. In particular, numerous experiments were conducted on the publicly available ISPRS WGII/4 benchmark datasets, namely Vaihingen and Potsdam, for validation and inter-comparison based on a variety of metrics. Quantitatively, the experimental results indicate that, overall, the proposed object-based framework outperformed the current state-of-the-art fully convolutional networks by more than 1% in terms of overall accuracy, while intersection-over-union results improved for all semantic categories. Qualitatively, man-made classes with stricter geometry, such as buildings, benefited most from our method, especially along object boundaries, highlighting the great potential of the developed approach.
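The additional loss term's intent (pixels of the same object should agree on their predicted class) can be sketched as a penalty on the within-object variance of class probabilities. The object map, probabilities, and exact functional form below are illustrative, not the paper's actual loss.

```python
# Sketch of an object-consistency penalty: mean squared deviation of each
# pixel's class probabilities from the mean probabilities of its object.
# The penalty is zero when every pixel of an object predicts identically.

def object_consistency_loss(object_ids, probs):
    groups = {}
    for oid, p in zip(object_ids, probs):
        groups.setdefault(oid, []).append(p)
    loss, count = 0.0, 0
    for members in groups.values():
        k = len(members[0])
        mean = [sum(p[c] for p in members) / len(members) for c in range(k)]
        for p in members:
            loss += sum((p[c] - mean[c]) ** 2 for c in range(k))
            count += 1
    return loss / count

# Two objects: object 1's pixels agree, object 2's pixels disagree.
ids = [1, 1, 2, 2]
probs = [[0.9, 0.1], [0.9, 0.1], [0.8, 0.2], [0.2, 0.8]]
penalty = object_consistency_loss(ids, probs)
```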


Author(s):  
Liang Yang ◽  
Zesheng Kang ◽  
Xiaochun Cao ◽  
Di Jin ◽  
Bo Yang ◽  
...  

In the past few years, semi-supervised node classification in attributed networks has developed rapidly. Inspired by the success of deep learning, researchers have adapted convolutional neural networks to graphs as Graph Convolutional Networks (GCNs), achieving surprising classification accuracy by considering topological information and employing a fully connected network (FCN). However, the given network topology may also induce performance degradation if it is employed directly in classification, because it may be highly sparse and contain noise. Besides, the lack of learnable filters in GCN also limits performance. In this paper, we propose a novel Topology Optimization based Graph Convolutional Network (TO-GCN) that fully utilizes the potential information by jointly refining the network topology and learning the parameters of the FCN. According to our derivations, TO-GCN is more flexible than GCN, in which the filters are fixed and only the classifier can be updated during learning. Extensive experiments on real attributed networks demonstrate the superiority of the proposed TO-GCN over state-of-the-art approaches.
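The topology-refinement idea can be sketched as blending the given (possibly sparse and noisy) adjacency with a feature-similarity graph before propagation, so the effective topology is adjusted rather than taken as fixed. The blending rule and data below are purely illustrative; the actual method learns the refinement jointly with the classifier.

```python
# Toy topology refinement: mix the given edges with feature-similarity
# edges (similarity = 1 when scalar features match exactly, else 0),
# so nodes isolated in the input graph can gain soft connections.

def refine_topology(adj, feats, alpha=0.5):
    n = len(adj)
    return [[alpha * adj[i][j] +
             (1 - alpha) * (1.0 if i != j and feats[i] == feats[j] else 0.0)
             for j in range(n)] for i in range(n)]

adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]  # node 2 is isolated in the input graph
feats = [7.0, 3.0, 7.0]                  # but shares features with node 0
refined = refine_topology(adj, feats)
# node 2 gains a soft edge to node 0 through feature similarity
```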


2019 ◽  
Vol 12 (9) ◽  
pp. 4713-4724
Author(s):  
Chaojun Shi ◽  
Yatong Zhou ◽  
Bo Qiu ◽  
Jingfei He ◽  
Mu Ding ◽  
...  

Cloud segmentation plays a very important role in astronomical observatory site selection. At present, few researchers segment clouds in nocturnal all-sky imager (ASI) images. This paper proposes a new automatic cloud segmentation algorithm, called the enhancement fully convolutional network (EFCN), which exploits deep-learning fully convolutional networks (FCNs) to segment cloud pixels from diurnal and nocturnal ASI images. Firstly, all ASI images in the data set from the Key Laboratory of Optical Astronomy at the National Astronomical Observatories of the Chinese Academy of Sciences (CAS) are converted from the red-green-blue (RGB) color space to the hue-saturation-intensity (HSI) color space. Secondly, the I channel of the HSI color space is enhanced by histogram equalization. Thirdly, all the ASI images are converted back from the HSI color space to the RGB color space. Then, after 100,000 training iterations on the images in the training set, the optimal parameters of the EFCN-8s model are obtained. Finally, we use the trained EFCN-8s to segment the cloud pixels of the ASI images in the test set. In the experiments, our proposed EFCN-8s was compared with four other algorithms (OTSU, FCN-8s, EFCN-32s, and EFCN-16s) using four evaluation metrics. The experiments show that EFCN-8s is much more accurate in cloud segmentation for both diurnal and nocturnal ASI images than the other four algorithms.
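The intensity-channel enhancement step can be sketched per pixel: the HSI intensity is I = (R+G+B)/3, and after enhancing I (here a simple linear stretch stands in for histogram equalization) the RGB channels are scaled by the ratio of new to old intensity, which preserves hue and saturation. The pixel values below are illustrative.

```python
# Sketch of the preprocessing pipeline: compute the HSI I channel from
# RGB, stretch I to the full 0-255 range (a stand-in for histogram
# equalization), and scale the RGB channels so they match the new I.

def enhance_intensity(pixels):
    intensities = [(r + g + b) / 3.0 for r, g, b in pixels]
    lo, hi = min(intensities), max(intensities)
    out = []
    for (r, g, b), i in zip(pixels, intensities):
        new_i = (i - lo) / (hi - lo) * 255.0 if hi > lo else i
        s = new_i / i if i > 0 else 0.0  # channel scale factor
        out.append(tuple(min(255, round(c * s)) for c in (r, g, b)))
    return out

# Three low-contrast night-sky pixels stretched across the full range.
sky = [(60, 60, 90), (120, 120, 150), (30, 30, 60)]
enhanced = enhance_intensity(sky)
```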

