Portrait Segmentation Using Ensemble of Heterogeneous Deep-Learning Models

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 197
Author(s):  
Yong-Woon Kim ◽  
Yung-Cheol Byun ◽  
Addapalli V. N. Krishna

Image segmentation plays a central role in a broad range of applications, such as medical image analysis, autonomous vehicles, video surveillance and augmented reality. Portrait segmentation, a subset of semantic image segmentation, is widely used as a preprocessing step in applications such as security systems, entertainment applications and video conferencing. A substantial number of deep-learning-based portrait segmentation approaches have been developed, since the performance and accuracy of semantic image segmentation have improved significantly with the recent introduction of deep learning technology. However, these approaches are limited to a single portrait segmentation model. In this paper, we propose a novel ensemble approach that combines multiple heterogeneous deep-learning-based portrait segmentation models to improve segmentation performance. Two-model and three-model ensembles, using a simple soft voting method and a weighted soft voting method, were evaluated. The Intersection over Union (IoU) metric, IoU standard deviation and false prediction rate were used to evaluate performance, and cost efficiency was calculated to analyze the efficiency of segmentation. The experimental results show that the proposed ensemble approach achieves higher accuracy and lower error than single deep-learning-based portrait segmentation models. They also show that, although an ensemble of deep-learning models typically increases memory and computing requirements, an ensemble can still be more efficient than a single model, delivering higher accuracy while using less memory and computing power.
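
To make the voting scheme concrete, the following is a minimal sketch (not the authors' code) of weighted soft voting over per-pixel foreground probability maps produced by heterogeneous models, together with the IoU metric used for evaluation; the model interfaces, the weights and the 0.5 threshold are assumptions.

import numpy as np

def weighted_soft_vote(prob_maps, weights=None):
    """Combine per-pixel foreground probability maps (H x W) from several
    models by (weighted) soft voting and threshold the fused map at 0.5."""
    prob_maps = np.stack(prob_maps, axis=0)           # (n_models, H, W)
    if weights is None:                               # simple soft voting
        weights = np.ones(len(prob_maps))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    fused = np.tensordot(weights, prob_maps, axes=1)  # weighted mean, (H, W)
    return (fused >= 0.5).astype(np.uint8)

def iou(pred, gt):
    """Intersection over Union for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# Hypothetical usage: probs_a, probs_b, probs_c are (H, W) outputs of three
# heterogeneous portrait-segmentation models; gt_mask is the ground truth.
# mask = weighted_soft_vote([probs_a, probs_b, probs_c], weights=[0.5, 0.3, 0.2])
# print("IoU:", iou(mask, gt_mask))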

Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g., blood or inflammation) and lens diseases. An automatic tool for segmenting anterior segment eye lesions would greatly improve the efficiency of clinical care. With research on artificial intelligence progressing in recent years, deep learning models have shown their superiority in image classification and segmentation. The training and evaluation of deep learning models should be based on a large amount of expert-annotated data; however, such data are relatively scarce in the medical domain. Herein, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.
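
As an illustration of how pixel-level annotations of multiple eye structures and lesions can be scored, the sketch below computes a per-class Dice coefficient from integer-labelled masks; the class indices and mask format are assumed for illustration and are not taken from the paper.

import numpy as np

def per_class_dice(pred, gt, num_classes):
    """Dice score for each class in integer-labelled masks (H x W).
    Class 0 is assumed to be background and is skipped."""
    scores = {}
    for c in range(1, num_classes):
        p = (pred == c)
        g = (gt == c)
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

# Hypothetical usage with a 4-class annotation (background, cornea,
# anterior chamber lesion, lens):
# print(per_class_dice(pred_mask, gt_mask, num_classes=4))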


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243253
Author(s):  
Qiang Lin ◽  
Mingyang Luo ◽  
Ruiting Gao ◽  
Tongtong Li ◽  
Zhengxing Man ◽  
...  

SPECT imaging has been identified as an effective medical modality for the diagnosis, treatment, evaluation and prevention of a range of serious diseases and medical conditions. A bone SPECT scan has the potential to provide a more accurate assessment of disease stage and severity. Segmenting hotspots in bone SPECT images plays a crucial role in calculating metrics such as tumor uptake and metabolic tumor burden. Deep learning techniques, especially convolutional neural networks, have been widely exploited for reliable segmentation of hotspots or lesions, organs and tissues in traditional structural medical images (i.e., CT and MRI) due to their ability to learn features from images automatically and in an optimal way. In order to segment hotspots in bone SPECT images for automatic assessment of metastasis, in this work we develop several deep-learning-based segmentation models. Specifically, each original whole-body bone SPECT image is processed to extract the thorax area, followed by image mirror, translation and rotation operations, which augment the original dataset. We then build segmentation models based on two commonly used deep networks, U-Net and Mask R-CNN, by fine-tuning their structures. Experimental evaluation conducted on a group of real-world bone SPECT images reveals that the built segmentation models are workable for identifying and segmenting hotspots of metastasis in bone SPECT images, achieving values of 0.9920, 0.7721, 0.6788 and 0.6103 for PA (accuracy), CPA (precision), Rec (recall) and IoU, respectively. Finally, we conclude that deep learning technology has great potential for identifying and segmenting hotspots in bone SPECT images.
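
The augmentation step described above (thorax extraction followed by mirroring, translation and rotation of each image and its hotspot mask) can be sketched roughly as follows; the shift and angle ranges and the use of SciPy are illustrative assumptions, not the authors' exact settings.

import numpy as np
from scipy import ndimage

def augment(image, mask, rng):
    """Mirror, translate and rotate a SPECT slice and its hotspot mask
    with the same random parameters."""
    if rng.random() < 0.5:                      # horizontal mirror
        image, mask = np.fliplr(image), np.fliplr(mask)
    dy, dx = rng.integers(-10, 11, size=2)      # small random translation
    image = ndimage.shift(image, (dy, dx), order=1, mode="nearest")
    mask = ndimage.shift(mask, (dy, dx), order=0, mode="nearest")
    angle = rng.uniform(-15, 15)                # small random rotation
    image = ndimage.rotate(image, angle, reshape=False, order=1, mode="nearest")
    mask = ndimage.rotate(mask, angle, reshape=False, order=0, mode="nearest")
    return image, mask

# rng = np.random.default_rng(0)
# aug_img, aug_mask = augment(thorax_img, hotspot_mask, rng)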


2021 ◽  
pp. 161-174
Author(s):  
Pashupati Bhatt ◽  
Ashok Kumar Sahoo ◽  
Saumitra Chattopadhyay ◽  
Chandradeep Bhatt

2020 ◽  
Vol 12 (6) ◽  
pp. 959 ◽  
Author(s):  
Mohammad Pashaei ◽  
Hamid Kamangir ◽  
Michael J. Starek ◽  
Philippe Tissot

Deep learning has already proven to be a powerful state-of-the-art technique for many image understanding tasks in computer vision and other applications, including remote sensing (RS) image analysis. Unmanned aircraft systems (UASs) offer a viable and economical alternative to conventional sensors and platforms for acquiring data with high spatial and temporal resolution and high operational flexibility. Coastal wetlands are among the most challenging and complex ecosystems for land cover prediction and mapping tasks, because land cover targets often show high intra-class and low inter-class variances. In recent years, several deep convolutional neural network (CNN) architectures have been proposed for pixel-wise image labeling, commonly called semantic image segmentation. In this paper, some of the more recent deep CNN architectures proposed for semantic image segmentation are reviewed, and each model’s training efficiency and classification performance are evaluated by training it on a limited labeled image set. Training samples are provided using hyper-spatial resolution UAS imagery over a wetland area, and the required ground truth images are prepared by manual image labeling. Experimental results demonstrate that deep CNNs have great potential for accurate land cover prediction using UAS hyper-spatial resolution images. Some simple deep learning architectures perform comparably to, or even better than, complex and very deep architectures with remarkably fewer training epochs. This is especially valuable when limited training samples are available, which is a common case in most RS applications.
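
For readers unfamiliar with pixel-wise image labeling, the sketch below shows a deliberately small encoder-decoder CNN in tf.keras; the layer sizes, input shape and class count are illustrative only and do not correspond to any of the architectures reviewed in the paper.

import tensorflow as tf
from tensorflow.keras import layers

def tiny_segnet(input_shape=(256, 256, 3), num_classes=5):
    """A minimal encoder-decoder that assigns a class to every pixel."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)                       # encoder
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)                       # decoder
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(x)  # per-pixel labels
    return tf.keras.Model(inp, out)

# model = tiny_segnet()
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")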


2020 ◽  
Vol 6 (11) ◽  
pp. 125 ◽  
Author(s):  
Albert Comelli ◽  
Claudia Coronnello ◽  
Navdeep Dahiya ◽  
Viviana Benfante ◽  
Stefano Palmucci ◽  
...  

Background: The aim of this work is to identify an automatic, accurate, and fast deep learning segmentation approach, applied to the parenchyma, using a very small dataset of high-resolution computed tomography images of patients with idiopathic pulmonary fibrosis. In this way, we aim to enhance the methodology performed by healthcare operators in radiomics studies, where operator-independent segmentation methods must be used to correctly identify the target and, consequently, the texture-based prediction model. Methods: Two deep learning models were investigated: (i) U-Net, already used in many biomedical image segmentation tasks, and (ii) E-Net, used for image segmentation tasks in self-driving cars, where hardware availability is limited and accurate segmentation is critical for user safety. Our small image dataset is composed of 42 studies of patients with idiopathic pulmonary fibrosis, of which only 32 were used for the training phase. We compared the performance of the two models in terms of the similarity of their segmentation outcome with the gold standard and in terms of their resource requirements. Results: E-Net can be used to obtain accurate (Dice similarity coefficient = 95.90%), fast (20.32 s), and clinically acceptable segmentation of the lung region. Conclusions: We demonstrated that deep learning models can be efficiently applied to rapidly segment and quantify the parenchyma of patients with pulmonary fibrosis, without any radiologist supervision, in order to produce user-independent results.
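
The comparison described in the Methods (segmentation similarity against the gold standard together with resource requirements) could be organized as a simple evaluation loop like the sketch below; the model objects, predict() interface and dice() helper are hypothetical placeholders rather than the authors' implementation.

import time

def evaluate(model, volumes, gold_masks, dice_fn):
    """Return mean Dice similarity coefficient and total inference time."""
    dices, start = [], time.perf_counter()
    for vol, gold in zip(volumes, gold_masks):
        pred = model.predict(vol)          # hypothetical predict() interface
        dices.append(dice_fn(pred, gold))
    elapsed = time.perf_counter() - start
    return sum(dices) / len(dices), elapsed

# for name, net in {"U-Net": unet, "E-Net": enet}.items():
#     dsc, secs = evaluate(net, test_volumes, test_masks, dice_fn=dice)
#     print(f"{name}: DSC={dsc:.2%}, time={secs:.2f}s")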

