A Fast Approach for Video Deblurring using Multi-Scale Deep Neural Network

Author(s):  
Rahul Kumar Gupta ◽  
Kishor Upla
2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans, and pulmonary research. PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performance. METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset, consisting of 7,307 slices for training and 3,888 slices for testing, was made available to participants. Second, by analyzing the performance of the convolutional neural networks submitted by 12 different institutions for pulmonary vascular segmentation, the reasons for common defects and possible improvements are summarized. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation ratio, and under-segmentation rate. Finally, we discuss several proposed methods for improving pulmonary vessel segmentation results with deep neural networks. RESULTS: Compared against the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms perform well on pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients of the top three algorithms are about 0.80. CONCLUSIONS: The results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into the deep neural network training and optimization process is significant for further improving the accuracy of pulmonary vascular segmentation.
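The evaluation metrics named in the abstract above can be sketched for binary masks as follows. The abstract does not define the over-/under-segmentation formulas, so the versions below (false positives and false negatives relative to the reference size) are one common convention, not necessarily the challenge's official definitions.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def over_segmentation_ratio(pred, gt):
    """Predicted-vessel voxels absent from the reference, relative to reference size
    (one common definition; assumed, not the challenge's official formula)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, ~gt).sum() / max(gt.sum(), 1)

def under_segmentation_rate(pred, gt):
    """Reference-vessel voxels missed by the prediction, relative to reference size."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(~pred, gt).sum() / max(gt.sum(), 1)

# Toy 2-D example (a real evaluation would use 3-D CT/CTA masks slice by slice or volumetrically).
gt = np.array([[0, 1, 1], [0, 1, 0]])
pred = np.array([[0, 1, 0], [1, 1, 0]])
print(dice_coefficient(pred, gt))
```

On this toy pair, two of three vessel voxels overlap, giving a Dice of 2·2/(3+3) ≈ 0.667, in the same range the abstract reports for the weaker submissions.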


2015 ◽  
Vol 108 (2) ◽  
pp. 473a ◽  
Author(s):  
Xundong Wu ◽  
Yong Wu ◽  
Enrico Stefani

2021 ◽  
Vol 2021 (1) ◽  
pp. 5-10
Author(s):  
Chahine Nicolas ◽  
Belkarfa Salim

In this paper, we propose a novel, standardized approach to camera-quality assessment on portrait scenes. Our goal is to evaluate the capacity of smartphone front cameras to preserve texture details on faces. We introduce a new portrait setup and an automated texture measurement. The setup consists of two custom-built lifelike mannequin heads shot in a controlled lab environment. The automated texture measurement comprises region-of-interest (ROI) detection and a deep neural network. To this end, we create a realistic mannequin database containing images from different cameras shot under several lighting conditions. The ground truth is based on a novel pairwise comparison methodology in which scores are expressed in Just-Noticeable Differences (JND). In terms of methodology, we propose a Multi-Scale CNN architecture with random-crop augmentation to mitigate overfitting and to extract low-level features. We validate our approach by comparing its performance with several baselines inspired by the Image Quality Assessment (IQA) literature.
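The random-crop augmentation feeding a multi-scale network, as described above, can be sketched as follows. The crop sizes, the NumPy-only implementation, and the `face_roi` stand-in are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size):
    """Sample one random size x size patch from an H x W x C image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def multi_scale_crops(img, sizes=(32, 64, 128)):
    """One random crop per scale; a multi-scale CNN would consume one branch per scale.
    Drawing fresh crops each epoch varies what the network sees, which is what
    makes the augmentation help against overfitting."""
    return [random_crop(img, s) for s in sizes]

face_roi = rng.random((256, 256, 3))  # stand-in for a detected face ROI
crops = multi_scale_crops(face_roi)
print([c.shape for c in crops])
```

Because each call draws new crop positions, repeating this per training step yields a different set of patches every time, at every scale.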


Author(s):  
Weijie Yang ◽  
Yueting Hui

Image scene analysis interprets scene content through image semantic segmentation, which identifies the categories and positions of the different objects in an image. However, the loss of spatial detail information often degrades accuracy, producing rough edges in FCN output, inconsistent class labels within target regions, and missed small targets. To address these problems, this paper enlarges the receptive field, performs multi-scale fusion, and reweights sensitive channels, thereby improving feature discrimination while maintaining or restoring spatial detail. The deep neural network FCN is used to build the base semantic segmentation model, and ASPP, data augmentation, SENet, a decoder, and global pooling are added to the baseline to optimize the model structure and improve the segmentation. As a result, more accurate scene analysis is obtained.
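The channel reweighting that SENet contributes here can be sketched in plain NumPy: squeeze each channel to one number by global average pooling, pass it through a small bottleneck, and rescale the channels by the resulting sigmoid weights. The layer widths, reduction ratio, and random weights below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation reweighting on a C x H x W feature map.
    Squeeze: global average pool per channel. Excite: FC -> ReLU -> FC -> sigmoid,
    producing one weight in (0, 1) per channel that rescales that channel."""
    z = feat.mean(axis=(1, 2))            # squeeze: (C,)
    s = np.maximum(z @ w1, 0)             # bottleneck FC + ReLU: (C/r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))   # FC + sigmoid: (C,) channel weights
    return feat * s[:, None, None]        # rescale each channel

C, r = 8, 2                               # channels and reduction ratio (illustrative)
feat = rng.random((C, 16, 16))            # stand-in for an FCN feature map
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out = se_block(feat, w1, w2)
print(out.shape)
```

The map keeps its spatial shape; only the relative strength of the channels changes, which is how the network learns to emphasize class-sensitive channels.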


2016 ◽  
Vol 185 ◽  
pp. 163-170 ◽  
Author(s):  
Xiaoheng Jiang ◽  
Yanwei Pang ◽  
Xuelong Li ◽  
Jing Pan
