Fully Convolutional Networks
Recently Published Documents

TOTAL DOCUMENTS: 508 (five years: 290)
H-INDEX: 36 (five years: 15)

Diagnostics ◽ 2022 ◽ Vol 12 (1) ◽ pp. 126
Author(s): Pierre Daudé ◽ Patricia Ancel ◽ Sylviane Confort Gouny ◽ Alexis Jacquier ◽ Frank Kober ◽ ...

In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often remains overlooked because of the tedious manual contouring it requires. Automated four-chamber EAT area quantification was proposed, leveraging deep-learning segmentation with multi-frame fully convolutional networks (FCN). One hundred subjects (healthy, obese, and diabetic patients) underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames to segment the central frame, using a Dice loss. Networks were trained with 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performance was compared with inter- and intra-observer bias using the Dice similarity coefficient (DSC) and the relative surface error (RSE). Both systolic and diastolic four-chamber areas correlated with total EAT volume (r = 0.77 and 0.74, respectively). The networks' performance was equivalent to the inter-observer bias (EAT: inter-observer DSC = 0.76, U-Net DSC = 0.77, FCNB DSC = 0.76). U-Net outperformed FCNB on all metrics (p < 0.0001). The proposed multi-frame U-Net provided automated EAT area quantification with 14.2% precision over the clinically relevant upper three quarters of the EAT area range, grading patients' risk of EAT overload with 70% accuracy. Exploiting the multi-frame U-Net on standard cine sequences thus provided automated EAT quantification over a wide range of EAT quantities. The method is made available to the community as an FSLeyes plugin.
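The Dice loss used to train these segmentation networks can be sketched as follows. This is a minimal NumPy illustration of the loss alone, not the paper's multi-frame U-Net or FCNB models; the epsilon smoothing term is a common convention assumed here.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.

    pred:   float array of per-pixel foreground probabilities in [0, 1]
    target: binary array of the same shape (1 = foreground)
    Returns 1 - DSC, so a perfect prediction gives a loss near 0.
    """
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice

# Perfect overlap -> loss ~ 0; disjoint prediction -> loss ~ 1.
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(round(soft_dice_loss(mask, mask), 4))        # -> 0.0
print(round(soft_dice_loss(1.0 - mask, mask), 4))  # -> 1.0
```

Because Dice is a ratio of overlap to total foreground, it is far less sensitive to class imbalance than per-pixel cross-entropy, which is why it is common for small structures such as the EAT contour.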


Author(s): Dr. S. Saraswathi ◽ S. Ramya

This paper focuses on speech dereverberation using a single microphone. We investigate the applicability of fully convolutional networks (FCN) to enhancing the speech signal, represented as short-time Fourier transform (STFT) images, in light of their recent success in many image-processing applications. We present two variants: a "U-Net", an encoder-decoder network with skip connections, and a generative adversarial network (GAN) with the U-Net as the generator, which yields a more intuitive cost function for training. To assess our method, we used data from the REVERB challenge and compared our results with those of other methods tested under the same conditions. In most cases, our method outperforms the competing methods.
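The "STFT image" representation this approach operates on can be sketched as below, a minimal NumPy version with hypothetical frame and hop sizes (the paper's actual analysis parameters are not given here):

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude STFT: the 2-D time-frequency 'image' that an FCN such as
    a U-Net can process like an ordinary picture."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # Real FFT along each windowed frame -> (time, frequency) magnitudes.
    return np.abs(np.fft.rfft(frames, axis=1))

# A 1-second, 8 kHz sine wave becomes a (frames, bins) image.
fs = 8000
t = np.arange(fs) / fs
mag = stft_magnitude(np.sin(2 * np.pi * 440 * t))
print(mag.shape)  # -> (61, 129)
```

A dereverberation network maps the reverberant magnitude image to an estimate of the clean one; the enhanced waveform is then resynthesised by an inverse STFT, typically reusing the reverberant phase.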


2021 ◽ Vol 12 (1) ◽ pp. 283
Author(s): Mengtao Sun ◽ Li Lu ◽ Ibrahim A. Hameed ◽ Carl Petter Skaar Kulseng ◽ Kjell-Inge Gjesdal

Accurately identifying the pixels of small organs or lesions in magnetic resonance imaging (MRI) has a critical impact on clinical diagnosis. U-Net is the best-known and most commonly used neural network for image segmentation; however, small anatomical structures in medical images are not well recognised by U-Net. This paper explores the performance of U-Net architectures in knee MRI segmentation to find a structure that achieves high accuracy for both small and large anatomical structures. To make the most of the U-Net architecture, we apply three types of component, residual blocks, squeeze-and-excitation (SE) blocks, and dense blocks, to construct four U-Net variants. Among these variants, our experiments show that SE blocks improve the segmentation accuracy of small labels. Based on this finding, we adapt the DeepLabv3+ architecture for 3D medical image segmentation by equipping it with SE blocks. The experimental results show that U-Net with SE blocks achieves higher accuracy on some small anatomical structures, whereas DeepLabv3+ with SE blocks performs better on the average Dice coefficient across small and large labels.
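A squeeze-and-excitation block, the component found most helpful for small labels, can be sketched as follows. This is a minimal NumPy version with random, hypothetical weights; in the networks above the two weight matrices are learned and the block is inserted after convolutional stages.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) are the excitation MLP's two FC weight
    matrices (reduction ratio r); both are placeholders here.
    """
    squeezed = feature_map.mean(axis=(1, 2))       # squeeze: global average pool -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)        # excitation FC 1 + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # excitation FC 2 + sigmoid -> (C,)
    return feature_map * scale[:, None, None]      # channel-wise reweighting

rng = np.random.default_rng(0)
c, r = 8, 2
x = rng.standard_normal((c, 4, 4))
out = se_block(x, rng.standard_normal((c // r, c)), rng.standard_normal((c, c // r)))
print(out.shape)  # -> (8, 4, 4)
```

Since the sigmoid gate lies in (0, 1), the block can only attenuate channels, letting the network emphasise feature channels that respond to small structures at negligible parameter cost.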


2021 ◽ Vol 13 (24) ◽ pp. 5084
Author(s): Daliana Lobo Torres ◽ Javier Noa Turnes ◽ Pedro Juan Soto Vega ◽ Raul Queiroz Feitosa ◽ Daniel E. Silva ◽ ...

The availability of remote-sensing multisource data from optical satellite sensors has created new opportunities and challenges for forest monitoring in the Amazon Biome. In particular, change-detection analysis has emerged in recent decades to monitor forest-change dynamics, supporting Brazilian governmental initiatives such as the PRODES and DETER projects for biodiversity preservation in threatened areas. In recent years, numerous fully convolutional network architectures have been proposed and adapted for the change-detection task. This paper comprehensively explores state-of-the-art fully convolutional networks, namely U-Net, ResU-Net, SegNet, FC-DenseNet, and two DeepLabv3+ variants, for monitoring deforestation in the Brazilian Amazon. The networks' performance is evaluated experimentally in terms of Precision, Recall, F1-score, and computational load on satellite images with different spatial and spectral resolutions: Landsat-8 and Sentinel-2. We also report the results of an unprecedented auditing process in which senior specialists visually evaluated each deforestation polygon derived from the most accurate network for each satellite. This assessment allowed the networks' accuracy to be estimated in a setting that simulates operational conditions and remains faithful to the PRODES methodology. We conclude that the higher resolution of Sentinel-2 images improves the segmentation of deforestation polygons both quantitatively (in terms of F1-score) and qualitatively. Moreover, the study points to the potential of operational Deep Learning (DL) mapping products to be consumed within PRODES.
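The Precision, Recall, and F1-score used to compare these networks can be computed per pixel as sketched below, a minimal NumPy illustration on a toy binary change map (the toy arrays are illustrative, not data from the study):

```python
import numpy as np

def change_detection_scores(pred, truth):
    """Pixel-wise Precision, Recall, and F1 for a binary change map
    (1 = deforestation, 0 = no change)."""
    tp = np.sum((pred == 1) & (truth == 1))  # correctly detected change
    fp = np.sum((pred == 1) & (truth == 0))  # false alarms
    fn = np.sum((pred == 0) & (truth == 1))  # missed change
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

truth = np.array([[1, 1, 0], [0, 1, 0]])
pred  = np.array([[1, 0, 0], [0, 1, 1]])
print(change_detection_scores(pred, truth))  # each score is 2/3 here
```

F1 is the usual headline metric in deforestation mapping because the change class is heavily outnumbered by unchanged forest, so plain pixel accuracy would look high even for a useless detector.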

