Acceleration of High-Resolution 3D MR Fingerprinting via a Graph Convolutional Network

Author(s):  
Feng Cheng ◽  
Yong Chen ◽  
Xiaopeng Zong ◽  
Weili Lin ◽  
Dinggang Shen ◽  
...  
2020 ◽  
Vol 12 (2) ◽  
pp. 311 ◽  
Author(s):  
Chun Liu ◽  
Doudou Zeng ◽  
Hangbin Wu ◽  
Yin Wang ◽  
Shoujun Jia ◽  
...  

Urban land cover classification for high-resolution images is a fundamental yet challenging task in remote sensing image analysis. Recently, deep learning techniques have achieved outstanding performance in high-resolution image classification, especially methods based on deep convolutional neural networks (DCNNs). However, traditional CNNs, whose convolution operations have local receptive fields, are insufficient to model global contextual relations between objects. In addition, multiscale objects and the relatively small sample sizes in remote sensing have also limited classification accuracy. In this paper, a relation-enhanced multiscale convolutional network (REMSNet) is proposed to overcome these weaknesses. A dense connectivity pattern and parallel multi-kernel convolutions are combined to build a lightweight model with varied receptive field sizes. Then, a spatial relation-enhanced block and a channel relation-enhanced block are introduced into the network; they adaptively learn global contextual relations between any two positions or feature maps to enhance feature representations. Moreover, we design a parallel multi-kernel deconvolution module and a spatial path to further aggregate information from different scales. The proposed network is evaluated on two urban land cover classification datasets: the ISPRS 2D semantic labelling contest of Vaihingen and an area of Shanghai of about 143 km². The results demonstrate that the proposed method can effectively capture long-range dependencies and improve the accuracy of land cover classification. Our model obtains an overall accuracy (OA) of 90.46% and a mean intersection-over-union (mIoU) of 0.8073 on Vaihingen, and an OA of 88.55% and an mIoU of 0.7394 on Shanghai.
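As a concrete illustration of two of the building blocks described above, the PyTorch sketch below shows a parallel multi-kernel convolution and a non-local, self-attention-style spatial relation block. This is a minimal sketch under our own naming assumptions (MultiKernelConv, SpatialRelationBlock, the reduction ratio), not the authors' REMSNet implementation.

```python
import torch
import torch.nn as nn

class MultiKernelConv(nn.Module):
    """Parallel branches with different kernel sizes, concatenated,
    to obtain varied receptive field sizes in one layer."""
    def __init__(self, in_ch, out_ch, kernels=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch // len(kernels), k, padding=k // 2)
            for k in kernels
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class SpatialRelationBlock(nn.Module):
    """Non-local style block: learns pairwise relations between any
    two spatial positions and fuses them back residually."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                     # (B, C/r, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW) pairwise relations
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual fusion
```

A channel relation block would follow the same pattern with the attention map computed between feature maps (channels) rather than positions.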


2020 ◽  
Vol 642 ◽  
pp. A22 ◽  
Author(s):  
V. M. Passegger ◽  
A. Bello-García ◽  
J. Ordieres-Meré ◽  
J. A. Caballero ◽  
A. Schweitzer ◽  
...  

Existing and upcoming instrumentation is collecting large amounts of astrophysical data, which require efficient and fast analysis techniques. We present a deep neural network architecture to analyze high-resolution stellar spectra and predict stellar parameters such as effective temperature, surface gravity, metallicity, and rotational velocity. With this study, we first demonstrate the capability of deep neural networks to precisely recover stellar parameters from a synthetic training set. Second, we analyze the application of this method to observed spectra and the impact of the synthetic gap (i.e., the difference between observed and synthetic spectra) on the estimation of stellar parameters, their errors, and their precision. Our convolutional network is trained on synthetic PHOENIX-ACES spectra in different optical and near-infrared wavelength regions. For each of the four stellar parameters, Teff, log g, [M/H], and v sin i, we constructed a separate neural network model that estimates that parameter independently. We then applied this method to 50 M dwarfs with high-resolution spectra taken with CARMENES (Calar Alto high-Resolution search for M dwarfs with Exo-earths with Near-infrared and optical Échelle Spectrographs), which operates in the visible (520–960 nm) and near-infrared (960–1710 nm) wavelength ranges simultaneously. Our results mostly agree with literature values for these stars within the errors, but exhibit large deviations in some cases, especially for [M/H], highlighting the importance of a better understanding of the synthetic gap.
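As a sketch of how such a per-parameter regressor might look, the PyTorch model below maps a normalized 1D spectrum to a single stellar parameter (e.g., Teff); one such model would be trained per parameter, matching the one-network-per-parameter setup described above. The layer sizes and the name SpectrumRegressor are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class SpectrumRegressor(nn.Module):
    """1D CNN that regresses one stellar parameter from a spectrum."""
    def __init__(self, n_pixels=4096):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the wavelength axis
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        # x: (batch, 1, n_pixels) normalized flux; returns one value per spectrum
        return self.head(self.features(x)).squeeze(-1)

# Usage: four independent models, e.g. one per parameter
models = {p: SpectrumRegressor() for p in ("Teff", "logg", "[M/H]", "vsini")}
```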


Author(s):  
Teerapong Panboonyuen ◽  
Kulsawasd Jitkajornwanich ◽  
Siam Lawawirojwong ◽  
Panu Srestasathiern ◽  
Peerapon Vateekul

In the remote sensing domain, it is crucial to automatically annotate semantics, e.g., river, building, forest, etc., on raster images. The Deep Convolutional Encoder-Decoder (DCED) network is the state of the art in semantic segmentation for remotely sensed images. However, its accuracy is still limited, since the network is not designed for remotely sensed images and the training data in this domain is deficient. In this paper, we propose a novel CNN for semantic segmentation of remote sensing corpora with three main contributions. First, we apply a recent CNN called the "Global Convolutional Network (GCN)", since it can capture different resolutions by extracting multi-scale features from different stages of the network; we further enhance it by deepening its backbone with a larger number of layers, which is suitable for medium-resolution remotely sensed images. Second, "Channel Attention" is introduced into our network to select the most discriminative filters (features). Third, "Domain-Specific Transfer Learning" is introduced to alleviate the data scarcity issue by utilizing other remotely sensed corpora with different resolutions as pre-training data. Experiments were conducted on two datasets: (i) medium-resolution data collected from the Landsat-8 satellite and (ii) very-high-resolution data from the "ISPRS Vaihingen Challenge Data Set". The results show that our networks outperform DCED in terms of F1 by 17.48% and 2.49% on the medium- and very-high-resolution corpora, respectively.
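A channel-attention module of the kind described here is commonly implemented as squeeze-and-excitation-style re-weighting of feature maps; the PyTorch sketch below shows one minimal, assumed variant (the class name ChannelAttention and the reduction ratio are ours, not the authors' code).

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature maps so the most discriminative
    filters are emphasized (squeeze-and-excitation style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # per-channel gating weights
```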


2019 ◽  
Vol 5 (4) ◽  
pp. 360-375 ◽  
Author(s):  
Fabien H. Wagner ◽  
Alber Sanchez ◽  
Yuliya Tarabalka ◽  
Rodolfo G. Lotte ◽  
Matheus P. Ferreira ◽  
...  

2019 ◽  
Vol 11 (24) ◽  
pp. 2970 ◽  
Author(s):  
Ziran Ye ◽  
Yongyong Fu ◽  
Muye Gan ◽  
Jinsong Deng ◽  
Alexis Comber ◽  
...  

Automated methods to extract buildings from very high resolution (VHR) remote sensing data have applications in a wide range of fields. Many convolutional neural network (CNN) based methods have been proposed and have achieved significant advances in the building extraction task. To refine predictions, many recent approaches fuse features from earlier layers of the CNN to introduce abundant spatial information, a strategy known as skip connections. However, reusing earlier features directly, without processing, can reduce the performance of the network. To address this problem, we propose a novel fully convolutional network (FCN) that adopts attention-based re-weighting to extract buildings from aerial imagery. Specifically, we consider the semantic gap between features from different stages and leverage the attention mechanism to bridge that gap prior to feature fusion. The inferred attention weights along the spatial and channel-wise dimensions make the low-level feature maps adaptive to the high-level feature maps in a target-oriented manner. Experimental results on three publicly available aerial imagery datasets show that the proposed model (RFA-UNet) achieves performance comparable or superior to other state-of-the-art models for building extraction.
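The sketch below illustrates one plausible form of attention-based re-weighting of a skip connection in PyTorch: low-level features are gated by channel and spatial weights inferred jointly from the low- and high-level features before fusion. Module and parameter names (AttentiveSkipFusion, low_ch, high_ch) are our assumptions, not the RFA-UNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveSkipFusion(nn.Module):
    """Gates low-level skip features with channel and spatial attention
    derived from both feature stages, then concatenates for fusion."""
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(low_ch + high_ch, low_ch), nn.Sigmoid())
        self.spatial_conv = nn.Conv2d(low_ch + high_ch, 1,
                                      kernel_size=7, padding=3)

    def forward(self, low, high):
        # Upsample high-level maps to the skip connection's resolution
        high_up = F.interpolate(high, size=low.shape[2:],
                                mode='bilinear', align_corners=False)
        cat = torch.cat([low, high_up], dim=1)
        b = cat.shape[0]
        # Channel weights for the low-level maps (global pooled context)
        cw = self.channel_fc(cat.mean(dim=(2, 3))).view(b, -1, 1, 1)
        # Spatial weights, one per position
        sw = torch.sigmoid(self.spatial_conv(cat))
        low_refined = low * cw * sw   # adapt low-level maps to high-level context
        return torch.cat([low_refined, high_up], dim=1)
```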

