Hierarchical Learning
Recently Published Documents

TOTAL DOCUMENTS: 205 (FIVE YEARS: 68)
H-INDEX: 17 (FIVE YEARS: 3)

Measurement, 2021, Vol. 186, pp. 110240
Author(s): Jianwei Liu, Yun Teng, Bo Shi, Xuefeng Ni, Weichu Xiao, ...

Electronics, 2021, Vol. 10 (22), pp. 2855
Author(s): Rabia Naseem, Faouzi Alaya Cheikh, Azeddine Beghdadi, Khan Muhammad, Muhammad Sajjad

Cross-modal medical imaging techniques are predominantly used in the clinical suite, and ensemble learning methods that combine cross-modal medical images add reliability to several medical image analysis tasks. Motivated by the performance of deep learning in several medical imaging tasks, this paper proposes a deep learning-based denoising method, the Cross-Modality Guided Denoising Network (CMGDNet), for removing Rician noise from T1-weighted (T1-w) Magnetic Resonance Images (MRI). CMGDNet uses a guidance image, a cross-modal (T2-w) image of better perceptual quality, to guide the model in denoising its noisy T1-w counterpart. This cross-modal combination allows the network to exploit complementary information present in both images and thereby improves the learning capability of the model. The proposed framework consists of two components: a Paired Hierarchical Learning (PHL) module and a Cross-Modal Assisted Reconstruction (CMAR) module. The PHL module uses a Siamese network to extract hierarchical features from the paired images, which are then combined in a densely connected manner in the CMAR module to reconstruct the final image. The impact of using registered guidance data is investigated with respect to both noise removal and retention of structural similarity with the original image. Experiments were conducted on two publicly available brain imaging datasets from the IXI database. Quantitative assessment using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM) shows that, across various noise levels, the proposed method achieves average gains of 4.7% in SSIM and 2.3% in FSIM over state-of-the-art denoising methods that do not integrate cross-modal image information.
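As a concrete illustration of the paired-encoder idea described in the abstract, the following PyTorch sketch applies a shared-weight (Siamese) encoder to a noisy T1-w slice and its T2-w guidance, then fuses the multi-scale features in a densely connected decoder. The layer widths, depths, fusion scheme, and residual output are illustrative assumptions, not the published CMGDNet architecture.

# Hedged sketch of cross-modal guided denoising in the spirit of CMGDNet:
# a Siamese encoder extracts hierarchical features from the noisy T1-w image
# and its T2-w guidance, and a densely connected decoder fuses them to
# reconstruct the denoised T1-w image. All widths/depths are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class SiameseEncoder(nn.Module):
    """Shared-weight encoder producing features at three scales (PHL-style)."""

    def __init__(self, base=32):
        super().__init__()
        self.stage1 = conv_block(1, base)
        self.stage2 = conv_block(base, base * 2)
        self.stage3 = conv_block(base * 2, base * 4)

    def forward(self, x):
        f1 = self.stage1(x)                      # full resolution
        f2 = self.stage2(F.avg_pool2d(f1, 2))    # 1/2 resolution
        f3 = self.stage3(F.avg_pool2d(f2, 2))    # 1/4 resolution
        return [f1, f2, f3]


class DenseFusionDecoder(nn.Module):
    """Densely connected fusion of the paired features (CMAR-style)."""

    def __init__(self, base=32):
        super().__init__()
        chans = [base, base * 2, base * 4]
        # Each scale fuses the concatenated T1/T2 features of that scale.
        self.fuse = nn.ModuleList(conv_block(2 * c, base) for c in chans)
        self.out = nn.Conv2d(3 * base, 1, kernel_size=3, padding=1)

    def forward(self, feats_t1, feats_t2, size):
        fused = []
        for fuse, f1, f2 in zip(self.fuse, feats_t1, feats_t2):
            f = fuse(torch.cat([f1, f2], dim=1))
            fused.append(F.interpolate(f, size=size, mode="bilinear",
                                       align_corners=False))
        # Dense concatenation of all scales before the final reconstruction.
        return self.out(torch.cat(fused, dim=1))


class GuidedDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = SiameseEncoder()
        self.decoder = DenseFusionDecoder()

    def forward(self, noisy_t1, guide_t2):
        feats_t1 = self.encoder(noisy_t1)        # same weights for both inputs
        feats_t2 = self.encoder(guide_t2)
        residual = self.decoder(feats_t1, feats_t2, noisy_t1.shape[-2:])
        return noisy_t1 + residual               # residual denoising


if __name__ == "__main__":
    t1 = torch.randn(1, 1, 64, 64)   # noisy T1-w slice
    t2 = torch.randn(1, 1, 64, 64)   # registered T2-w guidance slice
    print(GuidedDenoiser()(t1, t2).shape)        # torch.Size([1, 1, 64, 64])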


Author(s): G. Bellitto, F. Proietto Salanitri, S. Palazzo, F. Rundo, D. Giordano, ...

Abstract: In this work, we propose a 3D fully convolutional architecture for video saliency prediction that employs hierarchical supervision on intermediate maps (referred to as conspicuity maps) generated from features extracted at different abstraction levels. We equip the base hierarchical learning mechanism with two techniques, one for domain adaptation and one for domain-specific learning. For the former, we encourage the model to learn hierarchical general features in an unsupervised manner using gradient reversal at multiple scales, which enhances generalization on datasets for which no annotations are provided during training. For domain specialization, we employ domain-specific operations (namely priors, smoothing, and batch normalization) that specialize the learned features on individual datasets in order to maximize performance. Our experiments show that the proposed model yields state-of-the-art accuracy on supervised saliency prediction. When the base hierarchical model is equipped with the domain-specific modules, performance improves further, outperforming state-of-the-art models on three out of five metrics on the DHF1K benchmark and achieving second-best results on the other two. When we instead test it in an unsupervised domain adaptation setting, by enabling the hierarchical gradient reversal layers, we obtain performance comparable to the supervised state of the art. Source code, trained models, and example outputs are publicly available at https://github.com/perceivelab/hd2s.
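The gradient reversal mechanism mentioned above is a standard operator: identity in the forward pass, sign-flipped (and scaled) gradient in the backward pass, which pushes features toward domain invariance when trained against a domain classifier. The PyTorch sketch below shows only this core operator; how the authors attach it at multiple scales is documented in their repository, and the lambd scaling here is an assumed hyperparameter.

# Minimal sketch of a gradient reversal layer (GRL) for unsupervised
# domain adaptation. Forward pass: identity. Backward pass: gradient is
# negated and scaled by lambd before flowing back into the encoder.
import torch


class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient; no gradient for lambd itself.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradientReversal.apply(x, lambd)


if __name__ == "__main__":
    feats = torch.randn(4, 8, requires_grad=True)
    out = grad_reverse(feats, lambd=0.5).sum()
    out.backward()
    print(feats.grad[0, 0])   # -0.5: the gradient arrives with reversed sign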


Author(s): Nicolas Bougie, Ryutaro Ichise

Abstract: Recent success in scaling deep reinforcement learning (DRL) algorithms to complex problems has been driven by well-designed extrinsic rewards, which limits their applicability to the many real-world tasks where rewards are naturally extremely sparse. One solution to this problem is to introduce human guidance to drive the agent's learning. Although low-level demonstrations are a promising approach, it has been shown that such guidance can be difficult for experts to provide, since some tasks require a large number of high-quality demonstrations. In this work, we explore human guidance in the form of high-level preferences between sub-goals, leading to drastic reductions in both human effort and the cost of exploration. We design a novel hierarchical reinforcement learning method that introduces non-expert human preferences at the high level and uses curiosity to drastically speed up the convergence of sub-policies toward any sub-goal. We further propose a curiosity-based strategy to automatically discover sub-goals. We evaluate the proposed method on 2D navigation tasks, robotic control tasks, and image-based video games (Atari 2600), which have high-dimensional observations, sparse rewards, and complex state dynamics. The experimental results show that the proposed method learns significantly faster than traditional hierarchical RL methods and drastically reduces the amount of human effort required compared to standard imitation learning approaches.
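As a rough sketch of the two ingredients the abstract combines, the Python snippet below fits a Bradley-Terry preference model over sub-goals from pairwise human comparisons (the high-level guidance) and adds a simple count-based curiosity bonus to the sparse extrinsic reward (standing in for the paper's learned curiosity module). The sub-goal names, learning rate, and bonus form are assumptions for illustration, not the authors' implementation.

# Illustrative sketch: preference-scored sub-goal selection plus a
# curiosity-shaped reward for the low-level policy. Not the paper's code.
import math
from collections import defaultdict

import numpy as np


class SubGoalPreferences:
    """Scores sub-goals from pairwise human preferences (Bradley-Terry)."""

    def __init__(self, subgoals, lr=0.1):
        self.scores = {g: 0.0 for g in subgoals}
        self.lr = lr

    def update(self, preferred, other):
        """A human indicated `preferred` over `other`."""
        p = 1.0 / (1.0 + math.exp(self.scores[other] - self.scores[preferred]))
        self.scores[preferred] += self.lr * (1.0 - p)
        self.scores[other] -= self.lr * (1.0 - p)

    def pick(self):
        # Softmax sampling over preference scores acts as the high-level policy.
        vals = np.array(list(self.scores.values()))
        probs = np.exp(vals - vals.max())
        probs /= probs.sum()
        return np.random.choice(list(self.scores), p=probs)


class CuriosityBonus:
    """Count-based novelty bonus standing in for a learned curiosity module."""

    def __init__(self, scale=0.1):
        self.counts = defaultdict(int)
        self.scale = scale

    def __call__(self, state):
        self.counts[state] += 1
        return self.scale / math.sqrt(self.counts[state])


if __name__ == "__main__":
    prefs = SubGoalPreferences(["reach_key", "open_door", "reach_exit"])
    prefs.update("reach_key", "reach_exit")      # one non-expert comparison
    curiosity = CuriosityBonus()
    subgoal = prefs.pick()                        # high-level decision
    env_reward, state = 0.0, (2, 3)               # sparse extrinsic reward
    shaped = env_reward + curiosity(state)        # reward for the sub-policy
    print(subgoal, round(shaped, 3))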


2021
Author(s): Shanqi Liu, Licheng Wen, Jinhao Cui, Xuemeng Yang, Junjie Cao, ...

2021
Author(s): Li-Cheng Xu, Shuo-Qing Zhang, Xin Li, Miao-Jiong Tang, Pei-Pei Xie, ...

2021, Vol. 2021 (1), pp. 11902
Author(s): Oliver Baumann, Kannan Srikanth, Tiberiu Sergiu Ungureanu
