Global context-aware cervical cell detection with soft scale anchor matching

2021 ◽  
Vol 204 ◽  
pp. 106061
Author(s):  
Yixiong Liang ◽  
Changli Pan ◽  
Wanxin Sun ◽  
Qing Liu ◽  
Yun Du

2021 ◽  
Vol 175 ◽  
pp. 353-365
Author(s):  
Qiqi Zhu ◽  
Yanan Zhang ◽  
Lizeng Wang ◽  
Yanfei Zhong ◽  
Qingfeng Guan ◽  
...  

2020 ◽  
Vol 79 (17-18) ◽  
pp. 12349-12371
Author(s):  
Qingshan She ◽  
Gaoyuan Mu ◽  
Haitao Gan ◽  
Yingle Fan

2020 ◽  
Vol 34 (07) ◽  
pp. 10599-10606
Author(s):  
Zuyao Chen ◽  
Qianqian Xu ◽  
Runmin Cong ◽  
Qingming Huang

Deep convolutional neural networks have achieved competitive performance in salient object detection, where learning effective and comprehensive features plays a critical role. Most previous works adopt multi-level feature integration yet ignore the gap between different features. In addition, high-level features are progressively diluted as they are passed along the top-down pathway. To remedy these issues, we propose a novel network named GCPANet that effectively integrates low-level appearance features, high-level semantic features, and global context features through progressive context-aware Feature Interweaved Aggregation (FIA) modules and generates the saliency map in a supervised way. Moreover, a Head Attention (HA) module is used to reduce information redundancy and enhance the top-layer features by leveraging spatial and channel-wise attention, and a Self Refinement (SR) module is utilized to further refine and enhance the input features. Furthermore, we design a Global Context Flow (GCF) module to generate global context information at different stages, which aims to learn the relationships among different salient regions and alleviate the dilution effect of high-level features. Experimental results on six benchmark datasets demonstrate that the proposed approach outperforms state-of-the-art methods both quantitatively and qualitatively.
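The Head Attention module described above combines channel-wise and spatial attention to suppress redundant responses in the top-layer features. The general mechanism can be sketched in NumPy as follows; the squeeze-and-excite-style gating, the sigmoid weighting, and all shapes are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def channel_attention(feat):
    """Reweight channels by a sigmoid gate on their global average response
    (squeeze-and-excite style). feat: (C, H, W) feature map. Illustrative only."""
    squeeze = feat.mean(axis=(1, 2))            # (C,) global average pooling
    weights = 1.0 / (1.0 + np.exp(-squeeze))    # per-channel gate in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    """Reweight spatial positions by a sigmoid gate on the cross-channel mean."""
    avg = feat.mean(axis=0)                     # (H, W) channel-wise average
    mask = 1.0 / (1.0 + np.exp(-avg))           # per-position gate in (0, 1)
    return feat * mask[None, :, :]

def head_attention(feat):
    """Apply channel attention, then spatial attention; shape is preserved."""
    return spatial_attention(channel_attention(feat))

feat = np.random.randn(8, 16, 16)
out = head_attention(feat)
print(out.shape)  # (8, 16, 16)
```

Because both gates lie in (0, 1), the module can only attenuate activations, never amplify them; in the paper's setting such gating is learned rather than fixed.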


Author(s):  
Wendong Zhang ◽  
Junwei Zhu ◽  
Ying Tai ◽  
Yunbo Wang ◽  
Wenqing Chu ◽  
...  

Recent advances in image inpainting have shown impressive results for generating plausible visual details on rather simple backgrounds. For complex scenes, however, it remains challenging to restore reasonable content, as the contextual information within the missing regions tends to be ambiguous. To tackle this problem, we introduce pretext tasks that are semantically meaningful for estimating the missing content. In particular, we perform knowledge distillation on pretext models and adapt the features to image inpainting. The learned semantic priors ought to be partially invariant between the high-level pretext task and low-level image inpainting, which not only helps to understand the global context but also provides structural guidance for the restoration of local textures. Based on the semantic priors, we further propose a context-aware image inpainting model that adaptively integrates global semantics and local features in a unified image generator. The semantic learner and the image generator are trained in an end-to-end manner. We name the model SPL to highlight its ability to learn and leverage semantic priors. It achieves state-of-the-art results on the Places2, CelebA, and Paris StreetView datasets.
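The feature-distillation step described above can be illustrated with a minimal NumPy sketch: a learned adapter maps the inpainting model's features toward the frozen pretext model's features, and an L2 distillation loss drives the match. The linear adapter, the loss form, the toy dimensions, and the single hand-derived gradient step are all assumptions for illustration, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def distill_loss(student_feat, teacher_feat, adapter):
    """Mean squared error between adapted student features (N, D_s) @ (D_s, D_t)
    and frozen teacher (pretext-task) features (N, D_t)."""
    adapted = student_feat @ adapter            # linear adapter (illustrative)
    return np.mean((adapted - teacher_feat) ** 2)

# Toy setup: 4 samples, student dim 6, teacher dim 3.
student = rng.normal(size=(4, 6))
teacher = rng.normal(size=(4, 3))
adapter = rng.normal(size=(6, 3)) * 0.1

# One gradient step on the adapter; the gradient of the mean-squared loss is
# (2 / num_elements) * S^T (S A - T).
adapted = student @ adapter
grad = 2.0 / adapted.size * student.T @ (adapted - teacher)
adapter_new = adapter - 0.1 * grad

before = distill_loss(student, teacher, adapter)
after = distill_loss(student, teacher, adapter_new)
print(after < before)  # the step reduces the distillation loss
```

In the actual model the adapter and generator are neural networks trained end-to-end; this sketch only shows the shape of the distillation objective.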


Author(s):  
Ruibing Hou ◽  
Bingpeng Ma ◽  
Hong Chang ◽  
Xinqian Gu ◽  
Shiguang Shan ◽  
...  
