Supervised and Semi-supervised Methods for Abdominal Organ Segmentation: A Review

2021 · Vol 18 (6) · pp. 887-914
Author(s): Isaac Baffour Senkyire, Zhe Liu
2021 · Vol 14 (2) · pp. 201-214
Author(s): Danilo Croce, Giuseppe Castellucci, Roberto Basili

In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP), mainly because of their ability to achieve high performance from very simple input representations, i.e., raw tokens. One drawback of deep architectures is the large amount of annotated data required for effective training. In Machine Learning this problem is usually mitigated by semi-supervised methods or, more recently in the context of deep architectures, by Transfer Learning. One promising recent method for enabling semi-supervised learning in deep architectures has been formalized within Semi-Supervised Generative Adversarial Networks (SS-GANs) in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning in NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures operating on expressive low-dimensional embeddings, derived by combining the unsupervised approximation of linguistic Reproducing Kernel Hilbert Spaces with the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, considering different sizes of training material and different numbers of target classes. By applying this adversarial scheme to a simple Multi-Layer Perceptron, a classifier trained on a subset derived from 1% of the original training material achieves 92% accuracy. Moreover, on a complex classification scheme, e.g., one involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
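For readers unfamiliar with the SS-GAN objective, the sketch below illustrates the standard K+1-class discriminator loss from Semi-Supervised GANs applied to sentence embeddings, as in the setting this abstract describes. It is a minimal PyTorch sketch under assumed names and dimensions, not the authors' released code.

```python
# Minimal sketch of the SS-GAN discriminator objective (hypothetical names,
# PyTorch assumed): K real classes plus one extra "fake" class.
import torch
import torch.nn.functional as F
from torch import nn

K, EMB_DIM = 50, 512                      # e.g., 50 question classes, 512-d embeddings

discriminator = nn.Sequential(            # a simple MLP over sentence embeddings
    nn.Linear(EMB_DIM, 256), nn.ReLU(), nn.Linear(256, K + 1)
)

def d_loss(x_lab, y, x_unl, x_gen, eps=1e-8):
    """x_*: batches of sentence embeddings; y: gold labels in [0, K)."""
    # Supervised term: labeled examples classified over the K real classes.
    loss_sup = F.cross_entropy(discriminator(x_lab)[:, :K], y)
    # Unsupervised "real" term: unlabeled examples should not look fake.
    p_fake_unl = F.softmax(discriminator(x_unl), dim=1)[:, K]
    loss_unl = -torch.log(1.0 - p_fake_unl + eps).mean()
    # Unsupervised "fake" term: generated embeddings should be detected as fake.
    p_fake_gen = F.softmax(discriminator(x_gen), dim=1)[:, K]
    loss_fake = -torch.log(p_fake_gen + eps).mean()
    return loss_sup + loss_unl + loss_fake
```

A generator producing fake embeddings is trained in alternation to fool the fake-class detector, so unlabeled data helps shape the decision boundary at essentially no annotation cost.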


IEEE Access · 2021 · Vol 9 · pp. 35834-35845
Author(s): Limin Xia, Jiahui Zhu, Zhimin Yu

Technologies · 2020 · Vol 9 (1) · pp. 2
Author(s): Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon

Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It adopts self-defined pseudolabels as supervision and uses the learned representations for several downstream tasks. In particular, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims to embed augmented versions of the same sample close to each other while pushing away embeddings of different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains the pretext tasks commonly used in a contrastive learning setup, followed by the different architectures proposed so far. Next, we present a performance comparison of different methods on multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we discuss the limitations of current methods and the further techniques and future directions needed to make meaningful progress.
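The "pull positives together, push negatives apart" objective described above is most often instantiated as the NT-Xent (normalized temperature-scaled cross-entropy) loss popularized by SimCLR. Below is a minimal, self-contained PyTorch sketch of that loss; the function name and temperature value are illustrative assumptions.

```python
# Minimal NT-Xent sketch: the contrastive objective popularized by SimCLR.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N unit vectors
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # a sample is not its own pair
    n = z1.size(0)
    # Row i's positive is the other augmented view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)
```

Each row's cross-entropy treats the matching view as the single positive among 2N−2 negatives; the temperature controls how sharply hard negatives are penalized.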


2015 · Vol 34 (4) · pp. S176-S177
Author(s): S. Fedson, C. Murks, L. Potter, S. Qamar, T. Riley, ...

Author(s): Marina Serper, Chung-Heng Liu, Emily A. Blumberg, Alexander E. Burdzy, Stephanie Veasey, ...

2021
Author(s): Hyunjae Lee, Jaewoong Yun, Hyunjin Choi, Seongho Joe, Youngjune L. Gwon

2006 · Vol 80 · pp. S19
Author(s): B. Wysocka, Z. Kassam, G. Lockwood, L. Dawson, J. Brierley, ...

Author(s): Xuanlu Xiang, Zhipeng Wang, Zhicheng Zhao, Fei Su

In this paper, aiming at two key problems of instance-level image retrieval, i.e., the distinctiveness of the image representation and the generalization ability of the model, we propose a novel deep architecture: the Multiple Saliency and Channel Sensitivity Network (MSCNet). Specifically, to obtain distinctive global descriptors, an attention-based multiple saliency learning scheme is first presented to highlight important details of the image; then a simple but effective channel sensitivity module based on the Gram matrix is designed to boost channel discrimination and suppress redundant information. Additionally, in contrast to most existing feature aggregation methods, which employ pre-trained deep networks, MSCNet can be trained in two modes: the first is an unsupervised manner with an instance loss, and the second is a supervised manner that combines classification and ranking losses while relying on only very limited training data. Experimental results on several public benchmark datasets, i.e., Oxford Buildings, Paris Buildings, and Holidays, indicate that the proposed MSCNet outperforms state-of-the-art unsupervised and supervised methods.
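To make the Gram-matrix idea concrete, the hypothetical sketch below reweights feature-map channels by their mutual correlation, down-weighting redundant channels and boosting distinctive ones. It illustrates the general principle behind a channel sensitivity module rather than reproducing MSCNet's exact design (PyTorch, assumed shapes and names).

```python
# Hypothetical Gram-matrix-based channel reweighting, sketching the idea
# behind a channel sensitivity module (not MSCNet's exact formulation).
import torch
from torch import nn

class ChannelSensitivity(nn.Module):
    def forward(self, x):                    # x: (B, C, H, W) feature maps
        b, c, h, w = x.shape
        f = x.view(b, c, h * w)              # flatten spatial dimensions
        gram = torch.bmm(f, f.transpose(1, 2)) / (h * w)  # (B, C, C) channel correlations
        # A channel highly correlated with many others carries redundant
        # information, so it is down-weighted; distinctive channels are boosted.
        redundancy = gram.mean(dim=2)        # (B, C) mean correlation per channel
        weights = torch.softmax(-redundancy, dim=1) * c   # inverse-correlation weights
        return x * weights.view(b, c, 1, 1)  # reweighted feature maps
```

For example, `ChannelSensitivity()(torch.randn(2, 256, 14, 14))` returns a tensor of the same shape with redundant channels suppressed before global aggregation into a descriptor.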

