Combining Unsupervised and Supervised Methods for Lesion Segmentation

Author(s):  
Tim Jerman ◽  
Alfiia Galimzianova ◽  
Franjo Pernuš ◽  
Boštjan Likar ◽  
Žiga Špiclin
2019 ◽  
Vol 14 (4) ◽  
pp. 305-313 ◽  
Author(s):  
Suresh Chandra Satapathy ◽  
Steven Lawrence Fernandes ◽  
Hong Lin

Background: Stroke is one of the major causes of temporary or permanent disability. A stroke typically originates in the brain as a neurological deficit, and this kind of abnormality can be detected by scrutinizing the affected brain region. Magnetic Resonance Imaging (MRI) is the most widely used imaging procedure for recording the interior of the brain to support visual inspection. Objective: In the proposed work, a semi-automated examination procedure is presented to assess the region and severity of the stroke lesion in MRI. Method: A recently proposed heuristic approach, the Social Group Optimization (SGO) algorithm, is used to pre-process the test image with a chosen image multi-thresholding procedure. A chosen segmentation procedure is then applied in the post-processing stage to extract the stroke lesion from the pre-processed image. Results: The pre-processing step is performed with well-known thresholding approaches, namely Shannon's entropy, Kapur's entropy and Otsu's function. Similarly, the post-processing step is performed with widely used procedures, namely the level set, active contour and watershed algorithms. Conclusion: The proposed procedure is evaluated on the benchmark Ischemic Stroke Lesion Segmentation (ISLES 2015) challenge database. The experimental results confirm that Shannon's approach combined with level set (LS) segmentation offers superior average values compared with the other approaches considered in this work.
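
As a rough illustration of the threshold-then-segment pipeline described above, the sketch below combines a global Otsu threshold with a marker-controlled watershed using scikit-image. It is a minimal sketch only: the SGO-tuned Shannon/Kapur entropy thresholds and the level-set post-processing evaluated in the paper are not reproduced, and the function name and parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, morphology, segmentation

def segment_lesion(mri_slice):
    """Threshold a 2-D MRI slice, then refine the mask with watershed."""
    # Pre-processing: global threshold (Otsu shown here; the paper also
    # evaluates Shannon's and Kapur's entropy thresholds tuned by SGO).
    thresh = filters.threshold_otsu(mri_slice)
    mask = mri_slice > thresh

    # Remove small speckles before post-processing.
    mask = morphology.remove_small_objects(mask, min_size=64)

    # Post-processing: marker-controlled watershed on the distance map
    # (the paper's best pipeline uses level set segmentation instead).
    distance = ndi.distance_transform_edt(mask)
    markers = measure.label(morphology.local_maxima(distance))
    labels = segmentation.watershed(-distance, markers, mask=mask)
    return labels
```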


2018 ◽  
Vol 11 (1) ◽  
pp. 286-295
Author(s):  
Sunil Melingi ◽  
Vijayalakshmi Vivekanand

2021 ◽  
Vol 14 (2) ◽  
pp. 201-214
Author(s):  
Danilo Croce ◽  
Giuseppe Castellucci ◽  
Roberto Basili

In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP), mainly due to their ability to reach high performance while relying on very simple input representations, i.e., raw tokens. One drawback of deep architectures is the large amount of annotated data required for effective training. In Machine Learning this problem is usually mitigated by semi-supervised methods or, more recently in the context of deep architectures, by Transfer Learning. One promising method to enable semi-supervised learning in deep architectures has been formalized within Semi-Supervised Generative Adversarial Networks (SS-GANs) in the context of Computer Vision. In this paper, we adopt the SS-GAN framework to enable semi-supervised learning for NLP. We demonstrate how an SS-GAN can boost the performance of simple architectures when operating on expressive low-dimensional embeddings; these are derived by combining the unsupervised approximation of linguistic Reproducing Kernel Hilbert Spaces with the so-called Universal Sentence Encoders. We experimentally evaluate the proposed approach on a semantic classification task, i.e., Question Classification, considering different sizes of training material and different numbers of target classes. By applying this adversarial scheme to a simple Multi-Layer Perceptron, a classifier trained on a subset derived from 1% of the original training material achieves 92% accuracy. Moreover, on a complex classification scheme, e.g., involving 50 classes, the proposed method outperforms state-of-the-art alternatives such as BERT.
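
A minimal sketch of the SS-GAN idea applied to fixed sentence embeddings is given below (not the authors' implementation): a generator maps noise to fake embeddings, and an MLP discriminator classifies inputs into the K task classes plus one additional "fake" class, so unlabeled examples still contribute to training. The embedding size, layer widths, and class count are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, NOISE_DIM, NUM_CLASSES = 512, 100, 6  # hypothetical sizes

class Generator(nn.Module):
    """Maps random noise to a fake sentence embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, EMB_DIM))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """MLP classifier over the K real classes plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(EMB_DIM, 256), nn.LeakyReLU(0.2))
        self.head = nn.Linear(256, NUM_CLASSES + 1)

    def forward(self, x):
        return self.head(self.body(x))

def discriminator_loss(d, labeled, labels, unlabeled, fake):
    # Supervised term: labeled embeddings must fall in their true class.
    sup = F.cross_entropy(d(labeled), labels)
    # Unsupervised terms: unlabeled real embeddings should not be 'fake',
    # generated embeddings should be classified as 'fake' (index K).
    fake_idx = NUM_CLASSES
    p_real_fake = F.softmax(d(unlabeled), dim=1)[:, fake_idx]
    p_gen_fake = F.softmax(d(fake), dim=1)[:, fake_idx]
    unsup = (-torch.log(1 - p_real_fake + 1e-8).mean()
             - torch.log(p_gen_fake + 1e-8).mean())
    return sup + unsup
```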


2021 ◽  
Vol 26 (1) ◽  
pp. 93-102
Author(s):  
Yue Zhang ◽  
Shijie Liu ◽  
Chunlai Li ◽  
Jianyu Wang

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5172
Author(s):  
Yuying Dong ◽  
Liejun Wang ◽  
Shuli Cheng ◽  
Yongming Li

Considerable research and surveys indicate that skin lesions are an early symptom of skin cancer, and segmentation of skin lesions remains an active research topic. When data augmentation is applied, dermatological datasets for skin lesion segmentation lead to a large number of model parameters, limiting the application of smart assisted medicine in practice. Hence, this paper proposes an effective feedback attention network (FAC-Net). The network is equipped with a feedback fusion block (FFB) and an attention mechanism block (AMB); by combining these two modules, we obtain richer and more specific feature maps without data augmentation. We performed extensive experiments on public datasets (ISIC2018, ISBI2017, ISBI2016), using metrics such as the Jaccard index (JA) and Dice coefficient (DC) to evaluate the segmentation results. On the ISIC2018 dataset we obtained a DC of 91.19% and a JA of 83.99%; compared with the baseline network, both metrics improved by more than 1%. The metrics also improved on the other two datasets. The experiments demonstrate that, without any dataset augmentation, our lightweight model achieves better segmentation performance than most deep learning architectures.
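
Since the paper reports results with the Dice coefficient (DC) and Jaccard index (JA), the following NumPy sketch shows how these two overlap metrics are typically computed from binary masks. The function name and epsilon smoothing are illustrative assumptions, not part of FAC-Net.

```python
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-7):
    """Dice coefficient (DC) and Jaccard index (JA) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dc = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    ja = (inter + eps) / (union + eps)
    return dc, ja
```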


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 35834-35845
Author(s):  
Limin Xia ◽  
Jiahui Zhu ◽  
Zhimin Yu

Technologies ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 2
Author(s):  
Ashish Jaiswal ◽  
Ashwin Ramesh Babu ◽  
Mohammad Zaki Zadeh ◽  
Debapriya Banerjee ◽  
Fillia Makedon

Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make meaningful progress.
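
As an illustration of the contrastive objective described above (pulling embeddings of two augmented views of the same sample together while pushing other samples apart), the sketch below implements a SimCLR-style NT-Xent loss in PyTorch. It is a generic example under assumed batch conventions, not a specific method from the survey.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two batches of view embeddings.

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2N, D) stacked views
    sim = torch.mm(z, z.t()) / temperature      # pairwise cosine similarities
    n = z1.size(0)
    # Mask self-similarities so a view is never its own positive.
    sim.fill_diagonal_(float('-inf'))
    # The positive for index i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```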


Author(s):  
Tanvir Mahmud ◽  
Md Awsafur Rahman ◽  
Shaikh Anowarul Fattah ◽  
Sun-Yuan Kung
