Weakly Supervised Representation Learning for Endomicroscopy Image Analysis

Author(s):  
Yun Gu ◽  
Khushi Vyas ◽  
Jie Yang ◽  
Guang-Zhong Yang
2020 ◽  
pp. 1-51
Author(s):  
Ivan Vulić ◽  
Simon Baker ◽  
Edoardo Maria Ponti ◽  
Ulla Petti ◽  
Ira Leviant ◽  
...  

We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering data sets for 12 typologically diverse languages, including major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-resourced ones (e.g., Welsh, Kiswahili). Each language data set is annotated for the lexical relation of semantic similarity and contains 1,888 semantically aligned concept pairs, providing representative coverage of word classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity intervals, lexical fields, and concreteness levels. Additionally, owing to the alignment of concepts across languages, we provide a suite of 66 crosslingual semantic similarity data sets. Because of its extensive size and language coverage, Multi-SimLex provides entirely novel opportunities for experimental evaluation and analysis. On its monolingual and crosslingual benchmarks, we evaluate and analyze a wide array of recent state-of-the-art monolingual and crosslingual representation models, including static and contextualized word embeddings (such as fastText, monolingual and multilingual BERT, XLM), externally informed lexical representations, as well as fully unsupervised and (weakly) supervised crosslingual word embeddings. We also present a step-by-step data set creation protocol for building consistent, Multi-SimLex-style resources for additional languages. We make these contributions, namely the public release of the Multi-SimLex data sets, their creation protocol, strong baseline results, and in-depth analyses that can help guide future developments in multilingual lexical semantics and representation learning, available via a Web site that will encourage community effort in further expanding Multi-SimLex to many more languages. Such a large-scale semantic resource could inspire significant further advances in NLP across languages.
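As a concrete illustration of how such a benchmark is typically used, the minimal sketch below compares cosine similarities from a set of word vectors against human similarity ratings using Spearman's rank correlation, the standard intrinsic evaluation for resources of this kind. The word pairs, ratings, and random vectors are illustrative placeholders rather than actual Multi-SimLex data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical slice of a Multi-SimLex-style data set: word pairs with human similarity ratings.
pairs = [("car", "automobile", 5.7), ("car", "bicycle", 2.1), ("happy", "glad", 5.2),
         ("happy", "angry", 0.8), ("run", "walk", 2.9)]

# Stand-in static embeddings; in practice these would be loaded from fastText or pooled
# from a contextual model such as multilingual BERT or XLM.
rng = np.random.default_rng(0)
vocab = {w for a, b, _ in pairs for w in (a, b)}
emb = {w: rng.normal(size=300) for w in vocab}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(emb[a], emb[b]) for a, b, _ in pairs]
human_scores = [score for _, _, score in pairs]

# Standard intrinsic evaluation: Spearman's rank correlation between model similarities
# and human similarity ratings.
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.3f}")
```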


Author(s):  
Penghui Wei ◽  
Wenji Mao ◽  
Guandan Chen

Analyzing public attitudes plays an important role in opinion mining systems. Stance detection aims to determine from a text whether its author is in favor of, against, or neutral towards a given target. One challenge of this task is that a text may not explicitly express an attitude towards the target, yet existing approaches utilize target content alone to build models. Moreover, although weakly supervised approaches have been proposed to ease the burden of manually annotating large-scale training data, such approaches are confronted with the problem of noisy labels. To address these two issues, in this paper we propose a Topic-Aware Reinforced Model (TARM) for weakly supervised stance detection. Our model consists of two complementary components: (1) a detection network that incorporates target-related topic information into representation learning to identify stance effectively; (2) a policy network that learns to eliminate noisy instances from auto-labeled data based on off-policy reinforcement learning. The two networks are alternately optimized to improve each other's performance. Experimental results demonstrate that our proposed model, TARM, outperforms state-of-the-art approaches.
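The abstract does not spell out implementation details, but the instance-selection idea behind the policy network can be illustrated with a minimal sketch: a simple on-policy REINFORCE loop (standing in for the paper's off-policy formulation) learns to keep or drop auto-labeled instances, rewarded by how well a toy logistic-regression "detection network" then performs on clean validation data. All components below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic auto-labeled data: features X and weak labels y_weak, 30% of which are flipped.
n, d = 500, 8
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y_clean = (X @ w_true > 0).astype(int)
noise = rng.random(n) < 0.3
y_weak = np.where(noise, 1 - y_clean, y_clean)

# Small clean validation set, used only to compute the reward signal.
X_val = rng.normal(size=(100, d))
y_val = (X_val @ w_true > 0).astype(int)

def train_detector(X_tr, y_tr, epochs=200, lr=0.1):
    """Toy logistic-regression stand-in for the detection network."""
    w = np.zeros(X_tr.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X_tr @ w))
        w -= lr * X_tr.T @ (p - y_tr) / len(y_tr)
    return w

def accuracy(w, X_eval, y_eval):
    return float(np.mean(((X_eval @ w) > 0).astype(int) == y_eval))

# Policy: per-instance keep probability from a linear scorer over instance features.
theta = np.zeros(d)
baseline, reward = 0.0, 0.0
for step in range(30):
    keep_prob = 1.0 / (1.0 + np.exp(-X @ theta))
    keep = rng.random(n) < keep_prob            # sample keep/drop actions per instance
    if keep.sum() < 10:
        continue
    w = train_detector(X[keep], y_weak[keep])
    reward = accuracy(w, X_val, y_val)          # reward: detector quality on clean data
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward
    # REINFORCE update: raise keep probability for kept instances when reward beats the baseline.
    theta += X.T @ ((keep.astype(float) - keep_prob) * advantage) / n

print(f"validation accuracy after instance selection: {reward:.3f}")
```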


2019 ◽  
Vol 58 ◽  
pp. 101535 ◽  
Author(s):  
Agisilaos Chartsias ◽  
Thomas Joyce ◽  
Giorgos Papanastasiou ◽  
Scott Semple ◽  
Michelle Williams ◽  
...  

2021 ◽  
Author(s):  
Zhangwei (Alex) Yang

Deep convolutional neural networks are rapidly transforming computer vision, improving accuracy and performance while pursuing higher-level, interpretable object recognition. Superpixel-based methodologies have been used in conventional computer vision research, where their efficient representation offers clear advantages. In contemporary computer vision research driven by deep neural networks, superpixel-based approaches mainly rely on oversegmentation to provide a more efficient representation of the imagery data, especially when the computation is too expensive in time or memory to perform pairwise similarity regularization or complex graphical probabilistic inference. In this dissertation, we propose a novel superpixel-enabled deep neural network paradigm that relaxes some of the prior assumptions of conventional superpixel-based methodologies and explores its capabilities in the context of advanced deep convolutional neural networks. This produces novel neural network architectures that achieve higher-level object relation modeling, weakly supervised segmentation, and high explainability, and that facilitate insightful visualizations. The approach has the advantage of being an efficient representation of the visual signal and can dissect relevant object components from background noise by spatially reorganizing visual features. Specifically, we have created superpixel models that combine graph neural network techniques and multiple-instance learning to achieve weakly supervised object detection and to generate precise object boundaries without pixel-level training labels. This dissection, and the subsequent learning by the architecture, promotes explainable models whereby human users can see the parts of the objects that led to recognition. Most importantly, the natural result of this neural design goes beyond abstract rectangular bounds of an object occurrence (e.g., a bounding box or image chip) and instead approaches efficient parts-based segmented recognition. The approach has been tested successfully on commercial remote sensing satellite imagery. Additionally, we have developed highly efficient monocular indoor depth estimation based on superpixel feature extraction. Furthermore, we have demonstrated state-of-the-art weakly supervised object detection performance on two contemporary benchmark data sets, MS-COCO and VOC 2012. In the future, deep learning techniques based on superpixel-enabled image analysis can be further optimized in accuracy and computational performance, and it will also be interesting to evaluate them in other research domains, such as those involving medical, infrared, or hyperspectral imagery.
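As a rough sketch of the superpixel-plus-MIL idea described above (not the dissertation's actual architecture), the example below oversegments an image with SLIC, pools per-pixel features over each superpixel, and treats the superpixels as instances in an image-level bag whose score is a smooth max; the RGB features and random linear scorer are placeholders for learned deep and graph-based components.

```python
import numpy as np
from skimage import data, segmentation

# Oversegment a sample RGB image into superpixels with SLIC.
image = data.astronaut()                                    # (H, W, 3) uint8
segments = segmentation.slic(image, n_segments=200, compactness=10, start_label=0)
n_sp = int(segments.max()) + 1

# Pool per-pixel features over each superpixel. Plain normalized RGB is used here purely
# for illustration; in the dissertation these would be deep CNN feature maps.
feats = image.reshape(-1, 3).astype(np.float64) / 255.0
labels = segments.reshape(-1)
pooled = np.stack([feats[labels == k].mean(axis=0) for k in range(n_sp)])

# Multiple-instance view: each superpixel is an instance in the image-level "bag".
# A random linear scorer stands in for a learned classifier; the image-level score
# is a smooth max (log-sum-exp) over instance scores.
rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
instance_scores = pooled @ w + b
image_score = float(np.log(np.exp(instance_scores).sum()))

# The highest-scoring superpixels indicate candidate object parts, which is what
# enables localization from image-level labels alone.
top_superpixels = np.argsort(instance_scores)[-5:]
print(f"image-level score: {image_score:.3f}, top superpixels: {top_superpixels}")
```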


Object detection is closely related to video and image analysis. In computer vision, training an object detection model with image-level labels only is a challenging research area. Researchers have not yet developed a highly accurate model for Weakly Supervised Object Detection (WSOD), which detects and localizes objects under the supervision of image-level annotations only. The proposed work uses a self-paced approach applied to the region proposal network of the Faster R-CNN architecture, which yields better results than previous weakly supervised object detectors and can be applied to computer vision applications in the near future.
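To make the self-paced idea concrete, the minimal sketch below admits low-loss proposals first and gradually raises the admission threshold. The toy losses and schedule are assumptions; in the proposed work the resulting weights would modulate the region proposal network's loss inside Faster R-CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-proposal losses: most proposals are easy/clean, a minority are hard or mislabeled.
losses = np.concatenate([rng.normal(0.3, 0.1, 80), rng.normal(1.5, 0.4, 20)])

def self_paced_weights(losses, lam):
    """Hard self-paced regime: a proposal is included only if its loss is below lambda."""
    return (losses < lam).astype(float)

# Curriculum: start with the easiest proposals and gradually admit harder ones.
lam, growth = 0.5, 1.3
for epoch in range(5):
    v = self_paced_weights(losses, lam)
    # In a Faster R-CNN setting, v would weight each proposal's contribution to the RPN loss.
    weighted_loss = float((v * losses).sum() / max(v.sum(), 1.0))
    print(f"epoch {epoch}: lambda={lam:.2f}, kept {int(v.sum())}/100, mean loss={weighted_loss:.3f}")
    lam *= growth
```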


2020 ◽  
Vol 50 (9) ◽  
pp. 3950-3962 ◽  
Author(s):  
Xi Wang ◽  
Hao Chen ◽  
Caixia Gan ◽  
Huangjing Lin ◽  
Qi Dou ◽  
...  
