Weakly Supervised Learning for Evaluating Road Surface Condition from Wheelchair Driving Data

Information ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 2
Author(s):  
Takumi Watanabe ◽  
Hiroki Takahashi ◽  
Yusuke Iwasawa ◽  
Yutaka Matsuo ◽  
Ikuko Eguchi Yairi

Providing accessibility information about sidewalks for people with mobility difficulties is an important social issue. We previously proposed a fully supervised machine learning approach that provides accessibility information by estimating road surface conditions from wheelchair accelerometer data with manually annotated road surface condition labels. However, manually annotating road surface condition labels is expensive and impractical for extensive data. This paper proposes and evaluates a novel method for estimating road surface conditions without human annotation by applying weakly supervised learning. The proposed method relies only on positional information recorded while driving as weak supervision for learning road surface conditions. Our results demonstrate that the proposed method learns detailed and subtle features of road surface conditions, such as the difference between ascending and descending a slope, the angle of slopes, the exact locations of curbs, and slight differences between similar pavements. The results demonstrate that the proposed method learns feature representations that are discriminative for a road surface classification task. When 10% or less of the data is labeled in a semi-supervised setting, the proposed method outperforms a fully supervised method that uses manually annotated labels to learn feature representations of road surface conditions.

2021 ◽  
Vol 19 (2) ◽  
pp. 5-16
Author(s):  
E. P. Bruches ◽  
T. V. Batura

We propose a method for extracting scientific terms from Russian-language texts based on weakly supervised learning. This approach does not require a large amount of hand-labeled data. To implement the method, we collected a list of terms in a semi-automatic way and annotated the texts of scientific articles with these terms. These texts were used to train a first model, whose predictions on another part of the text collection were then used to extend the training set. A second model was trained on both parts of the collection: the part annotated with the dictionary and the part annotated by the first model. The results show that adding data, even when it is annotated automatically, improves the quality of scientific term extraction.
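The bootstrapping loop the abstract describes (dictionary annotation, then extending the training set with the first model's predictions) can be sketched roughly as follows; the whitespace tokenization, the toy seed dictionary, the frequency-based scorer standing in for the trained model, and the 0.5 threshold are all illustrative assumptions, not the authors' implementation.

```python
# Weakly supervised term extraction via dictionary annotation + self-training
# (illustrative sketch; the scorer stands in for a trained tagging model).

from collections import Counter

def dictionary_annotate(tokens, term_dict):
    """Weak labels: 1 if the token appears in the seed dictionary, else 0."""
    return [1 if t.lower() in term_dict else 0 for t in tokens]

def train_scorer(corpus, labels):
    """'Model': P(term | token) estimated from the weakly labeled tokens."""
    term_counts, total_counts = Counter(), Counter()
    for tokens, labs in zip(corpus, labels):
        for t, y in zip(tokens, labs):
            total_counts[t.lower()] += 1
            term_counts[t.lower()] += y
    return {t: term_counts[t] / total_counts[t] for t in total_counts}

def pseudo_label(tokens, scorer, threshold=0.5):
    """Annotate unseen text with the first model's predictions."""
    return [1 if scorer.get(t.lower(), 0.0) > threshold else 0 for t in tokens]

seed_terms = {"ontology", "corpus"}
labeled = [["the", "ontology", "describes", "a", "corpus"]]
unlabeled = [["each", "ontology", "has", "a", "schema"]]

weak = [dictionary_annotate(toks, seed_terms) for toks in labeled]
scorer = train_scorer(labeled, weak)
# Extended training set: dictionary annotations plus model annotations;
# a second model would then be trained on both parts.
extended_labels = weak + [pseudo_label(toks, scorer) for toks in unlabeled]
```

The second model is then trained on the union, so automatically annotated sentences contribute additional supervision beyond the seed dictionary.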


2020 ◽  
Vol 34 (04) ◽  
pp. 4052-4059
Author(s):  
Lan-Zhe Guo ◽  
Feng Kuang ◽  
Zhang-Xun Liu ◽  
Yu-Feng Li ◽  
Nan Ma ◽  
...  

Weakly supervised learning aims to cope with scarce labeled data. Previous weakly supervised studies typically assume that the data contains only one kind of weak supervision. In many applications, however, raw data contains more than one kind of weak supervision at the same time. For example, in user experience enhancement at Didi, one of the largest online ride-sharing platforms, ride comment data contains both severe label noise (due to the subjective factors of passengers) and severe label distribution bias (due to sampling bias). We call this problem 'compound weakly supervised learning'. In this paper, we propose the CWSL method to address it, based on Didi ride-sharing comment data. Specifically, an instance reweighting strategy copes with the severe label noise in comment data by assigning small weights to harmful noisy instances. To correct for the biased labels, robust criteria such as AUC (rather than accuracy) and validation performance are optimized. Alternating optimization and stochastic gradient methods accelerate the optimization on large-scale data. Experiments on Didi ride-sharing comment data clearly validate the method's effectiveness. We hope this work may shed some light on applying weakly supervised learning to complex real-world situations.
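The two ingredients the abstract names, downweighting likely-noisy instances and using AUC as a bias-robust criterion, can be illustrated generically as below. This is not the CWSL implementation: the exponential weighting rule, the log-loss, and the tiny example arrays are assumptions made for the sketch.

```python
# Instance reweighting under label noise (illustrative sketch): instances
# with large loss under the current model are treated as likely noisy and
# receive small weights; AUC, which is insensitive to label-distribution
# bias, is used instead of accuracy as the evaluation criterion.

import numpy as np

def per_instance_loss(probs, labels):
    """Binary cross-entropy per instance."""
    eps = 1e-12
    return -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))

def reweight(losses, temperature=1.0):
    """Small weights for high-loss (likely noisy) instances."""
    w = np.exp(-losses / temperature)
    return w / w.sum()

def auc(scores, labels):
    """Probability that a random positive is ranked above a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

labels = np.array([1, 1, 0, 0, 1])             # last label is noisy
probs = np.array([0.9, 0.8, 0.2, 0.1, 0.05])   # model confidently disagrees with it
weights = reweight(per_instance_loss(probs, labels))
```

In a training loop, `weights` would rescale each instance's gradient contribution, so the confidently mislabeled instance barely influences the update, while model selection compares `auc` values rather than accuracy.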


Author(s):  
Chidubem Arachie ◽  
Bert Huang

We consider the task of training classifiers without labels. We propose a weakly supervised method—adversarial label learning—that trains classifiers to perform well against an adversary that chooses labels for training data. The weak supervision constrains what labels the adversary can choose. The method therefore minimizes an upper bound of the classifier’s error rate using projected primal-dual subgradient descent. Minimizing this bound protects against bias and dependencies in the weak supervision. Experiments on real datasets show that our method can train without labels and outperforms other approaches for weakly supervised learning.
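A toy version of the adversarial setup described above might look like the following; the synthetic data, the two weak signals, the error bounds, and the step sizes are all assumptions for the sketch, not the authors' algorithm or code. The adversary chooses soft labels in [0, 1] that maximize the classifier's error while each weak signal's estimated error stays below its bound, and all variables are updated by projected primal-dual subgradient steps.

```python
# Toy sketch of adversarial label learning: classifier descends, the
# adversary's soft labels ascend (projected to [0,1]), and dual variables
# (projected to >= 0) enforce the weak-supervision error-bound constraints.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
               rng.normal(+1.5, 1.0, (n // 2, 2))])
true_y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Two weak signals: noisy probabilistic votes correlated with the true labels.
weak = np.stack([np.clip(true_y + rng.normal(0.0, 0.3, n), 0.0, 1.0)
                 for _ in range(2)], axis=1)
bounds = np.array([0.3, 0.3])   # assumed error bounds on the weak signals

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)                 # logistic classifier parameters
y = weak.mean(axis=1)           # adversary's soft labels, init from weak signals
gam = np.zeros(2)               # dual variables for the constraints
lr = 0.1

for _ in range(500):
    p = sigmoid(X @ w)
    # classifier error under adversarial labels: mean(y(1-p) + (1-y)p)
    w -= lr * (X.T @ ((1.0 - 2.0 * y) * p * (1.0 - p))) / n   # minimize
    grad_y = (1.0 - 2.0 * p) - (1.0 - 2.0 * weak) @ gam       # maximize
    y = np.clip(y + lr * grad_y, 0.0, 1.0)                    # project to [0,1]
    weak_err = (y[:, None] * (1 - weak) + (1 - y[:, None]) * weak).mean(axis=0)
    gam = np.maximum(0.0, gam + lr * (weak_err - bounds))     # project to >= 0

accuracy = float(((sigmoid(X @ w) > 0.5) == true_y).mean())
```

Because the weak signals are informative and their error bounds cap how far the adversary can push the labels, the classifier trained against this worst case still separates the two clusters well.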


Author(s):  
Yu-Feng Li

Weakly supervised learning (WSL) refers to learning from a large amount of weakly supervised data. This includes i) incomplete supervision (e.g., semi-supervised learning); ii) inexact supervision (e.g., multi-instance learning); and iii) inaccurate supervision (e.g., label-noise learning). Unlike supervised learning, which typically improves with more labeled data, WSL may sometimes even degrade in performance as more weak supervision data is added. It is thus desirable to study safe WSL, which robustly improves performance with weak supervision data. In this article, we share our understanding of the problem, from in-distribution data to out-of-distribution data, and discuss possible ways to alleviate it from the perspectives of worst-case analysis, ensemble learning, and bi-level optimization. We also share some open problems to inspire future research.


2021 ◽  
Vol 7 (1) ◽  
pp. 203-211
Author(s):  
Chengliang Tang ◽  
Gan Yuan ◽  
Tian Zheng

Author(s):  
Joao Gabriel Camacho Presotto ◽  
Lucas Pascotti Valem ◽  
Nikolas Gomes de Sa ◽  
Daniel Carlos Guimaraes Pedronette ◽  
Joao Paulo Papa
