Three-Year Review of the 2018–2020 SHL Challenge on Transportation and Locomotion Mode Recognition From Mobile Sensors

2021 ◽  
Vol 3 ◽  
Author(s):  
Lin Wang ◽  
Hristijan Gjoreski ◽  
Mathias Ciliberto ◽  
Paula Lago ◽  
Kazuya Murao ◽  
...  

The Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenges aim to advance and capture the state-of-the-art in locomotion and transportation mode recognition from smartphone motion (inertial) sensors. The goal of this series of machine learning and data science challenges was to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Car, Bus, Train, Subway). The three challenges focused on time-independent (SHL 2018), position-independent (SHL 2019) and user-independent (SHL 2020) evaluations, respectively. Overall, we received 48 submissions (out of 93 teams who registered interest) involving 201 scientists over the three years. The survey captures the state-of-the-art through a meta-analysis of the contributions to the three challenges, including approaches, recognition performance, computational requirements, and the software tools and frameworks used. State-of-the-art methods can distinguish most modes of transportation with relative ease, although differentiating between subtly distinct activities, such as rail transport (Train and Subway) and road transport (Bus and Car), remains challenging. We summarize insightful methods from participants that can be employed to address practical challenges of transportation mode recognition, for instance, tackling over-fitting, employing robust representations, applying data augmentation, and using smart post-processing to improve performance. Finally, we present baseline results that compare the three challenges with a unified recognition pipeline and decision window length.
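The paper's unified baseline pipeline is not reproduced in this excerpt; as an illustration only, the following is a minimal sketch of such a windowed recognition pipeline, assuming fixed-length decision windows, simple statistical features, a random forest classifier, and majority-vote smoothing over neighbouring windows. All names, window lengths and classifier settings below are assumptions, not the authors' baseline.

```python
# Minimal illustrative sketch (NOT the authors' baseline): segment inertial
# signals into fixed-length decision windows, extract per-channel statistics,
# classify each window, then smooth predictions with a local majority vote.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win_len):
    """Split a (n_samples, n_channels) signal into windows and compute
    per-channel mean, std, min and max as features."""
    n_win = len(signal) // win_len
    wins = signal[: n_win * win_len].reshape(n_win, win_len, -1)
    return np.concatenate(
        [wins.mean(1), wins.std(1), wins.min(1), wins.max(1)], axis=1
    )

def smooth_majority(pred, k=2):
    """Majority vote over a sliding neighbourhood of +/- k windows."""
    smoothed = pred.copy()
    for i in range(len(pred)):
        lo, hi = max(0, i - k), min(len(pred), i + k + 1)
        vals, counts = np.unique(pred[lo:hi], return_counts=True)
        smoothed[i] = vals[np.argmax(counts)]
    return smoothed

# Synthetic stand-in data: 3-axis accelerometer at an assumed 100 Hz,
# with assumed 5 s decision windows and 8 activity classes.
rng = np.random.default_rng(0)
train_sig, test_sig = rng.normal(size=(100_000, 3)), rng.normal(size=(20_000, 3))
win_len = 500
X_train, X_test = window_features(train_sig, win_len), window_features(test_sig, win_len)
y_train = rng.integers(0, 8, size=len(X_train))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
y_pred = smooth_majority(clf.predict(X_test))
```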

Author(s):  
Rajae Moumen ◽  
Raddouane Chiheb ◽  
Rdouan Faizi

The aim of this research is to propose a fully convolutional approach to the problem of real-time scene text detection for the Arabic language. Text detection is performed using a two-step multi-scale approach. The first step uses a lightweight fully convolutional network, TextBlockDetector FCN, an adaptation of VGG-16, to eliminate non-textual elements, localize wide-scale text and estimate the text scale. The second step determines the narrow scale range of the text using a fully convolutional network for maximum performance. To evaluate the system, we compare the results of the framework against those obtained with a single VGG-16 fully deployed for text detection in one shot, as well as against previous results in the state of the art. For training and testing, we build a dataset of 575 manually processed images and apply data augmentation to enrich the training process. The system scores a precision of 0.651 vs. 0.64 for the state of the art, and 24.3 FPS vs. 31.7 FPS for a fully deployed VGG-16.
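The TextBlockDetector architecture itself is not detailed in this excerpt; the sketch below only illustrates the general idea of adapting a VGG-16 feature extractor into a fully convolutional text/non-text scorer, assuming a truncated backbone capped with 1x1 convolutions. Layer choices and sizes are illustrative assumptions, not the paper's network.

```python
# Illustrative sketch only: a VGG-16 backbone kept fully convolutional and
# capped with 1x1 convolutions to produce a coarse text-ness score map,
# in the spirit of a first, wide-scale detection stage.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class TextBlockFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional part of VGG-16 only (no fully connected classifier).
        self.backbone = vgg16(weights=None).features
        # 1x1 convolutions stand in for dense layers -> dense score map.
        self.head = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),  # 1 channel: text / non-text score
        )

    def forward(self, x):
        feat = self.backbone(x)              # (N, 512, H/32, W/32)
        return torch.sigmoid(self.head(feat))

model = TextBlockFCN().eval()
with torch.no_grad():
    score_map = model(torch.randn(1, 3, 512, 512))
print(score_map.shape)  # torch.Size([1, 1, 16, 16])
```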


2021 ◽  
Vol 13 (24) ◽  
pp. 5009
Author(s):  
Lingbo Huang ◽  
Yushi Chen ◽  
Xin He

In recent years, supervised learning-based methods have achieved excellent performance for hyperspectral image (HSI) classification. However, collecting labeled training samples is both costly and time-consuming, which commonly leads to weak supervision: inaccurate supervision, where mislabeled samples exist, and incomplete supervision, where unlabeled samples exist. Focusing on inaccurate and incomplete supervision, this paper investigates the weakly supervised classification of HSI. For inaccurate supervision, complementary learning (CL) is first introduced for HSI classification, and a new method based on selective CL and a convolutional neural network (SeCL-CNN) is proposed for classification with noisy labels. For incomplete supervision, a data augmentation-based method combining mixup and Pseudo-Label (Mix-PL) is proposed, and a classification method combining Mix-PL and CL (Mix-PL-CL) is then designed for stronger semi-supervised classification of HSI. The proposed weakly supervised methods are evaluated on three widely used hyperspectral datasets (Indian Pines, Houston, and Salinas) and provide competitive results compared to state-of-the-art methods. For inaccurate supervision, the proposed SeCL-CNN outperforms the state-of-the-art method (SSDP-CNN) by 0.92%, 1.84%, and 1.75% in terms of OA on the three datasets when the noise ratio is 30%. For incomplete supervision, the proposed Mix-PL-CL outperforms the state-of-the-art method (AROC-DP) by 1.03%, 0.70%, and 0.82% in terms of OA on the three datasets with 25 training samples per class.
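For readers unfamiliar with the two ingredients named for incomplete supervision, the following is a minimal sketch of generic mixup and confidence-thresholded pseudo-labeling, not the Mix-PL-CL method itself. The Beta parameter, confidence threshold and tensor shapes are illustrative assumptions.

```python
# Sketch of generic mixup and pseudo-labeling (NOT Mix-PL-CL as proposed).
import torch
import torch.nn.functional as F

def mixup(x, y_onehot, alpha=0.2):
    """Blend random pairs of samples and their (soft) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

def pseudo_labels(model, x_unlabeled, threshold=0.95):
    """Keep only unlabeled samples whose predicted class is confident,
    and return them with their predicted (pseudo) labels."""
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    return x_unlabeled[keep], labels[keep]
```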


2020 ◽  
Vol 34 (05) ◽  
pp. 9474-9481
Author(s):  
Yichun Yin ◽  
Lifeng Shang ◽  
Xin Jiang ◽  
Xiao Chen ◽  
Qun Liu

Neural dialog state trackers are generally limited by the lack of quantity and diversity of annotated training data. In this paper, we address this difficulty by proposing a reinforcement learning (RL) based framework for data augmentation that can generate high-quality data to improve the neural state tracker. Specifically, we introduce a novel contextual bandit generator that learns fine-grained augmentation policies and generates new, effective instances by choosing suitable replacements for a specific context. Moreover, by learning alternately between the generator and the state tracker, we keep refining the generative policies to produce more high-quality training data for the neural state tracker. Experimental results on the WoZ and MultiWoZ (restaurant) datasets demonstrate that the proposed framework significantly improves performance over state-of-the-art models, especially with limited training data.
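As a rough illustration of the contextual-bandit idea only (not the paper's generator), the sketch below shows an epsilon-greedy bandit that, given a slot context, picks a replacement value to build an augmented utterance and is rewarded by a stand-in signal. The context key, candidate values and reward are synthetic assumptions.

```python
# Epsilon-greedy contextual bandit sketch for choosing slot-value replacements.
# In the real framework the reward would reflect the tracker's improvement on
# the augmented data; here it is simulated.
import random
from collections import defaultdict

class ReplacementBandit:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # (context, arm) -> running mean reward
        self.count = defaultdict(int)

    def choose(self, context, arms):
        if random.random() < self.epsilon:
            return random.choice(arms)            # explore
        return max(arms, key=lambda a: self.value[(context, a)])  # exploit

    def update(self, context, arm, reward):
        key = (context, arm)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

# Toy usage: pick a replacement for the "food" slot in a restaurant query.
bandit = ReplacementBandit()
candidates = ["italian", "chinese", "thai", "mexican"]
for _ in range(200):
    choice = bandit.choose("food", candidates)
    reward = random.gauss(0.5 if choice == "thai" else 0.3, 0.1)  # simulated
    bandit.update("food", choice, reward)
print(max(candidates, key=lambda a: bandit.value[("food", a)]))
```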


Engineering ◽  
2019 ◽  
Vol 5 (2) ◽  
pp. 234-242 ◽  
Author(s):  
Yuequan Bao ◽  
Zhicheng Chen ◽  
Shiyin Wei ◽  
Yang Xu ◽  
Zhiyi Tang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7269
Author(s):  
Chengjuan Ren ◽  
Hyunjun Jung ◽  
Sukhoon Lee ◽  
Dongwon Jeong

Coastal waste not only has a seriously destructive effect on human life and marine ecosystems, but also poses a long-term economic and environmental threat. To address the problems of manual coastal waste sorting, such as low sorting efficiency and heavy workloads, we develop a novel deep convolutional neural network that combines several strategies to realize intelligent waste recognition and classification, based on the state-of-the-art Faster R-CNN framework. First, to effectively detect small objects, we use multi-scale fusion to obtain rich semantic information from the shallower feature maps. Second, RoI Align is introduced to remove the positioning deviation caused by region-of-interest pooling. In addition, we tune key parameters and apply data augmentation to improve model performance. We also create a new waste object dataset, named IST-Waste, which is made publicly available to facilitate future research in this field. Experiments show that the algorithm's mAP reaches 83%, and detection performance is significantly better than that of the baseline Faster R-CNN and SSD. Thus, the developed scheme achieves higher accuracy and better performance than the state-of-the-art alternatives.
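The authors' exact network is not available in this excerpt; as a rough sketch of the same building blocks, torchvision's Faster R-CNN with a ResNet-50 FPN backbone already combines multi-scale feature fusion (FPN) with RoIAlign-based box pooling, and its box predictor can be swapped to match a custom waste dataset. The class count below is an assumption for illustration.

```python
# Illustrative sketch (NOT the authors' exact model): Faster R-CNN with an FPN
# backbone and RoIAlign, with the box predictor replaced for a custom dataset.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 9  # assumed: 8 waste categories + background

model = fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 600, 800)])
print(detections[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])
```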

