discrimination information
Recently Published Documents

TOTAL DOCUMENTS: 102 (five years: 8)
H-INDEX: 15 (five years: 0)

2021 ◽  
Vol 152 ◽  
pp. 111469
Author(s):  
Arkadiy Blank ◽  
Natalia Suhareva ◽  
Mikhail Tsyganov


2021 ◽  
Vol 10 (9) ◽  
pp. 591
Author(s):  
Qingtian Ke ◽  
Peng Zhang

Change detection based on bi-temporal remote sensing images has made significant progress in recent years; it aims to identify the changed and unchanged pixels between a registered pair of images. However, most learning-based change detection methods use only the fused high-level features from the feature encoder and thus miss the detailed representations contained in low-level feature pairs. Here we propose a multi-level change contextual refinement network (MCCRNet) to strengthen the multi-level change representations of feature pairs. To capture the dependencies of feature pairs effectively while avoiding fusing them, our atrous spatial pyramid cross attention (ASPCA) module introduces a crossed spatial attention module and a crossed channel attention module to emphasize the positional and channel importance of each feature, while keeping the scales of input and output the same. This module can be plugged into any feature extraction layer of a Siamese change detection network. Furthermore, we propose a change contextual representations (CCR) module built on the relationship between changed pixels and their contextual representation, named change region contextual representations. The CCR module aims to correct changed pixels mistakenly predicted as unchanged via a class attention mechanism. Finally, we introduce an adaptively weighted loss based on the effective number of samples to address the class imbalance of change detection datasets. Overall, compared with other attention modules that use only fused features from the highest-level feature pairs, our method captures the multi-level spatial, channel, and class context of change discrimination information. Experiments were performed on four public change detection datasets of various image resolutions.
Compared to state-of-the-art methods, our MCCRNet achieved superior performance on all four datasets (LEVIR, the Season-Varying Change Detection Dataset, Google Data GZ, and DSIFN), with improvements of 0.47%, 0.11%, 2.62%, and 3.99%, respectively.
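The "effective sample number" weighting described in this abstract resembles the class-balanced loss of Cui et al., where each class weight is the inverse of its effective number of samples, E_n = (1 - β^n) / (1 - β). A minimal sketch under that assumption (the pixel counts and β value below are purely illustrative, not from the paper):

```python
def effective_number_weights(counts, beta=0.9999):
    """Class weights inversely proportional to the effective number of samples."""
    # effective number of samples per class: E_n = (1 - beta**n) / (1 - beta)
    eff = [(1.0 - beta ** n) / (1.0 - beta) for n in counts]
    w = [1.0 / e for e in eff]
    total = sum(w)
    # normalize so the weights sum to the number of classes
    return [len(counts) * x / total for x in w]

# hypothetical changed vs. unchanged pixel counts in an imbalanced
# change-detection training set: the rare "changed" class gets a larger weight
weights = effective_number_weights([1000, 50])
```

In a change-detection network these weights would multiply the per-class terms of the cross-entropy loss, so errors on the rare "changed" pixels cost more.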



2021 ◽  
Vol 104 ◽  
pp. 104364
Author(s):  
Ronghua Shang ◽  
Yang Meng ◽  
Weitong Zhang ◽  
Fanhua Shang ◽  
Licheng Jiao ◽  
...  


2021 ◽  
Vol 15 ◽  
Author(s):  
Yufang Dan ◽  
Jianwen Tao ◽  
Jianjing Fu ◽  
Di Zhou

The goal of the latest brain-computer interfaces is accurate emotion recognition through recognizers customized to each subject. In machine learning, graph-based semi-supervised learning (GSSL) has attracted increasing attention for emotion recognition because of its intuitive formulation and good learning performance. However, existing GSSL methods are not robust enough to noisy or outlier electroencephalogram (EEG) data, since individual subjects may present noisy or outlier EEG patterns in the same scenario. To address this problem, we propose a Possibilistic Clustering-Promoting semi-supervised learning method for EEG-based emotion recognition. Specifically, it constrains each instance to share the label membership value of its local weighted mean, which improves the reliability of the recognition method. In addition, a fuzzy-entropy regularization term is introduced into the objective function; by increasing the amount of sample discrimination information, it enhances the generalization ability of the membership function and thereby improves robustness to noise and outliers. Extensive experimental results on three real datasets (DEAP, SEED, and SEED-IV) show that the proposed method improves the reliability and robustness of EEG-based emotion recognition.
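The paper's possibilistic clustering objective is not reproduced here, but the GSSL setting it builds on can be illustrated with a generic graph-based label-propagation sketch. Everything below (the Gaussian k-NN graph, the propagation rule, α) is a standard textbook construction, not the authors' method:

```python
import numpy as np

def knn_graph(X, k=3):
    # symmetric k-NN affinity with Gaussian weights (a common GSSL choice);
    # the bandwidth is set from the median pairwise squared distance
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2) + 1e-12
    W = np.exp(-d2 / sigma2)
    np.fill_diagonal(W, 0.0)
    idx = np.argsort(-W, axis=1)[:, :k]       # keep the k strongest neighbors
    M = np.zeros_like(W)
    rows = np.arange(len(X))[:, None]
    M[rows, idx] = W[rows, idx]
    return np.maximum(M, M.T)                  # symmetrize

def propagate(W, Y, labeled, alpha=0.9, iters=100):
    # iterative label propagation: F <- alpha * S F + (1 - alpha) * Y,
    # where S is the symmetrically normalized affinity matrix
    D = W.sum(1)
    S = W / np.sqrt(np.outer(D, D) + 1e-12)
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
        F[labeled] = Y[labeled]                # clamp the labeled points
    return F.argmax(1)
```

Per-subject EEG features would play the role of `X`; the paper's contribution replaces the plain propagation step with possibilistic membership constraints tied to each instance's local weighted mean.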



Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Tingting Xu ◽  
Ye Zhao ◽  
Xueliang Liu

Zero-shot learning is dedicated to classifying unseen categories, while generalized zero-shot learning aims to classify samples drawn from both seen and unseen classes, where a class is "seen" if its samples are available during training and "unseen" otherwise. With the advance of deep learning technology, the performance of zero-shot learning has greatly improved. Generalized zero-shot learning is a challenging topic with promising prospects in many realistic scenarios. Although zero-shot learning has made gratifying progress, existing methods still show a strong bias between seen and unseen classes. Recent methods focus on learning a unified semantically aligned visual representation to transfer knowledge between the two domains, while ignoring the intrinsic characteristics of visual features, which are discriminative enough to be classified on their own. To solve these problems, we propose a novel model that uses the discriminative information of visual features to optimize the generative module, a dual generation network framework composed of a conditional VAE and an improved WGAN. Specifically, the model uses the discrimination information of visual features together with the corresponding semantic embeddings to synthesize visual features of unseen categories with the learned generator, and then trains the final softmax classifier on the generated features, thereby recognizing unseen categories. In addition, we analyze the effect of additional classifiers with different structures on the transmission of discriminative information. We conducted extensive experiments on six commonly used benchmark datasets (AWA1, AWA2, APY, FLO, SUN, and CUB).
The experimental results show that our model outperforms several state-of-the-art methods for both traditional and generalized zero-shot learning.
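The synthesize-then-classify pipeline at the core of such generative zero-shot models can be sketched in miniature. The real generator is a learned conditional VAE + improved WGAN; here it is replaced by a fixed linear map plus Gaussian noise, and the softmax classifier by a nearest-prototype rule, purely for illustration. The class names, attribute vectors, and `W_g` below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 2-D semantic embeddings for two unseen classes
attrs = {"zebra": np.array([1.0, 0.0]), "whale": np.array([0.0, 1.0])}

# stand-in for the learned generator: a linear map into a 3-D visual space
W_g = np.array([[2.0, 0.0],
                [0.0, 2.0],
                [1.0, 1.0]])

def generate(attr, n=50):
    # G(z, a): project the class attribute and add Gaussian noise z
    return (W_g @ attr) + 0.1 * rng.standard_normal((n, 3))

# synthesize features for the unseen classes, then fit a nearest-prototype
# classifier in place of the paper's softmax (a deliberate simplification)
protos = {c: generate(a).mean(0) for c, a in attrs.items()}

def classify(x):
    return min(protos, key=lambda c: np.linalg.norm(x - protos[c]))
```

Because the classifier is trained only on generated features, unseen classes become recognizable without a single real training image, which is the point of the generative approach.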



Author(s):  
Abbas Eftekharian ◽  
Guoxin Qiu

Ranked set sampling (RSS) and some of its variants are sampling designs applied widely in different areas. When the underlying population contains distinct subpopulations, we can use stratified ranked set sampling (SRSS), which combines the advantages of stratification with RSS. In the present paper, we consider the information content of SRSS in terms of the extropy measure. Some results are obtained using stochastic-order properties. The effect of imperfect ranking on discrimination information is investigated analytically. It is proved that the discrimination information between the perfect SRSS and simple random sampling (SRS) data sets exceeds that between the imperfect SRSS and SRS data sets.
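For reference, the extropy measure used here is the complement dual of Shannon entropy introduced by Lad, Sanz, and Agró: for an absolutely continuous random variable $X$ with density $f_X$,

```latex
J(X) = -\frac{1}{2}\int_{-\infty}^{\infty} f_X^{2}(x)\,\mathrm{d}x .
```

The paper's discrimination-information comparisons are then stated between the extropy-based information content of the SRSS and SRS data sets under perfect versus imperfect ranking.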



Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7036
Author(s):  
Chao Han ◽  
Xiaoyang Li ◽  
Zhen Yang ◽  
Deyun Zhou ◽  
Yiyang Zhao ◽  
...  

Domain adaptation aims to handle the distribution mismatch between training and testing data, and has achieved dramatic progress in multi-sensor systems. Previous methods align the cross-domain distributions through statistics such as means and variances. Despite their appeal, such methods often fail to model the discriminative structure within testing samples. In this paper, we present a sample-guided adaptive class prototype method that requires no distribution-matching strategy. Specifically, two adaptive measures are proposed. First, a modified nearest class prototype is introduced, which allows more diversity within the same class while keeping most of the class-wise discrimination information. Second, we put forward an easy-to-hard testing scheme that takes into account the varying difficulty of recognizing target samples: easy samples are classified first and then selected to assist the prediction of hard samples. Extensive experiments verify the effectiveness of the proposed method.
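The easy-to-hard prototype scheme can be sketched as iterative self-training: classify target samples against class prototypes, accept only the most confident ("easy") predictions, refresh the prototypes with them, and repeat so the updated prototypes guide the harder samples. The confidence measure, schedule, and mixing rate below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def adapt_prototypes(Xs, ys, Xt, conf_step=0.3, rounds=3):
    """Easy-to-hard pseudo-labeling with class prototypes (a sketch)."""
    classes = np.unique(ys)
    # initial prototypes: per-class means of the labeled source samples
    protos = np.stack([Xs[ys == c].mean(0) for c in classes])
    labels = np.full(len(Xt), -1)
    for r in range(rounds):
        d = np.linalg.norm(Xt[:, None, :] - protos[None, :, :], axis=2)
        pred = d.argmin(1)
        # confidence = margin between the two nearest prototypes
        d_sorted = np.sort(d, axis=1)
        margin = d_sorted[:, 1] - d_sorted[:, 0]
        # accept a growing fraction of the most confident target samples
        k = int(len(Xt) * min(1.0, conf_step * (r + 1)))
        easy = np.argsort(-margin)[:k]
        labels[easy] = pred[easy]
        # refresh prototypes with easy target samples to guide hard ones
        for i, c in enumerate(classes):
            sel = Xt[labels == c]
            if len(sel):
                protos[i] = 0.5 * protos[i] + 0.5 * sel.mean(0)
    d = np.linalg.norm(Xt[:, None, :] - protos[None, :, :], axis=2)
    return d.argmin(1)
```

Note that no source/target distribution statistics are matched anywhere; the adaptation happens entirely through the prototypes drifting toward confident target samples, which mirrors the abstract's "no distribution matching" claim.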



2020 ◽  
Vol 57 (6) ◽  
pp. 102344
Author(s):  
Weifeng Hu ◽  
Baosen Ma ◽  
Zeqiang Li ◽  
Yujun Li ◽  
Yue Wang

