linear classifier
Recently Published Documents

TOTAL DOCUMENTS: 149 (FIVE YEARS: 39)
H-INDEX: 17 (FIVE YEARS: 2)

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Emre Onemli ◽  
Sulayman Joof ◽  
Cemanur Aydinalp ◽  
Nural Pastacı Özsobacı ◽  
Fatma Ateş Alkan ◽  
...  

Abstract: Mammary carcinoma (breast cancer) is the most commonly diagnosed cancer type among women; therefore, potential new technologies for the diagnosis and treatment of the disease are being investigated. One promising direction is microwave applications designed to exploit the inherent dielectric-property discrepancy between malignant and normal tissues. In theory, anomalies can be characterized simply by measuring the dielectric properties. However, the current measurement technique is error-prone, and a single measurement is not accurate enough to detect anomalies with high confidence. This work proposes to classify rat mammary carcinoma based on large-scale in vivo S₁₁ measurements collected with a circular diffraction antenna, together with the corresponding tissue dielectric properties. The tissues were classified with high accuracy in a reproducible way by leveraging a learning-based linear classifier. Moreover, the most discriminative S₁₁ measurement was identified, and, surprisingly, using that single measurement with a linear classifier achieved 86.92% accuracy. These findings suggest that narrow-band microwave circuitry can support the antenna, enabling a low-cost automated microwave diagnostic system.
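The single-measurement result can be illustrated in miniature. The sketch below uses entirely synthetic stand-in data (not the in vivo S₁₁ measurements or the paper's classifier) to show how a one-feature linear classifier, here plain logistic regression fitted by gradient descent, separates two overlapping measurement distributions:

```python
import numpy as np

# Synthetic stand-in for a single discriminative measurement: two tissue
# classes whose feature values overlap but differ in mean.
rng = np.random.default_rng(0)
normal = rng.normal(40.0, 3.0, 200)
malignant = rng.normal(52.0, 3.0, 200)

x = np.concatenate([normal, malignant])
x = (x - x.mean()) / x.std()          # standardize the single feature
y = np.concatenate([np.zeros(200), np.ones(200)])

# One-dimensional logistic regression: a linear classifier w*x + b
# trained by gradient descent on the log-loss.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # sigmoid probabilities
    w -= 0.1 * np.mean((p - y) * x)          # log-loss gradient in w
    b -= 0.1 * np.mean(p - y)                # log-loss gradient in b

accuracy = np.mean(((w * x + b) > 0) == y)
print(f"accuracy: {accuracy:.3f}")
```

With well-separated class means, even this single-feature linear decision rule reaches high accuracy, which is the qualitative point of the paper's 86.92% single-measurement experiment.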


2021 ◽  
Vol 2021 ◽  
pp. 1-5
Author(s):  
Chengkuan Yao ◽  
Liyong Cao ◽  
Jianhua Xu ◽  
Mingya Yang

The Support Vector Machine (SVM) proposed by Vapnik is a generalized linear classifier that performs binary classification of data via supervised learning. SVM has developed rapidly and has spawned a series of improved and extended algorithms, which have been applied in pattern recognition, image recognition, and other fields. Among the many improvements, the technique of setting the ratio of the two penalty parameters according to the ratio of the sample sizes of the two classes has been widely accepted. However, this technique has never been verified by rigorous mathematical proof. The experiments in this study, based on the USPS data sets, were designed to test the accuracy of the theory. The optimal parameters for the USPS sets were found by grid search, which showed that the theory does not hold in general: there is no strictly linear relationship between the ratio of the penalty parameters and the ratio of the sample sizes.
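The heuristic under test can be sketched directly. The common form of the rule sets the penalty ratio C+/C− to the inverse ratio of the class sample sizes, so the minority class is penalized more per sample. The toy below (synthetic 2-D data, not the USPS digits, and a subgradient solver rather than a production SVM) applies that rule to a soft-margin linear SVM:

```python
import numpy as np

# Imbalanced synthetic data: 300 majority vs 30 minority samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (300, 2)),    # majority class (y = -1)
               rng.normal(3.0, 1.0, (30, 2))])    # minority class (y = +1)
y = np.array([-1.0] * 300 + [1.0] * 30)

# The heuristic: per-class penalties with C+ / C- = n- / n+ = 300/30.
C = np.where(y > 0, 300 / 30, 1.0)

# Subgradient descent on 0.5*||w||^2 + sum_i C_i * hinge(y_i (w.x_i + b)).
w, b = np.zeros(2), 0.0
for t in range(1, 2001):
    lr = 1.0 / t
    margins = y * (X @ w + b)
    viol = margins < 1                              # margin violators
    grad_w = w - (C[viol] * y[viol]) @ X[viol]
    grad_b = -np.sum(C[viol] * y[viol])
    w -= lr * grad_w / len(y)
    b -= lr * grad_b / len(y)

minority_recall = np.mean(np.sign(X[y > 0] @ w + b) == 1)
print(f"minority recall: {minority_recall:.2f}")
```

Upweighting the minority penalty keeps the hyperplane from collapsing onto the small class; the paper's finding is that this linear sample-size rule is a useful default, not an optimal one.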


2021 ◽  
Vol 13 (20) ◽  
pp. 4143
Author(s):  
Jianrong Zhang ◽  
Hongwei Zhao ◽  
Jiao Li

Remote sensing scene classification remains challenging due to the complexity and variety of scenes. With the development of attention-based methods, Convolutional Neural Networks (CNNs) have achieved competitive performance in remote sensing scene classification tasks. As an important attention-based model, the Transformer has achieved great success in natural language processing and has recently been applied to computer vision tasks. However, most existing methods divide the original image into multiple patches and encode the patches as the input of the Transformer, which limits the model's ability to learn the overall features of the image. In this paper, we propose a new remote sensing scene classification method, the Remote Sensing Transformer (TRS), a powerful "pure CNNs → Convolution + Transformer → pure Transformers" structure. First, we integrate self-attention into ResNet in a novel way, replacing the 3 × 3 spatial convolutions in the bottleneck with our proposed Multi-Head Self-Attention layer. Then we connect multiple pure Transformer encoders to further improve the representation learning performance, relying entirely on attention. Finally, we use a linear classifier for classification. We train our model on four public remote sensing scene datasets: UC-Merced, AID, NWPU-RESISC45, and OPTIMAL-31. The experimental results show that TRS exceeds the state-of-the-art methods and achieves higher accuracy.
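The substitution at the heart of TRS — multi-head self-attention over the spatial positions of a feature map, in place of a 3 × 3 bottleneck convolution — can be sketched at shape level. This is not the authors' implementation; the random projection matrices stand in for learned Q/K/V weights, and the feature-map size is an illustrative assumption:

```python
import numpy as np

def mhsa(feat, num_heads=4, rng=np.random.default_rng(0)):
    """Multi-head self-attention over feat: (H*W, C) flattened feature map."""
    n, c = feat.shape
    d = c // num_heads
    # Random projections stand in for the learned Q, K, V parameters.
    Wq, Wk, Wv = (rng.normal(0, c ** -0.5, (c, c)) for _ in range(3))
    q, k, v = feat @ Wq, feat @ Wk, feat @ Wv
    out = np.empty_like(feat)
    for h in range(num_heads):
        sl = slice(h * d, (h + 1) * d)           # this head's channel slice
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d)
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)  # softmax over all positions
        out[:, sl] = attn @ v[:, sl]             # global spatial mixing
    return out

# A 14x14 feature map with 64 channels, as might appear in a bottleneck.
fmap = np.random.default_rng(1).normal(size=(14 * 14, 64))
print(mhsa(fmap).shape)
```

Unlike a 3 × 3 convolution, each output position here aggregates information from every spatial position, which is the "overall features" advantage the abstract argues for; the output shape matches a stride-1 convolution's, so the layer drops into the bottleneck unchanged.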


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Yoon Ho Jang ◽  
Woohyun Kim ◽  
Jihun Kim ◽  
Kyung Seok Woo ◽  
Hyun Jae Lee ◽  
...  

Abstract: Recent advances in physical reservoir computing, a type of temporal kernel, have made it possible to perform complicated timing-related tasks using a linear classifier. However, the fixed reservoir dynamics in previous studies have limited the application fields. In this study, temporal kernel computing was implemented with a physical kernel consisting of a W/HfO2/TiN memristor, a capacitor, and a resistor, in which the kernel dynamics can be arbitrarily controlled by changing the circuit parameters. After the temporal kernel's capability to identify static MNIST data was proven, the system was adopted to recognize sequential data, ultrasound (malignancy of lesions) and electrocardiogram (arrhythmia) recordings, whose time constants differ by seven orders of magnitude (10⁻⁷ s vs. 1 s). The suggested system feasibly performed both tasks by simply varying the capacitance and resistance. These functionalities demonstrate the high adaptability of the present temporal kernel compared with previous ones.
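The tunability claim can be sketched conceptually. The toy below models only the circuit's first-order RC dynamics as a leaky integrator (the memristor itself is not modeled), showing how re-choosing R and C retunes the kernel's memory time constant τ = RC for signals on different time scales; the component values are illustrative assumptions:

```python
import numpy as np

def temporal_kernel(signal, dt, R, C):
    """Leaky integration of an input signal; tau = R*C sets the memory span."""
    tau = R * C
    state = 0.0
    states = []
    for u in signal:
        state += dt * (-state + u) / tau   # first-order RC dynamics
        states.append(state)
    return np.array(states)

# A short input pulse followed by silence.
t = np.linspace(0, 1, 1000)
pulse = (t < 0.1).astype(float)

# The same kernel adapts to fast or slow signals by re-choosing R and C.
fast = temporal_kernel(pulse, dt=1e-3, R=1e3, C=1e-5)   # tau = 10 ms
slow = temporal_kernel(pulse, dt=1e-3, R=1e3, C=1e-4)   # tau = 100 ms
print(fast[-1], slow[-1])  # the slower kernel retains more of the pulse
```

A linear classifier reading out such states can then solve timing tasks, because the kernel, not the classifier, carries the temporal memory; this is the division of labor the abstract describes.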


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Meitao Gong

Starting from the basic principle of the piecewise linear classifier and its application to infrared chemical remote-sensing monitoring, this work studies the characteristics of a unilateral piecewise linear classifier applied to the infrared-spectrum identification of chemical agents. Exploiting the separability of the transmission characteristic, the model recovers the signal features from the total observed deviation. Relaxation factors replace the constraint conditions that cannot be optimized directly, converting them into constrained piecewise line-segment calculations. Experiments show that the resulting signal recovery outperforms the traditional Wiener filtering and Richardson–Lucy methods.


2021 ◽  
Vol 25 (5) ◽  
pp. 1273-1290
Author(s):  
Shuangxi Wang ◽  
Hongwei Ge ◽  
Jinlong Yang ◽  
Shuzhi Su

It is an open question how to learn an over-complete dictionary from a limited number of face samples, and the inherent attributes of the samples are underutilized. Moreover, recognition performance may be adversely affected by noise and outliers, and a linear classifier based on strict binary labels is not well suited to face recognition. To solve these problems, we propose virtual-samples-based robust block-diagonal dictionary learning for face recognition. In the proposed model, the original samples and virtual samples are combined to mitigate the small-sample-size problem, and both a structure constraint and a low-rank constraint are exploited to preserve the intrinsic attributes of the samples. In addition, the fidelity term effectively reduces the negative effects of noise and outliers, and ε-dragging is utilized to improve the performance of the linear classifier. Finally, extensive experiments are conducted on benchmark face datasets in comparison with many state-of-the-art methods, and the results demonstrate the efficacy of the proposed method.
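The ε-dragging idea mentioned above has a compact standalone form: instead of regressing a linear classifier onto strict 0/1 labels, the targets are allowed to drift outward (the true class above 1, the others below 0), enlarging margins. The sketch below shows only that mechanism, on synthetic data, with alternating ridge-regression and dragging updates; it is not the paper's full dictionary-learning model:

```python
import numpy as np

# Two synthetic classes in 5 dimensions with one-hot label matrix Y.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (50, 5)),
               rng.normal(1.5, 1.0, (50, 5))])
Y = np.zeros((100, 2))
Y[:50, 0] = 1
Y[50:, 1] = 1

E = np.where(Y > 0, 1.0, -1.0)   # dragging directions (+ for true class)
M = np.zeros_like(Y)             # non-negative dragging amounts, learned
lam = 0.1                        # ridge regularization

for _ in range(20):              # alternate classifier and dragging updates
    T = Y + E * M                                             # relaxed targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ T)   # ridge step for W
    M = np.maximum(E * (X @ W - Y), 0)       # entry-wise optimal dragging

pred = np.argmax(X @ W, axis=1)
truth = np.argmax(Y, axis=1)
print("train accuracy:", np.mean(pred == truth))
```

Because a prediction can overshoot its relaxed target without penalty in the dragging direction, the regression stops fighting well-classified samples, which is why ε-dragging tends to sharpen a binary-label linear classifier.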


2021 ◽  
Author(s):  
Liangchen Hu

As one way to acquire efficient compact image representations, graph-embedding (GE) based manifold learning has been widely developed over the last two decades. Good graph embedding depends on constructing graphs with intra-class compactness and inter-class separability, which are crucial indicators of a model's effectiveness in generating discriminative features. Unsupervised approaches aim to reveal the data structure from a local or global perspective, but the resulting compact representation often has poor inter-class margins due to the lack of label information. Supervised techniques, on the other hand, only enhance the adjacency affinity within each class while ignoring the affinity between different classes, and therefore cannot fully capture the marginal structure between the distributions of different classes. To overcome these issues, we propose a learning framework, Category-Oriented Self-Learning Graph Embedding (COSLGE), which achieves a flexible low-dimensional compact representation by imposing an adaptive graph-learning process across the entire data set while enforcing the inter-class separability of the low-dimensional embedding by jointly learning a linear classifier. Moreover, our framework extends easily to the semi-supervised setting. Extensive experiments on several widely used benchmark databases demonstrate the effectiveness of the proposed method compared with several state-of-the-art approaches.



Author(s):  
Emadeldeen Eldele ◽  
Mohamed Ragab ◽  
Zhenghua Chen ◽  
Min Wu ◽  
Chee Keong Kwoh ◽  
...  

Learning decent representations from unlabeled time-series data with temporal dynamics is a very challenging task. In this paper, we propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) to learn time-series representations from unlabeled data. First, the raw time-series data are transformed into two different yet correlated views using weak and strong augmentations. Second, we propose a novel temporal contrasting module that learns robust temporal representations through a tough cross-view prediction task. Last, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module; it maximizes the similarity among different contexts of the same sample while minimizing the similarity among contexts of different samples. Experiments were carried out on three real-world time-series datasets. The results show that training a linear classifier on top of the features learned by TS-TCC performs comparably with fully supervised training. Additionally, TS-TCC is highly efficient in few-labeled-data and transfer-learning scenarios. The code is publicly available at https://github.com/emadeldeen24/TS-TCC.
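The evaluation protocol behind "a linear classifier on top of the features" is the standard linear probe: freeze the pretrained encoder and train only a linear classifier on its outputs. The sketch below illustrates the protocol only; a fixed random tanh projection stands in for the pretrained TS-TCC encoder, and the two synthetic signal classes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(x, W=rng.normal(0, 0.3, (20, 16))):
    """Stand-in for a pretrained encoder; its weights W are never updated."""
    return np.tanh(x @ W)

# Two synthetic "time-series" classes: 20-step signals with shifted means.
X = np.vstack([rng.normal(-0.5, 1.0, (100, 20)),
               rng.normal(0.5, 1.0, (100, 20))])
y = np.concatenate([np.zeros(100), np.ones(100)])
feats = frozen_encoder(X)        # features are computed once and frozen

# Linear probe: logistic regression on the frozen features only.
w, b = np.zeros(16), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.1 * feats.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0) == y)
print(f"linear-probe accuracy: {acc:.2f}")
```

Because only the linear layer is trained, probe accuracy directly measures how linearly separable the encoder's features are, which is why the paper uses it to compare unsupervised pretraining against supervised training.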

