Semi-supervised learning using autodidactic interpolation on sparse representation-based multiple one-dimensional embedding

Author(s):  
Hao Deng ◽  
Chao Ma ◽  
Lijun Shen ◽  
Chuanwu Yang

In this paper, we present a novel semi-supervised classification method based on sparse representation (SR) and multiple one-dimensional embedding-based adaptive interpolation (M1DEI). The main idea of M1DEI is to embed the data into multiple one-dimensional (1D) manifolds such that connected samples have the shortest distance. In this way, the problem of high-dimensional data classification is transformed into a 1D classification problem. By alternating interpolation and averaging over the multiple 1D manifolds, the labeled sample set can be gradually enlarged. Clearly, a proper metric facilitates more accurate embedding and thereby helps improve classification performance. We develop an SR-based metric that measures the affinity between samples more accurately than the common Euclidean distance. Experimental results on several databases show the effectiveness of the improvement.
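The 1D embedding and label interpolation idea can be sketched as follows. This is a minimal illustration, not the authors' M1DEI algorithm: the greedy nearest-neighbor ordering and the nearest-labeled-neighbor fill-in are simplifying assumptions (the paper alternates interpolation and averaging over several embeddings and uses an SR-based metric rather than plain Euclidean distance).

```python
import numpy as np

def greedy_1d_embedding(X, start=0):
    """Order samples into a 1D sequence by repeatedly visiting the nearest
    unvisited neighbor, approximating an embedding in which connected
    samples have the shortest distance."""
    n = len(X)
    order, visited = [start], {start}
    while len(order) < n:
        d = np.linalg.norm(X - X[order[-1]], axis=1)
        d[list(visited)] = np.inf          # never revisit a sample
        nxt = int(np.argmin(d))
        order.append(nxt)
        visited.add(nxt)
    return order

def interpolate_labels(order, labels):
    """Fill unlabeled entries (-1) along the 1D sequence from the nearest
    labeled neighbor in sequence position (a toy 1D interpolation)."""
    seq = np.array([labels[i] for i in order], dtype=float)
    known = np.where(seq >= 0)[0]
    filled = seq.copy()
    for pos in np.where(seq < 0)[0]:
        filled[pos] = seq[known[np.argmin(np.abs(known - pos))]]
    out = labels.copy()
    for pos, idx in enumerate(order):
        out[idx] = int(filled[pos])
    return out

# Two well-separated clusters with one labeled point each.
X = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [5.0, 0], [5.1, 0], [5.2, 0]])
labels = np.array([0, -1, -1, -1, -1, 1])
propagated = interpolate_labels(greedy_1d_embedding(X), labels)
```

Along the resulting sequence, labels spread outward from the labeled samples, which is how the labeled set grows in the semi-supervised loop.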

Author(s):  
Y. Wang ◽  
Yuan Yan Tang ◽  
Luoqing Li ◽  
Jianzhong Wang

This paper presents a novel classifier based on collaborative representation (CR) and multiple one-dimensional (1D) embedding, with applications to face recognition. The use of the multiple 1D embedding (1DME) framework in semi-supervised learning was first proposed by one of the authors, J. Wang, in 2014. The main idea of multiple 1D embedding is the following: given a high-dimensional dataset, we first map it onto several different 1D sequences on the line while preserving the proximity of data points in the original high-dimensional ambient space. By this means, a high-dimensional classification problem reduces to one in a 1D framework, which can be solved efficiently by any classical 1D regularization method, for instance an interpolation scheme. The dissimilarity metric plays an important role in learning a good 1DME of the original dataset. Our other contribution is to develop a collaborative representation-based dissimilarity (CRD) metric. Compared with the conventional Euclidean-distance-based metric, the proposed method leads to better results. Experimental results on real-world databases verify the efficacy of the proposed method.
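The collaborative representation step admits a closed form: coding a sample x over a dictionary D with ridge regularization gives a = (DᵀD + λI)⁻¹Dᵀx. The sketch below turns the coefficients into a dissimilarity (samples receiving large coefficients are treated as close); this particular construction is an illustrative assumption, not necessarily the authors' exact CRD metric.

```python
import numpy as np

def cr_coefficients(x, D, lam=0.1):
    """Collaborative representation: minimize ||x - D a||^2 + lam ||a||^2,
    solved in closed form as a = (D^T D + lam I)^{-1} D^T x."""
    G = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(G, D.T @ x)

def crd_matrix(X, lam=0.1):
    """Toy CR-based dissimilarity: represent each sample over all the
    others; a small (near-zero) coefficient for sample j means j
    contributes little to reconstructing i, hence large dissimilarity."""
    n = X.shape[0]
    Dmat = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        a = cr_coefficients(X[i], X[idx].T, lam)
        for k, j in enumerate(idx):
            Dmat[i, j] = 1.0 / (abs(a[k]) + 1e-8)
    return Dmat

# Two near pairs: each point should be far more similar to its twin.
X = np.array([[1.0, 0.0], [1.01, 0.0], [0.0, 1.0], [0.0, 1.01]])
D = crd_matrix(X)
```

Unlike a pairwise Euclidean distance, each entry here depends on all samples jointly, which is the point of the collaborative coding.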


Author(s):  
Yalong Song ◽  
Hong Li ◽  
Jianzhong Wang ◽  
Kit Ian Kou

In this paper, we present a novel multiple 1D-embedding-based clustering (M1DEBC) scheme for hyperspectral image (HSI) classification. This clustering scheme is an iterative algorithm based on 1D-embedding regularization, first proposed by J. Wang [Semi-supervised learning using ensembles of multiple 1D-embedding-based label boosting, Int. J. Wavelets Multiresolut. Inf. Process. 14(2) (2016) 33 pp.; Semi-supervised learning using multiple one-dimensional embedding-based adaptive interpolation, Int. J. Wavelets Multiresolut. Inf. Process. 14(2) (2016) 11 pp.]. At each iteration, the algorithm performs the following three steps. First, we construct a 1D multi-embedding containing several different versions of 1D embedding. Each of them is realized by an isometric mapping that maps all the pixels of an HSI onto a line such that the sum of the distances between adjacent pixels in the original space is minimized. Second, for each 1D embedding, we use a regularization method to find a pre-classifier that assigns each unlabeled sample a preliminary label. If all of the different versions of regularization vote for the same preliminary label, we call the sample a feasible confident sample. All the feasible confident samples and their corresponding labels constitute the auxiliary set. We randomly select part of the elements of the auxiliary set to construct the newborn labeled set. Finally, we add the newborn labeled set to the labeled sample set. Thus, the labeled sample set is gradually enlarged over the iterations, which terminate when the updated labeled set reaches a certain size. Our experimental results on real hyperspectral datasets confirm the effectiveness of the proposed scheme.
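The embed-then-vote loop above can be sketched as follows. This is a minimal stand-in, not the paper's algorithm: the greedy path ordering approximates the isometric mapping, and a nearest-labeled-neighbor rule replaces the regularization-based pre-classifier.

```python
import numpy as np

def path_embedding(X, start):
    """Greedy nearest-neighbor ordering: an approximate mapping of the
    pixels onto a line that keeps adjacent distances small."""
    n, order, visited = len(X), [start], {start}
    while len(order) < n:
        d = np.linalg.norm(X - X[order[-1]], axis=1)
        d[list(visited)] = np.inf
        nxt = int(np.argmin(d))
        order.append(nxt)
        visited.add(nxt)
    return order

def preliminary_labels(order, labels):
    """Pre-classifier on one embedding: nearest labeled neighbor along the
    1D sequence (stands in for the paper's regularization step)."""
    seq = np.array([labels[i] for i in order])
    known = np.where(seq >= 0)[0]
    pred = labels.copy()
    for pos in range(len(seq)):
        pred[order[pos]] = seq[known[np.argmin(np.abs(known - pos))]]
    return pred

def confident_samples(X, labels, starts):
    """Unlabeled samples whose preliminary label is unanimous across all
    embeddings form the auxiliary set."""
    votes = np.stack([preliminary_labels(path_embedding(X, s), labels)
                      for s in starts])
    unanimous = (votes == votes[0]).all(axis=0)
    return np.where(unanimous & (labels < 0))[0], votes[0]

X = np.array([[0.0, 0], [0.2, 0], [0.4, 0], [6.0, 0], [6.2, 0], [6.4, 0]])
labels = np.array([0, -1, -1, -1, -1, 1])
aux_idx, aux_labels = confident_samples(X, labels, starts=[0, 5])
```

In the full scheme, a random subset of the auxiliary set would then be promoted to the labeled set and the loop repeated.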


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1714
Author(s):  
Mohamed Marey ◽  
Hala Mostafa

In this work, we propose a general framework for designing signal classification algorithms over time-selective channels in wireless communications applications. We derive an upper bound on the maximum number of observation samples over which the channel response is essentially invariant. The proposed framework relies on dividing the received signal into blocks, each with a length below the derived bound. These blocks are then fed into a number of classifiers in parallel, and a final decision is made through a well-designed combiner and detector. As a case study, we apply the proposed framework to a space-time block-code classification problem by developing two combiners and detectors. Monte Carlo simulations show that the proposed framework achieves excellent classification performance over time-selective channels compared with conventional algorithms.
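The block-split-classify-combine structure can be sketched as below. The per-block classifier here is a hypothetical toy (a mean-power threshold), and majority voting stands in for the paper's combiner and detector, which are designed specifically for the space-time block-code problem.

```python
import numpy as np

def classify_block(block):
    """Hypothetical per-block classifier: a toy decision on mean power
    (stands in for a real modulation/STBC feature classifier)."""
    return int(np.mean(np.abs(block) ** 2) > 1.0)

def framework_decision(signal, max_block_len):
    """Divide the received signal into blocks no longer than the
    channel-coherence bound, classify each block independently, and
    combine the per-block decisions by majority vote."""
    blocks = [signal[i:i + max_block_len]
              for i in range(0, len(signal), max_block_len)]
    decisions = [classify_block(b) for b in blocks]
    return int(np.mean(decisions) >= 0.5)
```

Keeping each block shorter than the coherence bound is what lets the per-block classifier assume a fixed channel response.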


2021 ◽  
Vol 13 (4) ◽  
pp. 547
Author(s):  
Wenning Wang ◽  
Xuebin Liu ◽  
Xuanqin Mou

For both traditional classification and currently popular deep learning methods, the limited-sample classification problem is very challenging, and the lack of samples is an important factor affecting classification performance. Our work includes two aspects. First, unsupervised data augmentation for all hyperspectral samples not only greatly improves classification accuracy through the newly added training samples, but also further improves the accuracy of the classifier by optimizing the augmented test samples. Second, an effective spectral structure extraction method is designed, and the extracted spectral structure features yield better classification accuracy than the original spectral features.


2021 ◽  
Vol 11 (2) ◽  
pp. 609
Author(s):  
Tadeusz Chyży ◽  
Monika Mackiewicz

The concept of special finite elements, called multi-area elements, for the analysis of structures with areas of different stiffness is presented in the paper. The new type of finite element has been formulated to allow the analysis of heterogeneous, multi-coherent, and layered structures with fewer finite elements while maintaining proper accuracy of the results. The main advantage of the presented multi-area elements is that areas of a structure with different stiffness and geometrical parameters can be described by a single element integrated over subdivisions (sub-areas). The formulation of such elements is presented using the example of one-dimensional elements. The main idea of the developed elements is the assumption that the deformation field inside the element depends on its geometry and stiffness distribution. The deformation field can be changed and adjusted during the calculation process, which is why such elements can be treated as self-adaptive. Applying this self-adaptation of the strain field should simplify the analysis of complex non-linear problems and increase their accuracy. In order to confirm the correctness of the established assumptions, comparative analyses have been carried out and potential areas of application have been indicated.
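For the one-dimensional case, the idea of one element covering several stiffness sub-areas can be illustrated with the series combination of axial stiffnesses, k = 1 / Σ(Lᵢ / (EᵢAᵢ)). This sketch only captures the integration over sub-areas; the paper's elements additionally adapt the internal deformation field, which is omitted here.

```python
def multi_area_stiffness(subareas):
    """Effective axial stiffness of one 1D element whose sub-areas
    (E_i, A_i, L_i) act in series: k = 1 / sum(L_i / (E_i * A_i)).
    Units must be consistent (e.g. Pa, m^2, m)."""
    return 1.0 / sum(L / (E * A) for (E, A, L) in subareas)

# A homogeneous bar described as one sub-area, and the same bar
# described as two sub-areas, must give the same stiffness.
k_whole = multi_area_stiffness([(2.0, 3.0, 6.0)])
k_split = multi_area_stiffness([(2.0, 3.0, 3.0), (2.0, 3.0, 3.0)])
```

The payoff is the one advertised in the abstract: a layered bar needs a single element, not one element per layer.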


2019 ◽  
pp. 152808371986693 ◽  
Author(s):  
Changchun Ji ◽  
Yudong Wang ◽  
Yafeng Sun

In order to decrease the fiber diameter and reduce energy consumption in the melt-blowing process, a new slot die with internal stabilizers was designed and investigated using computational fluid dynamics (CFD). In the numerical simulation, the calculated data were validated against laboratory measurements. This work shows that the new slot die can increase the average velocity on the centerline of the air-flow field by 6.9% compared with the common slot die. At the same time, the new slot die can decrease the back-flow velocity and the rate of temperature decay in the region close to the die head, reduce the peak value of the turbulent kinetic energy, and make fiber movements more gradual. A one-dimensional drawing model shows that the new slot die is more advantageous for decreasing fiber diameter than the common slot die.


Author(s):  
Jing Jin ◽  
Hua Fang ◽  
Ian Daly ◽  
Ruocheng Xiao ◽  
Yangyang Miao ◽  
...  

The common spatial patterns (CSP) algorithm is one of the most frequently used and effective spatial filtering methods for extracting relevant features for use in motor imagery brain–computer interfaces (MI-BCIs). However, an inherent defect of the traditional CSP algorithm is that it is highly sensitive to potential outliers, which adversely affects its performance in practical applications. In this work, we propose a novel feature optimization and outlier detection method for the CSP algorithm. Specifically, we use the minimum covariance determinant (MCD) to detect and remove outliers in the dataset, and then use the Fisher score to evaluate and select features. In addition, in order to prevent the emergence of new outliers, we propose an iterative minimum covariance determinant (IMCD) algorithm. We evaluate our proposed algorithm in terms of iteration count, classification accuracy, and feature distribution using two BCI competition datasets. The experimental results show that the average classification performance of our proposed method is 12% and 22.9% higher than that of the traditional CSP method on the two datasets, and that our proposed method outperforms other competing methods. These results show that our method improves the performance of MI-BCI systems.
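The two building blocks can be sketched as follows. Note the outlier detector below uses the plain sample covariance in a Mahalanobis-distance test, only a simplified stand-in for the robust MCD estimator the paper uses; the Fisher score is the standard between-class over within-class variance ratio.

```python
import numpy as np

def mahalanobis_outliers(X, quantile=0.95):
    """Flag samples whose Mahalanobis distance from the mean exceeds a
    quantile threshold. (A plain-covariance simplification of the MCD
    idea; MCD would estimate mean and covariance robustly first.)"""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)
    return d > np.quantile(d, quantile)

def fisher_score(X, y):
    """Standard Fisher score per feature: between-class scatter divided
    by within-class scatter; larger means more discriminative."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2
              for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

rng = np.random.RandomState(0)
X_out = np.vstack([rng.randn(20, 2), [[10.0, 10.0]]])   # one gross outlier
flagged = mahalanobis_outliers(X_out)

X_feat = np.array([[0.0, 1.0], [0.1, 0.0], [5.0, 1.0], [5.1, 0.0]])
scores = fisher_score(X_feat, np.array([0, 0, 1, 1]))    # feature 0 separates
```

In the paper's pipeline, removal and scoring are iterated (IMCD) so that cleaning the data does not itself create new outliers.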


Author(s):  
Canyi Du ◽  
Rui Zhong ◽  
Yishen Zhuo ◽  
Xinyu Zhang ◽  
Feifei Yu ◽  
...  

Traditional engine fault diagnosis methods usually need to extract features manually before classifying them with a pattern recognition method, which makes it difficult to solve the end-to-end fault diagnosis problem. In recent years, deep learning has been applied in many fields, bringing considerable convenience to technological change; in the automotive field, its applications include image recognition, language processing, and assisted driving. In this paper, a one-dimensional convolutional neural network (1D-CNN) is used to process vibration signals to achieve fault diagnosis and classification. Vibration signal data are collected under different engine working conditions and organized into several sets of data per working cycle, which are divided into a training sample set and a test sample set. A one-dimensional convolutional neural network model is then built in Python so that the feature filters (convolution kernels) learn from the training set, and these kernels then process the input data of the test set. Convolution and pooling extract features and output them to a new space, so that the network learns features directly from the original vibration signals and completes the fault diagnosis. The experimental results show that the pattern recognition method based on a one-dimensional convolutional neural network can be effectively applied to engine fault diagnosis and has higher diagnostic accuracy than traditional methods.
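The convolution-and-pooling feature extraction at the heart of a 1D-CNN can be illustrated with a forward pass in plain NumPy. This is only a sketch of the operations (no training, no deep-learning framework); the hand-picked "edge" kernel mimics what a learned filter might respond to in a vibration signal.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid 1D cross-correlation: the kernel acts as a feature filter
    sliding over the signal, producing one response per position."""
    n, k = len(signal), len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(n - k + 1)])

def max_pool1d(x, size):
    """Non-overlapping max pooling: keep the strongest filter response in
    each window, shrinking the feature map."""
    trimmed = x[:len(x) // size * size].reshape(-1, size)
    return trimmed.max(axis=1)

# Toy "vibration" signal with one abrupt step; a difference kernel
# responds strongly where the step occurs.
signal = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
kernel = np.array([-1.0, 1.0])
features = max_pool1d(conv1d(signal, kernel), 2)
```

Stacking such layers, with kernels learned from labeled working-cycle data, is what lets the network classify faults directly from raw signals.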


Neofilolog ◽  
1970 ◽  
pp. 247-256
Author(s):  
Małgorzata Spychała

The article discusses task-based learning (TBL) in Spanish (enfoque por tareas), a teaching approach whose aim is to develop the learner's communicative competence and to help the teacher activate language learners in the classroom – in this case, learners of Spanish. The article describes the main objectives of tasks and projects as defined in the Common European Framework, including proposed activities designed to fulfill a given task. The final section presents a sample set of lessons following TBL and analyzes the advantages and disadvantages of this approach.


2019 ◽  
Author(s):  
Seda Bilaloglu ◽  
Joyce Wu ◽  
Eduardo Fierro ◽  
Raul Delgado Sanchez ◽  
Paolo Santiago Ocampo ◽  
...  

Visual analysis of solid tissue mounted on glass slides is currently the primary method used by pathologists for determining the stage, type, and subtype of cancer. Although whole-slide images are usually large (tens to hundreds of thousands of pixels wide), an exhaustive, time-consuming assessment is necessary to reduce the risk of misdiagnosis. In an effort to address the many diagnostic challenges faced by trained experts, recent research has focused on developing automatic prediction systems for this multi-class classification problem. Typically, complex convolutional neural network (CNN) architectures, such as Google's Inception, are used to tackle this problem. Here, we introduce a greatly simplified CNN architecture, PathCNN, which allows more efficient use of computational resources and better classification performance. Using this improved architecture, we trained simultaneously on whole-slide images from multiple tumor sites and corresponding non-neoplastic tissue. Dimensionality reduction analysis of the weights of the last layer of the network captures groups of images that faithfully represent the different types of cancer, highlighting differences in staining and capturing outliers, artifacts, and misclassification errors. Our code is available online at: https://github.com/sedab/PathCNN.
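The last-layer analysis described above amounts to projecting the rows of the final weight matrix onto a few principal components and looking for class structure. A minimal PCA-via-SVD sketch (not the PathCNN code itself, which is at the linked repository):

```python
import numpy as np

def weight_pca(W, n_components=2):
    """Project the rows of a last-layer weight matrix onto their top
    principal components via SVD; rows (class weight vectors) that
    cluster together in this space indicate related classes."""
    Wc = W - W.mean(axis=0)                       # center the rows
    U, S, Vt = np.linalg.svd(Wc, full_matrices=False)
    return Wc @ Vt[:n_components].T

# Toy weight matrix: two groups of classes differing along one direction.
W = np.array([[1.0, 0.0, 0.0], [1.1, 0.0, 0.0],
              [-1.0, 0.0, 0.0], [-1.1, 0.0, 0.0]])
proj = weight_pca(W, 1)
```

On the first component, the two groups land on opposite sides, which is the kind of grouping by tumor type the abstract reports.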

