class invariant
Recently Published Documents


TOTAL DOCUMENTS: 24 (FIVE YEARS: 4)
H-INDEX: 6 (FIVE YEARS: 0)

Mathematics ◽ 2021 ◽ Vol 10 (1) ◽ pp. 80
Author(s): Asif Khan ◽ Jun-Sik Kim ◽ Heung Soo Kim

A simulation model can provide insight into the characteristic behaviors of different health states of an actual system; however, such a simulation cannot account for all complexities in the system. This work proposes a transfer learning strategy that employs simple computer simulations for fault diagnosis in an actual system. A simple shaft-disk system was used to generate a substantial set of source data for three health states of a rotor system, and that data was used to train, validate, and test a customized deep neural network. The deep learning model, pretrained on simulation data, was used as a domain- and class-invariant generalized feature extractor, and the extracted features were processed with traditional machine learning algorithms. The experimental data sets of an RK4 rotor kit and a machinery fault simulator (MFS) were employed to assess the effectiveness of the proposed approach. The proposed method was also validated by comparing its performance with the pre-existing deep learning models GoogLeNet, VGG16, ResNet18, AlexNet, and SqueezeNet in terms of feature extraction, generalizability, computational cost, and network size and parameter count.
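A minimal sketch of this kind of pipeline, assuming PyTorch and scikit-learn; the 1D-CNN shape, signal length, and placeholder data below are illustrative, not taken from the paper:

```python
# Sketch: network pretrained on simulated signals reused as a frozen feature
# extractor, with a classical classifier trained on the extracted features.
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

class SimPretrainedCNN(nn.Module):
    """1D CNN assumed to be pretrained on simulated shaft-disk vibration data."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def extract_features(model, signals):
    """Use the pretrained convolutional stack as a frozen feature extractor."""
    model.eval()
    with torch.no_grad():
        return model.features(signals).cpu().numpy()

model = SimPretrainedCNN()                    # assume weights learned on simulation data
X_exp = torch.randn(100, 1, 2048)             # placeholder experimental signals (e.g., RK4/MFS)
y_exp = torch.randint(0, 3, (100,)).numpy()   # placeholder health-state labels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(extract_features(model, X_exp), y_exp)
```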


Author(s): Songyang Zhang ◽ Jiale Zhou ◽ Xuming He

Few-shot video classification aims to learn new video categories with only a few labeled examples, alleviating the burden of costly annotation in real-world applications. However, it is particularly challenging to learn a class-invariant spatial-temporal representation in such a setting. To address this, we propose a novel matching-based few-shot learning strategy for video sequences in this work. Our main idea is to introduce an implicit temporal alignment for a video pair, capable of estimating the similarity between them in an accurate and robust manner. Moreover, we design an effective context encoding module to incorporate spatial and feature-channel context, resulting in better modeling of intra-class variations. To train our model, we develop a multi-task loss for learning video matching, leading to video features with better generalization. Extensive experimental results on two challenging benchmarks show that our method outperforms prior art by a sizable margin on Something-Something-V2 and achieves competitive results on Kinetics.
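A minimal sketch of one way an implicit (soft) temporal alignment can score a video pair, assuming per-frame features are already extracted; this illustrates the general idea rather than the authors' exact formulation:

```python
# Sketch: soft temporal alignment between two per-frame feature sequences.
import torch
import torch.nn.functional as F

def aligned_similarity(query, support, tau=0.1):
    """query: (Tq, D), support: (Ts, D) per-frame features; returns a scalar score."""
    q = F.normalize(query, dim=-1)
    s = F.normalize(support, dim=-1)
    sim = q @ s.t()                          # (Tq, Ts) frame-to-frame cosine similarities
    # Each query frame softly attends to its best-matching support frames,
    # instead of committing to a single hard alignment path.
    weights = F.softmax(sim / tau, dim=-1)
    return (weights * sim).sum(dim=-1).mean()

q = torch.randn(8, 256)                      # 8 query frames, 256-d features (placeholder)
s = torch.randn(8, 256)
score = aligned_similarity(q, s)             # higher score = more likely the same class
```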


Author(s): Christian Weiß

Interval exchange transformations are typically uniquely ergodic maps and therefore have uniformly distributed orbits. Their degree of uniformity can be measured in terms of the star-discrepancy. Few examples of interval exchange transformations with low-discrepancy orbits are known so far, and only for $n=2,3$ intervals are there criteria that completely characterize those interval exchange transformations. In this paper, it is shown that having low-discrepancy orbits is a conjugacy class invariant under composition of maps. To a certain extent, this approach allows us to distinguish interval exchange transformations with low-discrepancy orbits from those without. For $n=4$ intervals, the classification is almost complete, with the only exceptional case having monodromy invariant $\rho = (4,3,2,1)$. This particular monodromy invariant is discussed in detail.
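For reference, the star-discrepancy mentioned above has the standard definition (notation mine, not restated in the abstract): for the first $N$ orbit points $x_1,\dots,x_N$ in $[0,1)$,

$$D_N^*(x_1,\dots,x_N) \;=\; \sup_{0 \le \alpha \le 1} \left| \frac{\#\{1 \le i \le N : x_i \in [0,\alpha)\}}{N} - \alpha \right|,$$

and an orbit is said to have low discrepancy if $D_N^* = O(\log N / N)$, the optimal order of decay in one dimension.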


2021 ◽ Vol 12
Author(s): Ming-Chi Tseng ◽ Wen-Chung Wang

Mixture item response theory (IRT) models include a mixture of latent subpopulations, such that there are qualitative differences between subgroups but, within each subpopulation, a measurement model based on a continuous latent variable holds. Under this modeling framework, students can be characterized both by their location on a continuous latent variable and by their latent class membership according to their responses. Under the mixture IRT framework, it is important to identify anchor items beforehand for constructing a common scale between latent classes; all model parameters across latent classes can then be estimated on that common scale. In this study, we proposed the Q-matrix anchored mixture Rasch model (QAMRM), which combines a Q-matrix with the traditional mixture Rasch model. The Q-matrix in the QAMRM uses class-invariant items to place all model parameter estimates from different latent classes on a common scale, regardless of the ability distribution. A simulation study was conducted, and the parameters of the QAMRM were recovered fairly well. A real dataset from the Certificate of Proficiency in English was analyzed with the QAMRM and the LCDM, and the QAMRM outperformed the LCDM in terms of model fit indices.
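As context, the class-specific Rasch model underlying this framework can be written in its standard form (notation mine): for person $p$ with ability $\theta_p$ in latent class $g$, and item $i$ with class-specific difficulty $b_{ig}$,

$$P\bigl(X_{pi}=1 \mid \theta_p,\; c_p = g\bigr) \;=\; \frac{\exp(\theta_p - b_{ig})}{1 + \exp(\theta_p - b_{ig})}.$$

Anchor (class-invariant) items are those constrained so that $b_{ig} = b_i$ for every class $g$; these shared difficulties are what tie the class-specific scales to a common metric.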


2018 ◽ Vol 8 (12) ◽ pp. 2529
Author(s): Xiaoqing Wang ◽ Xiangjun Wang

When large-scale annotated data are not available for certain image classification tasks, training a deep convolutional neural network model becomes challenging. Some recent domain adaptation methods try to solve this problem using generative adversarial networks and have achieved promising results. However, these methods are based on a shared-latent-space assumption and do not consider the situation in which shared high-level representations across domains do not exist or are not as ideal as assumed. To overcome this limitation, we propose a neural network structure called coupled generative adversarial autoencoders (CGAA), which allows a pair of generators to learn the high-level differences between two domains by sharing only part of the high-level layers. Additionally, by introducing a class-consistent loss calculated by a stand-alone classifier into the generator optimization, our model is able to generate class-invariant, style-transferred images suitable for classification tasks in domain adaptation. We apply CGAA to several domain-transfer image classification scenarios, including several benchmark datasets. Experimental results show that our method achieves state-of-the-art classification results.
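A minimal sketch of the two ideas named above, generators that share only part of their high-level layers and a class-consistency term supplied by a stand-alone classifier, assuming PyTorch; layer sizes, names, and the loss form are illustrative, not the paper's architecture:

```python
# Sketch: coupled generators with partially shared high-level layers and a
# class-consistency loss that keeps the two outputs on the same class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledGenerators(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU())   # shared high-level layers
        self.head_a = nn.Sequential(nn.Linear(256, 784), nn.Tanh())     # domain-A-specific layers
        self.head_b = nn.Sequential(nn.Linear(256, 784), nn.Tanh())     # domain-B-specific layers

    def forward(self, z):
        h = self.shared(z)
        return self.head_a(h), self.head_b(h)

def class_consistent_loss(classifier, x_a, x_b):
    """Penalize the generators when the classifier disagrees on the pair's class."""
    log_p_a = F.log_softmax(classifier(x_a), dim=-1)
    p_b = F.softmax(classifier(x_b), dim=-1)
    return F.kl_div(log_p_a, p_b, reduction="batchmean")

gen = CoupledGenerators()
clf = nn.Linear(784, 10)                     # placeholder stand-alone classifier
z = torch.randn(32, 64)
x_a, x_b = gen(z)
loss = class_consistent_loss(clf, x_a, x_b)  # added to the adversarial/reconstruction terms
```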


2012 ◽ Vol 2012 ◽ pp. 1-14
Author(s): Nipen Saikia

We define a new parameter involving a quotient of Ramanujan's function for positive real numbers and study several of its properties. We prove some general theorems for the explicit evaluation of this parameter and find many explicit values. Some of these values are then used to find new and known values of Ramanujan's class invariant.
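For reference (notation not given in the abstract), Ramanujan's class invariants are conventionally defined, with $q = e^{-\pi\sqrt{n}}$ for $n > 0$, by

$$G_n = 2^{-1/4}\, q^{-1/24} \prod_{k=0}^{\infty}\bigl(1 + q^{2k+1}\bigr), \qquad g_n = 2^{-1/4}\, q^{-1/24} \prod_{k=0}^{\infty}\bigl(1 - q^{2k+1}\bigr).$$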


2011 ◽ Vol 14 ◽ pp. 108-126
Author(s): Reinier Bröker

We develop a new p-adic algorithm to compute the minimal polynomial of a class invariant. Our approach works for virtually any modular function yielding class invariants. The main algorithmic tool is modular polynomials, a concept which we generalize to functions of higher level.
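For context (standard background, not part of the abstract): the best-known case is the Hilbert class polynomial of an imaginary quadratic order $\mathcal{O}$ of discriminant $D$,

$$H_D(X) \;=\; \prod_{[\mathfrak{a}] \,\in\, \mathrm{Cl}(\mathcal{O})} \bigl(X - j(\mathfrak{a})\bigr) \;\in\; \mathbb{Z}[X],$$

where $j$ is the modular $j$-function evaluated at the ideal classes. A class invariant is the value of another modular function that generates the same ring class field but typically has a minimal polynomial with much smaller coefficients, and it is this minimal polynomial that the algorithm computes.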


Author(s): Bai Xiao ◽ Yi-Zhe Song ◽ Anupriya Balika ◽ Peter M. Hall
