discriminative learning
Recently Published Documents


TOTAL DOCUMENTS

369
(FIVE YEARS 87)

H-INDEX

30
(FIVE YEARS 4)

2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Touqeer Ahmad ◽  
Lindu Zhao

One-shot low-resolution face recognition is a prevalent problem in law enforcement, where low-resolution face images captured by surveillance cameras must generally be recognized against a single high-resolution profile face image in the database. The problem is very challenging because the available samples are few and the quality of the unknown images is low. To effectively address this issue, this paper proposes the Adapted Discriminative Coupled Mappings (AdaDCM) approach, which integrates domain adaptation and discriminative learning. To achieve good domain adaptation performance on small datasets, a new domain adaptation technique called Bidirectional Locality Matching-based Domain Adaptation (BLM-DA) is first developed. The proposed AdaDCM is then formulated by unifying BLM-DA and discriminative coupled mappings into a single framework. AdaDCM is extensively evaluated on the FERET, LFW, and SCface databases, which include LR face images obtained in constrained, unconstrained, and real-world environments. The promising results on these datasets demonstrate the effectiveness of AdaDCM for one-shot LR face recognition.
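The coupled-mappings idea at the core of this approach can be illustrated with a toy sketch: two projections map high- and low-resolution features into a common space where paired samples land close together. The dimensions, synthetic features, and the simplification of fixing the low-resolution projection to the identity are all illustrative assumptions, not the AdaDCM formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pairs, d_hr, d_lr = 40, 32, 8   # illustrative dimensions

# Synthetic paired features: low-res features are a degraded linear
# function of the high-res ones.
X_hr = rng.normal(size=(n_pairs, d_hr))
A = rng.normal(size=(d_hr, d_lr)) / np.sqrt(d_hr)
X_lr = X_hr @ A + 0.01 * rng.normal(size=(n_pairs, d_lr))

# Simplified coupled-mappings objective: fix the low-res projection to the
# identity and solve for the high-res projection P_hr minimizing
# ||X_hr @ P_hr - X_lr @ P_lr||_F. Full methods alternate over both.
P_lr = np.eye(d_lr)
P_hr, *_ = np.linalg.lstsq(X_hr, X_lr @ P_lr, rcond=None)

# Paired samples land close together in the common space.
residual = np.linalg.norm(X_hr @ P_hr - X_lr @ P_lr)
```

In the full discriminative setting, the objective additionally pulls same-identity pairs together and pushes different identities apart; the least-squares step above only captures the coupling between the two resolutions.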


Author(s):  
Yu-Ying Chuang ◽  
R. Harald Baayen

Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Both words’ forms and their meanings are represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors. For production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model shows good performance with respect to both production and comprehension accuracy, and for predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.
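The optimal end-state mappings mentioned above can be estimated in closed form: stacking form vectors in a matrix C and meaning vectors in a matrix S, the comprehension mapping F is the least-squares solution of CF ≈ S, i.e. a multivariate multiple regression. A minimal sketch with toy vectors (the actual form and meaning representations used by NDL/LDL are far richer):

```python
import numpy as np

# Toy lexicon: 3 words, 4-dimensional form vectors (C), 3-dimensional
# meaning vectors (S). Real models use much richer representations.
C = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 0., 0.]])
S = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

# End-state comprehension mapping: least-squares solution of C @ F = S,
# i.e. multivariate multiple regression of meanings on forms.
F = np.linalg.pinv(C) @ S

# Predicted meanings for the training words (production would map S to C).
S_hat = C @ F
```

The production mapping is obtained the same way in the opposite direction, regressing form vectors on meaning vectors.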


2021 ◽  
Author(s):  
Jessica Nieder ◽  
Ruben van de Vijver ◽  
Fabian Tomaschek

Grammatical knowledge of native speakers has often been investigated in so-called wug tests, in which participants have to inflect pseudo-word forms (wugs). Typically it has been argued that in inflecting these pseudo-words, speakers apply their knowledge of word formation processes. However, it remains unclear what exactly this knowledge is and how it is learned. According to one theory, the knowledge is best characterized as abstractions and rules that specify how units can be combined. Another theory maintains that it is best characterized by analogy. In both cases the knowledge is learned by association based on positive evidence alone. In this paper, we model the classification of pseudo-words into Maltese plural classes on the basis of phonetic input using a shallow neural network trained with an error-driven learning algorithm. We demonstrate that the classification patterns mirror those of Maltese native speakers in a wug test. Our results indicate that speakers rely on gradient knowledge of a relation between the phonetics of whole words and plural classes, which is learned in an error-driven way.
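Error-driven learning in such a shallow network typically follows the Widrow-Hoff (delta) rule: association weights are adjusted in proportion to the prediction error. A toy sketch with random vectors standing in for phonetic cues and plural classes (not the authors' Maltese data or network):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cues, n_classes, n_trials = 20, 4, 2000

# Each plural class has a characteristic (noisy) phonetic cue pattern.
prototypes = rng.normal(size=(n_classes, n_cues))

W = np.zeros((n_cues, n_classes))  # cue-to-class association weights
eta = 0.01                         # learning rate

for _ in range(n_trials):
    k = rng.integers(n_classes)
    cue = prototypes[k] + 0.1 * rng.normal(size=n_cues)  # phonetic input
    target = np.eye(n_classes)[k]                        # observed class
    pred = cue @ W
    W += eta * np.outer(cue, target - pred)              # delta rule

# After training, the network classifies the class prototypes correctly.
preds = np.argmax(prototypes @ W, axis=1)
```

Because the update is driven by the prediction error (target minus prediction), cues that co-occur with several classes end up with graded association strengths rather than all-or-nothing rules.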


2021 ◽  
Vol 12 ◽  
Author(s):  
Maria Heitmeier ◽  
Yu-Ying Chuang ◽  
R. Harald Baayen

This study addresses a series of methodological questions that arise when modeling inflectional morphology with Linear Discriminative Learning. Taking the semi-productive German noun system as an example, we illustrate how decisions made about the representation of form and meaning influence model performance. We clarify that for modeling frequency effects in learning, it is essential to make use of incremental learning rather than the end-state of learning. We also discuss how the model can be set up to approximate the learning of inflected words in context. In addition, we illustrate how the wug task can be modeled in this approach. The model provides an excellent memory for known words, but appropriately shows more limited performance for unseen data, in line with the semi-productivity of German noun inflection and the generalization performance of native German speakers.
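The contrast between incremental learning and the end-state of learning can be made concrete: the type-based end-state solution is blind to token frequency, whereas trial-by-trial updating leaves high-frequency words with stronger semantic support. A minimal sketch, assuming two words with disjoint cues and hypothetical token counts:

```python
import numpy as np

# Two word types with disjoint (orthogonal) cue vectors and one-hot meanings.
C = np.eye(2)   # form (cue) vectors
S = np.eye(2)   # meaning vectors

# End-state of learning: type-based least squares, insensitive to frequency.
F_end = np.linalg.pinv(C) @ S

# Incremental Widrow-Hoff learning over tokens: word 0 occurs 50 times,
# word 1 only twice (hypothetical frequencies).
W = np.zeros((2, 2))
eta = 0.05
tokens = [0] * 50 + [1] * 2
for k in tokens:
    cue, target = C[k], S[k]
    W += eta * np.outer(cue, target - cue @ W)

support_end = np.diag(C @ F_end)  # equal support for both words
support_inc = np.diag(C @ W)      # stronger support for the frequent word
```

Under the end-state mapping both words receive full semantic support, while under incremental learning the rarely encountered word lags behind, which is what makes frequency effects modelable.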


Author(s):  
Xiaoping Zhou

Millimeter-wave (mmWave) massive MIMO (multiple-input multiple-output) is a promising technology, as it provides significant beamforming gains and interference-reduction capabilities due to the large number of antennas. However, mmWave massive MIMO is computationally demanding, as the high antenna count results in high-dimensional matrix operations when conventional MIMO processing is applied. Hybrid precoding is an effective solution for mmWave massive MIMO systems to significantly decrease the number of radio frequency (RF) chains without an apparent sum-rate loss. In this paper, we propose user-clustering hybrid precoding to enable efficient and low-complexity operation in high-dimensional mmWave massive MIMO, where the channels of a large number of antennas are represented in low-dimensional manifolds. By modeling each user set as a manifold, we formulate the problem as clustering-oriented multi-manifold learning. The manifold discriminative learning seeks to learn the low-dimensional embedding manifolds, in which manifolds with different user-cluster labels are better separated and the local spatial correlation of the high-dimensional channels within each manifold is enhanced. Most of the high-dimensional channels are embedded in the low-dimensional manifolds by manifold discriminative learning, while the potential spatial correlation of the high-dimensional channels is retained. The nonlinearity of the high-dimensional channel is transformed into global and local nonlinearity to achieve dimensionality reduction. With proper user clustering, the hybrid precoding is investigated for the sum-rate maximization problem by manifold quasi-conjugate gradient methods. A high signal-to-interference-plus-noise ratio (SINR) is achieved, and the computational complexity is reduced by avoiding the conventional schemes that operate directly on high-dimensional channel parameters. Performance evaluations show that the proposed scheme obtains a near-optimal sum-rate and considerably higher spectral efficiency than some existing solutions.
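As a rough illustration of the user-clustering step (not the paper's manifold-learning algorithm), users whose channel vectors are strongly correlated can be grouped so that each cluster shares analog beamforming. The antenna count, steering-vector channel model, and threshold below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_users_per_group = 64, 4   # illustrative dimensions

def steering(theta, n):
    """Uniform-linear-array steering vector (unit norm)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# Toy channels: two groups of users share a dominant steering direction.
angles = [0.3] * n_users_per_group + [-0.8] * n_users_per_group
H = np.stack([steering(a, n_ant)
              + 0.02 * (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant))
              for a in angles])

# Users whose channels are highly correlated are clustered together, so
# each cluster can share the same analog beamforming direction.
corr = np.abs(H.conj() @ H.T)            # pairwise channel correlations
labels = (corr[:, 0] < 0.5).astype(int)  # simple threshold split
```

The paper's approach replaces this naive correlation threshold with discriminative multi-manifold learning, but the intuition is the same: spatially correlated users end up in the same cluster.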


2021 ◽  
pp. 1-15
Author(s):  
Yongjie Chu ◽  
Lindu Zhao ◽  
Touqeer Ahmad

In this paper, an enhanced discriminative feature learning (EDFL) method is proposed to address single sample per person (SSPP) face recognition. With a separate auxiliary dataset, EDFL integrates Fisher discriminative learning and domain adaptation into a unified framework. The separate auxiliary dataset and the gallery/probe dataset are from two different domains (named the source and target domains, respectively) and have different data distributions. EDFL is modeled to transfer the discriminative knowledge learned from the source domain to the target domain for classification. Since the gallery set with SSPP contains only a scarce number of samples, it is hard to accurately represent the data distribution of the target domain, which hinders the adaptation effect. To overcome this problem, the generalized domain adaptation (GDA) method is proposed to realize good overall domain adaptation when one domain contains limited samples. GDA considers both global and local domain adaptation effects at the same time. Further, to guarantee that the learned domain adaptation components are optimal for discriminative learning, domain adaptation and Fisher discriminant model learning are unified into a single framework, and an efficient algorithm is designed to optimize them. The effectiveness of the proposed approach is demonstrated by extensive evaluation and comparison with state-of-the-art methods.
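Fisher discriminative learning, one ingredient of the framework above, seeks a projection that maximizes between-class scatter relative to within-class scatter. A minimal two-class sketch on synthetic data (not the EDFL objective, which additionally couples in domain adaptation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic classes in 2-D (illustrative data only).
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X1 = rng.normal([3.0, 1.0], 0.5, size=(50, 2))

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Within-class scatter matrix.
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)

# Fisher direction: w is proportional to Sw^{-1} (m1 - m0); projecting onto
# w maximizes between-class separation relative to within-class spread.
w = np.linalg.solve(Sw, m1 - m0)

sep = (m1 - m0) @ w   # separation of the projected class means (positive)
```

Projecting samples onto w and thresholding at the midpoint of the projected class means gives a simple linear classifier; EDFL learns such discriminative directions jointly with the domain adaptation components.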


2021 ◽  
Author(s):  
Xu Zhang ◽  
Youjia Zhang ◽  
Zuyu Zhang ◽  
Jinzhuo Liu

2021 ◽  
Vol 12 ◽  
Author(s):  
Dominic Schmitz ◽  
Ingo Plag ◽  
Dinah Baer-Henney ◽  
Simon David Stein

Recent research has shown that seemingly identical suffixes such as word-final /s/ in English show systematic differences in their phonetic realisations. Most recently, durational differences between different types of /s/ have been found to hold for pseudowords as well: the duration of /s/ is longest in non-morphemic contexts, shorter with suffixes, and shortest in clitics. At the theoretical level such systematic differences are unexpected and unaccounted for in current theories of speech production. Following a recent approach, we implemented a linear discriminative learning network trained on real word data in order to predict the duration of word-final non-morphemic and plural /s/ in pseudowords, using production data from a previous study. It is demonstrated that the duration of word-final /s/ in pseudowords can be predicted by LDL networks trained on real word data. That is, the duration of word-final /s/ in pseudowords can be predicted based on their relations to the lexicon.


2021 ◽  
Vol 12 ◽  
Author(s):  
Simon David Stein ◽  
Ingo Plag

Recent evidence for the influence of morphological structure on the phonetic output goes unexplained by established models of speech production and by theories of the morphology-phonology interaction. Linear discriminative learning (LDL) is a recent computational approach in which such effects can be expected. We predict the acoustic duration of 4,530 English derivative tokens with the morphological functions DIS, NESS, LESS, ATION, and IZE in natural speech data by using predictors derived from a linear discriminative learning network. We find that the network is accurate in learning speech production and comprehension, and that the measures derived from it are successful in predicting duration. For example, words are lengthened when the semantic support of the word's predicted articulatory path is stronger. Importantly, differences between morphological categories emerge naturally from the network, even when no morphological information is provided. The results imply that morphological effects on duration can be explained without postulating theoretical units like the morpheme, and they provide further evidence that LDL is a promising alternative for modeling speech production.

