feature structures
Recently Published Documents

TOTAL DOCUMENTS: 95 (FIVE YEARS 8)
H-INDEX: 10 (FIVE YEARS 1)

Author(s): Jie Yin, Xuefeng Yan

Although the model based on an autoencoder (AE) exhibits strong feature extraction capability without data labeling, such a model rarely accounts for the structural distribution of the original data, and the extracted features are not interpretable. In this study, a new stacked sparse AE (SSAE) based on the preservation of local and global feature structures is proposed for fault detection. Two additional loss terms are included in the loss function of the SSAE to retain the local and global structures of the original data. Local structure preservation considers the nearest neighbors of the data in space, while global structure preservation considers the variance information of the data. The final feature is thus not only a deep representation of the data but also retains as much structural information as possible. The proposed model demonstrates remarkable detection performance in case studies of a numerical process and the Tennessee Eastman process.
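The two structure-preserving terms can be pictured as penalties added to the usual reconstruction loss. The sketch below is only an illustration of that idea, not the authors' implementation; the weights alpha and beta, the mean-squared form of the penalties, and the precomputed nearest-neighbor indices are all assumptions.

import torch

def ssae_loss(x, x_hat, z, neighbor_idx, alpha=0.1, beta=0.1):
    # x: original samples (n, d); x_hat: reconstructions (n, d)
    # z: encoded features (n, k); neighbor_idx: index of each sample's
    # nearest neighbor in the original space, precomputed, shape (n,)
    recon = torch.mean((x - x_hat) ** 2)                   # reconstruction error
    local = torch.mean((z - z[neighbor_idx]) ** 2)         # keep neighbors close in feature space
    glob = (z.var(dim=0).sum() - x.var(dim=0).sum()) ** 2  # match the overall variance of the data
    return recon + alpha * local + beta * glob

During training, the local term pulls the encodings of neighboring samples together, while the global term keeps the spread of the encoded features comparable to that of the inputs.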


Author(s): Robert Worden

Bayesian formulations of learning imply that whenever the evidence for a correlation between events in an animal’s habitat is sufficient, the correlation is learned. This implies that regularities can be learnt rapidly, from small numbers of learning examples. This speed of learning gives maximum possible fitness, and no faster learning is possible. There is evidence in many domains that animals and people can learn at nearly Bayesian optimal speeds. These domains include associative conditioning, and the more complex domains of navigation and language. There are computational models of learning which learn at near-Bayesian speeds in complex domains, and which can scale well – to learn thousands of pieces of knowledge (i.e., relations and associations). These are not neural net models. They can be defined in computational terms, as algorithms and data structures at David Marr’s [1] Level Two. Their key data structures are composite feature structures, which are graphs of multiple linked nodes. This leads to the hypothesis that animal learning results not from deep neural nets (which typically require thousands of training examples), but from neural implementations of the Level Two models of fast learning; and that neurons provide the facilities needed to implement those models at Marr’s Level Three. The required facilities include feature structures, dynamic binding, one-shot memory for many feature structures, pattern-based associative retrieval, unification and generalization of feature structures. These may be supported by multiplexing of data and metadata in the same neural fibres.
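As a worked illustration of this speed (the numbers are illustrative, not taken from the paper): with a uniform Beta(1,1) prior on the probability p that one event follows another, observing the pairing on five out of five occasions yields a Beta(6,1) posterior, with mean 6/7 ≈ 0.86 and roughly 98% of its mass above p = 0.5, so a handful of examples already supports a confident correlation.

from scipy.stats import beta

# Uniform Beta(1, 1) prior on the correlation strength p; after the
# regularity is observed on 5 of 5 occasions the posterior is Beta(6, 1).
posterior = beta(6, 1)
print(posterior.mean())   # ~0.857
print(posterior.sf(0.5))  # P(p > 0.5) ~ 0.984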


2021
Author(s): Zhikun Wu, Jingwu Dong, Zibao Gan, Wanmiao Gu, Qing You, ...

Biomimetics, 2020, Vol 5 (2), pp. 29
Author(s): Hipassia M. Moura, Miriam M. Unterlass

Biogenic metal oxides (MxOy) feature structures as highly functional and unique as the organisms generating them. They have caught the attention of scientists for the development of novel materials by biomimicry. In order to understand how biogenic MxOy could inspire novel technologies, we have reviewed examples of all biogenic MxOy, as well as the current state of understanding of the interactions between the inorganic MxOy and the biological matter they originate from and are connected to. In this review, we first summarize the origins of the precursors that living nature converts into MxOy. From the point of view of materials chemists, we present an overview of the biogenesis of silica, iron and manganese oxides, the only biogenic MxOy reported to date. These MxOy are found across all five kingdoms (bacteria, protoctista, fungi, plants and animals). We discuss the key molecules involved in the biosynthesis of MxOy, the functionality of the MxOy structures, and the techniques by which the biogenic MxOy can be studied. We close by outlining the biomimetic approaches inspired by biogenic MxOy materials and their challenges, and we point to promising directions for future organic-inorganic materials and their synthesis.


Author(s): Matías Guzmán Naranjo

This paper presents a formalization of proportional analogy using typed feature structures, which retains all key elements of analogical models of morphology. Using the Kasem number system as an example, I show that this model can express partial analogies that are unified into complete analogies. The analysis presented is accompanied by a complete TRALE implementation.
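Unification of feature structures, the operation at the heart of this approach, can be sketched with nested Python dictionaries; the snippet below is a generic illustration, not the paper's typed system or its TRALE code, and the feature names are hypothetical.

def unify(a, b):
    # Unify two feature structures represented as nested dicts; return the
    # merged structure, or None if the same feature carries clashing values.
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for feature, value in b.items():
            if feature in result:
                merged = unify(result[feature], value)
                if merged is None:
                    return None  # clash: unification fails
                result[feature] = merged
            else:
                result[feature] = value
        return result
    return a if a == b else None

# Two partial analyses of a word form unify into one complete analysis.
print(unify({"NUM": "plural", "STEM": "X"}, {"NUM": "plural", "SUFFIX": "-i"}))
# {'NUM': 'plural', 'STEM': 'X', 'SUFFIX': '-i'}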


2019, Vol 31 (10)
Author(s): Ondřej Kroutil, Martin Kabeláč, Vlastimil Dorčák, Jan Vacek

Author(s): Peter Svenonius

Syntactic features are formal properties of syntactic objects which determine how they behave with respect to syntactic constraints and operations (such as selection, licensing, agreement, and movement). Syntactic features can be contrasted with properties which are purely phonological, morphological, or semantic, but many features are relevant both to syntax and morphology, or to syntax and semantics, or to all three components. The formal theory of syntactic features builds on the theory of phonological features, and normally takes morphosyntactic features (those expressed in morphology) to be the central case, with other, possibly more abstract features being modeled on the morphosyntactic ones. Many aspects of the formal nature of syntactic features are currently unresolved. Some traditions (such as HPSG) make use of rich feature structures as an analytic tool, while others (such as Minimalism) pursue simplicity in feature structures in the interest of descriptive restrictiveness. Nevertheless, features are essential to all explicit analyses.


Author(s): Tongliang Liu, Qiang Yang, Dacheng Tao

Transfer learning transfers knowledge across domains to improve learning performance. Since feature structures generally represent the common knowledge across different domains, they can be transferred successfully even though the labeling functions across domains differ arbitrarily. However, theoretical justification for this success has remained elusive. In this paper, motivated by self-taught learning, we regard a set of bases as a feature structure of a domain if the bases can (approximately) reconstruct any observation in this domain. We propose a general analysis scheme to theoretically justify that if the source and target domains share similar feature structures, the source domain feature structure is transferable to the target domain, regardless of how the labeling functions change across domains. The transferred structure functions as a regularization matrix that benefits the learning of the target-domain task. We prove that such transfer enables the corresponding learning algorithms to be uniformly stable. Specifically, we illustrate the existence of feature structure transfer in two well-known transfer learning settings: domain adaptation and learning to learn.
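A minimal sketch of the idea in the self-taught-learning spirit, not the paper's analysis or algorithm: learn a set of bases on unlabeled source-domain data, encode the target-domain samples against those bases, and fit the target task on the resulting codes. The data, hyperparameters, and choice of sklearn estimators below are all placeholders.

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_source = rng.standard_normal((200, 30))   # unlabeled source-domain observations
X_target = rng.standard_normal((50, 30))    # labeled target-domain observations
y_target = rng.standard_normal(50)

# Feature structure of the source domain: bases that (approximately)
# reconstruct its observations.
bases = DictionaryLearning(n_components=20, alpha=1.0, random_state=0).fit(X_source)

# Transfer: represent target samples in the source bases, then learn the
# task on top of that representation.
Z_target = bases.transform(X_target)
model = Ridge(alpha=1.0).fit(Z_target, y_target)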


Author(s): John Carroll

This chapter introduces key concepts and techniques for natural-language parsing: that is, finding the grammatical structure of sentences. The chapter introduces the fundamental algorithms for parsing with context-free (CF) phrase structure grammars, how these deal with ambiguous grammars, and how CF grammars and associated disambiguation models can be derived from syntactically annotated text. It goes on to consider dependency analysis, and outlines the main approaches to dependency parsing based both on manually written grammars and on learning from text annotated with dependency structures. It finishes with an overview of techniques used for parsing with grammars that use feature structures to encode linguistic information.
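For the context-free part, the classic chart-parsing idea the chapter builds on can be sketched as a CKY recognizer over a toy grammar in Chomsky normal form; the grammar, lexicon, and sentence below are illustrative rather than taken from the chapter.

from itertools import product

# Toy grammar in Chomsky normal form: a pair of categories rewrites to a parent.
grammar = {
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
lexicon = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

def cky_recognize(words):
    n = len(words)
    # chart[i][j] holds the categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexicon.get(w, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for left, right in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= grammar.get((left, right), set())
    return "S" in chart[0][n]

print(cky_recognize("the dog saw the cat".split()))  # True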

