feature structure
Recently Published Documents


TOTAL DOCUMENTS

124
(FIVE YEARS 26)

H-INDEX

12
(FIVE YEARS 1)

2022 ◽  
Vol 12 (1) ◽  
pp. 80
Author(s):  
Zhuqing Jiao ◽  
Siwei Chen ◽  
Haifeng Shi ◽  
Jia Xu

Feature selection for multiple types of data has been widely applied in mild cognitive impairment (MCI) and Alzheimer’s disease (AD) classification research. Combining multi-modal data for classification can better exploit the complementarity of valuable information. To improve the classification performance of feature selection on multi-modal data, we propose a multi-modal feature selection algorithm using feature correlation and feature structure fusion (FC2FS). First, we construct feature correlation regularization by fusing a similarity matrix between multi-modal feature nodes. Then, based on manifold learning, we employ feature matrix fusion to construct feature structure regularization and learn the local geometric structure of the feature nodes. Finally, the two regularizations are embedded in a multi-task learning model that introduces a low-rank constraint, the multi-modal features are selected, and the final features are linearly fused and fed into a support vector machine (SVM) for classification. Controlled experiments were set up to verify the validity of the proposed method when applied to MCI and AD classification. The accuracies for normal controls versus Alzheimer’s disease, normal controls versus late mild cognitive impairment, normal controls versus early mild cognitive impairment, and early versus late mild cognitive impairment were 91.85 ± 1.42%, 85.33 ± 2.22%, 78.29 ± 2.20%, and 77.67 ± 1.65%, respectively. This method addresses the shortcomings of traditional subject-based multi-modal feature selection and fully considers the relationships between feature nodes and the local geometric structure of the feature space. Our study not only enhances the interpretability of feature selection but also improves classification performance, offering a useful reference for the identification of MCI and AD.
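As a rough illustration of the pipeline the abstract describes (fuse feature-level similarity across modalities, use that structure to guide selection, then classify the selected features), here is a minimal sketch. The function names and the simplified relevance-plus-smoothing score are assumptions for illustration, not the FC2FS objective or the authors' code; the low-rank multi-task step and the SVM are omitted:

```python
import numpy as np

def fuse_similarity(feature_mats):
    """Average cosine-similarity matrices between feature columns across
    modalities -- a stand-in for the fused feature-node similarity matrix."""
    sims = []
    for X in feature_mats:
        F = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
        sims.append(F.T @ F)          # d x d feature-feature similarity
    return np.mean(sims, axis=0)

def select_features(X, y, S, k, alpha=0.1):
    """Score features by label relevance, then smooth scores over the fused
    similarity graph so correlated feature nodes share evidence."""
    base = np.abs(X.T @ (y - y.mean()))   # univariate relevance per feature
    smoothed = base + alpha * (S @ base)  # graph-regularized score
    return np.argsort(smoothed)[::-1][:k]
```

In the full method, the fused similarity enters a multi-task learning objective with a low-rank constraint rather than this one-shot score, and the selected, linearly fused features are classified with an SVM.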


2021 ◽  
Author(s):  
◽  
Kemel Jouini

My thesis deals with dependency relations in the structure of sentences in Arabic and how properties of verbal morphology and associated lexical items dictate how sentences are derived. I adopt the probe-goal-Agree Minimalist view that variation between languages (even those that are closely related, such as Standard Arabic and Tunisian Arabic) is due to the 'feature structure' of functional elements that enter into the derivation. In particular, the essential architecture of sentences expressing the dependency relations verbs and associated elements have with the 'functional' portion of sentences (i.e., tense/modality properties) is universal in that these dependency relations will be expressed on the basis of the same feature structure cross-linguistically. However, this architecture still allows for the kind of parametric variation that exists even between closely related languages. In this context, I am interested in the status of subject-verb agreement configurations, in both VSO and SVO word orderings, and wh- and other A’-dependencies in Standard Arabic (with comparisons to some modern spoken varieties of Arabic, where appropriate). The analysis is shown to extend to other V-raising languages of the Semitic/Celtic type with ‘basic’ VSO word ordering. A possible extension of the analysis to the V2 phenomenology is also discussed and the major role played by the raising of V-v to T and the raising of T to Agr(s) or T to Fin is highlighted. An important aspect of my analysis is a proper understanding of the dependency relations involved in the derivation of the relevant sentences where the role of the CP domain projections, verb-movement, feature identification and/or feature valuation along with clause type is essential for interpretation at the interface at the output of syntax.
In this feature-based analysis of parametric and micro-parametric variation, I show that variation between typologically similar and typologically different languages is minimal in that it is limited to the interaction of feature combinations in the derivation of sentences. These feature combinations concern the feature structure of the T-node in relation to the position where T is spelled out at the interface. In particular, T raises to Agr(s) or to Fin in some languages and/or structures. Such raising processes are important in subject-verb agreement configurations cross-linguistically involving combinations of T-features and D-features, which would differ in interpretability (i.e., interpretable vs. uninterpretable) as the basis for feature valuation. Similar feature combinations also drive the raising processes in wh-dependencies with some F-feature (mainly related to ‘focus’) interacting with the T-features of Fin. I propose that two modes of licensing of these feature combinations are at work. The first mode of licensing is the basic head-head agreement relation. This agreement relation is the basis for verb-movement to the functional field above vP/VP in V-raising languages. The second mode of licensing is the Spec-head agreement relation, brought about by the Merge (internal or external) of D(P) elements in A-dependencies and the Merge of wh-elements in A’-dependencies. In dependency relations other than subject-verb agreement and wh-dependencies, I propose that the licensing of these feature combinations is strictly a question of ‘identification’ via head-head agreement whereby a feature on a functional head does not need to be valued, but it still needs to be ‘identified’ for the well-formedness of the C-(Agr[s])-T dependency.
This is the case of the interpretable D-feature of the Top node in Topic-comment structures and the interpretable F-feature of the two functional head nodes, Mod(al) and Neg, in relation to the T-features of Fin in a V-raising language like Standard Arabic.




2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Colin P Davis

This paper examines plural suppletion in the Barguzin dialect of Buryat (Mongolic, Russia), which occurs only in accusative and genitive noun phrases. The restricted distribution of this process, specifically its absence in oblique cases, is significant for recent research on the typology of suppletion and the feature structure of case. For much work in this vein, this plural suppletion would qualify as an ‘ABA’ pattern, which is predicted to be unattested. I argue that the suppletive plural morpheme in question is a portmanteau whose morphological requirements cause it to conflict, for independent reasons, with the realization of oblique noun phrases. Consequently, I argue that its distribution does not falsify the theories that normally ban ABA patterns, but rather instantiates a principled exception that sharpens our understanding of those theories.


2021 ◽  
Author(s):  
Sarah Solomon ◽  
Anna Schapiro

Concepts contain rich structures that support flexible semantic cognition. These structures can be characterized by patterns of feature covariation: certain clusters of features tend to occur in the same items (e.g., feathers, wings, can fly). Existing computational models demonstrate how this kind of structure can be leveraged to slowly learn the distinctions between categories, on developmental timescales. It is not clear whether and how we leverage feature structure to quickly learn a novel category. We thus investigated how the internal structure of a new category is extracted from experience and what kinds of representations guide this learning. We predicted that humans can leverage feature clusters within an individual category to benefit learning and that this relies on the rapid formation of distributed representations. Novel categories were designed with patterns of feature associations determined by carefully constructed graph structures (Modular, Random, and Lattice). In Experiment 1, a feature inference task using verbal stimuli revealed that Modular categories—containing clusters of reliably covarying features—were more easily learned than non-Modular categories. Experiment 2 replicated this effect using visual categories. In Experiment 3, a temporal statistical learning paradigm revealed that this Modular benefit persisted even when category structure was incidental to the task. We found that a neural network model employing distributed representations was able to account for the effects, whereas prototype and exemplar models could not. The findings constrain theories of category learning and of structure learning more broadly, suggesting that humans quickly form distributed representations that reflect coherent feature structure.
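The graph-structured categories described above (clusters of reliably covarying features in the Modular condition) can be sketched as follows. The names `modular_adjacency` and `sample_item` and the generation scheme are illustrative assumptions, not the stimulus-construction code from the study:

```python
import numpy as np

def modular_adjacency(n_clusters=3, cluster_size=5):
    """Block-diagonal feature-association graph: features within a cluster
    covary; features across clusters do not (the 'Modular' condition)."""
    n = n_clusters * cluster_size
    A = np.zeros((n, n), dtype=int)
    for c in range(n_clusters):
        lo, hi = c * cluster_size, (c + 1) * cluster_size
        A[lo:hi, lo:hi] = 1
    np.fill_diagonal(A, 0)
    return A

def sample_item(A, p_on=0.9, p_off=0.1, rng=None):
    """Sample one exemplar: pick a seed feature, then activate its graph
    neighbors with high probability and non-neighbors with low probability,
    so features in the same cluster tend to co-occur."""
    rng = rng if rng is not None else np.random.default_rng()
    n = A.shape[0]
    seed = rng.integers(n)
    neighbors = A[seed].astype(bool)
    item = np.where(neighbors, rng.random(n) < p_on, rng.random(n) < p_off)
    item[seed] = True
    return item.astype(int)
```

A Random or Lattice condition would swap in a different adjacency matrix with the same overall density, holding pairwise statistics constant while changing the higher-order cluster structure.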


Author(s):  
Rodica Frimu ◽  
Laurent Dekydtspotter

We propose that feature bundles derived in syntactic computations activate congruent vocabulary entries, inducing feature-based conceptual-structure processes in retrieval. Thus, for the French future tense, an inflectional node bearing Number: Plural activates the forms mangera (EAT-FUT.3PS.SG) and mangeront (EAT-FUT.3PS.PL), which compete for insertion following the Subset Principle of Distributed Morphology. Indeed, the affix -a (3PS.SG) encodes Number with no further specification (notated Number: Ø), whereas -ont (3PS.PL) encodes Number: Plural, where Number: Ø is a subset of Number: Plural. This feature structure defines an information scale on which plural-marked -ont is stronger. On this scale, the informationally weaker -a (3PS.SG) is interpreted as [-Plural] in contrast with -ont via a scalar inference, becoming unsuitable for insertion. Thus, -ont (3PS.PL) is selected once -a (3PS.SG) is eliminated. We present evidence of conceptual-structure processing linked to underspecified morphology. In forced-pace reading and listening tasks, 19 native-speaker participants per task classified picture probes accompanying matching and mismatching subject-verb future-tense agreement. Classification times for pictures semantically linked to the verb probed for an interaction between the processing of agreement morphology and the ongoing conceptual processing of the sentence. Classification times were modulated by the type of morphological mismatch: the singular verb form mangera (EAT-FUT.3PS.SG) slowed down picture classification in plural contexts, whereas the plural verb form mangeront (EAT-FUT.3PS.PL) in singular contexts did not. This interaction between purely formal agreement and conceptual-structure processing is unexplained by interface relations, frequency, information load, and phonological cohort activation. It suggests that domain-general principles of inference enrich domain-specific feature-based computations.
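The Subset Principle competition described above can be sketched as a toy vocabulary-insertion routine. The feature labels and the `insert` function are illustrative assumptions; the scalar-inference step that independently rules out -a in plural contexts is not modeled, only the standard most-specific-subset choice:

```python
# Toy Distributed Morphology vocabulary competition (illustrative only).
# Each affix is paired with the features it encodes; Number: Ø is modeled
# as simply lacking a PLURAL feature.
VOCAB = {
    "-a": frozenset({"FUT", "3PS"}),              # Number unspecified (Ø)
    "-ont": frozenset({"FUT", "3PS", "PLURAL"}),  # Number: Plural
}

def insert(node_features):
    """Subset Principle: among affixes whose features are a subset of the
    node's features, pick the one matching the most features."""
    candidates = {a: f for a, f in VOCAB.items() if f <= node_features}
    return max(candidates, key=lambda a: len(candidates[a]))

insert(frozenset({"FUT", "3PS", "PLURAL"}))  # "-ont": matches more features
insert(frozenset({"FUT", "3PS"}))            # "-a": only eligible entry
```

Because -a's features are a subset of -ont's, both entries are eligible for a plural node, which is exactly what creates the competition the abstract describes.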


2021 ◽  
Author(s):  
Qian Wang ◽  
Xinxin Fang ◽  
Pin Hao ◽  
Wenwen Chi ◽  
Fang Huang ◽  
...  

Herein, porous hierarchical bronze/anatase phase-junction TiO2 assembled from ultrathin two-dimensional nanosheets was prepared by a novel, green, and simple deep-eutectic-solvent-regulated strategy. Owing to this feature structure,...


ChemCatChem ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 2887-2887
Author(s):  
Adriano H. Braga ◽  
Natália J. S. Costa ◽  
Karine Philippot ◽  
Renato V. Gonçalves ◽  
János Szanyi ◽  
...  
