dependency relations
Recently Published Documents

TOTAL DOCUMENTS: 136 (FIVE YEARS: 34)
H-INDEX: 14 (FIVE YEARS: 1)

Author(s): Kashif Munir, Hai Zhao, Zuchao Li

The task of semantic role labeling (SRL) is dedicated to finding the predicate-argument structure of a sentence. Previous work on SRL is mostly supervised and does not account for the cost of labeling each example, which can be very expensive and time-consuming. In this article, we present the first neural unsupervised model for SRL. We decompose the task into two argument-related subtasks, identification and clustering, and propose a pipeline consisting of two corresponding neural modules. First, we train a neural model on two syntax-aware, statistically developed rules. This model computes a relevance signal for each token in a sentence, feeds it into a BiLSTM, and then into an adversarial layer that adds noise and classifies simultaneously, enabling the model to learn the semantic structure of a sentence. Second, we propose a neural model for argument role clustering, which clusters the learned argument embeddings biased toward their dependency relations. Experiments on the CoNLL-2009 English dataset demonstrate that our model outperforms the previous state-of-the-art non-neural baseline for argument identification and classification.
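The clustering step this abstract describes can be sketched in a simplified form. The following is a minimal illustration, not the authors' implementation: argument embeddings are "biased toward" their dependency relations by appending a weighted one-hot relation vector before clustering with a tiny k-means. The function names, the bias weight, and the k-means details are all hypothetical.

```python
import random

def biased_vector(embedding, relation, relations, weight=1.0):
    """Concatenate an argument embedding with a weighted one-hot encoding
    of its dependency relation, so arguments sharing a relation are pulled
    toward the same cluster."""
    one_hot = [weight if r == relation else 0.0 for r in relations]
    return embedding + one_hot

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over lists of floats (stdlib only)."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```

A real system would use learned embeddings and a soft clustering objective; this only makes the relation-biasing idea concrete.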


2021
Author(s): Kemel Jouini

<p>My thesis deals with dependency relations in the structure of sentences in Arabic and how properties of verbal morphology and associated lexical items dictate how sentences are derived. I adopt the probe-goal-Agree Minimalist view that variation between languages (even those that are closely related, such as Standard Arabic and Tunisian Arabic) is due to the 'feature structure' of functional elements that enter into the derivation.  In particular, the essential architecture of sentences expressing the dependency relations verbs and associated elements have with the 'functional' portion of sentences (i.e., tense/modality properties) is universal in that these dependency relations will be expressed on the basis of the same feature structure cross-linguistically. However, this architecture still allows for the kind of parametric variation that exists even between closely related languages.  In this context, I am interested in the status of subject-verb agreement configurations, in both VSO and SVO word orderings, and wh- and other A’-dependencies in Standard Arabic (with comparisons to some modern spoken varieties of Arabic, where appropriate). The analysis is shown to extend to other V-raising languages of the Semitic/Celtic type with ‘basic’ VSO word ordering. A possible extension of the analysis to the V2 phenomenology is also discussed and the major role played by the raising of V-v to T and the raising of T to Agr(s) or T to Fin is highlighted.  An important aspect of my analysis is a proper understanding of the dependency relations involved in the derivation of the relevant sentences where the role of the CP domain projections, verb-movement, feature identification and/or feature valuation along with clause type is essential for interpretation at the interface at the output of syntax. 
In this feature-based analysis of parametric and micro-parametric variation, I show that variation between typologically similar and typologically different languages is minimal in that it is limited to the interaction of feature combinations in the derivation of sentences.  These feature combinations concern the feature structure of the T-node in relation to the position where T is spelled out at the interface. In particular, T raises to Agr(s) or to Fin in some languages and/or structures. Such raising processes are important in subject-verb agreement configurations cross-linguistically involving combinations of T-features and D-features, which would differ in interpretability (i.e., interpretable vs. uninterpretable) as the basis for feature valuation. Similar feature combinations also drive the raising processes in wh-dependencies with some F-feature (mainly related to ‘focus’) interacting with the T-features of Fin.  I propose that two modes of licensing of these feature combinations are at work. The first mode of licensing is the basic head-head agreement relation. This agreement relation is the basis for verb-movement to the functional field above vP/VP in V-raising languages. The second mode of licensing is the Spec-head agreement relation, brought about by the Merge (internal or external) of D(P) elements in A-dependencies and the Merge of wh-elements in A’-dependencies.  In dependency relations other than subject-verb agreement and wh-dependencies, I propose that the licensing of these feature combinations is strictly a question of ‘identification’ via head-head agreement whereby a feature on a functional head does not need to be valued, but it still needs to be ‘identified’ for the well-formedness of the C-(Agr[s])-T dependency. 
This is the case of the interpretable D-feature of the Top node in Topic-comment structures and the interpretable F-feature of the two functional head nodes, Mod(al) and Neg, in relation to the T-features of Fin in a V-raising language like Standard Arabic.</p>




2021, Vol 19 (36), pp. 115-142
Author(s): Márcio De Oliveira

Latin American interregional migration has increased dramatically in the past two decades. One of the countries contributing to the growth of these flows is Brazil, whose participation was consolidated due to international factors, its reception policies, and its legal labor policies. Despite this, the relationship between migration, development, and remittances remains poorly studied by Brazilian scholars. The discussion presented here focuses on a circumscribed analysis of refugees who had been legally recognized by the Brazilian State by the end of 2018. Drawing on research data on 487 refugees living in Brazil at that time, it was possible to analyze their living conditions and the value and regularity of remittances received and/or sent, among other aspects. The results show that low wages did not prevent most refugees from sending remittances abroad nor, for some, from receiving them. Despite their low value, their regularity seems to keep alive the networks and dependency relations between those who migrate and those who remain in the countries of origin.


Author(s): Chang-ho An, Zhanfang Zhao, Hee-kyung Moon

Deep learning approaches to natural language processing based on probability distributions have achieved significant success. However, natural languages have inherent linguistic structure rather than a probabilistic distribution. This paper presents a new graph-based representation of syntactic structure, called the syntactic knowledge graph, based on dependency relations. The paper investigates the valency theory and the markedness principle of natural languages to derive an appropriate set of dependency relations for the syntactic knowledge graph, and proposes a new set of dependency relations derived from the markers. It also demonstrates the representation of various linguistic structures to validate the feasibility of syntactic knowledge graphs.
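A graph of dependency relations like the one this abstract describes can be sketched as a store of (head, relation, dependent) triples. This is an illustrative assumption, not the paper's actual data structure: the class name, triple layout, and lookup methods are hypothetical.

```python
from collections import defaultdict

class SyntacticKnowledgeGraph:
    """Toy syntactic knowledge graph: edges are (head, relation, dependent)
    triples, indexed for lookup by dependency relation."""

    def __init__(self):
        self.triples = []
        self._by_relation = defaultdict(list)

    def add(self, head, relation, dependent):
        triple = (head, relation, dependent)
        self.triples.append(triple)
        self._by_relation[relation].append(triple)

    def dependents(self, head):
        """All (relation, dependent) pairs governed by a head word."""
        return [(r, d) for h, r, d in self.triples if h == head]

    def by_relation(self, relation):
        """All triples labeled with a given dependency relation."""
        return list(self._by_relation[relation])
```

For the sentence "She reads books", the parse would be stored as `add("reads", "nsubj", "She")` and `add("reads", "obj", "books")`; real systems would additionally record token positions and morphological features.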


Author(s):  
Rachel Rubin

Abstract The extraction of phraseological units operationalized in phraseological complexity measures (Paquot, 2019) relies on automatic dependency annotations, yet the suitability of annotation tools for learner language is often overlooked. In the present article, two Dutch dependency parsers, Alpino (van Noord, 2006) and Frog (van den Bosch et al., 2007), are evaluated for their performance in automatically annotating three types of dependency relations (verb + direct object, adjectival modifier, and adverbial modifier relations) across three proficiency levels of L2 Dutch. These observations then serve as the basis for an investigation into the impact of automatic dependency annotation on phraseological sophistication measures. Results indicate that both learner proficiency and the type of dependency relation function as moderating factors in parser performance. Phraseological complexity measures computed on the basis of both automatic and manual dependency annotations demonstrate moderate to high correlations, reflecting a moderate to low impact of automatic annotation on subsequent analyses.
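Evaluating parser output against manual annotation per relation type, as the article does for its three dependency relations, can be sketched as a precision/recall/F1 computation over dependency triples. This is a generic sketch, not the article's evaluation code; the (head_idx, relation, dep_idx) triple format is an assumption.

```python
def per_relation_f1(gold, predicted):
    """Precision, recall, and F1 of predicted dependency triples against a
    gold annotation, broken down by relation label. Each triple is
    (head_index, relation, dependent_index)."""
    gold_set, pred_set = set(gold), set(predicted)
    results = {}
    for rel in {t[1] for t in gold_set | pred_set}:
        g = {t for t in gold_set if t[1] == rel}
        p = {t for t in pred_set if t[1] == rel}
        tp = len(g & p)  # triples where head, relation, and dependent all match
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        results[rel] = {"precision": prec, "recall": rec, "f1": f1}
    return results
```

Broken down this way, a parser can score well on one relation type (e.g. adjectival modifiers) while degrading on another, which is the moderating effect the article reports.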


2021, Vol 7, pp. e347
Author(s): Bhavana R. Bhamare, Jeyanthi Prabhu

Due to the massive growth of the Web, people post reviews on social media of any product, movie, or place they visit. These reviews help customers, as well as product owners, evaluate products. Structured data is easy to analyze compared to unstructured data, and reviews are available in an unstructured format. Aspect-based sentiment analysis mines the aspects of a product from reviews and then determines the sentiment toward each aspect. In this work, two methods for aspect extraction are proposed, evaluated on the SemEval restaurant review dataset and the Yelp and Kaggle datasets. The first method is a multivariate filter-based approach to feature selection; it helps select significant features and reduces redundancy among the selected features, and it improves the F1-score compared to a method that uses only relevant features selected by term-frequency weight. The second method uses selective dependency relations to extract features, using the Stanford NLP parser; the results obtained with features extracted by selective dependency rules are better than those obtained using all dependency rules. In a hybrid approach, both lemma features and selective dependency-relation features are extracted. With the hybrid feature set, 94.78% accuracy and an 85.24% F1-score are achieved in the aspect category prediction task.
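Extracting aspect candidates from selective dependency relations can be sketched as filtering parser triples by a chosen relation set. This is a minimal, hypothetical illustration rather than the authors' pipeline: the relation set, the (head, relation, dependent) triple format, and the aspect/opinion pairing logic are all assumptions.

```python
# Illustrative relation set; the paper's selective rules may differ.
SELECTED_RELATIONS = {"amod", "dobj", "nsubj"}

def extract_aspects(triples, selected=SELECTED_RELATIONS):
    """Each triple is (head, relation, dependent); return (aspect, opinion)
    pairs from the selected dependency relations only, discarding all other
    relations the parser produced."""
    pairs = []
    for head, relation, dependent in triples:
        if relation == "amod":        # e.g. amod(food, delicious): noun head is the aspect
            pairs.append((head, dependent))
        elif relation in selected:    # e.g. dobj(loved, service): dependent is the aspect
            pairs.append((dependent, head))
    return pairs
```

Restricting extraction to a few informative relations, instead of keeping every edge in the parse, is the design choice the abstract reports as improving results.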


2021, Vol 11 (2), pp. 243-250
Author(s): Andre Kåsen

This article presents a method for the automatic assignment of syntactic dependency relations to the Corpus of American Norwegian Speech (CANS). Different machine learning techniques and corpora are used. Finally, an accuracy measure is computed and compared against a relatively new treebank for spoken Norwegian.
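Accuracy for automatically assigned dependency relations is conventionally reported as unlabeled and labeled attachment scores (UAS/LAS). The abstract does not specify its exact metric, so the following is a generic sketch, assuming each annotation maps a token index to a (head_index, relation) pair.

```python
def attachment_scores(gold, predicted):
    """Each annotation maps token index -> (head_index, relation).
    UAS counts tokens whose head is correct; LAS additionally requires
    the relation label to match. Scored over tokens in the gold annotation."""
    total = len(gold)
    uas = sum(1 for i, (head, _) in gold.items()
              if i in predicted and predicted[i][0] == head)
    las = sum(1 for i, (head, rel) in gold.items()
              if i in predicted and predicted[i] == (head, rel))
    return uas / total, las / total
```

Since LAS adds the labeling requirement on top of attachment, it can never exceed UAS for the same parse.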

