syntactic dependencies
Recently Published Documents

TOTAL DOCUMENTS: 89 (five years: 28)
H-INDEX: 12 (five years: 1)

Author(s): Leah Gosselin

Classic linguistic models, such as Chomsky’s minimalist schematization of the human language faculty, were typically based on a ‘monolingual ideal’. More recently, models have been extended to bilingual cognition. For instance, MacSwan (2000) posited that bilinguals possess a single syntactic computational system and, crucially, two (or more) phonological grammars. The current paper examines this possible architecture of the bilingual language faculty by using code-switching data, since this type of speech is unique to bilingual and multilingual individuals. Specifically, the natural speech of Maria, a habitual Spanish-English code-switcher from the Bangor Miami Corpus, was examined. For the interface of phonology, an analysis was conducted of the frequency of syllabic structures in Maria’s speech. Phonotactics were examined because Spanish and English impose different restrictions on complex onsets and codas. The results indicated that Maria’s language of use affected the phonotactics of her speech, but that the context of use (unilingual or code-switched) did not. This suggests that Maria was alternating between two independent phonological grammars when she was code-switching. For the interface of morphosyntax, syntactic dependencies within Maria’s code-switched speech were examined and past literature was consulted. The evidence illustrates that syntactic dependencies are indeed established within code-switched sentences, indicating that such constructions are derived from a single syntactic computational system. Thus, the quantitative and qualitative results from this paper wholly support MacSwan’s original conjectures regarding the bilingual language faculty: bilingual cognition appears to be composed of a single computational system which builds multi-language syntactic structures, and more than one phonological grammar.
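For readers who want a concrete picture of this kind of frequency analysis, the sketch below tallies complex onsets and codas across syllable-shape transcriptions grouped by language and context. It is only an illustration: the syllables, condition labels, and counts are invented, not Gosselin's data or procedure.

```python
# Minimal sketch of a phonotactic frequency tally (illustrative data only).
# Syllable shapes use C = consonant, V = vowel; these are NOT corpus data.
syllables_by_condition = {
    "English, unilingual":    ["CCVC", "CVC", "CVCC", "CV", "CCV"],
    "English, code-switched": ["CCVC", "CV", "CVC", "CVCC"],
    "Spanish, unilingual":    ["CV", "CVC", "CV", "CCV", "CV"],
    "Spanish, code-switched": ["CV", "CVC", "CV", "CV"],
}

def complex_onset(shape: str) -> bool:
    # Two or more consonants before the first vowel.
    return shape.startswith("CC")

def complex_coda(shape: str) -> bool:
    # Two or more consonants after the last vowel.
    return shape.endswith("CC")

for condition, sylls in syllables_by_condition.items():
    onset_rate = sum(map(complex_onset, sylls)) / len(sylls)
    coda_rate = sum(map(complex_coda, sylls)) / len(sylls)
    print(f"{condition}: complex onsets {onset_rate:.0%}, complex codas {coda_rate:.0%}")
```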


2021 · pp. 1-12
Author(s): Wenwen Li, Shiqun Yin, Ting Pu

The purpose of aspect-based sentiment analysis is to predict the sentiment polarity of different aspects in a text. Previous work has used Graph Convolutional Networks (GCNs) to encode syntactic dependencies and thereby exploit syntactic information, but, due to the complexity of language and the diversity of aspects, these models tend to confuse opinion words belonging to different aspects. In addition, the effect of word lexicality on judgments of an aspect's sentiment polarity has not been considered in previous studies. In this paper, we propose lexical attention and an aspect-oriented GCN to address these problems. First, we construct an aspect-oriented dependency tree by analyzing and pruning the dependency parse of the sentence; we then use a lexical attention mechanism to focus on the lexical features that play a key role in determining sentiment polarity; finally, we extract aspect-oriented, lexically weighted features with a GCN. Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
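As a rough, hedged sketch of one ingredient of such a model, the snippet below builds an adjacency matrix from a toy dependency parse, prunes it to tokens within a fixed number of arcs of the aspect term, and applies a single mean-aggregation (GCN-style) propagation step. The sentence, parse, hop limit, and feature dimensions are invented; the paper's lexical attention and training procedure are not reproduced.

```python
import numpy as np

# Toy sentence with an invented dependency parse: heads[i] is the head of token i
# (-1 marks the root). This is an illustration, not the authors' pipeline.
tokens = ["The", "battery", "lasts", "long", "but", "the", "screen", "is", "dim"]
heads = [1, 2, -1, 2, 2, 6, 7, 2, 7]
aspect_index = 1          # aspect term: "battery"
max_hops = 2              # keep tokens within 2 dependency arcs of the aspect

n = len(tokens)
adj = np.eye(n)           # self-loops
for i, h in enumerate(heads):
    if h >= 0:
        adj[i, h] = adj[h, i] = 1.0   # undirected dependency arcs

# Prune: breadth-first search from the aspect term, keeping nodes within max_hops.
dist = {aspect_index: 0}
frontier = [aspect_index]
for hop in range(1, max_hops + 1):
    next_frontier = []
    for i in frontier:
        for j in np.nonzero(adj[i])[0]:
            if int(j) not in dist:
                dist[int(j)] = hop
                next_frontier.append(int(j))
    frontier = next_frontier
keep = np.array([i in dist for i in range(n)], dtype=float)
pruned_adj = adj * np.outer(keep, keep)

# One GCN-style propagation step: H' = D^-1 A H, over random toy features.
rng = np.random.default_rng(0)
H = rng.normal(size=(n, 8))
deg = pruned_adj.sum(axis=1, keepdims=True)
deg[deg == 0] = 1.0
H_next = (pruned_adj / deg) @ H
print("aspect-oriented representation:", H_next[aspect_index].round(2))
```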


2021 · Vol 118 (41) · pp. e2026469118
Author(s): Laurel Perkins, Jeffrey Lidz

The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-mo-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants’ interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza?). We find that 1) 18-mo-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but 2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.


2021
Author(s): Eric Martinez, Francis Mollica, Edward Gibson

Although contracts and other legal documents have long been known to cause processing difficulty in laypeople, the source and nature of this difficulty have remained unclear. To better understand it, we conducted a corpus analysis (~10 million words) to investigate to what extent difficult-to-process features that are reportedly common in contracts, such as center embedding, low-frequency jargon, passive voice, and non-standard capitalization, are in fact present in contracts relative to normal texts. We found that all of these features were strikingly more prevalent in contracts than in standard-English texts. We also conducted an experimental study (n = 108 subjects) to determine to what extent such features cause processing difficulties for laypeople of different reading levels. We found that contractual excerpts containing these features were recalled and comprehended at a lower rate than excerpts without them, even for experienced readers, and that center-embedded clauses led to greater decreases in recall than other features. These findings confirm long-standing anecdotal accounts of the presence of difficult-to-process features in contracts, and show that these features inhibit comprehension and recall of legal content for readers of all levels. Our findings also suggest that such difficulties may largely result from working memory costs imposed by complex syntactic features, such as center-embedded clauses, rather than from a mere lack of understanding of specialized legal concepts, and that removing these features would be both tractable and beneficial for society at large.
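To make two of these features concrete, here is a small, hedged sketch of surface heuristics one might run over a text: a crude regular-expression cue for passive voice and a count of long all-caps tokens as a proxy for non-standard capitalization. The example strings are invented, and this is not the authors' corpus methodology; reliable passive detection would require a syntactic parser.

```python
import re

# Crude surface heuristics (illustration only; not the paper's corpus analysis).
PASSIVE_RE = re.compile(r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
                        re.IGNORECASE)

def feature_rates(text: str) -> dict:
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    all_caps = [w for w in words if len(w) > 3 and w.isupper()]     # e.g. "LESSEE"
    passive_cues = [s for s in sentences if PASSIVE_RE.search(s)]   # crude passive cue
    return {
        "all_caps_per_100_words": 100 * len(all_caps) / max(len(words), 1),
        "passive_cue_sentence_rate": len(passive_cues) / max(len(sentences), 1),
    }

contract_like = ("THE LESSEE shall be bound by the terms herein. Payment was "
                 "received by the LESSOR before the premises were vacated.")
plain_like = "You must follow these terms. We received your payment before you moved out."

print("contract-like:", feature_rates(contract_like))
print("plain-like:   ", feature_rates(plain_like))
```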


2021 · pp. 1-61
Author(s): Aura A L Cruz Heredia, Bethany Dickerson, Ellen Lau

Abstract Sustained anterior negativities have been the focus of much neurolinguistic research concerned with the language-memory interface, but what neural computations do they actually reflect? During the comprehension of sentences with long-distance dependencies between elements (such as object wh-questions), prior ERP work has demonstrated sustained anterior negativities (SANs) across the dependency region. SANs have been traditionally interpreted as an index of the working memory resources responsible for storing the first element (e.g., wh-phrase) until the second element (e.g., verb) is encountered and the two can be integrated. However, it is also known that humans pursue top-down approaches in processing long-distance dependencies, predicting units and structures before actually encountering them. This study tests the hypothesis that SANs are a more general neural index of syntactic prediction. Across three experiments, we evaluated SANs in traditional wh-dependency contrasts, but also in sentences in which subordinating adverbials (e.g., ‘although’) trigger a prediction for a second clause, compared to temporal adverbials (e.g., ‘today’) that do not. We find no SAN associated with subordinating adverbials, contra the syntactic prediction hypothesis. More surprisingly, we observe SANs across matrix questions but not embedded questions. Since both involve identical long-distance dependencies, these results are also inconsistent with the traditional syntactic working memory account of the SAN. We suggest that a more general hypothesis that sustained neural activity supports working memory can be maintained, however, if the sustained anterior negativity reflects working memory encoding at the non-linguistic discourse representation level, rather than at the sentence level.


2021 · Vol 7 (30) · pp. eabg0455
Author(s): Jonathan Rawski, William Idsardi, Jeffrey Heinz

We comment on the technical interpretation of the study by Watson et al. and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals’ ability to learn syntactic dependencies, because their results are also consistent with the learning of phonological dependencies in human languages.


2021 · Vol 11 (12) · pp. 5743
Author(s): Pablo Gamallo

This article describes a compositional model based on syntactic dependencies that is designed to build contextualized word vectors by following linguistic principles related to the concept of selectional preferences. The compositional strategy proposed in the current work has been evaluated on a syntactically controlled, multilingual dataset and compared with Transformer BERT-like models, such as Sentence BERT, the state of the art in sentence similarity. For this purpose, we created two new test datasets for Portuguese and Spanish, based on the one defined for English, containing expressions with noun-verb-noun transitive constructions. The results show that the linguistics-based compositional approach is competitive with Transformer models.
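As a minimal sketch of the general idea behind dependency-based composition (not Gamallo's specific model), the snippet below contextualizes a verb vector by mixing in the vectors of its subject and object dependents and then compares two noun-verb-noun constructions by cosine similarity. The vocabulary, vectors, and mixing weight are invented stand-ins for corpus-derived distributional data and a selectional-preference-based composition function.

```python
import numpy as np

# Toy static word vectors (invented); a real system would use corpus-derived vectors.
rng = np.random.default_rng(42)
vocab = ["chef", "burn", "pizza", "critic", "praise", "film"]
vectors = {w: rng.normal(size=16) for w in vocab}

def contextualize(head: str, dependents: list[str], alpha: float = 0.6) -> np.ndarray:
    """Mix a head word's vector with its syntactic dependents' vectors.

    The weighted average is a stand-in for a selectional-preference-based
    composition function; alpha controls how much of the head vector is kept.
    """
    head_vec = vectors[head]
    if not dependents:
        return head_vec
    dep_vec = np.mean([vectors[d] for d in dependents], axis=0)
    return alpha * head_vec + (1 - alpha) * dep_vec

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compare contextualized verbs from two noun-verb-noun transitive constructions.
v1 = contextualize("burn", ["chef", "pizza"])
v2 = contextualize("praise", ["critic", "film"])
print("similarity of contextualized verbs:", round(cosine(v1, v2), 3))
```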


2021 · Vol 7 (s3)
Author(s): Himanshu Yadav, Samar Husain, Richard Futrell

Abstract In syntactic dependency trees, when arcs are drawn from syntactic heads to dependents, they rarely cross. Constraints on these crossing dependencies are critical for determining the syntactic properties of human language, because they define the position of natural language in formal language hierarchies. We study whether the apparent constraints on crossing syntactic dependencies in natural language might be explained by constraints on dependency lengths (the linear distance between heads and dependents). We compare real dependency trees from treebanks of 52 languages against baselines of random trees which are matched with the real trees in terms of their dependency lengths. We find that these baseline trees have many more crossing dependencies than real trees, indicating that a constraint on dependency lengths alone cannot explain the empirical rarity of crossing dependencies. However, we find evidence that a combined constraint on dependency length and the rate of crossing dependencies might be able to explain two of the most-studied formal restrictions on dependency trees: gap degree and well-nestedness.
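As a small, self-contained sketch of the two quantities at issue (dependency length and crossing arcs), the functions below take a dependency tree encoded as one head index per token and report arc lengths and the number of crossing arc pairs. The seven-token tree is invented for illustration; the paper's treebank data and random-baseline generation are not reproduced.

```python
from itertools import combinations

def arcs(heads):
    """Turn a head-index encoding (-1 = root) into (dependent, head) arcs."""
    return [(i, h) for i, h in enumerate(heads) if h >= 0]

def dependency_lengths(heads):
    # Linear distance between each dependent and its head.
    return [abs(i - h) for i, h in arcs(heads)]

def crossing_pairs(heads):
    """Count pairs of arcs that cross when drawn above the sentence."""
    def crosses(a, b):
        (a1, a2), (b1, b2) = sorted(a), sorted(b)
        # Strictly interleaved endpoints; arcs sharing an endpoint do not cross.
        return a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2
    return sum(crosses(a, b) for a, b in combinations(arcs(heads), 2))

# Invented 7-token tree with one non-projective (crossing) arc: token 6 -> token 1.
heads = [2, 2, -1, 2, 6, 6, 1]
print("dependency lengths:", dependency_lengths(heads))   # [2, 1, 1, 2, 1, 5]
print("crossing arc pairs:", crossing_pairs(heads))       # 1
```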

