Structural priming supports grammatical networks

2017 ◽  
Vol 40 ◽  
Author(s):  
Richard Hudson

Abstract: As Branigan & Pickering (B&P) argue, structural priming has important implications for the theory of language structure, but these implications go beyond those suggested. Priming implies a network structure, so the grammar must be a network, and so must sentence structure. Instead of phrase structure, the most promising model for syntactic structure is enriched dependency structure, as in Word Grammar.

2018 ◽  
Vol 5 (1) ◽  
Author(s):  
Timothy J. Osborne

Syntax is a central subfield within linguistics and is important for the study of natural languages, since they all have syntax. Theories of syntax can vary drastically, though. They tend to be based on one of two competing principles: dependency or phrase structure. Surprisingly, the tests for constituents that are widely employed in syntax and linguistics research to demonstrate how words group together into higher units of syntactic structure (phrases and clauses) actually support dependency over phrase structure. The tests identify much less sentence structure than phrase structure syntax assumes. This situation is surprising because phrase structure has been dominant in syntactic research over the past 60 years. This article examines the issue in depth. Dozens of texts were surveyed to determine how tests for constituents are employed and understood. Most of the tests identify phrasal constituents only; they deliver little support for the existence of subphrasal strings as constituents. This situation is consistent with dependency structure, since for dependency, subphrasal strings are not constituents to begin with.


1988 ◽  
Vol 24 (2) ◽  
pp. 303-342 ◽  
Author(s):  
Richard Hudson

The most serious recent work on the theory of coordination has probably been done in terms of three theories of grammatical structure: Generalized Phrase Structure Grammar (GPSG: see especially Gazdar, 1981; Gazdar et al., 1982, 1985; Sag et al., 1985; Schachter & Mordechay, 1983), Categorial Grammar (CG: see especially Steedman, 1985; Dowty, 1985) and Transformational Grammar (TG: notably Williams, 1978, 1981; Neijt, 1979; van Oirsouw, 1985, 1987). Each of these approaches is different in important respects: for instance, according to whether or not they allow deletion rules, and according to the kinds of information which they allow to be encoded in syntactic features. However, behind these differences lies an important similarity: in each case the theory concerned makes two assumptions about grammatical structure in general (i.e. about all structures, including coordinate ones): I. The basic syntagmatic relations in sentence structure are part-whole relations (constituent structure) and temporal order; note that this is true whether or not syntactic structure is seen as a ‘projection’ of lexical properties, since these lexical properties are themselves defined in terms of constituent structure and temporal order.


2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Richard Hudson

This comment on Sydney Lamb’s article “Language structure: A plausible theory” explores the similarities and differences between Lamb’s theory and my own theory called Word Grammar, which was inspired by Lamb’s work in the 1960s. The two theories share Lamb’s view that language is a symbolic network, just like the rest of our knowledge. The note explains this claim, then picks out a number of differences between the theories, all of which centre on the distinction between types and tokens. In Word Grammar, tokens are represented as temporary nodes added to the permanent network, and allow the theory to use dependency structure rather than phrase structure, to include mental referents, to recognise the messiness of spreading activation and to include a monotonic theory of default inheritance.


2018 ◽  
Author(s):  
Sophie M Hardy ◽  
Linda Wheeldon ◽  
Katrien Segaert

Structural priming refers to the tendency of speakers to repeat syntactic structures across sentences. We investigated the extent to which structural priming persists with age, and whether the effect depends upon highly abstract syntactic representations that encompass only the global sentence structure, or whether the representations are specified for the internal phrasal properties of constituents. In Experiment 1, young and older adults described transitive verb targets that contained plural morphology on the patient role (“The horse is chasing the frogs” / “The frogs are being chased by the horse”). While maintaining the conceptual and global syntactic structure of the prime, we manipulated the internal phrasal structure of the patient role to either match the target (plural; “The king is punching the builders” / “The builders are being punched by the king”) or mismatch it (coordinate noun phrase; “The king is punching the pirate and the builder” / “The pirate and the builder are being punched by the king”). In both age groups, we observed limited priming of onset latencies but robust choice structural priming: participants produced more passive targets following passive primes. Critically, this effect did not vary depending on whether the internal constituent structure matched or mismatched between prime and target. Experiment 2 replicated these findings for the agent role: choice structural priming was unaffected by age or by changes to the prime noun phrase type. This demonstrates that global, not internal, syntactic structure determines syntactic choices in young and older adults, as predicted by residual activation and implicit learning models of structural priming.
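To make the choice structural priming measure concrete, the sketch below computes the priming effect as the difference in the proportion of passive targets produced after passive versus active primes. The counts are entirely hypothetical, invented for illustration; they are not the study's data.

```python
# Minimal sketch of the choice structural priming measure.
# All counts below are hypothetical, for illustration only (not the study's data).

def passive_rate(n_passive_targets: int, n_trials: int) -> float:
    """Proportion of target descriptions produced with passive structure."""
    return n_passive_targets / n_trials

# Hypothetical counts: passive targets produced out of 100 trials per prime type.
after_passive_prime = passive_rate(38, 100)  # 0.38
after_active_prime = passive_rate(22, 100)   # 0.22

# The priming effect is the boost in passive production after passive primes.
priming_effect = after_passive_prime - after_active_prime
print(f"choice priming effect: {priming_effect:.2f}")  # 0.16
```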


Author(s):  
Timothy Osborne

Abstract: This paper considers the NP vs. DP debate from the perspective of dependency grammar (DG). It argues that, given DG assumptions about sentence structure, the traditional NP-analysis of nominal groups is preferable to the DP-analysis. The debate is also considered from the perspective of phrase structure grammar (PSG). While many of the issues discussed here do not directly support NP over DP given PSG assumptions, some do. More importantly, one has to accept the widespread presence of null determiner heads for the DP-analysis to be plausible on PSG assumptions. The argument developed at length here is that the traditional NP-analysis of nominal groups is both more accurate and simpler than the DP-analysis, in part because it does not rely on the frequent occurrence of null determiners.


Author(s):  
Daniel García Velasco

Abstract: Functional Discourse Grammar (FDG) is a typologically-based theory of language structure which is organized into levels, layers and components. In this paper, I claim that FDG is modular in Sadock’s sense, as it presents four independent levels of representation, each with its own linguistic primitives. For modular grammars, the relation between the different levels (more technically, the nature of the interfaces) is a central issue. It will be shown that FDG is a top-down grammar which follows two basic principles in its dynamic implementation: Depth-first and Maximal depth. Together with external constraints, these principles conspire to create linguistic representations which are psychologically adequate and which allow levels to be circumvented if necessary, thus simplifying representations and creating mismatches among them.


2022 ◽  
Vol 16 (4) ◽  
pp. 1-16
Author(s):  
Fereshteh Jafariakinabad ◽  
Kien A. Hua

The syntactic structure of the sentences in a document is substantially informative about its author's writing style. Sentence representation learning has been widely explored in recent years, and it has been shown to improve the generalization of different downstream tasks across many domains. Even though probing studies suggest that these learned contextual representations implicitly encode some amount of syntax, explicit syntactic information further improves the performance of deep neural models in the domain of authorship attribution. These observations motivated us to investigate the explicit representation learning of the syntactic structure of sentences. In this article, we propose a self-supervised framework for learning structural representations of sentences. The self-supervised network contains two components: a lexical sub-network and a syntactic sub-network, which take as input the sequence of words and their corresponding structural labels, respectively. Due to the n-to-1 mapping of words to their structural labels, each word is embedded into a vector representation which mainly carries structural information. We evaluate the learned structural representations of sentences using different probing tasks, and subsequently utilize them in the authorship attribution task. Our experimental results indicate that the structural embeddings significantly improve the classification tasks when concatenated with existing pre-trained word embeddings.
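The two-component architecture lends itself to a compact sketch. The following PyTorch code is a minimal, illustrative reading of the framework described above: the layer sizes, the GRU encoders, and the label-prediction objective are assumptions made for this sketch, not the authors' exact architecture or training loss.

```python
# Illustrative two-component network: a lexical sub-network over words and a
# syntactic sub-network over structural labels (e.g., POS tags). The sizes,
# GRU encoders, and objective below are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class StructuralEncoder(nn.Module):
    def __init__(self, vocab_size: int, n_labels: int, dim: int = 128):
        super().__init__()
        # Lexical sub-network: encodes the word sequence.
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.word_rnn = nn.GRU(dim, dim, batch_first=True)
        # Syntactic sub-network: encodes the structural-label sequence.
        self.label_emb = nn.Embedding(n_labels, dim)
        self.label_rnn = nn.GRU(dim, dim, batch_first=True)
        # Many words map to one label (n-to-1), so predicting labels from
        # words pushes word vectors toward structural, not lexical, content.
        self.classifier = nn.Linear(dim, n_labels)

    def forward(self, words, labels):
        w, _ = self.word_rnn(self.word_emb(words))     # (batch, seq, dim)
        s, _ = self.label_rnn(self.label_emb(labels))  # (batch, seq, dim)
        return self.classifier(w), s

# Hypothetical usage: predict each word's structural label from the lexical
# pathway (a proxy for the paper's self-supervised objective).
model = StructuralEncoder(vocab_size=10000, n_labels=45)
words = torch.randint(0, 10000, (8, 20))
labels = torch.randint(0, 45, (8, 20))
logits, _ = model(words, labels)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 45), labels.reshape(-1))
loss.backward()
```

Because many words share one structural label, optimizing the lexical pathway to predict labels embeds each word mainly by its structural role; the resulting vectors could then be concatenated with pre-trained word embeddings for authorship attribution, as the article reports.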


2002 ◽  
Vol 8 (1) ◽  
pp. 25-54 ◽  
Author(s):  
Henry Brighton

A growing body of work demonstrates that syntactic structure can evolve in populations of genetically identical agents. Traditional explanations for the emergence of syntactic structure employ an argument based on genetic evolution: Syntactic structure is specified by an innate language acquisition device (LAD). Knowledge of language is complex, yet the data available to the language learner are sparse. This incongruous situation, termed the “poverty of the stimulus,” is accounted for by placing much of the specification of language in the LAD. The assumption is that the characteristic structure of language is somehow coded genetically. The effect of language evolution on the cultural substrate, in the absence of genetic change, is not addressed by this explanation. We show that the poverty of the stimulus introduces a pressure for compositional language structure when we consider language evolution resulting from iterated observational learning. We use a mathematical model to map the space of parameters that result in compositional syntax. Our hypothesis is that compositional syntax cannot be explained by understanding the LAD alone: Compositionality is an emergent property of the dynamics resulting from sparse language exposure.
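A toy iterated learning simulation can make the bottleneck pressure concrete. Everything in the sketch below is an illustrative assumption rather than the authors' mathematical model: a two-feature meaning space, two-syllable strings, a majority-vote learning rule, and a bottleneck of eight observed meanings per generation.

```python
# Toy iterated learning: a holistic language transmitted through a sparse
# bottleneck tends to become compositional. All parameters are illustrative.
import random

random.seed(1)

N = 4                              # feature values per slot; meanings are pairs (i, j)
SYLLABLES = "abcdefgh"
MEANINGS = [(i, j) for i in range(N) for j in range(N)]
BOTTLENECK = 8                     # sparse exposure: the learner sees 8 of 16 meanings

def learn(observations):
    """One generation of observational learning: memorize observed strings,
    generalize to unseen meanings via per-feature majority associations."""
    assoc = [{}, {}]
    for (i, j), s in observations.items():
        assoc[0].setdefault(i, []).append(s[0])
        assoc[1].setdefault(j, []).append(s[1])

    def syllable(slot, value):
        forms = assoc[slot].get(value)
        if forms:
            return max(set(forms), key=forms.count)   # majority vote
        return random.choice(SYLLABLES)               # never observed: invent

    return {m: observations.get(m) or syllable(0, m[0]) + syllable(1, m[1])
            for m in MEANINGS}

def compositionality(lang):
    """Fraction of feature values expressed by one consistent syllable."""
    consistent = 0
    for slot in (0, 1):
        for v in range(N):
            forms = {lang[m][slot] for m in MEANINGS if m[slot] == v}
            consistent += (len(forms) == 1)
    return consistent / (2 * N)

# Start from a fully holistic (random) language; transmit it repeatedly
# through the bottleneck and watch regularity accumulate.
lang = {m: random.choice(SYLLABLES) + random.choice(SYLLABLES) for m in MEANINGS}
for gen in range(31):
    sample = dict(random.sample(sorted(lang.items()), BOTTLENECK))
    lang = learn(sample)
    if gen % 5 == 0:
        print(f"generation {gen:2d}  compositionality {compositionality(lang):.2f}")
```

In this setup, compositionality tends to ratchet upward across generations because only feature-to-syllable regularities can be reconstructed from a sparse sample; idiosyncratic holistic forms are lost at the bottleneck.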


2015 ◽  
Vol 3 (1-2) ◽  
pp. 337-348 ◽  
Author(s):  
Roey J. Gafter ◽  
Uri Horesh

This article examines a borrowing from Arabic into Hebrew which combines a lexical borrowing with a structural one. The Arabic superlative aħla ‘sweetest, most beautiful,’ pronounced by most Modern Hebrew speakers as [axla], has shifted semantically to mean ‘great, awesome.’ Yet, as our corpus-based study illustrates, it was borrowed into Hebrew, for the most part, with a very particular syntactic structure that, in Arabic, denotes the superlative. In Arabic itself, aħla may also denote a comparative adjective, though in different syntactic structures. We discuss the significance of this borrowing, and of the manner in which it was borrowed, both for the specific contact situation between Arabic and Hebrew and for the theory of language contact in general.

