Electrophysiological correlates of syntactic structures

2019 ◽  
Author(s):  
Fiorenzo Artoni ◽  
Piergiorgio d’Orio ◽  
Eleonora Catricalà ◽  
Francesca Conca ◽  
Franco Bottoni ◽  
...  

Syntax is traditionally defined as a specifically human way to pair sound with meaning: words are assembled recursively, generating a potentially infinite set of sentences1,2. Different phrasal structures arise depending on the types of words involved, for example "noun phrases" (NP), combining an article and a noun, vs. "verb phrases" (VP), combining a verb and a complement. Although it is known that combining an increasing number of words in sequences correlates with increasing electrophysiological activity3,4, the specific electrophysiological correlates of the syntactic operation generating NPs vs. VPs remain unknown. A major confounding factor is that syntactic information is inevitably intertwined with the acoustic information contained in words, even during inner speech5. Here, we addressed this issue in a novel way by designing a paradigm to factor out acoustic information and isolate the syntactic component. In particular, we constructed phrases that have exactly the same acoustic content but are interpreted as NPs or VPs depending on their syntactic context (homophonous phrases). By performing stereo-electro-encephalographic (SEEG) recordings in epileptic patients6, we show that VPs are associated with higher activity than NPs in the high-gamma band (150-300 Hz), an index of cortical activity associated with linguistic processing, in multiple cortical areas in both hemispheres, including language areas and their homologues in the non-dominant hemisphere. Our findings pave the way to a deeper understanding of the electrophysiological mechanisms underlying syntax and contribute to the ultimate, far-reaching goal of a complete neural decoding of linguistic structures from the brain2.
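
The comparison described here rests on measuring high-gamma band power per trial and contrasting VP and NP conditions. Below is a minimal sketch of such an analysis for a single SEEG contact, assuming trial-epoched data in a NumPy array with trials labeled "NP" or "VP"; the sampling rate requirement, filter order, and statistical test are illustrative choices, not the authors' actual pipeline.

    # Sketch: per-trial high-gamma (150-300 Hz) power, VP vs. NP trials.
    # `epochs` is (n_trials, n_samples) for one contact; `labels` is an array
    # of "NP"/"VP" strings; fs must exceed 600 Hz for a 300 Hz upper band edge.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert
    from scipy.stats import mannwhitneyu

    def high_gamma_power(trial, fs, low=150.0, high=300.0):
        """Mean power of the analytic high-gamma envelope for a single trial."""
        sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(sos, trial)))
        return np.mean(envelope ** 2)

    def compare_conditions(epochs, labels, fs):
        """Per-trial high-gamma power compared between VP and NP trials."""
        power = np.array([high_gamma_power(t, fs) for t in epochs])
        vp, np_ = power[labels == "VP"], power[labels == "NP"]
        stat, p = mannwhitneyu(vp, np_, alternative="greater")
        return vp.mean(), np_.mean(), p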

2019 ◽  
Author(s):  
Fiorenzo Artoni ◽  
Piergiorgio d’Orio ◽  
Eleonora Catricalà ◽  
Francesca Conca ◽  
Franco Bottoni ◽  
...  

Syntax is a species-specific component of human language, combining a finite set of words into a potentially infinite number of sentences. Since words are by definition expressed by sound, factoring out syntactic information is normally impossible. Here, we circumvented this problem in a novel way by designing phrases with exactly the same acoustic content but different syntactic structures depending on the other words they occur with. By performing stereo-electroencephalographic (SEEG) recordings in epileptic patients, we measured a different electrophysiological correlate of verb phrases vs. noun phrases by analyzing high-gamma band activity (150-300 Hz) in multiple cortical areas in both hemispheres, including language areas and their homologues in the non-dominant hemisphere. Our findings contribute to the ultimate goal of a complete neural decoding of linguistic structures from the brain.


2020 ◽  
Vol 65 (1) ◽  
pp. 49-73 ◽  
Author(s):  
Oscar Alberto Morales ◽  
Bexi Perdomo ◽  
Daniel Cassany ◽  
Rosa María Tovar ◽  
Élix Izarra

Titles play an important role in genre analysis. Cross-genre studies show that research paper and thesis titles have distinctive features. However, thesis and dissertation titles in the field of dentistry have thus far received little attention. Objective: To analyze the syntactic structures and their functions in English-language thesis and dissertation titles in dentistry. Methodology: We randomly chose 413 titles of English-language dentistry theses or dissertations presented at universities in 12 countries between January 2000 and June 2019. The resulting corpus of 5,540 running words was then analyzed both qualitatively and quantitatively, the two complementary focuses being grammatical structures and their functions. Results: The average title length was 13.4 words. Over half of the titles did not include any punctuation marks. In compound titles, colons, dashes, commas, and question marks were used to separate the components, with colons the most frequent. Four syntactic structures (nominal phrase, gerund phrase, full sentence, and prepositional phrase) were identified for single-unit titles. Single-unit nominal-phrase titles were the most frequent structure in the corpus, followed by compound titles. Four particular rhetorical combinations of compound-title components were found throughout the corpus. Conclusions: Titles of dentistry theses and dissertations in English echo the content of the text body and make an important contribution to fulfilling the text's communicative purposes. Teaching research students about the linguistic features of thesis titles would therefore help them write effective titles and also facilitate assessment by teachers.
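
As a rough illustration of the descriptive statistics reported here (title length, punctuation, compound vs. single-unit titles), the sketch below computes comparable figures over a list of title strings. The sample titles, separator list, and classification rules are invented for the example and are not the authors' corpus or coding scheme.

    # Sketch: simple descriptive statistics over a list of thesis titles.
    import re

    titles = [
        "Periodontal health in adolescents: a cross-sectional study",   # invented
        "Evaluating fluoride varnish application in primary molars",    # invented
    ]

    SEPARATORS = [":", " - ", ",", "?"]   # candidate compound-title separators

    def title_stats(titles):
        lengths = [len(t.split()) for t in titles]
        no_punct = [t for t in titles if not re.search(r"[:,?]|\s-\s", t)]
        compound = {sep: sum(sep in t for t in titles) for sep in SEPARATORS}
        return {
            "mean_length_words": sum(lengths) / len(lengths),
            "share_without_punctuation": len(no_punct) / len(titles),
            "compound_separator_counts": compound,
        }

    print(title_stats(titles))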


2019 ◽  
Vol 375 (1791) ◽  
pp. 20190305 ◽  
Author(s):  
Jonathan R. Brennan ◽  
Andrea E. Martin

Computation in neuronal assemblies is putatively reflected in the excitatory and inhibitory cycles of activation distributed throughout the brain. In speech and language processing, coordination of these cycles resulting in phase synchronization has been argued to reflect the integration of information on different timescales (e.g. segmenting acoustic signals into phonemic and syllabic representations; Giraud and Poeppel 2012 Nat. Neurosci. 15, 511 (doi:10.1038/nn.3063)). A natural extension of this claim is that phase synchronization functions similarly to support the inference of more abstract, higher-level linguistic structures (Martin 2016 Front. Psychol. 7, 120; Martin and Doumas 2017 PLoS Biol. 15, e2000663 (doi:10.1371/journal.pbio.2000663); Martin and Doumas 2019 Curr. Opin. Behav. Sci. 29, 77–83 (doi:10.1016/j.cobeha.2019.04.008)). Hale et al. (2018 Finding syntax in human encephalography with beam search. arXiv 1806.04127 (http://arxiv.org/abs/1806.04127)) showed that syntactically driven parsing decisions predict electroencephalography (EEG) responses in the time domain; here we ask whether phase synchronization, in the form of either inter-trial phase coherence or cross-frequency coupling (CFC) between high-frequency (i.e. gamma) bursts and lower-frequency carrier signals (i.e. delta, theta), changes as the linguistic structures of compositional meaning (viz., bracket completions, as denoted by the onset of words that complete phrases) accrue. We use a naturalistic story-listening EEG dataset from Hale et al. to assess the relationship between linguistic structure and phase alignment. We observe increased phase synchronization as a function of phrase counts in the delta, theta, and gamma bands, especially for function words. A more complex pattern emerged for CFC as phrase count changed, possibly related to the lack of a one-to-one mapping between 'size' of linguistic structure and frequency band, an assumption that is tacit in recent frameworks. These results emphasize the important role that phase synchronization, desynchronization, and thus inhibition, play in the construction of compositional meaning by distributed neural networks in the brain. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.
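
The two synchronization measures named in this abstract, inter-trial phase coherence and phase-amplitude cross-frequency coupling, can be sketched as follows. This is a minimal illustration assuming trial-epoched EEG in NumPy arrays; the band edges, filter settings, and the Tort-style modulation index are illustrative choices rather than the exact analysis used in the study.

    # Sketch: inter-trial phase coherence (ITPC) and a simple phase-amplitude
    # coupling (PAC) estimate between a low-frequency phase and gamma amplitude.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def bandpass(x, low, high, fs, order=4):
        sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x, axis=-1)

    def itpc(epochs, low, high, fs):
        """ITPC per sample: magnitude of the mean unit phase vector across trials."""
        phases = np.angle(hilbert(bandpass(epochs, low, high, fs), axis=-1))
        return np.abs(np.mean(np.exp(1j * phases), axis=0))

    def pac(signal, fs, phase_band=(1.0, 4.0), amp_band=(30.0, 50.0), n_bins=18):
        """Tort-style modulation index: gamma amplitude binned by delta phase."""
        phase = np.angle(hilbert(bandpass(signal, *phase_band, fs)))
        amp = np.abs(hilbert(bandpass(signal, *amp_band, fs)))
        edges = np.linspace(-np.pi, np.pi, n_bins + 1)
        means = np.array([amp[(phase >= edges[i]) & (phase < edges[i + 1])].mean()
                          for i in range(n_bins)])   # assumes every phase bin is populated
        p = means / means.sum()
        return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)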


2018 ◽  
Author(s):  
Sasa L. Kivisaari ◽  
Marijn van Vliet ◽  
Annika Hultén ◽  
Tiina Lindh-Knuutila ◽  
Ali Faisal ◽  
...  

We can easily identify a dog merely by the sound of barking or an orange by its citrus scent. In this work, we study the neural underpinnings of how the brain combines bits of information into meaningful object representations. Modern theories of semantics posit that the meaning of words can be decomposed into a unique combination of individual semantic features (e.g., "barks", "has citrus scent"). Here, participants received clues to individual objects in the form of three isolated semantic features, given as verbal descriptions. We used machine-learning-based neural decoding to learn a mapping between individual semantic features and BOLD activation patterns. We discovered that the recorded brain patterns were best decoded using not only the three semantic features presented as clues, but a far richer set of semantic features typically linked to the target object. We conclude that our experimental protocol allowed us to observe how fragmented information is combined into a complete semantic representation of an object, and we suggest neuroanatomical underpinnings for this process.
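
The decoding step described here, learning a mapping between semantic-feature vectors and BOLD activation patterns, can be approximated with a regularized linear model. The sketch below uses scikit-learn ridge regression with invented array shapes and a simple correlation score; it is not the authors' decoding model, only a generic illustration of the approach.

    # Sketch: cross-validated linear decoding of semantic features from BOLD patterns.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    bold = rng.standard_normal((200, 5000))       # invented: 200 trials x 5000 voxels
    features = rng.standard_normal((200, 300))    # invented: 200 trials x 300 features

    def decode(bold, features, alpha=10.0, n_splits=5):
        """Mean correlation of predicted vs. true feature vectors on held-out trials."""
        scores = []
        cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
        for train, test in cv.split(bold):
            model = Ridge(alpha=alpha).fit(bold[train], features[train])
            for pred, true in zip(model.predict(bold[test]), features[test]):
                scores.append(np.corrcoef(pred, true)[0, 1])
        return float(np.mean(scores))

    print(decode(bold, features))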


2021 ◽  
Author(s):  
Erik C. Brown ◽  
Brittany Stedelin ◽  
Ahmed M. Raslan ◽  
Nathan R. Selden

Processing auditory human speech requires both detection (early and transient) and analysis (sustained). We analyzed high-gamma (70-110 Hz) activity in intracranial electroencephalography waveforms acquired during an auditory task that paired forward speech, reverse speech, and signal-correlated noise. We identified widespread superior temporal sites with sustained activity that responded only to forward and reverse speech, regardless of paired order. More localized superior temporal auditory onset sites responded to all stimulus types when presented first in a pair and, in select conditions, responded in recurrent fashion to the second paired stimulus even in the absence of interstimulus silence, a novel finding. Auditory onset activity to a second paired sound recurred according to relative salience, with evidence of partial suppression during linguistic processing. We propose that temporal lobe auditory onset sites serve a salience-detector function with a hysteresis of 200 ms and are influenced by cortico-cortical feedback loops involving linguistic processing and articulation.
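
A minimal sketch of how a contact's trial-averaged high-gamma (70-110 Hz) envelope might be classified as an early, transient "onset" response versus a "sustained" response, assuming stimulus onset at time zero; the window boundaries and the dominance ratio are illustrative assumptions, not the criteria used in the study.

    # Sketch: label a contact as "onset" or "sustained" from its trial-averaged
    # high-gamma envelope. `epochs` is (n_trials, n_samples); fs must exceed 220 Hz.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def high_gamma_envelope(epochs, fs, low=70.0, high=110.0):
        sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
        return np.abs(hilbert(sosfiltfilt(sos, epochs, axis=-1), axis=-1))

    def response_type(epochs, fs, early=(0.0, 0.2), late=(0.2, 1.0), ratio=2.0):
        """'onset' if the early window clearly dominates the late one, else 'sustained'."""
        env = high_gamma_envelope(epochs, fs).mean(axis=0)   # average over trials
        early_mean = env[int(early[0] * fs):int(early[1] * fs)].mean()
        late_mean = env[int(late[0] * fs):int(late[1] * fs)].mean()
        return "onset" if early_mean > ratio * late_mean else "sustained"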


2019 ◽  
Author(s):  
Lars Meyer ◽  
Yue Sun ◽  
Andrea E. Martin

Research into speech processing often focuses on a phenomenon termed 'entrainment', whereby the cortex shadows rhythmic acoustic information with oscillatory activity. Entrainment has been observed for a range of rhythms present in speech; in addition, synchronicity with abstract information (e.g., syntactic structures) has been observed. Entrainment accounts face two challenges: first, speech is not exactly rhythmic; second, synchronicity with representations that lack a clear acoustic counterpart has been described. We propose that apparent entrainment does not always result from acoustic information. Rather, internal rhythms may have functionalities in the generation of abstract representations and predictions. While acoustics may often provide punctate opportunities for entrainment, internal rhythms may also live a life of their own to infer and predict information, leading to intrinsic synchronicity that should not be counted as entrainment. This possibility may open up new research avenues in the psycho- and neurolinguistic study of language processing and language development.


2021 ◽  
Author(s):  
Cas Coopmans ◽  
Karthikeya Ramesh Kaushik ◽  
Andrea E. Martin

Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasing size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this paper, we compare the conceptual and formal properties of hierarchy in language and action using tools from category theory. We show that the strong compositionality of language requires a formalism that describes the mapping between sentences and their syntactic structures as an order-embedded Galois connection, while the weak compositionality of actions only requires a monotonic mapping between action sequences and their goals, which we model as a monotone Galois connection. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences, and discuss the implications for the way both systems are represented in the brain.
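
For orientation, the order-theoretic notion invoked in this abstract can be stated in textbook form; the notation below is generic and not drawn from the paper itself.

    Given posets $(A, \le_A)$ and $(B, \le_B)$, monotone maps $f\colon A \to B$ and
    $g\colon B \to A$ form a monotone Galois connection when
    \[
        f(a) \le_B b \iff a \le_A g(b) \qquad \text{for all } a \in A,\ b \in B .
    \]
    If, in addition, $g \circ f = \mathrm{id}_A$, then $f$ is an order embedding, so the
    structure of $A$ is preserved exactly inside $B$; without that condition, only the
    weaker monotone correspondence holds.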


Author(s):  
Judith M. Ford ◽  
Holly K. Hamilton ◽  
Alison Boos

Auditory verbal hallucinations (AVH), also referred to as "hearing voices," are vivid perceptions of speech that occur in the absence of any corresponding external stimulus but seem very real to the voice hearer. They are experienced by the majority of people with schizophrenia, less frequently in other psychiatric and neurological conditions, and are relatively rare in the general population. Because antipsychotic medications are not always successful in reducing the severity or frequency of AVH, a better understanding of their neurobiological basis is needed, which may ultimately lead to more precise treatment targets. What voices say and how the voices sound, or their phenomenology, varies widely within and across groups of people who hear them. In help-seeking populations, such as people with schizophrenia, the voices tend to be threatening and menacing, typically spoken in a non-self-voice, often commenting and sometimes commanding the voice hearers to do things they would not otherwise do. In psychotic populations, voices differ from normal inner speech by being unbidden and unintended, co-opting the voice hearer's attention. In healthy voice-hearing populations, voices are typically neither distressing nor disabling, and are sometimes comforting and reassuring. Regardless of content and valence, voices tend to activate some speech and language areas of the brain. Efforts to silence these brain areas with neurostimulation have had mixed success in reducing the frequency and salience of voices. Progress with this treatment approach would likely benefit from more precise anatomical targets and more precisely dosed neurostimulation. Neural mechanisms that may underpin the experience of voices are being actively investigated and include mechanisms enabling context-based predictions and distinctions between experiences coming from self and other. Both these mechanisms can be studied in non-human animal "models" and both can provide new anatomical targets for neurostimulation.

