Language Dysfunction in Schizophrenia: Assessing Neural Tracking to Characterize the Underlying Disorder(s)?

2021, Vol 15. Author(s): Lars Meyer, Peter Lakatos, Yifei He

Deficits in language production and comprehension are characteristic of schizophrenia. To date, it remains unclear whether these deficits arise from dysfunctional linguistic knowledge or from dysfunctional predictions derived from the linguistic context. Alternatively, the deficits could result from dysfunctional neural tracking of auditory information, leading to decreased fidelity and even distortion of the auditory information. Here, we discuss possible ways for clinical neuroscientists to employ neural tracking methodology to independently characterize deficiencies on the auditory–sensory and abstract linguistic levels. This might lead to a mechanistic understanding of the deficits underlying language-related disorder(s) in schizophrenia. We propose to combine naturalistic stimulation, measures of speech–brain synchronization, and computational modeling of abstract linguistic knowledge and predictions. These independent but likely interacting assessments may be exploited for an objective and differential diagnosis of schizophrenia, as well as a better understanding of the disorder on the functional level—illustrating the potential of neural tracking methodology as a translational tool in a range of psychotic populations.
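
In practice, speech–brain synchronization of the kind proposed here is often quantified as coherence between the speech amplitude envelope and the EEG. The following is a minimal sketch of that general idea, assuming a single EEG channel already aligned to the audio; the sampling rates, channel choice, and frequency band are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): quantify speech-brain
# synchronization as spectral coherence between the speech amplitude
# envelope and one EEG channel. Sampling rates, channel choice, and the
# 1-8 Hz band of interest are assumptions made for the example.
import numpy as np
from scipy.signal import hilbert, resample, coherence

def speech_brain_coherence(speech_wav, eeg_channel, fs_audio=16000, fs_eeg=250):
    """Return frequencies and coherence between speech envelope and EEG."""
    # Amplitude envelope of the speech signal via the Hilbert transform
    envelope = np.abs(hilbert(speech_wav))
    # Downsample the envelope to the EEG sampling rate
    n_eeg = int(len(envelope) * fs_eeg / fs_audio)
    envelope = resample(envelope, n_eeg)
    n = min(len(envelope), len(eeg_channel))
    # Welch-based magnitude-squared coherence
    freqs, coh = coherence(envelope[:n], eeg_channel[:n], fs=fs_eeg, nperseg=fs_eeg * 2)
    return freqs, coh

# Example usage: average coherence in the 1-8 Hz range often associated
# with syllable- and phrase-level tracking (band limits assumed here).
# freqs, coh = speech_brain_coherence(audio, eeg[:, 0])
# delta_theta = coh[(freqs >= 1) & (freqs <= 8)].mean()
```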


2019, Vol 80 (02), pp. 111-119. Author(s): Kelsey Dumanch, Gayla Poling

Objectives: To provide an introduction to the role of audiological evaluations, with special reference to patients with skull base disease. Design: Review article with a case-based overview of the current state of the practice of diagnostic audiology, highlighting the multifaceted clinical toolbox and the value of mechanism-based audiological evaluations that contribute to otologic differential diagnosis. Setting: Current state of the practice of diagnostic audiology. Main Outcome Measures: Understanding of audiological evaluation results in clinical practice and the value of contributions to interdisciplinary teams in identifying and quantifying dysfunction along the auditory pathway and its subsequent effects. Results: Accurate auditory information is best captured with a test battery that consists of various assessment crosschecks and mechanism-driven assessments. Conclusion: Audiologists utilize a comprehensive clinical toolbox to gather information for the assessment, diagnosis, and management of numerous pathologies. This information, in conjunction with a thorough medical review, provides mechanism-specific contributions to the otologic and lateral skull base differential diagnosis.


2021. Author(s): Kaidi Lõo, Fabian Tomaschek, Pärtel Lippus, Benjamin V. Tucker

Recent evidence indicates that a word's morphological family and inflectional paradigm members become activated when we produce words. These paradigmatic effects have previously been studied in careful laboratory contexts using words in isolation; this research has not investigated how linguistic context affects spontaneous speech production. The current corpus analysis investigates paradigmatic and syntagmatic effects in Estonian spontaneous speech. Following related work on English, we focus on morphemic and non-morphemic word-final /-s/ in content words. We report that linguistic context, as measured by conditional probability, has the strongest effect on acoustic durations, while inflectional properties (internal structure and inflectional paradigm size) also affect word and segment durations. These results indicate that morphology is part of a complex system that interacts with other aspects of the language production system.
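
As a rough illustration of how such duration effects can be modeled, the sketch below fits a mixed-effects regression of segment duration on conditional probability and paradigm size with statsmodels. The column names, log transforms, and file name are assumptions for the example, not the authors' exact model.

```python
# Minimal sketch (assumed column names, not the authors' exact model):
# regress word-final /s/ duration on the conditional probability of the word
# given its context and on inflectional paradigm size, with random
# intercepts for speaker.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to have one row per /s/ token with these columns:
# duration_ms, cond_prob, paradigm_size, speaker
df = pd.read_csv("estonian_s_tokens.csv")            # hypothetical file
df["log_cond_prob"] = np.log(df["cond_prob"])
df["log_paradigm_size"] = np.log(df["paradigm_size"])

model = smf.mixedlm(
    "duration_ms ~ log_cond_prob + log_paradigm_size",
    data=df,
    groups=df["speaker"],                             # random intercept per speaker
)
result = model.fit()
print(result.summary())
```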


1994, Vol 50, pp. 45-56. Author(s): Joost Schilperoord

In this paper it is argued that, contrary to computational models of language production, grammatical knowledge in the production system takes the form of conventionalized declarative schemes. Such a scheme can be identified as a particular function word together with an obligatory element, for instance, a determiner and a noun. The argument is based on a particular pause pattern observed in written language production. A cognitive-linguistic account of the notion 'grammatical scheme' is given through a discussion of Langacker's usage-based model of linguistic knowledge and the 'mental grammar'.
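
To make the kind of evidence concrete, the hypothetical sketch below computes pause durations from timestamped word onsets in a writing log and compares pauses inside a determiner–noun scheme with pauses elsewhere. The data format, part-of-speech coding, and grouping are invented for illustration and are not Schilperoord's actual procedure.

```python
# Hypothetical sketch: compute pause durations from timestamped word onsets
# in a writing log and compare pauses inside determiner+noun schemes with
# pauses elsewhere. Data format and grouping are assumptions made for
# illustration, not the paper's actual procedure.
from statistics import mean

# Each record: (onset_time_in_seconds, word, part_of_speech)
log = [
    (0.0, "the", "DET"), (0.4, "dog", "NOUN"), (2.1, "barked", "VERB"),
    (5.8, "a", "DET"), (6.1, "cat", "NOUN"), (6.6, "fled", "VERB"),
]

scheme_internal, other = [], []
for (t_prev, _, pos_prev), (t_cur, _, pos_cur) in zip(log, log[1:]):
    pause = t_cur - t_prev
    # A pause between a determiner and its noun falls inside one scheme.
    if pos_prev == "DET" and pos_cur == "NOUN":
        scheme_internal.append(pause)
    else:
        other.append(pause)

print("mean pause inside determiner-noun schemes:", mean(scheme_internal))
print("mean pause elsewhere:", mean(other))
```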


2003, Vol 46 (6), pp. 1367-1377. Author(s): Allard Jongman, Yue Wang, Brian H. Kim

Most studies have been unable to identify reliable acoustic cues for the recognition of the English nonsibilant fricatives /f, v, θ, ð/. The present study was designed to test the extent to which the perception of these fricatives by normal-hearing adults is based on other sources of information, namely, linguistic context and visual information. In Experiment 1, target words beginning with /f/, /θ/, /s/, or /∫/ were preceded by either a semantically congruous or incongruous precursor sentence. Results showed an effect of linguistic context on the perception of the distinction between /f/ and /θ/ and on the acoustically more robust distinction between /s/ and /∫/. In Experiment 2, participants identified syllables consisting of the fricatives /f, v, θ, ð/ paired with the vowels /i, a, u/. Three conditions were contrasted: Stimuli were presented with (a) both auditory and visual information, (b) auditory information alone, or (c) visual information alone. When errors in terms of voicing were ignored in all 3 conditions, results indicated that perception of these fricatives is as good with visual information alone as with both auditory and visual information combined, and better than for auditory information alone. These findings suggest that accurate perception of nonsibilant fricatives derives from a combination of acoustic, linguistic, and visual information.
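
The scoring step described above, in which voicing errors are ignored, can be made concrete with a small sketch that collapses /f, v/ and /θ, ð/ into place-of-articulation categories before computing accuracy per condition. The response coding and condition labels are assumptions for illustration, not the authors' materials.

```python
# Illustrative scoring sketch (response coding is assumed, not the authors'):
# collapse voicing so that /f/-/v/ confusions and /th/-/dh/ confusions do not
# count as errors, then compute identification accuracy per condition.
VOICING_COLLAPSED = {"f": "f", "v": "f", "th": "th", "dh": "th"}

def accuracy_ignoring_voicing(trials):
    """trials: iterable of (condition, target, response) tuples."""
    totals, correct = {}, {}
    for condition, target, response in trials:
        totals[condition] = totals.get(condition, 0) + 1
        if VOICING_COLLAPSED[target] == VOICING_COLLAPSED[response]:
            correct[condition] = correct.get(condition, 0) + 1
    return {c: correct.get(c, 0) / totals[c] for c in totals}

trials = [
    ("audiovisual", "f", "v"),   # voicing error only -> counted correct
    ("auditory", "th", "f"),     # place error -> counted incorrect
    ("visual", "dh", "th"),      # voicing error only -> counted correct
]
print(accuracy_ignoring_voicing(trials))
```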


Author(s): Lukasz Debowski

In this chapter, we identify possible links between theoretical computer science, coding theory, and statistics that are reinforced by the subextensivity of Shannon entropy. Our specific intention is to address these links in a way that may arise from a rudimentary theory of human learning from language communication. The semi-infinite stream of language production that a human being experiences during his or her life will be called simply parole (= "speech," [7]). Although modern computational linguistics tries to explain human language competence in terms of explicit mathematical models in order to enable its machine simulation [17, 20], modeling parole itself (widely known as "language modeling") is nontrivial in a way that is still poorly understood. When a behavior of parole that improves its prediction is newly observed in a finite portion of the empirical data, it often suggests only minor improvements to the current model. When we use larger portions of parole to test the freshly improved model, this model always fails seriously, but in a different way. How to provide the necessary updates without harming the integrity of the model is an important problem that experts must continually solve. Is there any sufficiently good definition of parole that is ready-made for industrial applications? Although not all readers of human texts learn continuously, parole is a product of those who can and often do learn throughout their lives. Thus, we assume that the amount of knowledge generalizable from a finite window of parole should diverge to infinity as the length of the window tends to infinity. Many linguists assume that a very distinct part of the generalizable knowledge is "linguistic knowledge," which can be finite in principle. Nevertheless, for the sake of good modeling of parole in practical applications, it is useless to restrict ourselves solely to "finite linguistic knowledge" [6, 22]. Inspired by Crutchfield and Feldman [5], we will call any processes (distributions of infinite linear data) "finitary" when the amount of knowledge generalizable from them is finite, and "infinitary" when it is infinite. The crucial point is to define accurately the notion of knowledge generalized from a data sample. According to the principle of minimum description length (MDL), generalizable knowledge is the definition of a representation of the data that yields the shortest total description. In this case, we will define infinitarity as computational infinitarity (CIF).
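
To make the notion of subextensive entropy concrete, the sketch below computes plug-in estimates of character block entropies H(n) from a text sample and the excess part H(n) − n·h. Plug-in estimates on finite data are strongly biased and the corpus file name is an assumption; this only illustrates the quantity, it is not Debowski's method.

```python
# Sketch: plug-in estimate of block entropy H(n) for character n-grams and
# of the "subextensive" part H(n) - n * h, where h is approximated by the
# last entropy increment. Plug-in estimates on finite samples are strongly
# biased; this is only an illustration of the quantity.
import math
from collections import Counter

def block_entropy(text, n):
    """Shannon entropy (in bits) of the empirical n-gram distribution."""
    blocks = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = open("sample.txt", encoding="utf-8").read()   # hypothetical corpus file
H = {n: block_entropy(text, n) for n in range(1, 9)}
h = H[8] - H[7]                       # crude estimate of the entropy rate
for n in range(1, 9):
    # The excess part keeps growing with n if the process is subextensive.
    print(n, round(H[n], 3), round(H[n] - n * h, 3))
```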


2021, Vol 12. Author(s): Peter Q. Pfordresher, Emma B. Greenspon, Amy L. Friedman, Caroline Palmer

Individuals typically produce auditory sequences, such as speech or music, at a consistent spontaneous rate or tempo. We addressed whether spontaneous rates would show patterns of convergence across the domains of music and language production when the same participants spoke sentences and performed melodic phrases on a piano. Although timing plays a critical role in both domains, different communicative and motor constraints apply in each case, and so it is not clear whether music and speech would display similar timing mechanisms. We report the results of two experiments in which adult participants produced sequences from memory at a comfortable spontaneous (uncued) rate. In Experiment 1, monolingual pianists in Buffalo, New York, engaged in three production tasks: speaking sentences from memory, performing short melodies from memory, and tapping isochronously. In Experiment 2, English-French bilingual pianists in Montréal, Canada, produced melodies on a piano as in Experiment 1, and spoke short rhythmically structured phrases repeatedly. Both experiments led to the same pattern of results. Participants exhibited consistent spontaneous rates within each task. People who produced one spoken phrase rapidly were likely to produce another spoken phrase rapidly. This consistency across stimuli was also found for performance of different musical melodies. In general, spontaneous rates across speech and music tasks were not correlated, whereas rates of tapping and music were correlated. Speech rates (for syllables) were faster than music rates (for tones), and speech showed a smaller range of spontaneous rates across individuals than did music or tapping rates. Taken together, these results suggest that spontaneous rate reflects cumulative influences of endogenous rhythms (in consistent self-generated rates within domain), peripheral motor constraints (in finger movements across tapping and music), and communicative goals based on the cultural transmission of auditory information (slower rates for to-be-synchronized music than for speech).
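
A minimal sketch of the correlational part of such an analysis is given below: per-participant mean spontaneous rates are computed for each task and then correlated pairwise across speech, music, and tapping. The long-format data layout and file name are assumptions, not the authors' analysis code.

```python
# Sketch (assumed data layout, not the authors' analysis code): compute each
# participant's mean spontaneous rate per task and correlate rates across
# the speech, music, and tapping tasks.
import pandas as pd
from scipy.stats import pearsonr

# Long format: one row per produced sequence, with columns
# participant, task ("speech" | "music" | "tapping"), rate_events_per_s
df = pd.read_csv("spontaneous_rates.csv")             # hypothetical file
rates = df.groupby(["participant", "task"])["rate_events_per_s"].mean().unstack()

for a, b in [("speech", "music"), ("music", "tapping"), ("speech", "tapping")]:
    r, p = pearsonr(rates[a], rates[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")
```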


2021, Vol 0, pp. 1-10. Author(s): Sonia Gupta, Manveen Kaur Jawanda

The oral cavity is considered a mirror of the body's health, as it reflects the manifestations of various systemic disorders. Most of the oral mucosa is derived embryologically from an invagination of ectoderm and thus, like other similar orifices, it may become involved in disorders that are primarily associated with the skin. Oral submucous fibrosis is one of the commonest precancerous conditions of the oral mucosa; it can involve any part of the oral cavity and results in tissue scarring, dysphagia, and trismus. It is a collagen-related disorder characterized by excessive fibrosis in the oral submucosa, hyalinization, and degenerative changes in the muscles. This disease has become a challenging entity for dermatologists because its features resemble those of various mucocutaneous conditions. An improper diagnosis can lead to wrong treatment and additional complications. Dermatologists need to be aware of the characteristic features of this disease that distinguish it from other similar conditions. This review focuses on the detailed aspects of oral submucous fibrosis, including its historical background, etiological factors, pathogenesis, clinical features, differential diagnosis, investigations, management, and future perspectives.


2011, Vol 23 (6), pp. 1419-1436. Author(s): Stéphanie Riès, Niels Janssen, Stéphane Dufau, F.-Xavier Alario, Borís Burle

The concept of “monitoring” refers to our ability to control our actions on-line. Monitoring involved in speech production is often described in psycholinguistic models as an inherent part of the language system. We probed the specificity of speech monitoring in two psycholinguistic experiments in which electroencephalographic activities were recorded. Our focus was on a component previously reported in nonlinguistic manual tasks and interpreted as a marker of monitoring processes. The error negativity (Ne, or error-related negativity), thought to originate in medial frontal areas, peaks shortly after erroneous responses. A component with seemingly comparable properties has been reported after errors in tasks requiring access to linguistic knowledge (e.g., speech production), compatible with a generic error-detection process. However, in contrast to what its original name suggests, advanced processing methods later revealed that this component is also present after correct responses in visuomotor tasks. Here, we report the observation of the same negativity after correct responses across output modalities (manual and vocal responses). This indicates that, in language production too, the Ne reflects on-line response monitoring rather than error detection specifically. Furthermore, the temporal properties of the Ne suggest that this monitoring mechanism is engaged before any auditory feedback. The convergence of our findings with those obtained with nonlinguistic tasks suggests that at least part of the monitoring involved in speech production is subtended by a general-purpose mechanism.
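
The core of an Ne analysis of this kind is response-locked averaging split by accuracy. The sketch below shows that basic step for one fronto-central channel; the array shapes, baseline window, and channel choice are assumptions rather than the authors' pipeline.

```python
# Sketch of the basic response-locked averaging behind an Ne analysis
# (array shapes, channel choice, and baseline window are assumptions,
# not the authors' pipeline).
import numpy as np

def response_locked_erps(epochs, is_error, baseline_samples=50):
    """
    epochs:   array (n_trials, n_samples), one fronto-central channel,
              already time-locked to the response (e.g., EMG or key onset).
    is_error: boolean array (n_trials,), True for erroneous responses.
    Returns baseline-corrected averages for error and correct trials.
    """
    # Subtract the mean of the pre-response baseline from every trial
    baseline = epochs[:, :baseline_samples].mean(axis=1, keepdims=True)
    corrected = epochs - baseline
    return corrected[is_error].mean(axis=0), corrected[~is_error].mean(axis=0)

# erp_error, erp_correct = response_locked_erps(epochs, is_error)
# A negativity peaking shortly after the response in both averages, larger
# for errors, is the pattern described above.
```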


PLoS ONE, 2021, Vol 16 (2), e0246255. Author(s): Robin Lemke, Lisa Schäfer, Ingo Reich

We describe a novel approach to estimating the predictability of utterances given extralinguistic context in psycholinguistic research. Predictability effects on language production and comprehension are widely attested, but so far predictability has mostly been manipulated through local linguistic context, which is captured with n-gram language models. However, this method does not make it possible to investigate predictability effects driven by extralinguistic context. Modeling effects of extralinguistic context is particularly relevant to discourse-initial expressions, which can be predictable even though they lack linguistic context entirely. We propose to use script knowledge as an approximation to extralinguistic context. Since the application of script knowledge involves generating predictions about upcoming events, we expect that scripts can be used to manipulate the likelihood of linguistic expressions referring to these events. Previous research has shown that script-based discourse expectations modulate the likelihood of linguistic expressions, but script knowledge has often been operationalized with stimuli based on researchers’ intuitions and/or expensive production and norming studies. We propose to quantify the likelihood of an utterance based on the probability of the event to which it refers. This probability is calculated with event language models trained on a script knowledge corpus and modulated with probabilistic event chains extracted from the corpus. We use the DeScript corpus of script knowledge to obtain empirically founded estimates of the likelihood of an event occurring in context without having to resort to expensive pre-tests of the stimuli. We exemplify our method with a case study on the use of nonsentential expressions (fragments), which shows that utterances that are predictable given script-based extralinguistic context are more likely to be reduced.
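
The sketch below illustrates the general idea of an event language model with a bigram model over script event sequences and add-one smoothing, used to score the probability (and surprisal) of an upcoming event. The toy event labels are hypothetical, and this is not the authors' exact model or the DeScript preprocessing.

```python
# Minimal sketch of an event language model (bigram, add-one smoothing) over
# script event sequences. It illustrates scoring the probability of an
# upcoming event; the event labels are toy data, not DeScript.
import math
from collections import Counter

# Each script is a sequence of abstract event labels (hypothetical data).
scripts = [
    ["enter_restaurant", "get_menu", "order_food", "eat", "pay", "leave"],
    ["enter_restaurant", "order_food", "eat", "pay", "leave"],
]

unigrams, bigrams = Counter(), Counter()
for events in scripts:
    padded = ["<s>"] + events
    unigrams.update(padded)
    bigrams.update(zip(padded, padded[1:]))

vocab = set(unigrams)

def event_probability(prev_event, event):
    """P(event | prev_event) with add-one smoothing."""
    return (bigrams[(prev_event, event)] + 1) / (unigrams[prev_event] + len(vocab))

p = event_probability("order_food", "eat")
print(f"P(eat | order_food) = {p:.3f}, surprisal = {-math.log2(p):.2f} bits")
```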

