Sound change in the individual: Effects of exposure on cross-dialect speech processing

Author(s):  
Cynthia G. Clopper

Abstract: Speech perception is highly robust to variation, but familiarity with a particular source of variation can nevertheless lead to significant processing benefits. In the domain of cross-dialect speech perception, familiar local and standard varieties have been shown to facilitate lexical and semantic processing relative to unfamiliar dialects. However, more recent research suggests that individuals with exposure to both a local non-standard variety and a regional or national standard variety exhibit a mix of lexical processing costs and benefits, indicating that familiarity with multiple linguistic systems can result both in independent processing benefits for each variety and in competition among variable multi-dialect representations. In an exemplar model of language processing, this complex pattern of results suggests several loci for sound change within an individual language user. Although the processing benefits associated with the local variety may contribute to long-term maintenance of variation through divergence from a regional or national standard, the processing benefits associated with the standard may contribute to dialect leveling and convergence towards the standard. Competition between these forces for maintenance and leveling will be observed most strongly in individuals with extensive exposure to both a non-standard local variety and a regional or national standard.

2001
Vol 13 (6)
pp. 829-843
Author(s):
A. L. Roskies
J. A. Fiez
D. A. Balota
M. E. Raichle
S. E. Petersen

To distinguish areas involved in the processing of word meaning (semantics) from other regions involved in lexical processing more generally, subjects were scanned with positron emission tomography (PET) while performing lexical tasks, three of which required varying degrees of semantic analysis and one that required phonological analysis. Three closely apposed regions in the left inferior frontal cortex and one in the right cerebellum were significantly active above baseline in the semantic tasks, but not in the nonsemantic task. The activity in two of the frontal regions was modulated by the difficulty of the semantic judgment. Other regions, including some in the left temporal cortex and the cerebellum, were active across all four language tasks. Thus, in addition to a number of regions known to be active during language processing, regions in the left inferior frontal cortex were specifically recruited during semantic processing in a task-dependent manner. A region in the right cerebellum may be functionally related to those in the left inferior frontal cortex. Discussion focuses on the implications of these results for current views regarding neural substrates of semantic processing.


2020
doi: 10.1212/CPJ.0000000000001006
Author(s):
Marta Pinto-Grau
Bronagh Donohoe
Sarah O’Connor
Lisa Murphy
Emmet Costello
...  

Abstract: Objective: To investigate the incidence and nature of language change and its relationship to executive dysfunction in a population-based incident ALS sample, with the hypothesis that patterns of frontotemporal involvement in early ALS extend beyond areas of executive control to regions associated with language processing. Methods: One hundred and seventeen population-based incident ALS cases without dementia and 100 controls matched by age, sex and education were included in the study. A detailed assessment of language processing including lexical processing, word spelling, word reading, word naming, semantic processing and syntactic/grammatical processing was undertaken. Executive domains of phonemic verbal fluency, working memory, problem-solving, cognitive flexibility and social cognition were also evaluated. Results: Language processing was impaired in this incident cohort of individuals with ALS, with deficits in the domains of word naming, orthographic processing and syntactic/grammatical processing. Conversely, phonological lexical processing and semantic processing were spared. While executive dysfunction accounted in part for impairments in grammatical and orthographic lexical processing, word spelling, reading and naming, primary language deficits were also present. Conclusions: Language impairment is characteristic of ALS at early stages of the disease, and can develop independently of executive dysfunction, reflecting selective patterns of frontotemporal involvement at disease onset. Language change is therefore an important component of the frontotemporal syndrome associated with ALS.


2021
Vol 12
Author(s):
Elif Canseza Kaplan
Anita E. Wagner
Paolo Toffanin
Deniz Başkent

Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (sentence recall task), which reveals a post-task response, and online measures of real time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words’ images as the level of speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, while the time course dynamics did differ between groups as the two-talker masker was added to the target signal. As the level of two-talker masker increased, musicians showed reduced lexical competition as indicated by the gaze fixations to the competitor. The pupil dilation data showed differences mainly at one target-to-masker ratio. This does not allow us to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increased. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or sound processing.


2020
Author(s):
Michael P. Broderick
Nathaniel J. Zuk
Andrew J. Anderson
Edmund C. Lalor

Abstract: Speech comprehension relies on the ability to understand the meaning of words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word’s semantic dissimilarity to preceding words. While the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for a narrative-level understanding on the part of the listener. To test this possibility, we recorded EEG from subjects who listened to speech presented in either its original, narrative form, or after scrambling the word order by varying amounts. This manipulation affected the ability of subjects to comprehend the narrative content of the speech, but not the ability to recognize the individual words. Neural indices of semantic understanding and low-level acoustic processing were derived for each scrambling condition using the temporal response function (TRF) approach. Signatures of semantic processing were observed for conditions where speech was unscrambled or minimally scrambled and subjects were able to understand the speech. The same markers were absent for higher levels of scrambling when speech comprehension dropped below chance. In contrast, word recognition remained high and neural measures related to envelope tracking did not vary significantly across the different scrambling conditions. This supports the previous claim that electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener’s understanding of those words relative to that context. It also highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
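As a rough illustration of the semantic-dissimilarity regressor described above, the Python sketch below computes, for each word, one minus the cosine similarity between its embedding and the average embedding of all preceding words, and places those values at word onsets so the resulting vector can be regressed against EEG with a TRF. It is a minimal sketch under stated assumptions, not the authors' code: the embedding matrix, onset times, sampling rate, and function names are illustrative.

```python
import numpy as np

def semantic_dissimilarity(embeddings):
    """For each word, 1 minus the cosine similarity between its embedding and
    the average embedding of all preceding words in the narrative.
    `embeddings` is an (n_words, dim) array of word vectors (illustrative)."""
    dissim = np.zeros(len(embeddings))
    for i in range(1, len(embeddings)):
        context = embeddings[:i].mean(axis=0)
        cos = np.dot(embeddings[i], context) / (
            np.linalg.norm(embeddings[i]) * np.linalg.norm(context) + 1e-12)
        dissim[i] = 1.0 - cos
    return dissim

def impulse_regressor(dissim, onsets_s, fs, n_samples):
    """Place each word's dissimilarity at its onset sample, giving a sparse
    stimulus vector that can be regressed against the EEG with a TRF."""
    x = np.zeros(n_samples)
    idx = np.round(np.asarray(onsets_s) * fs).astype(int)
    x[idx] = dissim
    return x
```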


2020
Vol 27 (6)
pp. 1317-1324
Author(s):
Ya-Ning Chang
Chia-Ying Lee

Abstract: Across languages, age of acquisition (AoA) is a critical psycholinguistic factor in lexical processing, reflecting the influence of learning experience. Early-acquired words tend to be processed more quickly and accurately than late-acquired words. Recently, an integrated view proposed that both the mappings between representations and the construction of semantic representations contribute to AoA effects, thus predicting larger AoA effects for words with arbitrary mappings between representations as well as for tasks requiring greater semantic processing. We investigated how these predictions generalize to the Chinese language system, which differs from alphabetic languages in the ease of its mappings and in the involvement of semantics in lexical processing. A cross-task investigation of differential psycholinguistic effects was conducted with large character naming and lexical decision datasets to establish the extent to which semantics is involved in the two tasks. We focused on examining the effect sizes of lexical-semantic variables and AoA, and the interaction between AoA and consistency. The results demonstrated that semantics influenced Chinese character naming more than lexical decision, in contrast with findings for English; critically, however, AoA effects were more pronounced for character naming than for lexical decision. Additionally, an interaction between AoA and consistency was found in character naming. Our findings provide cross-linguistic evidence supporting the view of multiple origins of AoA effects in the language-processing system.
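To make the cross-task comparison concrete, here is a hedged sketch of the kind of item-level regression that could test an AoA-by-consistency interaction; the file name, column names, and control predictors are hypothetical placeholders, not the authors' analysis pipeline.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: one row per character, with mean naming latency (rt),
# age of acquisition (aoa), consistency, and lexical-semantic control predictors.
items = pd.read_csv("character_naming_items.csv")

# AoA x consistency interaction alongside controls; fitting the same model to
# lexical-decision latencies would allow the cross-task comparison of effect sizes.
model = smf.ols("rt ~ aoa * consistency + frequency + imageability", data=items).fit()
print(model.summary())
```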


2017
Author(s):
Sabrina Jaeger
Simone Fulle
Samo Turk

Inspired by natural language processing techniques, we here introduce Mol2vec, an unsupervised machine learning approach to learn vector representations of molecular substructures. Similarly to Word2vec models, where vectors of closely related words are in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing up the vectors of the individual substructures and, for instance, fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can be easily combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment-independent and can thus also be easily used for proteins with low sequence similarities.
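The core idea, treating Morgan substructure identifiers as "words" and training a Word2vec model over a corpus of compounds, can be sketched roughly as follows. This is a simplified illustration rather than the released Mol2vec implementation: the released model builds ordered, atom-wise sentences and is trained on all available chemical matter, whereas here a plain set of fingerprint identifiers and a toy corpus stand in.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from gensim.models import Word2Vec
import numpy as np

def mol_to_sentence(smiles, radius=1):
    """Illustrative 'sentence' for a molecule: the Morgan substructure
    identifiers (as strings) recorded in the fingerprint's bitInfo."""
    mol = Chem.MolFromSmiles(smiles)
    info = {}
    AllChem.GetMorganFingerprint(mol, radius, bitInfo=info)
    return [str(identifier) for identifier in info]

# Toy 'corpus of compounds'; the actual model is trained on all available chemical matter.
corpus = [mol_to_sentence(s) for s in ["CCO", "CCN", "c1ccccc1", "CC(=O)O"]]
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

def compound_vector(smiles, model):
    """Encode a compound by summing the vectors of its substructures."""
    words = [w for w in mol_to_sentence(smiles) if w in model.wv]
    return np.sum([model.wv[w] for w in words], axis=0)
```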


2019
Author(s):
Lílian Rodrigues de Almeida
Paul A. Pope
Peter Hansen

In our previous studies we supported the claim that the motor theory is modulated by task load. Motoric participation in phonological processing increases from speech perception to speech production, with the endpoints of the dorsal stream having changing and complementary weightings for processing: the left inferior frontal gyrus (LIFG) being increasingly relevant and the left superior temporal gyrus (LSTG) being decreasingly relevant. Our previous results for neurostimulation of the LIFG support this model. In this study we investigated whether our claim that the motor theory is modulated by task load holds in (frontal) aphasia. Persons with aphasia (PWA) after stroke typically have damage to brain areas responsible for phonological processing. They may present variable patterns of recovery and, consequently, variable strategies of phonological processing. Here these strategies were investigated in two PWA with simultaneous fMRI and tDCS of the LIFG during speech perception and speech production tasks. Anodal tDCS excitation and cathodal tDCS inhibition should increase with the relevance of the target for the task. Cathodal tDCS over a target of low relevance could also induce compensation by the remaining nodes. Responses of PWA to tDCS would further depend on their pattern of recovery and on the responsiveness of the perilesional area, and could be weaker than in controls due to an overall hypoactivation of the cortex. Results suggest that the analysis of motor codes for articulation during phonological processing is preserved in frontal aphasia and that tDCS is a promising diagnostic tool to investigate the individual processing strategies.


2021
pp. 1-13
Author(s):
Lamiae Benhayoun
Daniel Lang

BACKGROUND: The renewed advent of Artificial Intelligence (AI) is inducing profound changes in the classic categories of technology professions and is creating the need for new specific skills. OBJECTIVE: Identify the gaps in terms of skills between academic training on AI in French engineering and Business Schools, and the requirements of the labour market. METHOD: Extraction of AI training contents from the schools’ websites and scraping of a job-advertisement website. Then, analysis based on a text-mining approach using Python code for Natural Language Processing. RESULTS: Categorization of occupations related to AI. Characterization of three classes of skills for the AI market: Technical, Soft and Interdisciplinary. Skills gaps concern some professional certifications and the mastery of specific tools, research abilities, and awareness of ethical and regulatory dimensions of AI. CONCLUSIONS: A deep analysis using algorithms for Natural Language Processing. Results that provide a better understanding of the AI capability components at the individual and the organizational levels. A study that can help shape educational programs to respond to the AI market requirements.
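As a loose sketch of the kind of text-mining comparison described in the Method, the snippet below contrasts how often a small set of skill terms appears in curriculum descriptions versus job advertisements; the corpora and the skill vocabulary are placeholders and are not drawn from the study.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpora; in the study these were scraped from school websites and a job-ads site.
curricula = [
    "machine learning deep learning statistics python project work",
    "neural networks optimisation data engineering python research methods",
]
job_ads = [
    "python nlp mlops docker cloud certification communication skills",
    "machine learning deployment docker ethics gdpr teamwork",
]

# Placeholder skill vocabulary mixing technical, soft and interdisciplinary terms.
vocab = ["python", "machine learning", "deep learning", "docker", "mlops",
         "ethics", "gdpr", "certification", "communication", "research"]
vec = CountVectorizer(vocabulary=vocab, ngram_range=(1, 2))

def coverage(docs):
    """Share of documents in which each skill term appears at least once."""
    counts = vec.fit_transform(docs).toarray() > 0
    return counts.mean(axis=0)

# Positive gap = skill demanded in job ads more often than it appears in curricula.
gap = coverage(job_ads) - coverage(curricula)
for term, g in sorted(zip(vocab, gap), key=lambda t: -t[1]):
    print(f"{term:16s} gap={g:+.2f}")
```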


2021
Vol 11 (3)
pp. 359
Author(s):
Katharina Hogrefe
Georg Goldenberg
Ralf Glindemann
Madleen Klonowski
Wolfram Ziegler

Assessment of semantic processing capacities often relies on verbal tasks which are, however, sensitive to impairments at several language processing levels. Especially for persons with aphasia there is a strong need for a tool that measures semantic processing skills independent of verbal abilities. Furthermore, in order to assess a patient’s potential for using alternative means of communication in cases of severe aphasia, semantic processing should be assessed in different nonverbal conditions. The Nonverbal Semantics Test (NVST) is a tool that captures semantic processing capacities through three tasks—Semantic Sorting, Drawing, and Pantomime. The main aim of the current study was to investigate the relationship between the NVST and measures of standard neurolinguistic assessment. Fifty-one persons with aphasia caused by left hemisphere brain damage were administered the NVST as well as the Aachen Aphasia Test (AAT). A principal component analysis (PCA) was conducted across all AAT and NVST subtests. The analysis resulted in a two-factor model that captured 69% of the variance of the original data, with all linguistic tasks loading high on one factor and the NVST subtests loading high on the other. These findings suggest that nonverbal tasks assessing semantic processing capacities should be administered alongside standard neurolinguistic aphasia tests.
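For readers unfamiliar with the analysis, the sketch below shows a two-component PCA of the sort reported here, run on a placeholder participants-by-subtests score matrix; the random data, the number of subtests, and the lack of factor rotation are simplifying assumptions, not details of the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder score matrix: rows are participants, columns are subtest scores
# (e.g., AAT linguistic subtests followed by the three NVST subtests).
rng = np.random.default_rng(0)
scores = rng.normal(size=(51, 8))          # 51 participants, 8 subtests (illustrative)

z = StandardScaler().fit_transform(scores) # standardize each subtest
pca = PCA(n_components=2).fit(z)           # two-factor solution

print("variance captured:", pca.explained_variance_ratio_.sum())
print("loadings (2 components x 8 subtests):")
print(np.round(pca.components_, 2))        # which subtests load on which component
```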


Author(s):  
Emme O’Rourke
Emily L. Coderre

Abstract: While many individuals with autism spectrum disorder (ASD) experience difficulties with language processing, non-linguistic semantic processing may be intact. We examined neural responses to an implicit semantic priming task by comparing N400 responses—an event-related potential related to semantic processing—in response to semantically related or unrelated pairs of words or pictures. Adults with ASD showed larger N400 responses than typically developing adults for pictures, but no group differences occurred for words. However, we also observed complex modulations of N400 amplitude by age and by level of autistic traits. These results offer important implications for how groups are delineated and compared in autism research.

