The neurobiology of syntax: beyond string sets

2012 ◽  
Vol 367 (1598) ◽  
pp. 1971-1983 ◽  
Author(s):  
Karl Magnus Petersson ◽  
Peter Hagoort

Understanding the human capacity to acquire language is an outstanding scientific challenge. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and that its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results with theoretical models and empirical studies of natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
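To make the artificial grammar learning paradigm concrete, the sketch below generates grammatical training strings from a toy Reber-style finite-state (regular) grammar, the kind of string set that sits at the lowest level of the Chomsky hierarchy discussed above. The transition table is purely illustrative and is not the grammar used in the authors' AGL studies.

```python
# A minimal AGL stimulus sketch, assuming an illustrative Reber-style
# finite-state grammar (not the grammar used in the authors' studies).
import random

# Hypothetical transitions: state -> list of (emitted symbol, next state);
# a None symbol marks an accepting exit from the grammar.
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("V", 1), ("X", 3)],
    2: [("X", 2), ("R", 3)],
    3: [("M", 3), (None, None)],
}

def generate_string(max_len=12):
    """Walk the finite-state grammar, emitting symbols until an exit is
    taken or max_len is reached."""
    state, symbols = 0, []
    while len(symbols) < max_len:
        symbol, next_state = random.choice(GRAMMAR[state])
        if symbol is None:  # accepting exit reached
            break
        symbols.append(symbol)
        state = next_state
    return "".join(symbols)

# Grammatical items for a learning phase; ungrammatical test foils would be
# derived separately, e.g. by swapping two symbols in a grammatical string.
print([generate_string() for _ in range(5)])
```

Strings generated this way form a regular string set; in AGL experiments, test items are typically constructed so that judgments cannot be based on familiar surface fragments alone.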

2020 ◽  
Author(s):  
Beata Grzyb ◽  
Gabriella Vigliocco

Language has predominantly been studied as a unimodal phenomenon, as speech or text, without much consideration of its physical and social context; this is true both in cognitive psychology/psycholinguistics and in artificial intelligence. However, in everyday life, language is most often used in face-to-face communication, and in addition to structured speech it comprises a dynamic system of multiplex components such as gestures, eye gaze, mouth movements and prosodic modulation. Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and artificial embodied agents. We review here the existing literature on multimodal language learning and processing in humans and the literature on the perception of artificial agents and their comprehension and production of multimodal cues, and we discuss the main limitations of both. We conclude by arguing that, by joining forces with cognitive scientists, AI scientists can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.


Author(s):  
José-Antonio Cervantes ◽  
Luis-Felipe Rodríguez ◽  
Sonia López ◽  
Félix Ramos ◽  
Francisco Robles

There are a great variety of theoretical models of cognition whose main purpose is to explain the inner workings of the human brain. Researchers from areas such as neuroscience, psychology, and physiology have proposed these models. Nevertheless, most of these models are based on empirical studies and on experiments with humans, primates, and rodents. In fields such as cognitive informatics and artificial intelligence, these cognitive models may be translated into computational implementations and incorporated into the architectures of intelligent autonomous agents (AAs). Thus, the main assumption in this work is that knowledge in those fields can be used as a design approach contributing to the development of intelligent systems capable of displaying very believable and human-like behaviors. Decision-Making (DM) is one of the most investigated and computationally implemented functions. The literature reports several computational models that enable AAs to make decisions that help achieve their personal goals and needs. However, most models disregard crucial aspects of human decision-making such as other agents' needs, ethical values, and social norms. In this paper, the authors present a set of criteria and mechanisms proposed to develop a biologically inspired computational model of Moral Decision-Making (MDM). To make the moral decision-making process believable, the authors propose a cognitive function that determines the importance of each criterion based on the mood and emotional state of the AA; the main objective of the model is to enable AAs to make decisions based on ethical and moral judgment.
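The following sketch illustrates, in simplified form, the idea that the weight given to each decision criterion can depend on the agent's mood; the criteria, the weighting function and the numbers are hypothetical and do not reproduce the authors' MDM model.

```python
# A minimal sketch of mood-modulated criterion weighting for an agent's
# decisions, assuming illustrative criteria and a single scalar "mood" in
# [-1, 1]; this is only the general idea that the importance of each
# criterion depends on the agent's affective state, not the authors' model.
from dataclasses import dataclass

CRITERIA = ["own_goal", "others_needs", "social_norms", "ethical_values"]

@dataclass
class Action:
    name: str
    scores: dict  # criterion -> how well the action satisfies it, in [0, 1]

def criterion_weights(mood: float) -> dict:
    """Hypothetical weighting: a more positive mood shifts weight from the
    agent's own goal toward other-regarding and normative criteria."""
    other_bias = 0.5 + 0.5 * mood  # maps mood in [-1, 1] to [0, 1]
    weights = {
        "own_goal": 1.0 - 0.8 * other_bias,
        "others_needs": 0.2 + 0.8 * other_bias,
        "social_norms": 0.3 + 0.4 * other_bias,
        "ethical_values": 0.5,
    }
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def choose(actions, mood):
    """Pick the action with the highest mood-weighted sum of criterion scores."""
    w = criterion_weights(mood)
    return max(actions, key=lambda a: sum(w[c] * a.scores[c] for c in CRITERIA))

# Usage: the same candidate actions are ranked differently under a negative
# versus a positive mood.
acts = [
    Action("keep_resource", {"own_goal": 0.95, "others_needs": 0.05,
                             "social_norms": 0.5, "ethical_values": 0.4}),
    Action("share_resource", {"own_goal": 0.3, "others_needs": 0.9,
                              "social_norms": 0.7, "ethical_values": 0.8}),
]
print(choose(acts, mood=-0.8).name)  # keep_resource
print(choose(acts, mood=0.8).name)   # share_resource
```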


2019 ◽  
Vol 22 (04) ◽  
pp. 655-656
Author(s):  
JUBIN ABUTALEBI ◽  
HARALD CLAHSEN

The cognitive architecture of human language processing has been studied for decades, but using computational modeling for such studies is a relatively recent topic. Indeed, computational approaches to language processing have become increasingly popular in our field, mainly due to advances in computational modeling techniques and the availability of large collections of experimental data. Language learning, particularly child language learning, has been the subject of many computational models. By simulating the process of child language learning, computational models may indeed teach us which linguistic representations are learnable from the input that children have access to (and which are not), as well as which mechanisms yield the same patterns of behavior that are found in children's language performance.


2021 ◽  
pp. 274-294
Author(s):  
Beata Grzyb ◽  
Gabriella Vigliocco

Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and embodied agents. We review here the existing literature on multimodal language learning and processing in humans and the literature on the perception of embodied agents and their comprehension and production of multimodal cues, and we discuss the main limitations of both. We conclude by arguing that, by joining forces with cognitive scientists, AI scientists can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.


2019 ◽  
Author(s):  
Matthew A Kelly ◽  
David Reitter

What role does the study of natural language play in the task of developing a unified theory and common model of cognition? Language is perhaps the most complex behaviour that humans exhibit and, as such, poses one of the most difficult problems for understanding human cognition. Linguistic theory can both inform and be informed by unified models of cognition. We discuss (1) how computational models of human cognition can provide insight into how humans produce and comprehend language and (2) how the problem of modelling language processing raises questions and creates challenges for widely used computational models of cognition. Evidence from the literature suggests that behavioural phenomena, such as recency and priming effects, and cognitive constraints, such as working memory limits, affect how language is produced by humans in ways that can be predicted by computational cognitive models. But just as computational models can provide new insights into language, language can serve as a test for these models. For example, simulating language learning requires the use of more powerful machine learning techniques, such as deep learning and vector symbolic architectures, and language comprehension requires a capacity for on-the-fly situational model construction. In sum, language plays an important role in shaping the development of a common model of the mind, and, in turn, the theoretical understanding of language stands to benefit greatly from the development of such a model.
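As one concrete illustration of the vector symbolic architectures mentioned above, the sketch below implements holographic-reduced-representation-style binding by circular convolution and retrieval by circular correlation; the vocabulary and the sentence encoding are illustrative and do not reproduce any particular model from the article.

```python
# A minimal vector symbolic architecture sketch in the holographic reduced
# representation (HRR) style: role-filler binding by circular convolution,
# approximate unbinding by circular correlation, and cleanup against a
# known vocabulary. The symbols and the encoded sentence are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # dimensionality of the distributed vectors

def vec():
    """Random unit-length vector standing in for a symbol."""
    v = rng.normal(0.0, 1.0 / np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution binds two vectors into one of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    """Approximate inverse of binding: circular correlation with the cue."""
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Encode "the dog chased the cat" as a sum of role-filler bindings.
AGENT, VERB, PATIENT = vec(), vec(), vec()
DOG, CHASE, CAT = vec(), vec(), vec()
sentence = bind(AGENT, DOG) + bind(VERB, CHASE) + bind(PATIENT, CAT)

# Query the structure: who is the agent? The noisy result is cleaned up by
# comparing against the known vocabulary.
probe = unbind(sentence, AGENT)
vocab = {"dog": DOG, "chase": CHASE, "cat": CAT}
print(max(vocab, key=lambda w: cosine(probe, vocab[w])))  # expected: "dog"
```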


Author(s):  
Alice Meurice ◽  
Fanny Meunier

The chapter discusses the importance of in-service teacher training (INSET) to promote the use of open natural language processing (NLP)-based technologies (NLPTs). The first section briefly outlines the affordances of technology for second language acquisition and emphasizes the potential of open NLPTs. The second section presents the overall INSET design used in the TELL-OP ERASMUS+ project led by a team of researchers from several European universities. The third section provides a quantitative and qualitative analysis of the questionnaires and feedback data from Belgian French-speaking teachers (n = 86) on the TELL-OP online training module. A SWOT analysis (strengths, weaknesses, opportunities, and threats) is used to complement the teachers' feedback. In the fourth section, the authors put the course design into perspective using several theoretical models on the use of technology and open access resources in education, and provide suggestions for improving future similar INSET initiatives.


2008 ◽  
Vol 41 (2) ◽  
pp. 253-271 ◽  
Author(s):  
Alison Wray

A new education policy for England, announced in Spring 2007, aims to introduce the learning of a foreign language to all children from the age of 8 by the year 2010. But there was a similar initiative in the 1960s and it didn't work then, so why should it now? This presentation explores the reasons underlying the belief that children can ‘naturally’ learn another language if they begin young enough, and considers reasons why classroom learning may not always tap into whatever natural language learning skills children have. Drawing on a range of previously published research and my own recent empirical studies, I suggest that, unless we are careful, our primary-age children will be flung into an adult-style learning approach, which they are too immature to handle. How, then, can young children's learning potential be exploited most effectively? The role of multiword sequences as a form of lexis is considered, making reference to my model of formulaic language processing (e.g. Wray 2002). Consideration is given to how memorising useful wordstrings may assist children in developing a view of language knowledge that promotes effective learning.


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing. Eye-tracking data have previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word predictions, past studies have usually preferred to use a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a large, natural and coherent discourse as the stimulus for collecting reading-time data. The study trains two state-of-the-art computational measures (surprisal, and semantic (dis)similarity computed from word vectors obtained by linear discriminative learning (LDL)), capturing knowledge of both the syntagmatic and the paradigmatic structure of language. We develop a 'dynamic approach' to computing semantic (dis)similarity; this is the first time that these two computational measures have been merged. The models are evaluated using advanced statistical methods. In addition, to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is used as a comparison with our 'dynamic' approach. The two computational models and the fixed-effects statistical models can be used to cross-verify the findings, thus ensuring that the results are reliable. All results support the conclusion that surprisal and semantic similarity are opposed in their prediction of word reading times, although both make good predictions. Additionally, our 'dynamic' approach performs better than the popular cosine method. The findings of this study are therefore of significance for acquiring a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
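The sketch below shows, with invented toy data, how the two kinds of predictors used in the study, per-word surprisal and a context-based semantic (dis)similarity computed from word vectors, can be entered jointly into a linear model of reading times; the probabilities, vectors and reading times are hypothetical, and the paper's LDL vectors and 'dynamic' measure are not reproduced here.

```python
# A minimal sketch, with toy data, of combining per-word surprisal
# (syntagmatic knowledge) with a cosine-based semantic (dis)similarity
# between a word and its preceding context (paradigmatic knowledge) in a
# linear model of reading times. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

words = ["the", "pianist", "played", "a", "sonata"]
# Hypothetical conditional probabilities P(w_i | context) from some language model.
probs = np.array([0.60, 0.02, 0.15, 0.40, 0.05])
surprisal = -np.log2(probs)  # in bits; higher = less expected

# Hypothetical 50-dimensional word vectors (stand-ins for LDL vectors).
vecs = rng.normal(size=(len(words), 50))

def cosine_dissimilarity(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Dissimilarity of each word to the running average of the preceding words,
# a crude stand-in for a context-sensitive, "dynamic" measure.
dissim = np.zeros(len(words))
for i in range(1, len(words)):
    context = vecs[:i].mean(axis=0)
    dissim[i] = cosine_dissimilarity(vecs[i], context)

# Hypothetical per-word reading times in milliseconds.
rt = np.array([210.0, 395.0, 310.0, 205.0, 360.0])

# Ordinary least squares with an intercept: rt ~ surprisal + dissimilarity.
X = np.column_stack([np.ones(len(words)), surprisal, dissim])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(dict(zip(["intercept", "surprisal", "dissimilarity"], coef.round(2))))
```

In the actual study the regression would be fit over a full discourse with appropriate statistical controls; this toy version only makes the shape of the two predictors and their joint entry into one model explicit.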


Author(s):  
Kelly C. Allison ◽  
Jennifer D. Lundgren

The Diagnostic and Statistical Manual of Mental Disorders, fifth edition, of the American Psychiatric Association (2013) has designated several disorders under the diagnosis of other specified feeding or eating disorder (OSFED). This chapter evaluates three of these: night eating syndrome (NES), purging disorder (PD), and atypical anorexia nervosa (atypical AN). It also reviews orthorexia nervosa, which has been discussed in the clinical realm as well as in the popular press. The history and definition of each is reviewed, relevant theoretical models are presented and compared, and evidence for the usefulness of the models is described. Empirical studies examining the disorders' independence from other disorders, comorbid psychopathology, and, when available, medical comorbidities are discussed. Distress and impairment in functioning seem comparable between at least three of these emerging disorders and threshold eating disorders. Finally, remaining questions for future research are summarized.


Author(s):  
Jonathan E. Peelle

Language processing in older adulthood is a model of balance between preservation and decline. Despite widespread changes to physiological mechanisms supporting perception and cognition, older adults’ language abilities are frequently well preserved. At the same time, the neural systems engaged to achieve this high level of success change, and individual differences in neural organization appear to differentiate between more and less successful performers. This chapter reviews anatomical and cognitive changes that occur in aging and popular frameworks for age-related changes in brain function, followed by an examination of how these principles play out in the context of language comprehension and production.

