Computational approaches to word retrieval in bilinguals

2019
Vol 22 (04)
pp. 655-656
Author(s): Jubin Abutalebi, Harald Clahsen

The cognitive architecture of human language processing has been studied for decades, but the use of computational modeling in such studies is relatively recent. Indeed, computational approaches to language processing have become increasingly popular in our field, mainly due to advances in computational modeling techniques and the availability of large collections of experimental data. Language learning, particularly child language learning, has been the subject of many computational models. By simulating the process of child language learning, computational models may teach us which linguistic representations are learnable from the input that children have access to (and which are not), as well as which mechanisms yield the same patterns of behavior that are found in children's language performance.

2020
Author(s): Beata Grzyb, Gabriella Vigliocco

Language has predominantly been studied as a unimodal phenomenon (as speech or text, without much consideration of its physical and social context), both in cognitive psychology/psycholinguistics and in artificial intelligence. In everyday life, however, language is most often used in face-to-face communication, where, in addition to structured speech, it comprises a dynamic system of multiplex components such as gestures, eye gaze, mouth movements and prosodic modulation. Recently, cognitive scientists have started to realise the potential importance of multimodality for understanding human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and artificial embodied agents. We review the existing literature on multimodal language learning and processing in humans, as well as the literature on how artificial agents are perceived and how they comprehend and produce multimodal cues, and we discuss the main limitations of this work. We conclude by arguing that, by joining forces with cognitive scientists, AI researchers can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the mechanisms that govern language processing and learning in humans.


2017
Vol 7 (1)
pp. 47-60
Author(s): Kees de Bot, Fang Fang

Human behavior is not constant over the hours of the day, and there are considerable individual differences. Some people rise early, go to bed early, and have their peak performance early in the day ("larks"), while others tend to go to bed late, get up late, and perform best later in the day ("owls"). In this contribution we report on three projects on the role of chronotype (CT) in language processing and learning. The first study (de Bot, 2013) reports on the impact of CT on language learning aptitude and word learning. The second project, reported in Fang (2015), looks at CT and executive functions, in particular inhibition as measured by variants of the Stroop test. The third project aimed at assessing lexical access in the L1 and L2 at preferred and non-preferred times of the day. The data suggest that there are effects of CT on language learning and processing: a small effect of CT on language aptitude and a stronger effect of CT on lexical access in the first and second language. The lack of significance for other tasks is mainly caused by the large interindividual and intraindividual variation.


Author(s): Giulia Bovolenta, Emma Marsden

Abstract There is currently much interest in the role of prediction in language processing, in both the L1 and the L2. For language acquisition researchers, this has prompted debate on the role, if any, that predictive processing may play in L1 and L2 learning. In this conceptual review, we explore the role of prediction and prediction error as a potential learning aid. We examine different proposed prediction mechanisms and the empirical evidence for them, alongside the factors constraining prediction for both L1 and L2 speakers. We then review the evidence on the role of prediction in learning languages. We report computational modeling that underpins a number of proposals on the role of prediction in L1 and L2 learning, then lay out the empirical evidence supporting the predictions made by this modeling, drawn from research into priming and adaptation. Finally, we point out the limitations of these mechanisms in both L1 and L2 speakers.
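The abstract above treats prediction error as a potential learning signal. As a minimal, hypothetical sketch (not the authors' model), error-driven learning is often formalised as a delta rule, in which the associative strength of each cue present on a trial is nudged in proportion to the gap between the predicted and the actual outcome:

```python
def delta_rule_update(weights, cues, outcome, rate=0.1):
    """One error-driven learning step (Rescorla-Wagner / delta rule).

    weights: dict mapping cue -> associative strength
    cues:    set of cues present on this trial
    outcome: 1.0 if the outcome occurred, else 0.0
    Returns the prediction error for this trial.
    """
    prediction = sum(weights.get(c, 0.0) for c in cues)
    error = outcome - prediction  # prediction error drives all change
    for c in cues:
        weights[c] = weights.get(c, 0.0) + rate * error
    return error


# Repeated exposure drives the prediction toward the outcome,
# so the prediction error shrinks over trials.
weights = {}
errors = [delta_rule_update(weights, {"cue"}, 1.0) for _ in range(50)]
```

Under this sketch, learning is largest precisely when the prediction is most wrong, which is one way to make the review's point that prediction error can act as a learning aid.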


2021
pp. 274-294
Author(s): Beata Grzyb, Gabriella Vigliocco

Recently, cognitive scientists have started to realise the potential importance of multimodality for understanding human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and embodied agents. We review the existing literature on multimodal language learning and processing in humans, as well as the literature on how embodied agents are perceived and how they comprehend and produce multimodal cues, and we discuss the main limitations of this work. We conclude by arguing that, by joining forces with cognitive scientists, AI researchers can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the mechanisms that govern language processing and learning in humans.


2019
Author(s): Matthew A. Kelly, David Reitter

What role does the study of natural language play in the task of developing a unified theory and common model of cognition? Language is perhaps the most complex behaviour that humans exhibit, and, as such, is one of the most difficult problems for understanding human cognition. Linguistic theory can both inform and be informed by unified models of cognition. We discuss (1) how computational models of human cognition can provide insight into how humans produce and comprehend language and (2) how the problem of modelling language processing raises questions and creates challenges for widely used computational models of cognition. Evidence from the literature suggests that behavioural phenomena, such as recency and priming effects, and cognitive constraints, such as working memory limits, affect how language is produced by humans in ways that can be predicted by computational cognitive models. But just as computational models can provide new insights into language, language can serve as a test for these models. For example, simulating language learning requires the use of more powerful machine learning techniques, such as deep learning and vector symbolic architectures, and language comprehension requires a capacity for on-the-fly situational model construction. In sum, language plays an important role in both shaping the development of a common model of the mind, and, in turn, the theoretical understanding of language stands to benefit greatly from the development of a common model.
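The abstract above names vector symbolic architectures as one of the more powerful techniques that language learning demands of cognitive models. As a hedged illustration (not the authors' implementation), holographic reduced representations, one family of such architectures, bind a role vector to a filler vector by circular convolution and recover a noisy copy of the filler with the role's approximate inverse:

```python
import numpy as np


def bind(a, b):
    """Bind two vectors via circular convolution (computed with the FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))


def inverse(a):
    """Approximate inverse for unbinding: reverse all but the first element."""
    return np.concatenate(([a[0]], a[1:][::-1]))


rng = np.random.default_rng(0)
dim = 2048
# High-dimensional random vectors standing in for symbols
# (hypothetical names for illustration only).
role = rng.normal(0, 1 / np.sqrt(dim), dim)    # e.g. AGENT
filler = rng.normal(0, 1 / np.sqrt(dim), dim)  # e.g. "dog"
other = rng.normal(0, 1 / np.sqrt(dim), dim)   # an unrelated symbol

bound = bind(role, filler)                 # one fixed-size vector holds the pair
recovered = bind(bound, inverse(role))     # noisy copy of the filler

# The recovered vector is far more similar to the original filler
# than to an unrelated vector.
sim_filler = float(np.dot(recovered, filler))
sim_other = float(np.dot(recovered, other))
```

The design point is that the bound representation has the same dimensionality as its parts, so arbitrarily structured expressions fit in a fixed-size vector, which is what makes the approach attractive for cognitive architectures.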


Author(s): Emmanuel Keuleers

Computational psycholinguistics has a long history of investigation and modeling of morphological phenomena. Several computational models have been developed to deal with the processing and production of morphologically complex forms and with the relation between linguistic morphology and psychological word representations. Historically, most of this work has focused on modeling the production of inflected word forms, leading to the development of models based on connectionist principles and other data-driven models such as Memory-Based Language Processing (MBLP), Analogical Modeling of Language (AM), and Minimal Generalization Learning (MGL). In the context of inflectional morphology, these computational approaches have played an important role in the debate between single and dual mechanism theories of cognition. Taking a different angle, computational models based on distributional semantics have been proposed to account for several phenomena in morphological processing and composition. Finally, although several computational models of reading have been developed in psycholinguistics, none of them have satisfactorily addressed the recognition and reading aloud of morphologically complex forms.


Author(s): Alex Kirlik, Michael D. Byrne

This chapter provides an introduction to and overview of both foundational and contemporary research using computational modeling to aid in the scientific understanding of human expertise. The authors note the distinction between computational models constructed within some molar or unified cognitive architecture and models that are more domain- or task-specific in their psychological assumptions, and present numerous examples of each type. The authors also provide their assessment of this body of research, highlighting extensive analysis, and even expert-level knowledge, of both the tasks and the environments in which expert behavior is manifest as a key requirement for successfully modeling high levels of skill or expert performance. Finally, the authors provide their thoughts about promising future directions for research using computational modeling, together with other emerging methodological techniques such as neuroimaging, to provide a comprehensive approach to advance the scientific understanding of human expertise.


2012
Vol 367 (1598)
pp. 1971-1983
Author(s): Karl Magnus Petersson, Peter Hagoort

Understanding the human capacity to acquire language is an outstanding scientific challenge. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and that its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results with theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
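The abstract above reviews the Chomsky hierarchy in the context of AGL. As a small, hypothetical illustration (not taken from the article), a classic AGL contrast is the context-free language aⁿbⁿ, which can be recognised with a single counter but by no finite-state machine, since a finite-state recogniser cannot track an unbounded count of a's:

```python
def accepts_anbn(s):
    """Recognise a^n b^n (n >= 1) using one counter, a pushdown-style check.

    A finite-state machine cannot do this: it would need a distinct state
    for every possible count of a's, and it has only finitely many states.
    """
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False  # an 'a' after a 'b' breaks the a...b order
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False  # more b's than a's
        else:
            return False      # symbol outside the alphabet
    return seen_b and count == 0


result = accepts_anbn("aaabbb")  # True: three a's followed by three b's
```

Stimuli of this kind are what AGL and ALL paradigms use to probe whether the brain's structured sequence processing goes beyond finite-state power.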

