How language processing can shape a common model of cognition

2019
Author(s): Matthew A Kelly, David Reitter

What role does the study of natural language play in the task of developing a unified theory and common model of cognition? Language is perhaps the most complex behaviour that humans exhibit and, as such, is one of the most difficult problems for understanding human cognition. Linguistic theory can both inform and be informed by unified models of cognition. We discuss (1) how computational models of human cognition can provide insight into how humans produce and comprehend language and (2) how the problem of modelling language processing raises questions and creates challenges for widely used computational models of cognition. Evidence from the literature suggests that behavioural phenomena, such as recency and priming effects, and cognitive constraints, such as working memory limits, affect how language is produced by humans in ways that can be predicted by computational cognitive models. But just as computational models can provide new insights into language, language can serve as a test for these models. For example, simulating language learning requires the use of more powerful machine learning techniques, such as deep learning and vector symbolic architectures, and language comprehension requires a capacity for on-the-fly construction of situational models. In sum, language plays an important role in shaping the development of a common model of the mind, and, in turn, the theoretical understanding of language stands to benefit greatly from the development of such a model.
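
The vector symbolic architectures mentioned in the abstract can be made concrete with a small sketch. The following is a minimal example, assuming holographic reduced representations (binding by circular convolution); the dimensionality, the role and filler names, and the example relation are illustrative, not taken from the paper.

```python
import numpy as np

def bind(a, b):
    """Bind two vectors with circular convolution (HRR binding)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(cue, trace):
    """Approximate inverse of bind: circular correlation with the cue."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(cue)) * np.fft.fft(trace)))

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
d = 1024  # illustrative dimensionality
dog, cat, agent, patient = rng.normal(0, 1 / np.sqrt(d), (4, d))

# Compose a structure as a sum of role-filler bindings: bite(dog, cat).
sentence = bind(agent, dog) + bind(patient, cat)

# Unbinding with a role cue retrieves a noisy copy of its filler.
print(cosine(unbind(agent, sentence), dog))  # high: dog was the agent
print(cosine(unbind(agent, sentence), cat))  # near zero: cat was not
```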

2020
Author(s): Joshua Conrad Jackson, Joseph Watts, Johann-Mattis List, Ryan Drabble, Kristen Lindquist

Humans have been using language for thousands of years, but psychologists seldom consider what natural language can tell us about the mind. Here we propose that language offers a unique window into human cognition. After briefly summarizing the legacy of language analyses in psychological science, we show how methodological advances have made these analyses more feasible and insightful than ever before. In particular, we describe how two forms of language analysis—comparative linguistics and natural language processing—are already contributing to how we understand emotion, creativity, and religion, and overcoming methodological obstacles related to statistical power and culturally diverse samples. We summarize resources for learning both of these methods, and highlight the best way to combine language analysis techniques with behavioral paradigms. Applying language analysis to large-scale and cross-cultural datasets promises to provide major breakthroughs in psychological science.
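
To make the natural-language-processing side concrete, here is a minimal sketch of the simplest family of such analyses: dictionary-based word counting over an emotion lexicon, in the spirit of tools like LIWC. The tiny lexicon below is an illustrative placeholder, not a validated instrument.

```python
import re
from collections import Counter

# Toy lexicon; real studies use validated dictionaries (e.g., LIWC, NRC).
LEXICON = {
    "joy": {"happy", "delighted", "love", "wonderful"},
    "anger": {"angry", "furious", "hate", "outraged"},
}

def emotion_profile(text):
    """Rate of emotion-lexicon matches per 1,000 tokens, by category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = Counter()
    for token in tokens:
        for category, entries in LEXICON.items():
            if token in entries:
                hits[category] += 1
    n = max(len(tokens), 1)
    return {category: 1000 * hits[category] / n for category in LEXICON}

print(emotion_profile("I was furious at first, but now I am happy and delighted."))
```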


2020
Author(s): Beata Grzyb, Gabriella Vigliocco

Language has predominantly been studied as a unimodal phenomenon, as speech or text, without much consideration of its physical and social context; this is true both in cognitive psychology/psycholinguistics and in artificial intelligence. However, in everyday life, language is most often used in face-to-face communication, where, in addition to structured speech, it comprises a dynamic system of multiplex components such as gestures, eye gaze, mouth movements and prosodic modulation. Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and artificial embodied agents. We review the existing literature on multimodal language learning and processing in humans, as well as the literature on how artificial agents are perceived and how they comprehend and produce multimodal cues, and we discuss the main limitations of this work. We conclude by arguing that, by joining forces, the two fields can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.
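
One standard way AI systems integrate such multimodal cues is late fusion: per-modality features are extracted and then concatenated before a downstream model consumes them. The sketch below uses synthetic "speech" and "gesture" features and a logistic-regression readout purely for illustration; the point is that a label depending on both modalities is recovered better from the fused representation than from either modality alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
speech = rng.normal(size=(n, 8))   # stand-in prosodic/lexical features
gesture = rng.normal(size=(n, 6))  # stand-in gesture features

# Make the label depend on both modalities, so neither alone suffices.
label = ((speech[:, 0] + gesture[:, 0]) > 0).astype(int)

def held_out_accuracy(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, label, random_state=0)
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

print("speech only :", held_out_accuracy(speech))
print("gesture only:", held_out_accuracy(gesture))
# Late fusion: concatenate per-modality features into one representation.
print("fused       :", held_out_accuracy(np.hstack([speech, gesture])))
```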


Author(s): Ray Jackendoff, Jenny Audring

The Texture of the Lexicon explores three interwoven themes: a morphological theory, the structure of the lexicon, and an integrated account of the language capacity and its place in the mind. These themes together constitute the theory of Relational Morphology (RM), extending the Parallel Architecture of Jackendoff’s groundbreaking Foundations of Language. Part I (chapters 1–3) situates morphology in the architecture of the language faculty, and introduces a novel formalism that unifies the treatment of morphological patterns, from totally productive to highly marginal. Two major points emerge. First, traditional word formation rules and realization rules should be replaced by declarative schemas, formulated in the same terms as words. Hence the grammar should really be thought of as part of the lexicon. Second, the traditional emphasis on productive patterns, to the detriment of nonproductive patterns, is misguided; linguistic theory can and should encompass them both. Part II (chapters 4–6) puts the theory to the test, applying it to a wide range of familiar and less familiar morphological phenomena. Part III (chapters 7–9) connects RM with language processing, language acquisition, and a broad selection of linguistic and nonlinguistic phenomena beyond morphology. The framework is therefore attractive not only for its ability to account insightfully for morphological phenomena, but equally for its contribution to the integration of linguistic theory, psycholinguistics, and human cognition.
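
The claim that schemas are formulated "in the same terms as words" can be sketched as a data structure: a schema is simply a lexical entry whose phonology, syntax, and semantics contain open variables. The rendering below is our illustrative toy, not the book's formalism.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """A lexical item or a schema: one format, schemas contain variables."""
    phonology: str  # 'x' marks an open variable
    syntax: str
    semantics: str

# A stored word and the schema it instantiates share one representation.
dancer = Entry(phonology="dancer", syntax="N", semantics="one who dances")
er_schema = Entry(phonology="x-er", syntax="N", semantics="one who x-es")

def instantiates(word: Entry, schema: Entry) -> bool:
    """Crude check that a word's form matches a schema's affix pattern."""
    affix = schema.phonology.replace("x-", "")  # 'x-er' -> 'er'
    return word.syntax == schema.syntax and word.phonology.endswith(affix)

print(instantiates(dancer, er_schema))  # True: 'dancer' realizes N 'x-er'
```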


2019
Vol 22 (04), pp. 655-656
Author(s): Jubin Abutalebi, Harald Clahsen

The cognitive architecture of human language processing has been studied for decades, but the use of computational modeling in such studies is a relatively recent development. Indeed, computational approaches to language processing have become increasingly popular in our field, mainly due to advances in computational modeling techniques and the availability of large collections of experimental data. Language learning, particularly child language learning, has been the subject of many computational models. By simulating the process of child language learning, computational models may indeed teach us which linguistic representations are learnable from the input that children have access to (and which are not), as well as which mechanisms yield the same patterns of behavior that are found in children's language performance.
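
A classic illustration of such a simulation is a statistical learner that segments a continuous syllable stream at dips in forward transitional probability, in the spirit of Saffran-style artificial-language findings. The miniature lexicon and the 0.5 threshold below are illustrative choices of ours.

```python
import random
from collections import Counter

random.seed(0)
words = ["tupiro", "golabu", "bidaku"]

# Continuous "speech": 300 randomly ordered words, split into syllables.
sequence = random.choices(words, k=300)
stream = [w[i:i + 2] for w in sequence for i in (0, 2, 4)]

# Forward transitional probability P(next syllable | current syllable).
pairs = list(zip(stream, stream[1:]))
pair_counts = Counter(pairs)
context_counts = Counter(a for a, _ in pairs)
tp = {pair: n / context_counts[pair[0]] for pair, n in pair_counts.items()}

# Within-word transitions are ~1.0; between-word transitions are ~1/3.
# Posit a word boundary wherever transitional probability dips below 0.5.
segmented, current = [], stream[0]
for a, b in pairs:
    if tp[(a, b)] < 0.5:
        segmented.append(current)
        current = b
    else:
        current += b

print(set(segmented))  # recovers {'tupiro', 'golabu', 'bidaku'}
```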


2021
pp. 274-294
Author(s): Beata Grzyb, Gabriella Vigliocco

Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and embodied agents. We review the existing literature on multimodal language learning and processing in humans, as well as the literature on how embodied agents are perceived and how they comprehend and produce multimodal cues, and we discuss the main limitations of this work. We conclude by arguing that, by joining forces, the two fields can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.


2002
Vol 24 (2), pp. 287-296
Author(s): Elaine Tarone

Ellis's target article suggests that language processing is based on frequency and probabilistic knowledge and that language learning is implicit. These findings are consistent with those of SLA researchers working within a variationist framework (e.g., Tarone, 1985; Bayley & Preston, 1996). This paper provides a brief overview of this research area, which has developed useful models for dealing with frequency effects in language use, and describes a psycholinguistic model of language variation currently being proposed by Preston (2000a, 2000b) that dovetails nicely with Ellis's proposals. The present commentary addresses the question "To what extent is the learner's interlanguage passively and unconsciously derived from input frequencies?" Ellis does state that factors other than frequency are also important for SLA, specifically conscious noticing and social context. A related factor is the learner's creativity, revealed when learners' noticing leads them to view utterances not just as potential objects of analysis but as potential objects of language play. Noticing results in the selective internalization of language input in interactions with various L2 speakers, and creativity occurs in the learner's consequent production of any of a range of different voices thus internalized for the purposes of expressing social identity and of language play (Tarone, 2000).


2012
Vol 367 (1598), pp. 1971-1983
Author(s): Karl Magnus Petersson, Peter Hagoort

Understanding the human capacity to acquire language is an outstanding scientific challenge. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and that its capacity for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
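
AGL studies of the kind described here typically expose learners to strings generated by a finite-state grammar. A minimal Reber-style generator might look like the following; the transition table is illustrative, not the grammar used in the authors' studies.

```python
import random

random.seed(42)

# A small Reber-style finite-state grammar: state -> [(symbol, next state)].
# 'END' is the accepting state; the transition table is illustrative.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", "END"), ("V", "END")],
}

def generate():
    """Random walk through the grammar, emitting one grammatical string."""
    state, symbols = 0, []
    while state != "END":
        symbol, state = random.choice(GRAMMAR[state])
        symbols.append(symbol)
    return "".join(symbols)

# Grammatical training items for an AGL experiment; ungrammatical foils
# would violate the transition table (e.g., a string starting with 'V').
print([generate() for _ in range(5)])
```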


Author(s): Kim Uittenhove, Patrick Lemaire

In two experiments, we tested the hypothesis that strategy performance on a given trial is influenced by the difficulty of the strategy executed on the immediately preceding trial, an effect that we call the strategy sequential difficulty effect. Participants' task was to provide approximate sums to two-digit addition problems by using cued rounding strategies. Results showed that performance was poorer after a difficult strategy than after an easy strategy. Our results have important theoretical and empirical implications for computational models of strategy choices and for furthering our understanding of strategic variations in arithmetic as well as in human cognition in general.
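
The cued rounding strategies are easy to make concrete. In this literature, rounding both operands down to the nearest decade is typically the easier strategy and rounding both up the harder one; the sequential difficulty effect is then the finding that estimates on a trial are slower or less accurate when the preceding trial required the harder strategy. A minimal sketch of the two estimates (the function names are ours):

```python
import math

def round_down_sum(a, b):
    """Easier strategy: round both operands down to the nearest decade."""
    return 10 * (a // 10) + 10 * (b // 10)

def round_up_sum(a, b):
    """Harder strategy: round both operands up to the nearest decade."""
    return 10 * math.ceil(a / 10) + 10 * math.ceil(b / 10)

# 43 + 78 = 121; rounding down estimates 110, rounding up estimates 130.
print(round_down_sum(43, 78), round_up_sum(43, 78))
```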


2020
Author(s): Kun Sun

Expectations or predictions about upcoming content play an important role in language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Eye-tracking data have been widely used to explore computational and cognitive models of contextual word prediction and word processing, and to investigate the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, the corpora, and the statistical tools they applied. Although various computational models have been proposed for simulating contextual word prediction, past studies have usually relied on a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws on a large, natural, coherent discourse as the stimulus for collecting reading-time data. We train two state-of-the-art computational models, surprisal and semantic (dis)similarity derived from word vectors by linear discriminative learning (LDL), which measure knowledge of the syntagmatic and paradigmatic structure of language, respectively. We develop a 'dynamic approach' to computing semantic (dis)similarity; this is the first time these two computational models have been merged. Models are evaluated using advanced statistical methods. To test the efficiency of our approach, we also compare it with a recently developed cosine method of computing semantic (dis)similarity from word vectors. The two computational models and the fixed-effect statistical models can be used to cross-verify the findings, ensuring that the results are reliable. All results support the conclusion that surprisal and semantic similarity have opposing effects in predicting word reading times, although both are good predictors. Additionally, our 'dynamic' approach performs better than the popular cosine method. The findings of this study are therefore significant for better understanding how humans process words in real-world contexts and how they make predictions in language cognition and processing.
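
The two predictors can be made concrete. Surprisal is the negative log probability of a word given its context, and (dis)similarity is one minus the cosine between word vectors. In the sketch below, a bigram model with add-one smoothing stands in for the authors' language models, and random toy vectors stand in for LDL-derived ones; the corpus is a placeholder.

```python
import math
from collections import Counter

import numpy as np

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Bigram surprisal: -log2 P(w_t | w_{t-1}), with add-one smoothing.
vocab_size = len(set(corpus))
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(prev, word):
    return -math.log2((bigrams[(prev, word)] + 1) / (contexts[prev] + vocab_size))

# Toy random vectors; real studies derive them from LDL or co-occurrence.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=16) for w in set(corpus)}

def dissimilarity(prev, word):
    a, b = vectors[prev], vectors[word]
    return 1 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Per-word predictors that would enter a regression on reading times.
for prev, word in zip(corpus, corpus[1:]):
    print(f"{prev:>4} -> {word:<4} surprisal={surprisal(prev, word):5.2f} "
          f"dissimilarity={dissimilarity(prev, word):5.2f}")
```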

