Top-down information is more important in noisy situations: Exploring the role of pragmatic, semantic, and syntactic information in language processing

2019
Author(s): Fabio Trecca, Kristian Tylén, Riccardo Fusaroli, Christer Johansson, Morten H. Christiansen

Language processing depends on the integration of bottom-up information with top-down cues from several different sources—primarily our knowledge of the real world, of discourse contexts, and of how language works. Previous studies have shown that factors pertaining to both the sender and the receiver of the message affect the relative weighting of such information. Here, we suggest another factor that may change our processing strategies: perceptual noise in the environment. We hypothesize that listeners weight different sources of top-down information more in situations of perceptual noise than in noise-free situations. Using a sentence-picture matching experiment with four forced-choice alternatives, we show that degrading the speech input with noise compels listeners to rely more on top-down information during processing. We discuss our results in light of previous findings in the literature, highlighting the need for a unified model of spoken language comprehension in different ecologically valid situations, including under noisy conditions.

2021
Author(s): Elena Pyatigorskaya, Matteo Maran, Emiliano Zaccarella

Language comprehension proceeds at a very fast pace. It has been argued that context speeds up comprehension by providing informative cues for the correct processing of the incoming linguistic input. Priming studies investigating the role of context in language processing have shown that humans quickly recognise target words that share orthographic, morphological, or semantic information with their preceding primes. How syntactic information influences the processing of incoming words is, however, less well understood. Early syntactic priming studies reported faster recognition for noun and verb targets (e.g., apple or sing) following primes with which they form grammatical phrases or sentences (the apple, he sings). These studies, however, leave open a number of questions about the reported effect, including the degree of automaticity of syntactic priming, its facilitative versus inhibitory nature, and the specific mechanism underlying the effect, that is, the type of syntactic information primed on the target word. Here we employed a masked syntactic priming paradigm in four behavioural experiments in German to test whether masked primes automatically facilitate the categorisation of nouns and verbs presented as briefly flashed visual words. Overall, we found robust syntactic priming effects with masked primes, suggesting a highly automatic process, but only when verbs were morpho-syntactically marked (er kau-t; he chew-s). Furthermore, we found that, compared to baseline, primes slow down target categorisation when the prime-target relationship is syntactically incorrect, rather than speeding it up when the relationship is syntactically correct. This argues in favour of an inhibitory nature of syntactic priming. Overall, the data indicate that humans automatically extract abstract syntactic features from briefly flashed visual words, which affects the speed of successful language processing during comprehension.


Author(s): Asli Özyürek

The use of language in face-to-face contexts is multimodal. Production and perception of speech take place in the context of visual articulators such as the lips, face, or hand gestures, which convey information relevant to what is expressed in speech at different levels of language. While the lips convey information at the phonological level, gestures contribute to semantic, pragmatic, and syntactic information, as well as to discourse cohesion. This chapter reviews recent findings showing that speech and gesture (e.g. a drinking gesture as someone says, "Would you like a drink?") interact during the production and comprehension of language at the behavioral, cognitive, and neural levels. Implications of these findings for current psycholinguistic theories, and how those theories can be expanded to consider the multimodal context of language processing, are discussed.


2019, Vol. 70 (1), pp. 58-104
Author(s): Philipp Dankel, Ignacio Satti

Abstract: This article focuses on the practice of listing in talk-in-interaction. Lists are frequently used in spoken language as a discursive resource and can be considered a universal, cross-linguistic practice for structuring ideas. As such, they have received attention in several fields of linguistics, mainly intonation research, conversation analysis, and interactional linguistics. However, the role of gestures and other bodily forms of expression in listing has so far been largely disregarded. For this reason, we attempt to shed light on the form and function of the gestures and other bodily resources embedded in this practice. We argue that lists are multimodal and that bodily resources play a major role in establishing the list format and in organizing the interaction. To support this claim, we draw on a broad collection of examples from different sources in French, Italian, and Spanish.


2019
Author(s): Byurakn Ishkhanyan, Anders Højen, Riccardo Fusaroli, Christer Johansson, Kristian Tylén, ...

Speech input is often noisy and ambiguous. Yet listeners usually do not have difficulties understanding it. A key hypothesis is that in speech processing, acoustic-phonetic bottom-up processing is complemented by top-down contextual information. This context effect is larger when the ambiguous word is separated from a disambiguating word by only a few syllables rather than many, suggesting that there is a limited time window for processing acoustic-phonetic information with the help of context. Here, we argue that the relative weight of bottom-up and top-down processes may differ between languages with different phonological properties. We report an experiment comparing two closely related languages, Danish and Norwegian, and show that Danish speakers do indeed rely on context more than Norwegian speakers do. These results highlight the importance of investigating cross-linguistic differences in speech processing and suggest that speakers of different languages may develop different language processing strategies.


2020
Author(s): Uri Hertz, Colin Blakemore, Chris D. Frith

In recent years, the role of top-down expectations in perception has been extensively researched within the framework of predictive coding. However, less attention has been given to the different sources of expectations, how they differ, and how they interact. Here we examined the effects of informative hints on perceptual experience and how these interact with repetition-based expectations to create a long-lasting effect. Over seven experiments, we used verbal hints and multiple presentations of ambiguous two-tone images. We found that vividness ratings increased from one presentation to the next, even after the object in the image had been identified. In addition, vividness ratings significantly increased when images were introduced with a hint, and this boost was greater for more detailed hints. However, the initial increase in vividness did not always carry over to the next presentation. When recognition of the image was hard, due to memory load, inconsistent presentation, or the noise level of the image, the initial advantage in vividness was attenuated. This was most apparent when participants were primed with a greyscale version of the two-tone image. A computational model based on evidence accumulation was able to recover these patterns of perceptual experience, suggesting that the effect of hints is short-lived if it cannot be encoded in memory for future presentations. This highlights the distinct contributions of attention and memory, and their interaction, in forming expectations for perception.
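
The abstract does not specify how the authors' model is implemented; the following minimal sketch in Python only illustrates the general idea of an evidence-accumulation account in which a verbal hint injects prior evidence and accumulated evidence carries over to the next presentation only if it is encoded in memory. All function names, parameters, and values are hypothetical and not taken from the paper.

# Minimal sketch, NOT the authors' model: a generic evidence-accumulation
# account in which a hint adds prior evidence and evidence carries over to
# the next presentation only if it is encoded in memory.
# All parameters and values are hypothetical.
import numpy as np

def simulate_vividness(n_presentations=4, hint_strength=0.0,
                       encoding_prob=1.0, gain=0.5, noise_sd=0.3, seed=0):
    """Return one simulated vividness rating (0-1) per presentation."""
    rng = np.random.default_rng(seed)
    carried = hint_strength              # prior evidence contributed by the hint
    ratings = []
    for _ in range(n_presentations):
        evidence = carried
        for _ in range(20):              # accumulate noisy sensory samples
            evidence += gain * (1.0 + rng.normal(0.0, noise_sd)) / 20
        ratings.append(float(1 / (1 + np.exp(-4 * (evidence - 1.0)))))  # squash to 0-1
        # evidence survives only if it is encoded in memory; hard recognition
        # conditions (memory load, image noise) would lower encoding_prob
        carried = evidence if rng.random() < encoding_prob else 0.0
    return ratings

print(simulate_vividness(hint_strength=0.8, encoding_prob=0.9))  # detailed hint, easy encoding
print(simulate_vividness(hint_strength=0.8, encoding_prob=0.3))  # detailed hint, hard encoding
print(simulate_vividness(hint_strength=0.0, encoding_prob=0.9))  # no hint

Under these assumptions, the hint boosts the first rating, but the boost persists across presentations only when the encoding probability is high, which is the qualitative pattern described above.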


2020, Vol. 26 (2-3)

Abstract: The general principles of perceptuo-motor processing and memory give rise to the Now-or-Never bottleneck, a constraint imposed on the organization of the language processing system. In particular, the Now-or-Never bottleneck demands appropriately structured linguistic input and the rapid incorporation of both linguistic and multisensory contextual information in a progressive, integrative manner. I argue that the emerging predictive processing framework is well suited to providing a comprehensive account of language processing under the Now-or-Never constraint. Moreover, this framework presents a stronger alternative to the Chunk-and-Pass account proposed by Christiansen and Chater (2016), as it better accommodates the available evidence concerning the role of context (in both the narrow and the wider sense) in language comprehension at various levels of linguistic representation. Furthermore, the predictive processing approach allows language to be treated as a special case of domain-general processing strategies, suggesting deep parallels with other cognitive processes such as vision.


Entropy, 2020, Vol. 22 (1), pp. 126
Author(s): Martin Gerlach, Francesc Font-Clos

The use of Project Gutenberg (PG) as a text corpus has been extremely popular in the statistical analysis of language for more than 25 years. However, in contrast to other major linguistic datasets of similar importance, no consensus full version of PG exists to date. In fact, most PG studies so far either consider only a small number of manually selected books, leading to potentially biased subsets, or employ vastly different pre-processing strategies (often specified in insufficient detail), raising concerns regarding the reproducibility of published results. To address these shortcomings, here we present the Standardized Project Gutenberg Corpus (SPGC), an open science approach to a curated version of the complete PG data containing more than 50,000 books and more than 3 × 10⁹ word tokens. Using different sources of annotated metadata, we not only provide a broad characterization of the content of PG, but also show different examples highlighting the potential of the SPGC for investigating language variability across time, subjects, and authors. We publish our methodology in detail, the code to download and process the data, and the obtained corpus itself at three different levels of granularity (raw text, time series of word tokens, and counts of words). In this way, we provide a reproducible, pre-processed, full-size version of Project Gutenberg as a new scientific resource for corpus linguistics, natural language processing, and information retrieval.
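
As a concrete illustration of the three levels of granularity mentioned above, the following short Python sketch derives a token time series and word counts from a raw text file. It is not the SPGC pipeline itself; the file name and the simple regex tokenizer are placeholders chosen for illustration only.

# Minimal sketch (not the SPGC code): raw text -> time series of word
# tokens -> counts of words. File name and tokenization rule are hypothetical.
import re
from collections import Counter

def tokenize(raw_text):
    """Lowercase the raw text and split it into word tokens (simplified rule)."""
    return re.findall(r"[a-z]+", raw_text.lower())

def word_counts(tokens):
    """Collapse the token time series into a table of word counts."""
    return Counter(tokens)

if __name__ == "__main__":
    with open("PG2701_raw.txt", encoding="utf-8") as f:   # hypothetical file name
        raw = f.read()                                     # level 1: raw text
    tokens = tokenize(raw)                                 # level 2: time series of word tokens
    counts = word_counts(tokens)                           # level 3: counts of words
    print(len(tokens), "tokens,", len(counts), "distinct word types")
    print(counts.most_common(10))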


2015, Vol. 10 (1), pp. 53-87
Author(s): Harald Clahsen, Sabrina Gerth, Vera Heyer, Esther Schott

The role of morphological and syntactic information in non-native, second language (L2) comprehension is controversial. Some have argued that late bilinguals rapidly integrate grammatical cues with other information sources during reading or listening in the same way as native speakers; others claim that structural cues are underused in L2 processing. To address this controversy, we examined different kinds of modifiers inside compounds (e.g. singular vs. plural, rat eater vs. *rats eater), which are subject to both structural and non-structural constraints. Two offline and two online (eye-movement) experiments examined the role of these constraints in the spoken language comprehension of English and German, testing 77 advanced L2 learners. We also compared the L2 groups to corresponding groups of native speakers. Our results suggest that despite native-like sensitivity to the compounding constraints, late bilinguals rely more on non-structural constraints and are less able to revise their initial interpretations than L1 comprehenders.


2004, Vol. 63 (3), pp. 143-149
Author(s): Fred W. Mast, Charles M. Oman

The role of top-down processing in the horizontal-vertical line-length illusion was examined by means of an ambiguous room with two possible visual verticals. In one test condition, subjects were cued to one of the two verticals and instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared and subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the vertical currently perceived by the subject. In another test condition, in which subjects had all perceptual cues available, the influence was even stronger. This study demonstrates that top-down processing influences lower-level visual processing mechanisms.

