Input modality
Recently Published Documents

Total documents: 100 (last five years: 25)
H-index: 15 (last five years: 1)

2021 ◽  
Author(s):  
Keith S Apfelbaum ◽  
Christina Blomquist ◽  
Bob McMurray

Efficient word recognition depends on the ability to overcome competition from overlapping words. The nature of the overlap depends on the input modality: spoken words have temporal overlap from other words that share phonemes in the same positions, whereas written words have spatial overlap from other words with letters in the same places. It is unclear how these differences in input format affect the ability to recognize a word and the types of competitors that become active while doing so. This study investigates word recognition in both modalities in children between 7 and 15 years of age. Children completed a visual-world paradigm eye-tracking task that measured competition from words with several types of overlap, using identical word lists across modalities. Results showed correlated developmental changes in the speed of target recognition in both modalities. Additionally, developmental changes were seen in the efficiency of competitor suppression for some competitor types in the spoken modality. These data reveal some developmental continuity in the process of word recognition independent of modality, but also some instances of independence in how competitors are activated. Stimuli, data, and analyses from this project are available at: https://osf.io/eav72


Author(s):  
Max R. Freeman ◽  
Viorica Marian

Abstract A bilingual’s language system is highly interactive. When hearing a second language (L2), bilinguals access native-language (L1) words that share sounds across languages. In the present study, we examine whether input modality and L2 proficiency moderate the extent to which bilinguals activate L1 phonotactic constraints (i.e., rules for combining speech sounds) during L2 processing. Eye movements of English monolinguals and Spanish–English bilinguals were tracked as they searched for a target English word in a visual display. On critical trials, displays included a target that conflicted with the Spanish vowel-onset rule (e.g., spa), as well as a competitor containing the potentially activated “e” onset (e.g., egg). The rule violation was processed either in the visual modality (Experiment 1) or audio-visually (Experiment 2). In both experiments, bilinguals with lower L2 proficiency made more eye movements to competitors than fillers. Findings suggest that bilinguals who have lower L2 proficiency access L1 phonotactic constraints during L2 visual word processing with and without auditory input of the constraint-conflicting structure (e.g., spa). We conclude that the interactivity between a bilingual’s two languages is not limited to words that share form across languages, but also extends to sublexical, rule-based structures.


2021 ◽  
Author(s):  
Marion Coumel ◽  
Ema Ushioda ◽  
Katherine Messenger

We examined whether language input modality and individual differences in attention and motivation influence second language (L2) learning via syntactic priming. In an online study, we compared French L2 English and L1 English speakers’ primed production of passives in reading-to-writing vs. listening-to-writing priming conditions. We measured immediate priming (producing a passive immediately after exposure to the target structure) and short- and long-term learning (producing more target structures in immediate and delayed post-tests without primes relative to pre-tests). Both groups showed immediate priming and short- and long-term learning. Prime modality did not influence these effects but learning was greater in L2 speakers. While attention only increased learning in L1 speakers, high motivation increased L2 speakers' learning in the reading-to-writing condition. These results suggest that syntactic priming fosters long-term L2 learning, regardless of input modality. This study is the first to show that motivation may modulate L2 learning via syntactic priming.


2021 ◽  
Vol 118 (16) ◽  
pp. e2019342118
Author(s):  
Matthias Grabenhorst ◽  
Laurence T. Maloney ◽  
David Poeppel ◽  
Georgios Michalareas

The environment is shaped by two sources of temporal uncertainty: the discrete probability of whether an event will occur and—if it does—the continuous probability of when it will happen. These two types of uncertainty are fundamental to every form of anticipatory behavior including learning, decision-making, and motor planning. It remains unknown how the brain models the two uncertainty parameters and how they interact in anticipation. It is commonly assumed that the discrete probability of whether an event will occur has a fixed effect on event expectancy over time. In contrast, we first demonstrate that this pattern is highly dynamic and monotonically increases across time. Intriguingly, this behavior is independent of the continuous probability of when an event will occur. The effect of this continuous probability on anticipation is commonly proposed to be driven by the hazard rate (HR) of events. We next show that the HR fails to account for behavior and propose a model of event expectancy based on the probability density function of events. Our results hold for both vision and audition, suggesting independence of the representation of the two uncertainties from sensory input modality. These findings enrich the understanding of fundamental anticipatory processes and have provocative implications for many aspects of behavior and its neural underpinnings.
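The abstract above contrasts two candidate drivers of anticipation: the hazard rate (HR) of events and the probability density function (pdf) of event timing. As a minimal illustrative sketch (not the authors' model), the HR at time t is the pdf divided by the survival function, HR(t) = f(t) / (1 − F(t)), i.e., the event density conditioned on the event not having occurred yet. The Gaussian timing distribution and the parameter values below are assumptions chosen only to show how the two quantities diverge:

```python
import math

def gaussian_pdf(t, mu, sigma):
    """Probability density of event time t under a Gaussian timing distribution."""
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gaussian_cdf(t, mu, sigma):
    """Cumulative probability that the event has occurred by time t."""
    return 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))

def hazard_rate(t, mu, sigma):
    """HR(t) = f(t) / (1 - F(t)): event density given the event has not yet occurred."""
    return gaussian_pdf(t, mu, sigma) / (1 - gaussian_cdf(t, mu, sigma))

# For a unimodal timing distribution the pdf peaks at the mode and then falls,
# while the HR keeps rising as time elapses without an event -- so the two
# accounts make different predictions about expectancy late in a trial.
mu, sigma = 1.0, 0.25  # hypothetical mean event time and spread, in seconds
for t in (0.5, 1.0, 1.5):
    print(f"t={t:.1f}s  pdf={gaussian_pdf(t, mu, sigma):.3f}  "
          f"HR={hazard_rate(t, mu, sigma):.3f}")
```

Note how the pdf is symmetric around the mean event time, while the HR increases monotonically; behavior that tracks the pdf rather than the HR would peak at the most probable event time instead of growing without bound.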


Author(s):  
Sebastian Cmentowski ◽  
Andrey Krekhov ◽  
André Zenner ◽  
Daniel Kucharski ◽  
Jens Krüger

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Merel C. Wolf ◽  
Antje S. Meyer ◽  
Caroline F. Rowland ◽  
Florian Hintz

Language users encounter words in at least two different modalities; arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. An important question is whether input modality affects word recognition accuracy. In the present study, we investigated whether input modality (spoken, written, or bimodal) affected word recognition accuracy and whether such a modality effect interacted with word difficulty. Moreover, we tested whether the participants’ reading experience interacted with word difficulty and whether this interaction was influenced by modality. We re-analyzed data from 48 Dutch university students, collected in the context of vocabulary test development, to assess in which modality test words should be presented. Participants carried out a word recognition task in which non-words and words of varying difficulty were presented in auditory, visual, and audio-visual modalities. In addition, they completed a receptive vocabulary test and an author recognition test to measure their exposure to literary texts. Our re-analyses showed that word difficulty interacted with reading experience: frequent readers (i.e., those with more exposure to written texts) were more accurate in recognizing difficult words than individuals who read less frequently. However, there was no evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or reading experience. Thus, in our study, input modality did not influence word recognition accuracy. We discuss the implications of this finding and describe possibilities for future research involving other groups of participants and/or different languages.

