The Effect of Simultaneously Presented Words and Auditory Tones on Visuomotor Performance

2021, pp. 1-28
Author(s): Rita Mendonça, Margarida V. Garrido, Gün R. Semin

Abstract: The experiment reported here used a variation of the spatial cueing task to examine the effects of unimodal and bimodal attention-orienting primes on target identification latencies and eye gaze movements. The primes were a nonspatial auditory tone and words known to drive attention in line with the dominant writing and reading direction, while also introducing a semantic, temporal bias (past–future) on the horizontal dimension. As expected, past-related (visual) word primes gave rise to shorter response latencies in the left hemifield and future-related words in the right. This congruency effect was marked by asymmetric performance in the right space following future words, driven by the left-to-right trajectory of scanning habits, which facilitated search times and eye gaze movements to lateralized targets. The auditory tone prime alone acted as an alarm signal, boosting visual search and reducing response latencies. Bimodal priming, i.e., temporal visual words paired with the auditory tone, impaired performance by delaying visual attention and response times relative to the unimodal visual word condition. We conclude that bimodal primes were no more effective in capturing participants' spatial attention than the unimodal auditory and visual primes. The contribution of these findings to the literature on multisensory integration is discussed.
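To make the design concrete, here is a minimal Python sketch of the trial space and the congruency coding implied by the abstract (past words map to the left hemifield, future words to the right). The condition labels, timing-free trial structure, and factorial crossing are illustrative assumptions, not the authors' stimulus code.

```python
from dataclasses import dataclass
from itertools import product
from typing import Optional

# Hypothetical trial model for a spatial cueing experiment with
# unimodal (word or tone) and bimodal (word + tone) primes.
@dataclass(frozen=True)
class Trial:
    prime_word: Optional[str]   # 'past', 'future', or None
    tone: bool                  # nonspatial auditory tone present?
    target_side: str            # 'left' or 'right'

def is_congruent(trial: Trial) -> Optional[bool]:
    """Past words map to the left hemifield, future words to the
    right, following the left-to-right reading/writing direction."""
    if trial.prime_word is None:
        return None  # tone-only trials carry no spatial bias
    expected = 'left' if trial.prime_word == 'past' else 'right'
    return trial.target_side == expected

# Full factorial crossing of prime type and target side,
# excluding the cell with no prime at all.
trials = [Trial(w, t, s)
          for w, t, s in product(['past', 'future', None],
                                 [True, False],
                                 ['left', 'right'])
          if not (w is None and not t)]

for tr in trials:
    print(tr, '->', is_congruent(tr))
```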

Author(s): Koray Koçoğlu, Gülden Akdal, Berril Dönmez Çolakoğlu, Raif Çakmur, Jagdish C. Sharma, ...

Abstract: There is growing interest in how social processes and behaviour might be affected in Parkinson's disease. A task that has been widely used to assess how people orient attention in response to social cues is the spatial cueing task. Socially relevant directional cues, such as a picture of someone gazing or pointing to the left or the right, have been shown to cause orienting of visual attention in the cued direction. The basal ganglia may play a role in responding to such directional cues, but no studies to date have examined whether similar social cueing effects are seen in people with Parkinson's disease. In this study, patients and healthy controls completed a prosaccade (Experiment 1) and an antisaccade task (Experiment 2) in which the target was preceded by arrow, eye gaze, or pointing finger cues. Patients showed increased errors and response times for antisaccades but not prosaccades. Healthy participants made most anticipatory errors on pointing finger cue trials, whereas Parkinson's patients were equally affected by arrow, eye gaze, and pointing cues. It is concluded that Parkinson's patients have a reduced ability to suppress responding to directional cues, but this effect is not specific to social cues.
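As a rough illustration of how pro- and antisaccade responses are scored, the sketch below marks a trial as an error when gaze goes toward the target in the antisaccade task or away from it in the prosaccade task. This is a generic scoring rule assumed for illustration, not the authors' analysis code.

```python
# Hypothetical scoring of a single eye-movement trial: a prosaccade
# toward the target is correct, while in the antisaccade task the
# correct response is directed away from the target.
def is_error(task: str, target_side: str, saccade_side: str) -> bool:
    away = 'left' if target_side == 'right' else 'right'
    correct = target_side if task == 'prosaccade' else away
    return saccade_side != correct

print(is_error('antisaccade', 'left', 'left'))  # True: looked at target
print(is_error('prosaccade', 'left', 'left'))   # False: correct response
```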


Psihologija, 2010, Vol 43 (1), pp. 103-116
Author(s): Jelena Havelka, Clive Frankish

Case mixing is a technique used to investigate the perceptual processes involved in visual word recognition. Two experiments examined the effect of case mixing on lexical decision latencies. The aim was to establish whether different case-mixing patterns would interact with appropriate visual segmentation and phonological assembly in word reading. In the first experiment, case mixing had a greater effect on response times to words when it visually disrupted multi-letter graphemes (MLGs) as well as overall word shape (e.g., pLeAd) than when it disrupted overall word shape only (e.g., plEAd). A second experiment replicated this finding with words in which the MLGs represent either a vowel (e.g., bOaST vs. bOAst) or a consonant sound (e.g., sNaCK vs. sNAcK). These results confirm that case mixing can have different effects depending on the type of orthographic unit broken up by the manipulation. They demonstrate that graphemes are units that play an important role in visual word recognition, and that manipulating their presentation through case mixing has a significant effect on response latencies to words in a lexical decision task. As such, these findings need to be taken into account by models of visual word recognition.
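The contrast between the two manipulations can be sketched in a few lines of Python: alternating case letter by letter splits a multi-letter grapheme across cases, while alternating over whole graphemes keeps each grapheme in a uniform case. The grapheme segmentation here is supplied by hand, and the study's actual stimuli used hand-crafted patterns, so this is only an illustration of the principle.

```python
# Hypothetical illustration of the two case-mixing manipulations.
def mix_by_letter(word: str) -> str:
    """pLeAd-style: strict letter alternation, splitting the MLG 'ea'."""
    return ''.join(ch.upper() if i % 2 else ch.lower()
                   for i, ch in enumerate(word))

def mix_by_grapheme(graphemes: list[str]) -> str:
    """Alternation over graphemes, so an MLG like 'ea' stays one case."""
    return ''.join(g.upper() if i % 2 else g.lower()
                   for i, g in enumerate(graphemes))

print(mix_by_letter('plead'))                   # pLeAd (MLG split)
print(mix_by_grapheme(['p', 'l', 'ea', 'd']))   # pLeaD (MLG intact)
```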


2019, Vol 25 (2), pp. 214-233
Author(s): Filiz Mergen, Gulmira Kuruoglu

Recently obtained data from interdisciplinary research have considerably expanded our knowledge of the relationship between language and the brain. Numerous aspects of language have been the subject of research. Visual word recognition is a temporal process that starts with recognizing the physical features of words and matching them with potential candidates in the mental lexicon. Word frequency plays a significant role in this process. Other factors are similarities in spelling and pronunciation, and whether items have meanings or are simply letter strings. The emotional load of words is another factor that deserves closer inspection, as an overwhelming amount of evidence supports the privileged status of emotions in both verbal and nonverbal tasks. It is well established that lexical processing involves the brain hemispheres to varying degrees, with the left hemisphere showing greater involvement in verbal tasks than the right. The emotional load of verbal stimuli also modulates the specialized roles of the hemispheres in lexical processing. Despite the abundance of research on the processing of words from a variety of language families, studies investigating Turkish, a language of Uralic-Altaic origin, are scarce. This study aims to fill that gap by reporting evidence on how Turkish words with and without emotional load are processed and represented in the brain. We employed a visual hemifield paradigm with a lexical decision task. Participants were instructed to decide whether letter strings presented on either the right or the left of the computer screen were real words or non-words; their response times and accuracy were recorded. We obtained shorter response times and higher accuracy rates for real words than for non-words, as reported in the majority of studies in the literature. We also found that emotional load modulated word recognition, supporting previous results. Finally, our results are in line with the view of left-hemispheric superiority in lexical processing in monolingual speakers.
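A minimal sketch of a visual hemifield lexical decision trial list follows; the Turkish items and non-words are invented placeholders and the side assignment is random, purely for illustration of the paradigm's structure.

```python
import random

# Hypothetical trial list for a visual hemifield lexical decision
# task: each letter string is assigned to the left or right visual
# field, and the correct answer is whether it is a real word.
words = ['sevgi', 'korku', 'masa']        # placeholder Turkish words
nonwords = ['lurpa', 'veksi', 'domra']    # placeholder non-words

random.seed(0)
trials = [(s, s in words, random.choice(['left', 'right']))
          for s in words + nonwords]

for string, is_word, side in trials:
    print(f'{string:>6} | field: {side:>5} | word: {is_word}')
```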


2017, Vol 42 (3), pp. 311-320
Author(s): Pei Zhao, Jing Zhao, Xuchu Weng, Su Li

The visual word N170 is an index of perceptual expertise for visual words across different writing systems. Recent developmental studies have shown the early emergence of the visual word N170 and its close association with individuals' reading ability. In the current study, we investigated whether fine-tuned N170 responses to Chinese characters could emerge after short-term literacy learning in young pre-literate children. Two groups of Chinese preschool children were trained in visual identification and free writing, respectively. Results showed that visual identification learning led to enhanced N170 sensitivity to characters over radical combinations in the left hemisphere and over line combinations in the right hemisphere, whereas writing learning led to enhanced N170 sensitivity to characters over both radical combinations and line combinations in the right hemisphere. These results suggest that the N170 component becomes more sensitive to the local graphic features (strokes) of characters rapidly after brief literacy learning, even in young children, and that writing experience specifically enhances orthographic sensitivity in the right hemisphere.
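For readers unfamiliar with ERP measurement, a common way to quantify the N170 is the mean voltage in a post-stimulus window at an occipito-temporal channel. The sketch below assumes a sampling rate, baseline length, window, and synthetic data chosen for illustration; they are not the study's parameters.

```python
import numpy as np

# Hypothetical N170 amplitude measurement from an averaged ERP epoch:
# mean voltage in a 150-210 ms post-stimulus window, one channel.
FS = 500          # samples per second (assumed)
BASELINE = 0.1    # 100 ms pre-stimulus baseline included in the epoch

def n170_amplitude(erp: np.ndarray, t_start=0.15, t_end=0.21) -> float:
    """Mean amplitude (microvolts) in the N170 window."""
    i0 = int((BASELINE + t_start) * FS)
    i1 = int((BASELINE + t_end) * FS)
    return float(erp[i0:i1].mean())

# Example with synthetic data: one channel, 700 ms epoch.
rng = np.random.default_rng(0)
erp = rng.normal(0, 1, int(0.7 * FS))
print(n170_amplitude(erp))
```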


Author(s): Marc Ouellet, Julio Santiago, Ziv Israeli, Shai Gabay

Spanish and English speakers tend to conceptualize time as running from left to right along a mental line. Previous research suggests that this representational strategy arises from participants' exposure to a left-to-right writing system. However, direct evidence supporting this assertion suffers from several limitations and relies only on the visual modality. This study put the reading hypothesis to a direct test using an auditory task. Participants from two groups (Spanish and Hebrew), differing in the directionality of their orthographic systems, had to discriminate the temporal reference (past or future) of verbs and adverbs presented auditorily to either the left or right ear by pressing a left or a right key. Spanish participants were faster responding to past words with the left hand and to future words with the right hand, whereas Hebrew participants showed the opposite pattern. Our results demonstrate that the left-right mapping of time is not restricted to the visual modality and that the direction of reading accounts for the preferred directionality of the mental time line. These results are discussed in the context of a possible mechanism underlying the effects of reading direction on highly abstract conceptual representations.
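The predicted pattern can be written down as a simple congruency rule conditional on reading direction. The sketch below is an assumed coding scheme for analysis, not the authors' code.

```python
# Hypothetical coding of congruency between a word's temporal
# reference and the response hand, conditional on the direction of
# the participant's orthography.
READING_DIRECTION = {'spanish': 'ltr', 'hebrew': 'rtl'}

def congruent(language: str, reference: str, hand: str) -> bool:
    """ltr maps past-left/future-right; rtl reverses the mapping."""
    ltr = READING_DIRECTION[language] == 'ltr'
    expected = ('left' if reference == 'past' else 'right') if ltr \
               else ('right' if reference == 'past' else 'left')
    return hand == expected

print(congruent('spanish', 'past', 'left'))  # True: congruent
print(congruent('hebrew', 'past', 'left'))   # False: past maps right
```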


Entropy, 2021, Vol 23 (3), pp. 304
Author(s): Kelsey Cnudde, Sophia van Hees, Sage Brown, Gwen van der Wijk, Penny M. Pexman, ...

Visual word recognition is a relatively effortless process, but recent research suggests the system involved is malleable, with evidence of increases in behavioural efficiency after prolonged lexical decision task (LDT) performance. However, the extent of neural changes has yet to be characterized in this context. The neural changes that occur could be related to a shift from initially effortful performance that is supported by control-related processing, to efficient task performance that is supported by domain-specific processing. To investigate this, we replicated the British Lexicon Project, and had participants complete 16 h of LDT over several days. We recorded electroencephalography (EEG) at three intervals to track neural change during LDT performance and assessed event-related potentials and brain signal complexity. We found that response times decreased during LDT performance, and there was evidence of neural change through N170, P200, N400, and late positive component (LPC) amplitudes across the EEG sessions, which suggested a shift from control-related to domain-specific processing. We also found widespread complexity decreases alongside localized increases, suggesting that processing became more efficient with specific increases in processing flexibility. Together, these findings suggest that neural processing becomes more efficient and optimized to support prolonged LDT performance.
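The abstract does not name the complexity measure, but brain signal complexity is commonly indexed with sample entropy, often computed at multiple timescales. The self-contained sketch below shows a generic sample entropy computation, offered as an assumption about the family of measures rather than the study's actual pipeline.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """Sample entropy of a 1-D signal; r is a fraction of the SD.
    Lower values indicate a more regular (less complex) signal."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm: int) -> int:
        # All length-mm templates; count pairs within Chebyshev
        # distance tol, excluding self-matches.
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
print(sample_entropy(rng.normal(size=1000)))              # noise: higher
print(sample_entropy(np.sin(np.linspace(0, 40, 1000))))   # regular: lower
```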


2021, Vol 35 (Supplement A), pp. 132-148
Author(s): Tahira Gulamani, Achala H. Rodrigo, Amanda A. Uliaszek, Anthony C. Ruocco

Emotion perception biases may precipitate problematic interpersonal interactions in families affected by borderline personality disorder (BPD) and lead to conflictual relationships. In the present study, the authors investigated the familial aggregation of facial emotion recognition biases for neutral, happy, sad, fearful, and angry expressions in probands with BPD (n = 89), first-degree biological relatives (n = 67), and healthy controls (n = 87). Relatives showed accuracy and response times comparable to controls in recognizing negative emotions in aggregate and most discrete emotions. For sad expressions, both probands and relatives displayed slower response latencies, and they were more likely than controls to perceive sad expressions as fearful. Nonpsychiatrically affected relatives were slower than controls in responding to negative emotional expressions in aggregate, and to fearful and sad facial expressions more specifically. These findings uncover potential biases in perceiving sad and fearful facial expressions that may be transmitted in families affected by BPD.


2007, Vol 105 (2), pp. 514-522
Author(s): Joy L. Hendrick, Jamie R. Switzer

As some states allow motorists to use only hands-free cell phones while driving, this study examined braking responses to determine whether conversing on hands-free versus hand-held phones affects quick responding. College-age drivers (n = 25) completed reaction time trials in go/no-go situations under three conditions: control (no cell phone or conversation), conversing on a hands-free phone, and conversing on a hand-held phone. The task involved moving the right foot from one pedal to another as quickly as possible in response to a visual signal in a lab setting. Reaction times, movement times, and total response times were significantly slower for both cell phone conditions than for the control, with no differences between the hands-free and hand-held conditions. These findings provide additional support that talking on a cell phone, regardless of whether it is hands-free or hand-held, reduces the speed of information processing.
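The three dependent measures decompose straightforwardly: reaction time runs from signal onset to release of the first pedal, movement time from release to contact with the second pedal, and total response time is their sum. A minimal sketch with hypothetical timestamps:

```python
# Hypothetical decomposition of a braking response from three
# timestamps (seconds): signal onset, pedal release, brake press.
def braking_measures(signal_on: float, pedal_release: float,
                     brake_press: float) -> dict:
    rt = pedal_release - signal_on    # reaction time
    mt = brake_press - pedal_release  # movement time
    return {'reaction': round(rt, 3),
            'movement': round(mt, 3),
            'total': round(rt + mt, 3)}

print(braking_measures(0.000, 0.420, 0.615))
# {'reaction': 0.42, 'movement': 0.195, 'total': 0.615}
```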


2019
Author(s): Marc Brysbaert, Emmanuel Keuleers, Paweł Mandera

We present a new dataset of English word recognition times for a total of 62 thousand words, called the English Crowdsourcing Project. The data were collected via an internet vocabulary test in which more than one million people participated. The present dataset is limited to native English speakers. Participants were asked to indicate which words they knew. Their response times were registered, although at no point were they asked to respond as fast as possible. Still, the response times correlate around .75 with the response times of the English Lexicon Project for the shared words. Results of virtual experiments also indicate that the new response times are a valid addition to the English Lexicon Project. This means not only that we have useful response times for some 35 thousand extra words, but also that we now have data on differences in response latencies as a function of education and age.
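The reported .75 correlation is a Pearson correlation computed over the words the two datasets share. A minimal sketch of that check, with made-up response times standing in for the two datasets:

```python
import numpy as np

# Hypothetical alignment of crowdsourced recognition times with
# laboratory lexical decision times on shared items. The RT values
# below are invented placeholders.
ecp = {'house': 612.0, 'garden': 655.0, 'quixotic': 890.0}  # crowdsourced
elp = {'house': 580.0, 'garden': 640.0, 'quixotic': 905.0}  # lab LDT

shared = sorted(set(ecp) & set(elp))
x = np.array([ecp[w] for w in shared])
y = np.array([elp[w] for w in shared])
print(np.corrcoef(x, y)[0, 1])  # Pearson r over shared items
```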


Author(s): Birgitta Dresp-Langley, Marie Monfouga

Piéron's and Chocholle's seminal psychophysical work predicts that human response time to visual contrast and/or sound frequency decreases as contrast intensity or sound frequency increases. The goal of this study was to examine individuals' ability to use visual contrast intensity and sound frequency in combination for faster perceptual decisions of relative depth ("nearer") in planar (2D) object configurations on the basis of physical variations in luminance contrast. Computer-controlled images with two abstract patterns of varying contrast intensity, one on the left and one on the right, preceded or not by a pure tone of varying frequency, were shown to healthy young humans in controlled experimental sequences. Their task (two-alternative forced choice) was to decide as quickly as possible which of the two patterns in a given image, the left or the right one, appeared to "stand out as if it were nearer" in terms of apparent (subjective) visual depth. The results show that the combinations of varying relative visual contrast with sounds of varying frequency produced an additive facilitation effect on choice response times: a stronger visual contrast combined with a higher sound frequency produced shorter forced-choice response times. This new effect is predicted by cross-modal audio-visual probability summation.
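For reference, Piéron's law and probability summation take the following standard forms (a sketch; RT_0, k, and beta are free parameters fitted per modality and observer, not values from this study):

```latex
% Pieron's law: mean response time falls as a power function of
% stimulus intensity I toward an asymptotic residual time RT_0.
RT(I) = RT_0 + k \, I^{-\beta}

% Probability summation over independent auditory (A) and visual (V)
% channels: the bimodal detection probability exceeds either alone.
P_{AV} = 1 - (1 - P_A)(1 - P_V)
```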

