Controlled and automatic perceptions of a sociolinguistic marker

2018, Vol 30 (2), pp. 261-285
Author(s): Annette D'Onofrio

Abstract: This paper explores the relation between controlled and automatic perceptions of a sociolinguistic variable that yields no metalinguistic commentary—a marker (Labov, 1972). Two experiments examine links between the backed trap vowel and its social meanings. The first, a matched guise task, measures social evaluations of the feature in a relatively controlled, introspective task. In the second, two measures are used that access different points in online processing and different degrees of listener control: (a) lexical categorization of an ambiguous stimulus, measured by a mouse click, and (b) automatic, early responses to this ambiguous stimulus, measured by eye movements. While listeners perceptually link trap-backing with social information in all three measures, specific social effects differ across the measures. Findings illustrate that the task and time course of a response influence how listeners link a linguistic marker with social information, even when this sociolinguistic knowledge is below the level of conscious awareness.

2015, Vol 24, pp. 1
Author(s): Florian Schwarz

One focus of work on the processing of linguistic meaning has been the relative processing speed of different aspects of meaning. While much early work focused on implicatures in comparison to literal asserted content (e.g., Bott & Noveck 2004, Huang & Snedeker 2009, among many others), the present paper extends recent efforts to experimentally investigate another aspect of meaning, namely presuppositions. It investigates the triggers 'again' and 'stop' using the visual world eye-tracking paradigm, and provides evidence for rapid processing of presupposed content. Our study finds no difference in timing between the two triggers, which is of theoretical relevance given proposals for distinguishing classes of triggers, such as hard vs. soft (Abusch 2010). Whatever differences there may be between these triggers apparently do not affect the online processing time course. As a further comparison, 'again' was also compared to 'twice', which expresses essentially the same meaning without a presupposition. Shifts in eye movements for these two cases also appear to be entirely on par, further supporting the notion that presupposed and asserted content are available in parallel early in online processing.


2019, Vol 72 (7), pp. 1863-1875
Author(s): Martin R Vasilev, Fabrice BR Parmentier, Bernhard Angele, Julie A Kirkby

Oddball studies have shown that sounds unexpectedly deviating from an otherwise repeated sequence capture attention away from the task at hand. While such distraction is typically regarded as potentially important in everyday life, previous work has so far not examined how deviant sounds affect performance on more complex daily tasks. In this study, we developed a new method to examine whether deviant sounds can disrupt reading performance by recording participants’ eye movements. Participants read single sentences in silence and while listening to task-irrelevant sounds. In the latter condition, a 50-ms sound was played contingent on the fixation of five target words in the sentence. On most occasions, the same tone was presented (standard sound), whereas on rare and unexpected occasions it was replaced by white noise (deviant sound). The deviant sound resulted in significantly longer fixation durations on the target words relative to the standard sound. A time-course analysis showed that the deviant sound began to affect fixation durations around 180 ms after fixation onset. Furthermore, deviance distraction was not modulated by the lexical frequency of target words. In summary, fixation durations on the target words were longer immediately after the presentation of the deviant sound, but there was no evidence that it interfered with the lexical processing of these words. The present results are in line with the recent proposition that deviant sounds yield a temporary motor suppression and suggest that deviant sounds likely inhibit the programming of the next saccade.
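As a rough illustration of this gaze-contingent oddball manipulation, the sketch below (in Python) plays a standard tone on most target-word fixations and a rare white-noise deviant on a small proportion of them. The play_sound and on_fixation helpers are hypothetical stand-ins, not calls from any particular eye-tracking or audio library, and the word list and deviant probability are illustrative values rather than those of the study.

```python
import random

TARGET_WORDS = {"house", "river", "candle", "window", "garden"}  # hypothetical target words
DEVIANT_PROBABILITY = 0.1  # deviants occur on rare, unexpected occasions
SOUND_MS = 50              # both sounds last 50 ms, as in the study

def play_sound(name, duration_ms):
    """Placeholder: a real experiment would trigger audio playback here."""
    print(f"playing {name} for {duration_ms} ms")

def on_fixation(word):
    """Called at each fixation onset; plays a sound only for target words."""
    if word in TARGET_WORDS:
        if random.random() < DEVIANT_PROBABILITY:
            play_sound("white_noise", SOUND_MS)    # deviant sound
        else:
            play_sound("standard_tone", SOUND_MS)  # standard sound

# Simulated stream of fixated words from one sentence:
for fixated_word in ["the", "house", "near", "the", "river", "was", "empty"]:
    on_fixation(fixated_word)
```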


2021
Author(s): Andy Jeesu Kim, Brian A. Anderson

Despite our best intentions, physically salient but entirely task-irrelevant stimuli can sometimes capture our attention. With learning, it is possible to ignore such stimuli more efficiently, although exactly how the visual system accomplishes this remains to be clarified. Using a sample of young-adult participants, we examined the time course of eye movements to targets and distractors. We replicate the finding of a reduced frequency of eye movements to the distractor when it appears in a location at which distractors are frequently encountered. This reduction was observed even for the earliest saccades, when selection tends to be most stimulus-driven. When the distractor appeared at the high-probability location, saccadic reaction time was slowed specifically for distractor-going saccades, suggesting a slowing of priority accumulation at this location. In the event that the distractor was fixated, disengagement from the distractor was also faster when it appeared in the high-probability location. Both proactive and reactive mechanisms of distractor suppression thus work together to minimize attentional capture by frequently encountered distractors.


2021
Author(s): Ana Pellicer-Sánchez, Anna Siyanova

Abstract: The field of vocabulary research is witnessing a growing interest in the use of eye-tracking to investigate topics that have traditionally been examined using offline measures, providing new insights into the processing and learning of vocabulary. During an eye-tracking experiment, participants' eye movements are recorded while they attend to written or auditory input, resulting in a rich record of online processing behaviour. Because of its many benefits, eye-tracking is becoming a major research technique in vocabulary research. However, before this emerging trend of eye-tracking-based vocabulary research continues to proliferate, it is important to step back and reflect on what current studies have shown about the processing and learning of vocabulary, and the ways in which we can use the technique in future research. To this end, the present paper provides a comprehensive overview of current eye-tracking research findings, both in terms of the processing and learning of single words and of formulaic sequences. Current research gaps and potential avenues for future research are also discussed.


1991, Vol 1 (2), pp. 161-170
Author(s): Jean-Louis Vercher, Gabriel M. Gauthier

To maintain clear vision, the images on the retina must remain reasonably stable. Head movements are generally dealt with successfully by counter-rotation of the eyes, induced by the combined actions of the vestibulo-ocular reflex (VOR) and the optokinetic reflex. An important problem concerns the value of the so-called intrinsic gain of the VOR (VORG) in man, and how this gain is modulated to provide appropriate eye movements. We studied these problems in two situations: (1) fixation of a stationary object in visual space while the head moves, and (2) fixation of an object moving with the head. These two situations were compared to a baseline condition in which no visual target was available, in order to induce "pure" VOR. Eye movements were recorded in seated subjects during stationary sinusoidal and transient rotations around the vertical axis. Subjects were either in total darkness (DARK condition) and engaged in mental arithmetic, or provided with a small foveal target, either fixed with respect to earth (earth-fixed target: EFT condition) or moving with them (chair-fixed target: CFT condition). The stationary rotation experiment served as a baseline for the ensuing experiment and yielded control data in agreement with the literature. In all three visual conditions, typical responses to transient rotations were essentially identical during the first 200 ms. They showed, sequentially, a 16-ms delay of the eye behind the head and a rapid increase in eye velocity over 75 to 80 ms, after which the average VORG was 0.9 ± 0.15. During the following 50 to 100 ms, the gain remained around 0.9 in all three conditions. Beyond 200 ms, the VORG remained around 0.9 in DARK and increased slowly towards 1 or decreased towards zero in the EFT and CFT conditions, respectively. The time course of the later events suggests that visual tracking mechanisms came into play to reduce retinal slip through smooth pursuit and position error through saccades. Our data also show that in total darkness the VORG is set to 0.9 in man. Lower values reported in the literature essentially reflect predictive properties of the vestibulo-ocular mechanism, which are particularly evident when the input signal is a sinewave.
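The VOR gain at issue here is, in essence, the ratio of compensatory eye velocity to head velocity. Purely as a hedged sketch (not the authors' analysis), the following Python snippet shows one simple way such a gain could be estimated from velocity traces, assuming both signals are expressed in deg/s and that compensatory eye movements are opposite in sign to head movements; the minimum-speed threshold is an illustrative parameter.

```python
import numpy as np

def vor_gain(eye_velocity, head_velocity, min_head_speed=5.0):
    """Estimate VOR gain as the ratio of compensatory eye velocity to head velocity.

    Both inputs are 1-D arrays in deg/s sampled at the same rate. Compensatory
    eye movements rotate opposite to the head, hence the minus sign; a gain of
    1.0 would mean the eyes fully cancel the head rotation.
    """
    eye = np.asarray(eye_velocity, dtype=float)
    head = np.asarray(head_velocity, dtype=float)
    moving = np.abs(head) > min_head_speed  # skip samples with negligible head motion
    return float(np.mean(-eye[moving] / head[moving]))

# Synthetic example: sinusoidal head rotation with the eyes compensating at 90%.
t = np.linspace(0.0, 2.0, 1000)
head = 60.0 * np.sin(2.0 * np.pi * 0.5 * t)  # head velocity, deg/s
eye = -0.9 * head                            # compensatory eye velocity
print(vor_gain(eye, head))                   # ~0.9, the value reported for the DARK condition
```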


2019, Vol 121 (5), pp. 1967-1976
Author(s): Niels Gouirand, James Mathew, Eli Brenner, Frederic R. Danion

Adapting hand movements to changes in our body or the environment is essential for skilled motor behavior. Although eye movements are known to assist hand movement control, how eye movements might contribute to the adaptation of hand movements remains largely unexplored. To determine to what extent eye movements contribute to visuomotor adaptation of hand tracking, participants used a joystick to move a cursor and track a visual target that followed an unpredictable trajectory. During blocks of trials, participants were either allowed to look wherever they liked or required to fixate a cross at the center of the screen. Eye movements were tracked to ensure gaze fixation as well as to examine free gaze behavior. The cursor initially responded normally to the joystick, but after several trials, the direction in which it responded was rotated by 90°. Although fixating the eyes had a detrimental influence on hand tracking performance, participants exhibited a rather similar time course of adaptation to rotated visual feedback in the gaze-fixed and gaze-free conditions. More importantly, there was extensive transfer of adaptation between the gaze-fixed and gaze-free conditions. We conclude that although eye movements are relevant for the online control of hand tracking, they do not play an important role in the visuomotor adaptation of such tracking. These results suggest that participants do not adapt by changing the mapping between eye and hand movements, but rather by changing the mapping between hand movements and the cursor's motion, independently of eye movements.
NEW & NOTEWORTHY: Eye movements assist hand movements in everyday activities, but their contribution to visuomotor adaptation remains largely unknown. We compared adaptation of hand tracking under free gaze and fixed gaze. Although our results confirm that following the target with the eyes increases the accuracy of hand movements, they unexpectedly demonstrate that gaze fixation does not hinder adaptation. These results suggest that eye movements make distinct contributions to the online control and to the visuomotor adaptation of hand movements.
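The 90° perturbation used in this paradigm amounts to applying a fixed rotation to the joystick displacement before it drives the cursor. The snippet below is a minimal sketch of such a mapping, not the authors' implementation; the gain parameter and the coordinate convention are assumptions made for illustration.

```python
import numpy as np

def joystick_to_cursor(dx, dy, rotation_deg=0.0, gain=1.0):
    """Map a joystick displacement (dx, dy) to cursor motion.

    With rotation_deg = 0 the cursor responds normally; with rotation_deg = 90
    the cursor's response is rotated by 90 degrees, as in the adaptation blocks.
    """
    theta = np.deg2rad(rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return gain * (rot @ np.array([dx, dy]))

# Normal mapping: a rightward joystick deflection moves the cursor rightward.
print(np.round(joystick_to_cursor(1.0, 0.0), 6))                     # [1. 0.]
# Rotated mapping: the same deflection now drives the cursor along the orthogonal axis.
print(np.round(joystick_to_cursor(1.0, 0.0, rotation_deg=90.0), 6))  # [0. 1.]
```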


2019, Vol 26 (6), pp. 557-566
Author(s): Nasrin Mohammadhasani, Tindara Caprì, Andrea Nucita, Giancarlo Iannizzotto, Rosa Angela Fabio

Abstract
Objective: Several studies agree on the link between attention and eye movements during reading. It is also well established that attention and working memory (WM) interact. A question that could be addressed to better understand these relationships is: to what extent can an attention deficit affect eye movements and, consequently, the remembering of a word? The main aims of the present study were (1) to compare visual patterns between children with Attention Deficit Hyperactivity Disorder (ADHD) and typically developing (TD) children during a visual task on word stimuli; (2) to examine WM accuracy for the word stimuli; and (3) to compare the dynamics of the visual scan path in the two groups.
Method: A total of 49 children with ADHD, age and sex matched with 32 TD children, were recruited. We used eye-tracking technology within which the Word Memory Test was implemented. To characterize participants' scan paths, two measures were used: the ordered direction of reading and the entropy index.
Results: The ADHD group showed poorer WM than the TD group. Unlike the TD children, they did not follow a typical scan path across the words; their visual scanning was discontinuous, uncoordinated, and chaotic. The ADHD group also showed a higher entropy index across the four categories of saccades than the TD group.
Conclusions: The findings are discussed along two lines: the relationship between an atypical visual scan path and WM, and the training implication that redirecting the dynamics of the visual scan path in ADHD may improve WM.
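The entropy index reported in the Results can be read as a Shannon entropy computed over the distribution of saccades across categories (here, four saccade categories): the more evenly saccades are spread over categories, the higher the entropy and the less ordered the scan path. The sketch below illustrates this idea in Python; the category labels are illustrative, and the exact formula may differ from the measure used in the study.

```python
import math
from collections import Counter

def scanpath_entropy(saccade_categories):
    """Shannon entropy (in bits) of the distribution of saccades over categories.

    saccade_categories: a sequence of labels such as "forward", "backward",
    "upward", "downward". Higher entropy indicates a more disordered,
    less predictable scan path.
    """
    counts = Counter(saccade_categories)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A regular, mostly left-to-right reading pattern yields low entropy...
print(scanpath_entropy(["forward"] * 9 + ["backward"]))                      # ~0.47 bits
# ...whereas chaotic scanning spread over all four categories yields the maximum.
print(scanpath_entropy(["forward", "backward", "upward", "downward"] * 3))   # 2.0 bits
```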


2008, Vol 3 (2), pp. 149-175
Author(s): Ian Cunnings, Harald Clahsen

The avoidance of regular but not irregular plurals inside compounds (e.g., *rats eater vs. mice eater) has been one of the most widely studied morphological phenomena in the psycholinguistics literature. To examine whether the constraints responsible for this contrast have any general significance beyond compounding, we investigated derived word forms containing regular and irregular plurals in two experiments. Experiment 1 was an offline acceptability judgment task, and Experiment 2 measured eye movements during the reading of derived words containing regular and irregular plurals as well as uninflected base nouns. The results from both experiments show that the constraint against regular plurals inside compounds generalizes to derived words. We argue that this constraint cannot be reduced to phonological properties, but is instead morphological in nature. The eye-movement data provide detailed information on the time course of processing derived word forms, indicating that early stages of processing are affected by a general constraint that disallows inflected words from feeding derivational processes, and that the more specific constraint against regular plurals comes in at a later stage of processing. We argue that these results are consistent with stage-based models of language processing.


Author(s): Adrian Staub, Keith Rayner, Alexander Pollatsek, Jukka Hyönä, Helen Majewski
