Time-locked Perceptual Fading Induced by Visual Transients

2003 · Vol 15 (5) · pp. 664-672
Author(s): Ryota Kanai, Yukiyasu Kamitani

After prolonged fixation, a stationary object placed in the peripheral visual field fades and disappears from our visual awareness, especially at low luminance contrast (the Troxler effect). Here, we report that similar fading can be triggered by visual transients, such as additional visual stimuli flashed near the object, apparent motion, or a brief removal of the object itself (blinking). The fading occurs even without prolonged adaptation and is time-locked to the presentation of the visual transients. Experiments showed that the effect of a flashed object decreased monotonically as a function of its distance from the target object. Consistent with this result, when apparent motion consisting of a sequence of flashes was presented between stationary disks, the target disks perceptually disappeared as if erased by the moving object. Blinking the target disk, instead of flashing an additional visual object, proved sufficient to induce the fading. The effect of blinking peaked at a blink duration of around 80 msec. Our findings reveal a unique mechanism that controls the visibility of visual objects in a spatially selective and time-locked manner in response to transient visual inputs. Possible mechanisms underlying this phenomenon are discussed.

2018 · Vol 30 (1) · pp. 55-64
Author(s): Lisa N. Jefferies, Vincent Di Lollo

We report a novel visual phenomenon called the rejuvenation effect. It causes an “old” object that has been on view for some time to acquire the properties of a suddenly appearing new object. In each experiment, a square outline was displayed continuously on one side of fixation. The target (an asterisk) was presented either inside the square or on the opposite side of fixation. On half of the trials, a transient visual or auditory event preceded the target. In Experiment 1a (N = 139), response times were faster when the target appeared inside the square, but only when it was preceded by a transient event, consistent with the network-reset theory of locus coeruleus-norepinephrine (LC-NE) phasic activation. Three further experiments confirmed the predictions of network-reset theory, including the absence of rejuvenation in participants with atypical LC-NE functioning (individuals with symptoms of autism spectrum disorder). These findings provide new perspectives on what causes a visual object to be perceived as new.


2012 · Vol 25 (0) · pp. 117
Author(s): Yi-Chuan Chen, Gert Westermann

Infants are able to learn novel associations between visual objects and auditory linguistic labels (such as a dog and the sound /dɔg/) by the end of their first year of life. Surprisingly, at this age they seem to fail to learn associations between visual objects and natural sounds (such as a dog and its barking sound). Researchers have therefore suggested that linguistic learning is special (Fulkerson and Waxman, 2007) or that unfamiliar sounds overshadow visual object processing (Robinson and Sloutsky, 2010). However, in previous studies visual stimuli were paired with arbitrary sounds in contexts lacking ecological validity. In the present study, we created animations of two novel animals and paired them with two realistic animal calls to construct two audiovisual stimuli. In the training phase, each animal was presented in motions that mimicked animal behaviour in real life: in a short movie, the animal ran (or jumped) from the periphery to the center of the monitor and made calls while raising its head. In the test phase, static images of both animals were presented side by side and the sound for one of the animals was played. Infant looking times to each stimulus were recorded with an eye tracker. We found that, following the sound, 12-month-old infants preferentially looked at the animal corresponding to the sound. These results show that 12-month-old infants are able to learn novel associations between visual objects and natural sounds in an ecologically valid situation, thereby challenging our current understanding of the development of crossmodal association learning.


2018 · Vol 3 (1) · pp. e000139
Author(s): Lee Lenton

Objective: To compare the performance of adults with multifocal intraocular lenses (MIOLs) in a realistic flight simulator with that of age-matched adults with monofocal intraocular lenses (IOLs).

Methods and Analysis: Twenty-five adults aged ≥60 years with either bilateral MIOL or bilateral IOL implantation were enrolled. Visual function tests included visual acuity and contrast sensitivity under photopic and mesopic conditions, defocus curves, and low-luminance contrast sensitivity in the presence and absence of glare (Mesotest II), as well as halo size measurement using an app-based halometer (Aston halometer). Flight simulator performance was assessed in a fixed-base flight simulator (PS4.5). Subjects completed three simulated landing runs in both daytime and night-time conditions in a randomised order, including a series of visual tasks critical for safety.

Results: Of the 25 age-matched enrolled subjects, 13 had bilateral MIOLs and 12 had bilateral IOLs. Photopic and mesopic visual acuity and contrast sensitivity did not differ significantly between the groups. Larger halo areas were seen in the MIOL group, and Mesotest values were significantly worse in the MIOL group, both with and without glare. The defocus curves showed better uncorrected visual acuity at intermediate and near distances for the MIOL group. There were no significant differences between the groups in performance of the vision-related flight simulator tasks.

Conclusions: The performance of vision-related flight simulator tasks was not significantly impaired in older adults with MIOLs compared with age-matched adults with monofocal IOLs. These findings suggest that MIOLs do not impair visual performance in a flight simulator.


Perception · 2018 · Vol 47 (9) · pp. 966-975
Author(s): Shinyoung Jung, Yosun Yoon, Suk Won Han

People’s attention is readily attracted to stimuli matching the contents of their working memory. This memory-driven attentional capture has been demonstrated in simplified, controlled laboratory settings. The present study investigated whether working memory contents capture attention in a setting that closely resembles a real-world environment. In the experiment, participants searched for a target object in real-world indoor scenes while maintaining a visual object in working memory. To create a setting similar to a real-world environment, images taken from IKEA®’s online catalogue were used. The results showed that participants’ attention was biased toward a working-memory-matching object, interfering with the target search. This was so even when participants did not expect a memory-matching stimulus to appear in the search array. These results suggest that working memory can bias attention in complex, natural environments and that this memory-driven attentional capture in real-world settings takes place automatically.


Proceedings · 2020 · Vol 39 (1) · pp. 18
Author(s): Nenchoo, Tantrairatn

This paper presents real-time estimation of the 3D position of a UAV using an Intel RealSense depth camera D435i with a visual object detection technique as a local positioning system for indoor environments. The global positioning system (GPS) can determine a UAV's position outdoors, but it cannot do so indoors. The D435i depth stereo camera is therefore proposed as a ground-based observer that determines the UAV's indoor position in place of GPS. Deep-learning object detection identifies the target object, and the depth camera specifies the target's 2D position; the depth coordinate is estimated from the stereo camera and the target's size. In the experiment, a Parrot Bebop2 serving as the target object was detected using YOLOv3 as a real-time object detection system. Because a trained fully convolutional neural network (FCNN) model is essential for object detection, the model was trained on the Bebop2 only. In conclusion, the proposed system can specify the 3D position of the Bebop2 in an indoor environment. In future work, this research will be extended to visual navigation control of drone swarms.
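The geometric core of such a pipeline, turning a 2D detection plus a depth reading into a camera-frame 3D position, can be sketched with standard pinhole back-projection. This is a minimal illustration, not the authors' implementation; the intrinsics and the bounding box below are assumed placeholder values, not calibration data from the paper.

```python
import numpy as np

# Assumed pinhole intrinsics for a 640x480 stream (placeholders, not
# calibrated values from any particular D435i unit).
FX, FY = 615.0, 615.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def bbox_to_3d(bbox, depth_m):
    """Back-project the centre of an (x1, y1, x2, y2) detection box,
    given a depth reading in metres, to camera-frame (X, Y, Z)."""
    u = (bbox[0] + bbox[2]) / 2.0   # pixel column of the box centre
    v = (bbox[1] + bbox[3]) / 2.0   # pixel row of the box centre
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Example: a YOLO-style box centred on the principal point, 2.5 m away,
# maps to a point straight ahead of the camera.
print(bbox_to_3d((300, 220, 340, 260), 2.5))  # -> [0. 0. 2.5]
```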


Perception · 1996 · Vol 25 (11) · pp. 1263-1280
Author(s): Walter F Bischof, Adriane E Seiffert, Vincent Di Lollo

The characteristics of the sustained input to directionally selective motion sensors were examined in three human psychophysical studies on directional-motion discrimination. Apparent motion was produced by displaying a group of dots in two frames (F1 and F2), where F2 was a translated version of F1. All stimuli included parts that contained both F1 and F2 (combined images) and parts containing only F1 or F2 (single images). All displays began with a single image (F1), continued with the combined image, and ended with F2. Six durations of single and of combined images (10, 20, 40, 80, 160, or 320 ms) were crossed factorially. As the duration of the single image was increased, perception of directional motion first improved, and then declined at longer durations. This outcome contrasted with the monotonic increment obtained in earlier studies under low-luminance conditions. To account for the entire pattern of results, earlier models of the Reichardt motion sensor were modified so as to include a mixed transient–sustained input to one of the filters of the sensor. Predictions from the new model were tested and confirmed in two experiments carried out under both low-luminance and high-luminance viewing conditions.
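For readers unfamiliar with the sensor being modified, the basic correlation-type (Reichardt) detector can be sketched in a few lines. This shows the textbook opponent detector only, with a pure delay standing in for the temporal filters; it does not include the mixed transient–sustained input proposed in the paper, and all names and values are illustrative.

```python
import numpy as np

def reichardt_response(left, right, delay):
    """Textbook Reichardt detector: each subunit multiplies one input
    with a delayed copy of its neighbour; the opponent stage subtracts
    the mirror-symmetric subunit. `left` and `right` are luminance time
    series at two adjacent locations; `delay` is in samples and stands
    in for the low-pass filter stage of the full model."""
    d_left = np.roll(left, delay)    # delayed copy of the left input
    d_right = np.roll(right, delay)  # delayed copy of the right input
    # Positive output signals left-to-right motion, negative the reverse.
    return np.mean(d_left * right - left * d_right)

# Example: a periodic pattern reaching the right sensor `delay` samples
# after the left one yields a positive (rightward) response.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 50)
print(reichardt_response(signal, np.roll(signal, 4), 4) > 0)  # True
```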


1995 · Vol 6 (3) · pp. 182-186
Author(s): Steven Yantis

The human visual system does not rigidly preserve the properties of the retinal image as neural signals are transmitted to higher areas of the brain. Instead, it generates a representation that captures stable surface properties despite a retinal image that is often fragmented in space and time because of occlusion caused by object and observer motion. The recovery of this coherent representation depends at least in part on input from an abstract representation of three-dimensional (3-D) surface layout. In the two experiments reported, a stereoscopic apparent motion display was used to investigate the perceived continuity of a briefly interrupted visual object. When a surface appeared in front of the object's location during the interruption, the object was more likely to be perceived as persisting through the interruption (behind an occluder) than when the surface appeared behind the object's location under otherwise identical stimulus conditions. The results reveal the influence of 3-D surface-based representations even in very simple visual tasks.


2016 · Vol 283 (1831) · pp. 20160692
Author(s): Alessandro Benedetto, Donatella Spinelli, M. Concetta Morrone

Recent evidence suggests that ongoing brain oscillations may be instrumental in binding and integrating multisensory signals. In this experiment, we investigated the temporal dynamics of visual–motor integration processes. We show that action modulates sensitivity to visual contrast discrimination in a rhythmic fashion at frequencies of about 5 Hz (in the theta range), for up to 1 s after the execution of an action. To understand the origin of the oscillations, we measured oscillations in contrast sensitivity at different levels of luminance, which is known to affect endogenous brain rhythms, boosting the power of alpha frequencies. We found that the frequency of the oscillation in sensitivity increased at low luminance, probably reflecting the shift in the mean endogenous brain rhythm towards higher frequencies. Importantly, at both high and low luminance, contrast discrimination showed a rhythmic motor-induced suppression effect, with the suppression occurring earlier at low luminance. We suggest that oscillations play a key role in sensory–motor integration, and that the motor-induced suppression may reflect the first manifestation of a rhythmic oscillation.
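As an illustration of how such a behavioural rhythm can be quantified, one can fit a sinusoid to sensitivity measured at different delays after action onset and read off the best-fitting frequency. The data below are synthetic and the ~5 Hz modulation merely mirrors the reported effect; nothing here reproduces the paper's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def oscillation(t, amp, freq, phase, baseline):
    """Sinusoidal model of sensitivity as a function of delay t (s)."""
    return baseline + amp * np.sin(2 * np.pi * freq * t + phase)

# Synthetic sensitivity time course: a 5 Hz rhythm plus noise.
rng = np.random.default_rng(0)
delays = np.linspace(0.05, 1.0, 40)                    # seconds after action
sensitivity = oscillation(delays, 0.3, 5.0, 0.8, 1.5)
sensitivity += rng.normal(0.0, 0.05, delays.size)

# Fit the model and recover the oscillation frequency (~5 Hz).
params, _ = curve_fit(oscillation, delays, sensitivity,
                      p0=[0.2, 4.0, 0.0, 1.5])
print(f"estimated frequency: {params[1]:.2f} Hz")
```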


Author(s): Matthew J Davidson, Will Mithen, Hinze Hogendoorn, Jeroen J.A. van Boxtel, Naotsugu Tsuchiya

Abstract: Although visual awareness of an object typically increases neural responses, we identify a neural response that increases prior to perceptual disappearances and that scales with the amount of invisibility reported during perceptual filling-in. These findings challenge long-held assumptions regarding the neural correlates of consciousness and entrained visually evoked potentials by showing that the strength of stimulus-specific neural activity can encode the conscious absence of a stimulus.

Significance Statement: The focus of attention and the contents of consciousness frequently overlap. Yet what happens if this common correlation is broken? To test this, we asked human participants to attend and report on the invisibility of four visual objects that seemed to disappear yet actually remained on screen. We found that neural activity increased, rather than decreased, when targets became invisible. This coincided with measures of attention that also increased when stimuli disappeared. Together, our data support recent suggestions that attention and conscious perception are distinct and separable. In our experiment, neural measures more strongly followed attention.

