visual onset
Recently Published Documents


TOTAL DOCUMENTS: 21 (five years: 5)
H-INDEX: 10 (five years: 1)

2021 · Vol 12
Author(s): Shao Li, Qi Ding, Yichen Yuan, Zhenzhu Yue

People can discriminate the synchrony between audio-visual scenes. However, the sensitivity of audio-visual synchrony perception can be affected by many factors. Using a simultaneity judgment task, the present study investigated whether the synchrony perception of complex audio-visual stimuli was affected by audio-visual causality and stimulus reliability. In Experiment 1, the results showed that audio-visual causality could increase one's sensitivity to audio-visual onset asynchrony (AVOA) for both action and speech stimuli. Moreover, participants were more tolerant of the AVOA of speech stimuli than of action stimuli in the high-causality condition, whereas no significant difference between the two kinds of stimuli was found in the low-causality condition. In Experiment 2, the speech stimuli were manipulated to have either high or low stimulus reliability. The results revealed a significant interaction between audio-visual causality and stimulus reliability. Under the low-causality condition, the percentage of "synchronous" responses to audio-visually intact stimuli was significantly higher than to visually intact/auditorily blurred stimuli and to audio-visually blurred stimuli. In contrast, no significant difference among the levels of stimulus reliability was observed under the high-causality condition. Our study supports a synergistic effect of top-down and bottom-up processing in audio-visual synchrony perception.
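A simultaneity judgment task of this kind is commonly analyzed by fitting a psychometric function to the proportion of "synchronous" responses across onset asynchronies, with the width of the fitted curve indexing tolerance of AVOA. The sketch below is a minimal illustration of that common approach, assuming a Gaussian-shaped synchrony curve; it is not the authors' analysis code, and the data and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa, peak, mu, sigma):
    """Gaussian-shaped proportion of 'synchronous' responses vs. SOA (ms)."""
    return peak * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical data: SOAs (audio-leading negative) and observed response rates.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

params, _ = curve_fit(synchrony_curve, soas, p_sync, p0=[1.0, 0.0, 150.0])
peak, mu, sigma = params
# sigma (curve width) is one common index of the temporal binding window;
# a wider curve means more tolerance of audio-visual onset asynchrony.
print(f"peak={peak:.2f}, PSS={mu:.1f} ms, width={sigma:.1f} ms")
```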


2020
Author(s): Devin H. Kehoe, Jennifer Lewis, Mazyar Fallah

Abstract
Successful oculomotor target selection often requires discriminating visual features, but it remains contentious whether the oculomotor substrates encoding saccade vectors functionally contribute to this process. One possibility is that visual features are discriminated cortically and oculomotor modules select the object with the highest activation in the set of preprocessed cortical object representations; an alternative possibility is that oculomotor modules actively discriminate potential targets based on visual features. If the latter view is correct, these modules should not require input from specialized visual cortices encoding the task-relevant features. We therefore examined whether the latency of visual onset responses elicited by abrupt distractor onsets is consistent with input from specialized visual cortices, by non-invasively measuring human saccade metrics (saccade curvature, endpoint deviations, saccade frequency, error proportion) as a function of distractor processing time for novel, visually complex distractors that had to be discriminated from a target to guide saccades. Visual onset response latencies were ~110 ms, consistent with projections from anterior cortical sites specialized for object processing. Surprisingly, oculomotor visual onset responses encoded features: when we manipulated the visual similarity between targets and distractors, we observed an increased visual onset response magnitude and duration when the distractor was highly similar to the target, which was not attributable to an inhibitory processing delay. Visual onset responses were also dynamically modulated by executive function, as these responses were anticipatorily extinguished over the time course of the experiment. As expected, the latency of distractor-related inhibition was modulated by the behavioral relevance of the distractor.
Significance Statement
We provide novel insights into the role of the oculomotor system in saccadic target selection by challenging the convention that neural substrates encoding oculomotor vectors functionally contribute to target discrimination. Our data show that the oculomotor system selects a winner from among the preprocessed object representations output by specialized visual cortices, as opposed to discriminating visual features locally. We also challenge the convention that oculomotor visual onset responses are feature-invariant, as they encoded task relevance.
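One of the saccade metrics named above, saccade curvature, is commonly quantified as the peak perpendicular deviation of the eye trajectory from the straight line connecting saccade start and end, normalized by amplitude. The authors' exact metric definitions are not reproduced here; the following is a minimal sketch of that common formulation, with a hypothetical trajectory.

```python
import numpy as np

def saccade_curvature(xy):
    """Signed peak perpendicular deviation of a saccade trajectory from
    the straight start-to-end line, as a fraction of saccade amplitude.
    xy: (n_samples, 2) array of eye positions."""
    start, end = xy[0], xy[-1]
    direction = end - start
    amplitude = np.linalg.norm(direction)
    unit = direction / amplitude
    normal = np.array([-unit[1], unit[0]])          # perpendicular to the saccade
    deviations = (xy - start) @ normal              # signed distance of each sample
    peak = deviations[np.abs(deviations).argmax()]  # largest excursion, keep sign
    return peak / amplitude

# Hypothetical trajectory curving toward a distractor before correcting.
t = np.linspace(0, 1, 50)
traj = np.column_stack([10 * t, 0.8 * np.sin(np.pi * t)])
print(saccade_curvature(traj))
```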


2020
Author(s): F. Cervantes Constantino, T. Sánchez-Costa, G.A. Cipriani, A. Carboni

Abstract
Surroundings continually propagate audiovisual (AV) signals, and by attending we make clear and precise sense of those that matter at any given time. In such cases, parallel visual and auditory contributions may jointly serve as a basis for selection. It is unclear what hierarchical effects arise when initial selection criteria are unimodal, or involve uncertainty. Uncertainty in sensory information is a factor considered in computational models of attention that propose precision weighting as a primary mechanism for selection. Here, the effects of visuospatial selection on auditory processing were investigated with electroencephalography (EEG). We examined the encoding of random tone pips probabilistically associated with spatially attended visual changes, via a temporal response function (TRF) model of the auditory EEG time series. AV precision, or temporal uncertainty, was manipulated across stimuli while participants sustained endogenous visuospatial attention. The TRF data showed that cross-modal modulations were dominated by AV precision between auditory and visual onset times. The roles of unimodal (visuospatial and auditory) uncertainties, each a consequence of non-synchronous AV presentations, were further investigated. The TRF data demonstrated that visuospatial uncertainty in attended sector size determines transfer effects by enabling the visual priming of tones when relevant for auditory segregation, in line with top-down processing timescales. Auditory uncertainty in distractor proportion, on the other hand, determined the susceptibility of early tone encoding to automatic change driven by incoming visual update processing. The findings provide a hierarchical account of the role of uni- and cross-modal sources of uncertainty in the neural encoding of sound dynamics in a multimodal attention task.
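A temporal response function of the kind used here is typically estimated with lagged ridge regression (the mTRF approach): the EEG at each time point is modeled as a weighted sum of stimulus values at a range of lags. This is not the authors' code; the sketch below assumes a single EEG channel, a binary tone-onset regressor, and a fixed ridge parameter, all hypothetical.

```python
import numpy as np

def estimate_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.4, alpha=100.0):
    """Ridge-regression temporal response function: maps a 1-D stimulus
    vector to one EEG channel over lags tmin..tmax (in seconds)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Lagged design matrix: column j holds the stimulus shifted by lags[j].
    X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
    if lags.max() > 0:
        X[: lags.max()] = 0      # remove wrap-around at the start
    if lags.min() < 0:
        X[lags.min():] = 0       # remove wrap-around at the end
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

# Hypothetical data: a random tone-pip onset train and a noisy EEG channel
# built from an assumed exponential response kernel.
fs, n = 128, 128 * 60
rng = np.random.default_rng(0)
stim = (rng.random(n) < 0.01).astype(float)
kernel = np.exp(-np.arange(0, 0.3, 1 / fs) / 0.05)
eeg = np.convolve(stim, kernel)[:n] + 0.5 * rng.standard_normal(n)
lags_s, trf = estimate_trf(stim, eeg, fs)   # trf should peak at short lags
```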


2019 · Vol 62 (10) · pp. 3860-3875
Author(s): Kaylah Lalonde, Lynne A. Werner

Purpose: This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit.
Method: Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1-3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable.
Results: Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset-offset cue for detection, but the same cue did not improve their discrimination. The onset-offset cue benefited infants for both detection and discrimination. Whereas the onset-offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task.
Conclusions: These results suggest that infants' use of visual onset-offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
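The mixed linear modeling referred to here typically specifies fixed effects of age group and visual condition with a random intercept per participant. The sketch below illustrates that general model structure with statsmodels; the data frame, column names, and scores are hypothetical, and the formula is not the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one sensitivity score per
# participant x condition cell (all names are made up).
df = pd.DataFrame({
    "score":     [1.2, 1.8, 1.5, 0.9, 1.6, 1.3, 1.1, 1.9, 1.4, 1.0, 1.7, 1.2],
    "group":     ["infant"] * 6 + ["adult"] * 6,
    "condition": ["AO", "AV", "onset_offset"] * 4,
    "subject":   ["s1"] * 3 + ["s2"] * 3 + ["s3"] * 3 + ["s4"] * 3,
})

# Fixed effects of age group and visual condition, random intercept per
# participant -- the general mixed-model structure, not the authors' model.
model = smf.mixedlm("score ~ group * condition", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```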


2019
Author(s): Talia Brandman, Chiara Avancini, Olga Leticevscaia, Marius V. Peelen

Abstract
Sounds (e.g., barking) help us to visually identify objects (e.g., a dog) that are distant or ambiguous. While neuroimaging studies have revealed neuroanatomical sites of audiovisual interactions, little is known about the time-course by which sounds facilitate visual object processing. Here we used magnetoencephalography (MEG) to reveal the time-course of the facilitatory influence of natural sounds (e.g., barking) on visual object processing, and compared this to the facilitatory influence of spoken words (e.g., "dog"). Participants viewed images of blurred objects preceded by a task-irrelevant natural sound, a spoken word, or uninformative noise. A classifier was trained to discriminate multivariate sensor patterns evoked by animate and inanimate intact objects with no sounds, presented in a separate experiment, and tested on sensor patterns evoked by the blurred objects in the three auditory conditions. Results revealed that both sounds and words, relative to uninformative noise, significantly facilitated visual object category decoding between 300-500 ms after visual onset. We found no evidence for earlier facilitation by sounds than by words. These findings provide evidence for a semantic route of facilitation by both natural sounds and spoken words, whereby the auditory input first activates semantic object representations, which then modulate the visual processing of objects.
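The cross-decoding scheme described here, training on intact-object sensor patterns and testing on blurred-object trials separately at each time point, can be sketched as follows. This is an illustration, not the authors' pipeline: the classifier choice, array shapes, and data are all hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cross_decode(train_X, train_y, test_X, test_y):
    """Train on intact-object sensor patterns, test on blurred-object
    patterns, separately at each time point.
    train_X/test_X: (n_trials, n_sensors, n_times); y: animate=1, inanimate=0."""
    n_times = train_X.shape[2]
    accuracy = np.empty(n_times)
    for t in range(n_times):
        clf = LinearDiscriminantAnalysis()
        clf.fit(train_X[:, :, t], train_y)
        accuracy[t] = clf.score(test_X[:, :, t], test_y)
    return accuracy  # compare auditory conditions in the 300-500 ms window

# Hypothetical MEG data: 100 training trials, 60 test trials, 272 sensors.
rng = np.random.default_rng(1)
train_X = rng.standard_normal((100, 272, 120))
train_y = rng.integers(0, 2, 100)
test_X = rng.standard_normal((60, 272, 120))
test_y = rng.integers(0, 2, 60)
acc = cross_decode(train_X, train_y, test_X, test_y)
```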


2017 · Vol 33 (6) · pp. 464-468
Author(s): Matthew S. Tenan, Andrew J. Tweedell, Courtney A. Haynes

The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature, yet no studies have examined the effectiveness of onset-detection algorithms for intramuscular EMG. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and five female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by 3 blinded reviewers. Since the reliability of visual onset determination was high (ICC(1,1): 0.92), the mean of the 6 visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were eliminated stepwise via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset identification, (3) linear regression analysis, and (4) Bland-Altman plots. The top-performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.
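A single-changepoint Bayesian analysis of rectified EMG can be sketched by scoring every candidate split of the trace into a "rest" and an "active" Gaussian segment and normalizing the resulting likelihoods into a posterior under a uniform prior. This is a minimal illustration of the idea, not any of the algorithm variants benchmarked in the study; the signal is simulated.

```python
import numpy as np

def changepoint_posterior(signal):
    """Posterior over a single changepoint in a rectified EMG trace,
    assuming Gaussian 'rest' and 'active' segments and a uniform prior.
    Returns (posterior, most probable onset index)."""
    n = len(signal)
    loglik = np.full(n, -np.inf)
    for k in range(5, n - 5):                      # keep segments non-trivial
        pre, post = signal[:k], signal[k:]
        # Profile log-likelihood with per-segment MLE mean and variance
        # (terms constant across k are dropped).
        loglik[k] = (-0.5 * k * np.log(pre.var() + 1e-12)
                     - 0.5 * (n - k) * np.log(post.var() + 1e-12))
    posterior = np.exp(loglik - loglik.max())
    posterior /= posterior.sum()
    return posterior, int(posterior.argmax())

# Hypothetical trial: quiet baseline, then muscle activity at sample 1000.
rng = np.random.default_rng(2)
emg = np.concatenate([0.05 * rng.standard_normal(1000),
                      0.50 * rng.standard_normal(1000)])
_, onset = changepoint_posterior(np.abs(emg))      # rectify, then detect
print(onset)                                       # should land near 1000
```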


2017 · Vol 29 (4) · pp. 637-651
Author(s): Tim C. Kietzmann, Anna L. Gert, Frank Tong, Peter König

Faces provide a wealth of information, including the identity of the seen person and social cues, such as the direction of gaze. Crucially, different aspects of face processing require distinct forms of information encoding. Another person's attentional focus can be derived from a view-dependent code. In contrast, identification benefits from invariance across all viewpoints. Different cortical areas have been suggested to subserve these distinct functions. However, little is known about the temporal aspects of differential viewpoint encoding in the human brain. Here, we combine EEG with multivariate data analyses to resolve the dynamics of face processing with high temporal resolution. This revealed a distinct sequence of viewpoint encoding. Head orientations were encoded first, starting after around 60 msec of processing. Shortly afterward, peaking around 115 msec after stimulus onset, a different encoding scheme emerged. At this latency, mirror-symmetric viewing angles elicited highly similar cortical responses. Finally, about 280 msec after visual onset, EEG response patterns demonstrated a considerable degree of viewpoint invariance across all viewpoints tested, with the noteworthy exception of the front-facing view. Taken together, our results indicate that the processing of facial viewpoints follows a temporal sequence of encoding schemes, potentially mirroring different levels of computational complexity.
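Mirror-symmetric viewpoint encoding of the kind reported around 115 msec is often assessed by correlating sensor patterns across viewpoint pairs over time: at latencies with mirror-symmetric coding, patterns for opposite viewing angles (e.g., -60° and +60°) correlate more strongly than non-mirror pairs. The sketch below illustrates such a time-resolved pattern correlation; the data, viewpoints, and sensor counts are hypothetical, and this is not the authors' multivariate pipeline.

```python
import numpy as np

def pairwise_pattern_similarity(patterns):
    """patterns: dict viewpoint_deg -> (n_sensors, n_times) mean EEG pattern.
    Returns time courses of pattern correlation for a mirror-symmetric pair
    (-60 vs +60 deg) and a non-mirror control pair (-60 vs 0 deg)."""
    def corr_timecourse(a, b):
        a = a - a.mean(0)                # center over sensors per time point
        b = b - b.mean(0)
        num = (a * b).sum(0)
        den = np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))
        return num / den

    mirror = corr_timecourse(patterns[-60], patterns[60])
    control = corr_timecourse(patterns[-60], patterns[0])
    return mirror, control  # mirror > control at a latency suggests symmetry

# Hypothetical averaged 64-channel patterns for three head orientations.
rng = np.random.default_rng(3)
views = {v: rng.standard_normal((64, 200)) for v in (-60, 0, 60)}
mirror, control = pairwise_pattern_similarity(views)
```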


2015 · Vol 27 (7) · pp. 1360-1375
Author(s): Heida M. Sigurdardottir, David L. Sheinberg

The lateral intraparietal area (LIP) is thought to play an important role in guiding where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a neuron's preferred spatial location. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless remain distinct, even when they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.


2014 · Vol 197 (5) · pp. 913-923
Author(s): François Daigle, Sylvain Lerat, Giselda Bucca, Édith Sanssouci, Colin P. Smith, ...

Although Streptomyces coelicolor is not resistant to tellurite, it possesses several TerD domain-encoding (tdd) genes of unknown function. To elucidate the function of tdd8, the transcriptomes of S. coelicolor strain M145 and of a tdd8 deletion mutant derivative (the Δtdd8 strain) were compared. Several orthologs of Mycobacterium tuberculosis genes involved in dormancy survival were upregulated in the deletion mutant at the visual onset of prodiginine production. These genes are organized in a putative redox stress response cluster comprising two large loci. A binding motif similar to the dormancy survival regulator (DosR) binding site of M. tuberculosis has been identified in the upstream sequences of most genes in these loci. A predicted role for these genes in the redox stress response is supported by the low NAD+/NADH ratio in the Δtdd8 strain. This S. coelicolor gene cluster was shown to be induced by hypoxia and NO stress. While the tdd8 deletion mutant (the Δtdd8 strain) was unable to maintain calcium homeostasis in a calcium-depleted medium, the addition of Ca2+ to the Δtdd8 culture medium reduced the expression of several genes of the redox stress response cluster. The results shown in this work are consistent with Tdd8 playing a significant role in calcium homeostasis and redox stress adaptation.


2013 · Vol 4
Author(s): Sanne ten Oever, Alexander Sack, Katherine L. Wheat, Nina Bien, Nienke van Atteveldt
