Visual mismatch negativity to disappearing parts of objects and textures

2018
Author(s): Zsófia Anna Gaál, István Czigler, István Sulykos, Domonkos File, Petia Kojouharova

Visual mismatch negativity (vMMN), an event-related signature of the automatic detection of events violating sequential regularities, is traditionally investigated in response to the onset of frequent (standard) and rare (deviant) events. In a previous study [4] we obtained vMMN to vanishing parts of continuously presented objects (diamonds with diagonals), and we concluded that the offset-related vMMN is a model of sensitivity to irregular partial occlusion of objects. In the present study we replicated the previous results, but to test the object-related interpretation we added a new condition with a set of separate visual stimuli: a texture of bars with two orientations. In the texture condition (offset of bars with irregular vs. regular orientation) we obtained vMMN, showing that the continuous presence of objects is not necessary for offset-related vMMN. However, unlike in the object-related condition, reappearance of the previously vanishing lines also elicited vMMN. Formally, reappearance of the stimuli is an event with probability 1.0, and according to the results, reappearance in the object condition is treated as an expected event; offset and onset of texture elements, however, seem to be treated separately by the system underlying vMMN. An advantage of the present method is that the whole stimulus set is present during the inter-stimulus interval and saturates the visual structures sensitive to stimulus input. Accordingly, the offset-related vMMN is less sensitive to low-level adaptation differences between the deviant and standard stimuli.
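As an illustration of how an offset-related vMMN is typically quantified, the sketch below computes a deviant-minus-standard difference wave from epoched EEG and averages it over a posterior channel and a 100-250 ms window. The array names, sampling rate, channel index, and window are illustrative assumptions, not the authors' analysis pipeline.

```python
# Minimal sketch: deviant-minus-standard difference wave ("vMMN") from epoched EEG.
# All names, sizes, and the 100-250 ms window are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                    # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1.0 / fs)      # epoch from -100 ms to +500 ms around the offset

# Hypothetical epochs: (n_trials, n_channels, n_samples), time-locked to stimulus-part offset
standard_epochs = rng.normal(size=(200, 32, times.size))
deviant_epochs = rng.normal(size=(40, 32, times.size))

# Average across trials to obtain ERPs, then subtract: the vMMN is the
# deviant-minus-standard difference, typically a posterior negativity ~100-250 ms.
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)
difference_wave = erp_deviant - erp_standard            # (n_channels, n_samples)

# Mean amplitude in an assumed 100-250 ms window at an assumed posterior channel
win = (times >= 0.10) & (times <= 0.25)
posterior_channel = 30                                   # hypothetical occipital site
vmmn_amplitude = difference_wave[posterior_channel, win].mean()
print(f"vMMN mean amplitude (a.u.): {vmmn_amplitude:.3f}")
```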

2016
Vol 29 (4-5), pp. 319-335
Author(s): Riku Asaoka, Jiro Gyoba

Previous studies have shown that the perceived duration of visual stimuli can be strongly distorted by auditory stimuli presented simultaneously. In this study, we examine whether sounds presented separately from target visual stimuli alter the perceived duration of the target’s presentation. The participants’ task was to classify the perceived duration of the target visual stimuli into four categories. Our results demonstrate that a sound presented before and after a visual target increases or decreases the perceived visual duration, depending on the inter-stimulus interval between the sounds and the visual stimulus. In addition, three tones presented before and after a visual target did not increase or decrease the perceived visual duration, indicating that auditory perceptual grouping prevents intermodal perceptual grouping and eliminates the crossmodal effect. These findings suggest that auditory–visual integration, rather than a high arousal state caused by the preceding sound, can induce distortions of perceived visual duration, and that inter- and intramodal perceptual grouping plays an important role in crossmodal time perception. The findings are discussed with reference to Scalar Expectancy Theory.


1968
Vol 20 (1), pp. 51-61
Author(s): Michael C. Corballis, William Lieberman, Dalbir Bindra

Four sets of paired visual stimuli (OO, XX, XO, or OX) were judged by 48 subjects to be either “same” or “different.” Decision latencies of the “same” and “different” judgements were studied as a function of the inter-stimulus interval (ISI). In Experiments I and II, in which stimulus durations were 70 millisec., decision latencies showed marked increases when the ISI was reduced to 100 millisec., but in Experiments III and IV, in which the stimulus durations were only 40 millisec., comparable increases did not occur until the ISI was reduced to 50 millisec. These increases were more marked for “same” than for “different” judgements, although overall decision latencies were generally shorter for “same” judgements. The effects of varying ISIs and stimulus durations are interpreted in terms of masking; they fail to support a hypothesis of central intermittency.


2006
Vol 401 (1-2), pp. 178-182
Author(s): István Czigler, Júlia Weisz, István Winkler

1988
Vol 15 (2), pp. 173-178
Author(s): Sanford E. Gerber, Traci K. Davis, Kathleen M. Mastrini

Author(s): Dongxin Liu, Jiong Hu, Ruijuan Dong, Jing Chen, Gabriella Musacchia, ...

2013
Author(s): Zacharias Vamvakousis, Rafael Ramirez

P300-based brain-computer interfaces (BCIs) are especially useful for people with conditions that prevent them from communicating in the usual way (e.g. brain or spinal cord injury). However, most existing P300-based BCI systems use visual stimulation, which may not be suitable for patients with deteriorating sight (e.g. patients suffering from amyotrophic lateral sclerosis). Moreover, P300-based BCI systems typically rely on expensive equipment, which greatly limits their use outside the clinical environment. We therefore propose a multi-class BCI system based solely on auditory stimuli that uses low-cost EEG technology. We explored different combinations of timbre, pitch, and spatial auditory stimuli (TimPiSp: timbre-pitch-spatial, TimSp: timbre-spatial, and Timb: timbre-only) and three inter-stimulus intervals (150 ms, 175 ms, and 300 ms), and evaluated the system with an oddball task on 7 healthy subjects. This is the first study in which these three auditory cues are compared. After averaging several repetitions at the 175 ms inter-stimulus interval, we obtained average selection accuracies of 97.14%, 91.43%, and 88.57% for the TimPiSp, TimSp, and Timb modalities, respectively. The best subject’s accuracy was 100% in all modalities and inter-stimulus intervals. The average information transfer rate for the 150 ms inter-stimulus interval in the TimPiSp modality was 14.85 bits/min, and the best subject’s information transfer rate was 39.96 bits/min in the 175 ms timbre condition. Based on the TimPiSp modality, an auditory P300 speller was implemented and evaluated by asking users to type a 12-character phrase. Six of the 7 users completed the task. The average spelling speed was 0.56 chars/min, and the best subject’s performance was 0.84 chars/min. These results show that the proposed auditory BCI works well with healthy subjects and may constitute the basis for future implementations of more practical and affordable auditory P300-based BCI systems.
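Information transfer rates like the bits/min figures above are commonly reported with the standard Wolpaw formula, which converts class count, selection accuracy, and time per selection into bits per minute. The sketch below implements that standard formula; the class count and selection time in the example are illustrative assumptions, since the abstract does not state the exact values the authors used.

```python
# Minimal sketch: Wolpaw information transfer rate (ITR) in bits/min for an
# N-class BCI. The example class count and time per selection are assumed.
import math

def bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw bits conveyed by one selection at the given accuracy."""
    if accuracy >= 1.0:
        return math.log2(n_classes)
    if accuracy <= 1.0 / n_classes:
        return 0.0
    p = accuracy
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr_bits_per_min(n_classes: int, accuracy: float, seconds_per_selection: float) -> float:
    """ITR = bits per selection * selections per minute."""
    return bits_per_selection(n_classes, accuracy) * (60.0 / seconds_per_selection)

# Hypothetical example: an 8-class auditory oddball at 97.14% accuracy,
# taking 12 s per selection (all values assumed for illustration).
print(f"{itr_bits_per_min(8, 0.9714, 12.0):.2f} bits/min")
```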

