visual events
Recently Published Documents


TOTAL DOCUMENTS

199
(FIVE YEARS 48)

H-INDEX

29
(FIVE YEARS 3)

2021 ◽  
Author(s):  
◽  
Julie Leibrich

This research investigated recognition memory for picture stories. Jenkins, Wald and Pittenger (1978) had found that when subjects viewed a slide sequence which depicted an everyday event, in a later recognition memory test they correctly rejected distractors that were inconsistent with the event but falsely accepted consistent distractors. Jenkins interpreted this result as evidence that fusion, the abstraction of visual events, determined memory performance. He argued that subjects compared the test slides to the abstracted event and accepted those that were consistent with it. A series of experiments examined the possibility that performance was due not to fusion but to confusion with respect to the featural details of the stimulus material. This alternative interpretation argued that consistent slides had more features in common with acquisition slides than did inconsistent slides, and that the variables of semantic consistency and featural similarity had been confounded. The first experiment manipulated the acquisition material and found that subjects who saw a disordered acquisition sequence falsely accepted consistent slides. The second experiment manipulated the acquisition conditions and found that subjects who were inhibited from fusing the event, by being required to perform a non-semantic task during acquisition, still falsely accepted consistent slides. Neither result supported a fusion interpretation, since acceptance of consistent slides occurred under conditions where fusion of the event was not expected. The third experiment manipulated the test conditions and found that acceptance of both consistent and inconsistent slides was less likely with delayed tests, although fusion of the event should have led to no change in the likelihood of accepting inconsistent slides.
The fourth and fifth experiments re-examined the manipulation of presentation order and demonstrated that subjects were unable to reconstruct the event from a disordered sequence and yet still falsely accepted consistent slides. Each test of the fusion interpretation that had attempted to separate the variables of features and meaning indirectly indicated that recognition performance was not due to abstraction of the visual event. A final experiment sought explicit evidence for a featural interpretation by directly varying the featural similarity of consistent distractor slides to slides from the originally viewed sequence while keeping the degree of semantic consistency constant. Although this experiment failed to support a featural account, the converging evidence from all experiments indicated that recognition memory for picture stories is based to a large extent on the featural properties of the stimulus material. An account of performance solely in terms of visual abstraction is not adequate. Moreover, unless the variables of featural similarity and meaning can be separated directly in the test material, this recognition paradigm is unlikely to provide a means for examining the influence of schemata on recognition memory for picture stories.



2021 ◽  
Author(s):  
Marie Estelle Bellet ◽  
Marion Gay ◽  
Joachim Bellet ◽  
Bechir Jarraya ◽  
Stanislas Dehaene ◽  
...  

Theories of predictive coding hypothesize that cortical networks learn internal models of environmental regularities to generate expectations that are constantly compared with sensory inputs. The prefrontal cortex (PFC) is thought to be critical for predictive coding. Here, we show how prefrontal neuronal ensembles encode a detailed internal model of sequences of visual events and their violations. We recorded PFC ensembles in a visual local-global sequence paradigm probing low and higher-order predictions and mismatches. PFC ensembles formed distributed, overlapping representations for all aspects of the dynamically unfolding sequences, including information about image identity as well as abstract information about ordinal position, anticipated sequence pattern, mismatches to local and global structure, and model updates. Model and mismatch signals were mixed in the same ensembles, suggesting a revision of predictive processing models that consider segregated processing. We conclude that overlapping prefrontal ensembles may collectively encode all aspects of an ongoing visual experience, including anticipation, perception, and surprise.


Author(s):  
Gajendra Singh ◽  
Rajiv Kapoor ◽  
Arun Khosla

Movement information of people is a vital cue for abnormality detection in crowded scenes. In this paper, a new method for detecting crowd escape events in video surveillance systems is proposed. The proposed method detects abnormalities based on the crowd motion pattern, considering both motion magnitude and direction. Motion features are described by the weighted-oriented histogram of optical flow magnitude (WOHOFM) and the weighted-oriented histogram of optical flow direction (WOHOFD), which describe the local motion pattern. The method uses a semi-supervised learning approach with a combined classifier (KNN and K-Means) framework to detect abnormalities in the motion pattern. The authors validate the effectiveness of the proposed approach on the publicly available UMN, PETS2009, and Avenue datasets, which contain events such as gathering, splitting, and running. The technique reported here is found to outperform recent findings reported in the literature.
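As a rough illustration of the kind of motion feature this method builds on, the sketch below computes magnitude-weighted histograms of optical-flow direction and magnitude for one frame with NumPy. The binning scheme, normalization, and function name are assumptions for illustration, not the authors' exact WOHOFM/WOHOFD formulation, and a dense flow field is assumed to be available from any optical-flow estimator.

```python
import numpy as np

def weighted_flow_histograms(flow, n_bins=8):
    """Illustrative WOHOFM/WOHOFD-style features for one frame.
    `flow` is an (H, W, 2) array of per-pixel displacement vectors.
    Returns L1-normalized (magnitude histogram, direction histogram)."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)                      # per-pixel flow speed
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi) # flow direction in [0, 2*pi)

    # Direction histogram: each pixel votes into an angular bin,
    # weighted by its flow magnitude so fast motion dominates.
    dir_hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi),
                               weights=mag)

    # Magnitude histogram over fixed speed bins spanning the frame's range.
    mag_hist, _ = np.histogram(mag, bins=n_bins, range=(0, mag.max() + 1e-6))

    # L1-normalize so histograms are comparable across frames.
    dir_hist = dir_hist / (dir_hist.sum() + 1e-12)
    mag_hist = mag_hist / (mag_hist.sum() + 1e-12)
    return mag_hist, dir_hist
```

Frame-level feature vectors like these could then be fed to the combined KNN/K-Means classifier described above.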


2021 ◽  
Vol 15 ◽  
Author(s):  
Yi Gao ◽  
Kamilla N. Miller ◽  
Michael E. Rudd ◽  
Michael A. Webster ◽  
Fang Jiang

Integrating visual and tactile information in the temporal domain is critical for active perception. To accomplish this, coordinated timing is required. Here, we study perceived duration within and across these two modalities. Specifically, we examined how duration comparisons within and across vision and touch were influenced by temporal context and presentation order, using a two-interval forced-choice task. We asked participants to compare the duration of two temporal intervals defined by tactile or visual events. Two constant standard durations (700 ms and 1,000 ms in ‘shorter’ sessions; 1,000 ms and 1,500 ms in ‘longer’ sessions) were compared to variable comparison durations in different sessions. In crossmodal trials, standard and comparison durations were presented in different modalities, whereas in intramodal trials, the two durations were presented in the same modality. The standard duration was either presented first (⟨sc⟩ order) or followed the comparison duration (⟨cs⟩ order). In both crossmodal and intramodal conditions, we found that the longer standard duration was overestimated in ⟨cs⟩ trials and underestimated in ⟨sc⟩ trials, whereas the estimation of the shorter standard duration was unbiased. Importantly, the estimation of 1,000 ms was biased when it was the longer standard duration within the shorter sessions but not when it was the shorter standard duration within the longer sessions, indicating an effect of temporal context. The effects of presentation order can be explained by a central tendency effect applied in different ways to different presentation orders. Both crossmodal and intramodal conditions showed better discrimination performance for ⟨sc⟩ trials than ⟨cs⟩ trials, supporting the Type B effect for both crossmodal and intramodal duration comparison. Moreover, these results did not depend on whether the standard duration was defined using tactile or visual stimuli.
Overall, our results indicate that duration comparison between vision and touch depends on presentation order and temporal context, but not modality.
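The temporal-context bias described above is consistent with a simple central-tendency account: the perceived duration is a weighted average of the physical duration and the mean of durations experienced in the session. The function and the weight `w` below are hypothetical illustrations of that account, not the study's fitted model.

```python
import numpy as np

def perceived_duration(physical_ms, context_mean_ms, w=0.7):
    """Central-tendency model of perceived duration: a weighted average
    of the physical duration and the session's mean duration. The weight
    w is a hypothetical reliability parameter, not a fitted value."""
    return w * physical_ms + (1 - w) * context_mean_ms

# In a 'shorter' session (700 and 1,000 ms standards) the context mean
# is pulled down, so the 1,000 ms standard is compressed toward it:
shorter_context = np.mean([700.0, 1000.0])   # 850 ms
print(perceived_duration(1000.0, shorter_context))   # below 1,000 ms

# In a 'longer' session (1,000 and 1,500 ms) the same 1,000 ms standard
# sits at the short end of the context and is pulled upward instead:
longer_context = np.mean([1000.0, 1500.0])   # 1,250 ms
print(perceived_duration(1000.0, longer_context))    # above 1,000 ms
```

This captures why the same 1,000 ms standard is biased in one session type but not the other: the bias depends on where the standard sits relative to the session's mean.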


2021 ◽  
pp. 1-12
Author(s):  
Anna Borgolte ◽  
Ahmad Bransi ◽  
Johanna Seifert ◽  
Sermin Toto ◽  
Gregor R. Szycik ◽  
...  

Abstract Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study we examine processes of audiovisual separation in synaesthesia using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.


Author(s):  
Matt Csonka ◽  
Nadia Mardmomen ◽  
Paula J Webster ◽  
Julie A Brefczynski-Lewis ◽  
Chris Frum ◽  
...  

Abstract Our ability to perceive meaningful action events involving objects, people and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical “hubs”) preferentially involved in multisensory processing along different stimulus category dimensions, including (1) living versus non-living audio-visual events, (2) audio-visual events involving vocalizations versus actions by living sources, (3) emotionally valent events, and (4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
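ALE maps of the kind this meta-analysis derives are conventionally built by placing a Gaussian "modelled activation" kernel at each reported focus and combining voxelwise as the probability that at least one focus is active. The toy NumPy sketch below shows only that combination rule; the grid size, kernel width, unnormalized kernel, and function name are illustrative assumptions, not this study's pipeline.

```python
import numpy as np

def ale_map(foci, shape=(20, 20, 20), sigma=2.0):
    """Toy activation likelihood estimate (ALE): each focus contributes
    a Gaussian modelled-activation map with peak 1, and voxelwise maps
    are combined as ALE = 1 - prod(1 - MA_i), i.e. the probability that
    at least one focus activates the voxel."""
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in shape), indexing="ij")
    survivor = np.ones(shape)  # probability that no focus activates a voxel
    for fz, fy, fx in foci:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (fx - xx) ** 2
        ma = np.exp(-d2 / (2.0 * sigma ** 2))  # modelled activation map
        survivor *= 1.0 - ma
    return 1.0 - survivor
```

Real ALE analyses use probabilistic kernels whose width depends on each study's sample size and test the resulting map against a null distribution of random foci; this sketch is only the union-of-Gaussians combination step.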

