visual scenes
Recently Published Documents

Total documents: 286 (five years: 67)
H-index: 33 (five years: 6)

Memory, 2021, pp. 1-14
Author(s): Elizabeth H. Hall, Wilma A. Bainbridge, Chris I. Baker

2021
Author(s): Madhura D Ketkar, Burak Gür, Sebastian Molina-Obando, Maria Ioannidou, Carlotta Martelli, ...

The accurate processing of contrast is the basis for all visually guided behaviors. Visual scenes with rapidly changing illumination challenge contrast computation because adaptation is not fast enough to compensate for such changes. Yet human perception of contrast is stable even when the visual environment is changing quickly. The fruit fly Drosophila likewise shows nearly luminance-invariant behavior for both ON and OFF stimuli. To achieve this, the first-order interneurons L1, L2, and L3 each encode contrast and luminance differently and distribute this information across both ON and OFF contrast-selective pathways. Behavioral responses to both ON and OFF stimuli rely on a luminance-based correction provided by L1 and L3, wherein L1 supports contrast computation linearly and L3 non-linearly amplifies dim stimuli. L1, L2, and L3 are therefore not merely distinct inputs to the ON and OFF pathways; rather, the lamina serves as a separate processing layer that distributes distinct luminance and contrast information across the ON and OFF pathways to support behavioral performance under varying conditions.
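To make the computational problem concrete, here is a minimal Python sketch, not the paper's model, of how a slowly adapting background estimate biases Weber contrast when illumination drops abruptly, and how a fast luminance signal (the role the abstract assigns to L1 and L3) would correct it. All names and numbers are illustrative assumptions.

```python
# A minimal sketch, not the paper's model: why slow adaptation mis-estimates
# contrast when illumination changes abruptly, and how a luminance-based
# correction could fix it. All values here are illustrative assumptions.

def weber_contrast(intensity, background):
    """Weber contrast: (I - I_b) / I_b."""
    return (intensity - background) / background

true_background = 10.0            # illumination just dropped (e.g. a cloud shadow)
adapted_background = 100.0        # slow adaptation still assumes the old level
stimulus = 1.2 * true_background  # a genuine 20% contrast increment

biased = weber_contrast(stimulus, adapted_background)   # -0.88: wrong sign and size
corrected = weber_contrast(stimulus, true_background)   #  0.20: correct

# A fast luminance signal is what would let the system substitute
# true_background for the stale adapted estimate.
print(f"biased={biased:.2f}, corrected={corrected:.2f}")
```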


2021, Vol 11 (1)
Author(s): Claudia Damiano, Dirk B. Walther, William A. Cunningham

Quickly scanning an environment to determine relative threat is an essential part of survival. Scene gist extracted rapidly from the environment may help people detect threats. Here, we probed this link between emotional judgements and features of visual scenes. We first extracted curvature, length, and orientation statistics from all images in the International Affective Picture System image set and related them to emotional valence scores. Images containing angular contours were rated as negative, and images containing long contours were rated as positive. We then composed new abstract line drawings with specific combinations of length, angularity, and orientation values and asked participants to rate them as positive or negative, and as safe or threatening. Scenes with smooth, long, horizontal contours were rated as positive/safe, while scenes with short, angular contours were rated as negative/threatening. Our work shows that particular combinations of image features help people make judgements about potential threat in the environment.
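As an illustration of the kind of contour statistics named here, the following Python sketch computes length, angularity, and orientation for toy polyline contours. The feature definitions are plausible stand-ins, not the study's exact measures, and the example contours are invented.

```python
# A minimal sketch, assuming contours are stored as polylines (lists of 2-D
# points). Length, angularity, and orientation below are stand-in definitions
# for the statistics named in the abstract, not the study's exact measures.
import numpy as np

def contour_features(points):
    pts = np.asarray(points, dtype=float)
    segments = np.diff(pts, axis=0)                # vectors between consecutive points
    length = float(np.linalg.norm(segments, axis=1).sum())
    angles = np.arctan2(segments[:, 1], segments[:, 0])
    turns = np.abs(np.diff(np.unwrap(angles)))     # turning angle at each vertex
    angularity = float(turns.mean()) if turns.size else 0.0
    orientation = float(np.abs(angles).mean())     # 0 ~ horizontal, pi/2 ~ vertical
    return {"length": length, "angularity": angularity, "orientation": orientation}

smooth_horizontal = [(0, 0), (5, 0.2), (10, 0.1), (15, 0.3)]  # long, gentle curve
short_angular = [(0, 0), (1, 2), (2, -1)]                     # short, sharp corner
print(contour_features(smooth_horizontal))  # profile associated with positive/safe ratings
print(contour_features(short_angular))      # profile associated with negative/threatening ratings
```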


2021, Vol 21 (9), pp. 2856
Author(s): Michelle Greene, Kathryn Leeke, Bruce Hansen, David Field

2021, Vol 11 (9), pp. 1206
Author(s): Erika Almadori, Serena Mastroberardino, Fabiano Botta, Riccardo Brunetti, Juan Lupiáñez, ...

Object sounds can enhance the attentional selection and perceptual processing of semantically related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object's consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or a different spatial position as in the original scene. Participants judged whether the object's position was the same or different and then gave a confidence judgment about the certainty of their response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimuli, but that this effect depends on the semantic configuration of the visual scene.
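For concreteness, here is a minimal sketch of the trial timeline as described (encoding, maintenance, retrieval). The helper functions are hypothetical stand-ins for a stimulus-presentation toolkit, not the authors' experiment code; only the timings come from the abstract.

```python
# A minimal sketch of one trial as described in the abstract. `present` and
# `ask` are hypothetical stubs, not the authors' actual experiment code.

ENCODING_MS = 500      # scene shown while the object sound plays
MAINTENANCE_MS = 2000  # blank retention interval

def run_trial(scene, object_sound, target, present, ask):
    """present(stimulus, duration_ms) and ask(prompt) are injected stubs."""
    present((scene, object_sound), ENCODING_MS)  # encoding: scene + sound
    present(None, MAINTENANCE_MS)                # maintenance: blank screen
    present(target, None)                        # retrieval: target alone, same or
                                                 # shifted position, until response
    same_position = ask("Same or different position?")  # spatial STM judgment
    confidence = ask("How certain are you?")            # confidence rating
    return same_position, confidence
```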


Patterns, 2021, pp. 100350
Author(s): Yajing Zheng, Shanshan Jia, Zhaofei Yu, Jian K. Liu, Tiejun Huang

2021, Vol 12
Author(s): Vinicius Macuch Silva, Michael Franke

Previous research in cognitive science and psycholinguistics has shown that language users are able to predict upcoming linguistic input probabilistically, pre-activating material on the basis of cues emerging from different levels of linguistic abstraction, from phonology to semantics. Current evidence suggests that linguistic prediction also operates at the level of pragmatics, where processing is strongly constrained by context. To test a specific theory of contextually constrained processing, termed here pragmatic surprisal theory, we used a self-paced reading task in which participants viewed visual scenes and then read descriptions of those same scenes. Crucially, we manipulated whether the visual context biased readers into specific pragmatic expectations about how the description might unfold word by word. Contrary to the predictions of pragmatic surprisal theory, participants took longer to read the critical term in scenarios where context and pragmatic constraints biased them to expect a given word than in scenarios where there was no pragmatic expectation for any particular referent.
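Surprisal-based accounts build on the standard measure surprisal(w) = -log P(w | context): the more expected a word, the lower its surprisal and the faster it should be read. The Python sketch below formalizes that prediction with toy probabilities invented for illustration; note that the study's result ran opposite to it.

```python
# A minimal sketch of the surprisal measure that surprisal-based theories
# build on: surprisal(w) = -log2 P(w | context). The referents and the
# probability values below are made up for illustration.
import math

def surprisal(word, context_distribution):
    """Surprisal in bits; lower surprisal predicts faster reading
    under surprisal-based accounts."""
    return -math.log2(context_distribution[word])

# A visual context that biases expectations toward one referent ("circle"),
# versus a context with no pragmatic expectation for any particular referent:
biased_context = {"circle": 0.8, "square": 0.1, "triangle": 0.1}
neutral_context = {"circle": 1 / 3, "square": 1 / 3, "triangle": 1 / 3}

print(surprisal("circle", biased_context))   # ~0.32 bits: expected -> predicted fast
print(surprisal("circle", neutral_context))  # ~1.58 bits: unexpected -> predicted slow
```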

