visual context
Recently Published Documents


TOTAL DOCUMENTS: 317 (five years: 74)
H-INDEX: 28 (five years: 4)

2022, pp. 1026-1048
Author(s): Sugandha Kaur, Bidisha Som

Previous studies show that the presence of a context word in picture naming either facilitates or interferes with naming. Although there has been extensive research in this area, many findings conflict, making it difficult to reach firm conclusions. This chapter delves into the dynamics of such processing and examines the nuances of experimental manipulation that may shape the pattern of results and account for differences in outcomes. The series of experiments reported in this chapter aimed to refine our understanding of the mechanisms of bilingual language production by examining two paradigms: primed picture naming and picture-word interference. This was investigated by manipulating both the type of visual context word presented with the picture and the time interval between the presentation of the context word and the picture. The results are interpreted within the context of current models of lexical access.
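
As an informal illustration of the kind of design described above (not the authors' actual materials), the following Python sketch builds a fully crossed trial list in which hypothetical context-word conditions are combined with several word-picture onset asynchronies (SOAs); the condition labels, SOA values, and picture names are all placeholders.

```python
import itertools
import random

context_types = ["related", "unrelated", "identity"]   # assumed context-word conditions
soas_ms = [-200, 0, 200]                                # assumed word-picture onset asynchronies (ms)
pictures = ["dog", "chair", "apple", "bicycle"]         # placeholder picture names

# Fully cross pictures x context type x SOA, then randomize presentation order.
trials = [
    {"picture": p, "context_type": c, "soa_ms": s}
    for p, c, s in itertools.product(pictures, context_types, soas_ms)
]
random.shuffle(trials)

for trial in trials[:5]:
    print(trial)
```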


2021, pp. 149-178
Author(s): Jordan Schonig

This chapter examines the aesthetic properties and phenomenological effects of compression glitches—blocky image distortions that momentarily deform digitally compressed video. As visible expressions of the invisible processes of digital video compression, compression glitches offer unprecedented encounters with the technological production of cinematic motion. Two distinct consequences of these encounters are explored in this chapter. First, because compression glitches are more likely to occur when the compression algorithm is overworked by large volumes of onscreen movement, the ubiquity of compression glitches has yielded a spectatorial sensitivity to the magnitude of movement on screen. Second, because compression glitches extract movement itself (i.e., algorithmic motion instructions) from its original visual context, the visual qualities of such glitches heighten our attention to the formal qualities of movement as distinct from the actions and events that such movements constitute. Taken together, these two spectatorial effects of the compression glitch illuminate new orientations toward cinematic motion in the digital era. Describing these orientations, the chapter argues, can model a form of inquiry that bridges the gap between technologically oriented and phenomenologically oriented accounts of “digital cinema.”
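
To make the underlying mechanism concrete, here is a minimal, self-contained sketch (not any real codec) of block-based motion compensation: motion vectors are applied to whatever reference frame the decoder currently holds, so when that reference is wrong or stale, the encoded motion is carried over onto unrelated pixels, which is roughly how movement becomes detached from its original visual context in a compression glitch. All array sizes and values are illustrative.

```python
import numpy as np

def motion_compensate(reference, motion_vectors, block=8):
    """Rebuild a frame by copying blocks from `reference` at the offsets
    given by `motion_vectors` (one (dy, dx) per block)."""
    h, w = reference.shape
    out = np.zeros_like(reference)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block, bx // block]
            sy = np.clip(by + dy, 0, h - block)
            sx = np.clip(bx + dx, 0, w - block)
            out[by:by + block, bx:bx + block] = reference[sy:sy + block, sx:sx + block]
    return out

rng = np.random.default_rng(0)
correct_reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # the intended key frame
stale_reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)     # e.g. a dropped or late key frame
vectors = rng.integers(-4, 5, (8, 8, 2))                              # toy motion field

intended = motion_compensate(correct_reference, vectors)
glitched = motion_compensate(stale_reference, vectors)   # same motion, wrong pixels
```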


2021
Author(s): Vincent van de Ven, Guyon Kleuters, Joey Stuiver

We memorize our daily life experiences, which are often multisensory in nature, by segmenting them into distinct event models, in accordance with perceived contextual or situational changes. However, very little is known about how multisensory integration affects segmentation, as most studies have focused on unisensory (visual or auditory) segmentation. In three experiments, we investigated the effect of multisensory integration on segmentation in memory and perception. In Experiment 1, participants encoded lists of visual objects while audio and visual contexts changed synchronously or asynchronously. After each list, we tested recognition and temporal associative memory for pictures that were encoded in the same audio-visual context or that crossed a synchronous or an asynchronous multisensory change. We found no effect of multisensory integration on recognition memory: synchronous and asynchronous changes similarly impaired recognition for pictures encoded at those changes, compared to pictures encoded further away from them. Multisensory integration did affect temporal associative memory, which was worse for pictures encoded at synchronous than at asynchronous changes. Follow-up experiments showed that this effect was due neither to the higher complexity of multisensory relative to unisensory contexts (Experiment 2) nor to the temporal unpredictability of contextual changes inherent in Experiment 1 (Experiment 3). We argue that participants formed situational expectations through multisensory integration, such that synchronous multisensory changes deviated more strongly from those expectations than asynchronous changes. We discuss our findings in light of supporting and conflicting findings on uni- and multisensory segmentation.
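
A schematic sketch of the kind of encoding list described for Experiment 1 is given below; it is not the authors' actual materials, and the item counts, context labels, and change positions are assumptions used only to show how a synchronous versus an asynchronous audiovisual context change could be constructed.

```python
def build_list(n_items=16, visual_change_at=8, audio_offset=0):
    """Build an encoding list in which the visual and auditory contexts each
    change once. audio_offset=0 gives a synchronous audiovisual change;
    a nonzero offset gives an asynchronous change."""
    audio_change_at = visual_change_at + audio_offset
    items = []
    for i in range(n_items):
        items.append({
            "position": i,
            "visual_context": "A" if i < visual_change_at else "B",
            "audio_context": "X" if i < audio_change_at else "Y",
        })
    return items

synchronous_list = build_list(audio_offset=0)    # both contexts change at item 8
asynchronous_list = build_list(audio_offset=2)   # audio context changes two items later
```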


2021
Author(s): Meng Wang, Sen Wang, Han Yang, Zheng Zhang, Xi Chen, ...

Author(s): C.A. Sanchez, T. Read, A. Crawford

Classic research in perception has suggested that visual context can affect how individuals perceive object characteristics such as physical size. The current set of studies extends this work to an applied setting by examining whether smartphone display size can influence the perception of objects presented on smartphones. Participants viewed several target items on two different-sized virtual device displays, modeled on actual consumer devices, and were asked to make simple judgments of the size of the presented objects. Results from both experiments confirm that display size affects perceived size, such that larger displays lead users to underestimate the size of objects significantly more than smaller displays do. This is the first study to confirm such an effect, and it suggests that, beyond aesthetics or cost, one's personal choice of device might have additional performance consequences.
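
One way to see why physical display size matters at all is in terms of visual angle: an object occupying the same fraction of the screen subtends a larger visual angle on a physically larger display viewed from the same distance. The sketch below illustrates only this geometric relationship; the display widths, viewing distance, and screen fraction are assumptions and are not taken from the study.

```python
import math

def visual_angle_deg(object_width_cm, viewing_distance_cm):
    """Visual angle (degrees) subtended by an object of a given physical width."""
    return math.degrees(2 * math.atan(object_width_cm / (2 * viewing_distance_cm)))

viewing_distance_cm = 35.0                                          # assumed viewing distance
screen_widths_cm = {"small display": 6.0, "large display": 7.5}     # hypothetical device widths
object_fraction_of_screen = 0.4                                      # object fills 40% of screen width

for label, width_cm in screen_widths_cm.items():
    object_cm = width_cm * object_fraction_of_screen
    print(label, round(visual_angle_deg(object_cm, viewing_distance_cm), 2), "deg")
```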


2021, Vol. 12
Author(s): Vinicius Macuch Silva, Michael Franke

Previous research in cognitive science and psycholinguistics has shown that language users are able to predict upcoming linguistic input probabilistically, pre-activating material on the basis of cues emerging from different levels of linguistic abstraction, from phonology to semantics. Current evidence suggests that linguistic prediction also operates at the level of pragmatics, where processing is strongly constrained by context. To test a specific theory of contextually constrained processing, termed pragmatic surprisal theory here, we used a self-paced reading task in which participants viewed visual scenes and then read descriptions of those same scenes. Crucially, we manipulated whether the visual context biased readers toward specific pragmatic expectations about how the description might unfold word by word. Contrary to the predictions of pragmatic surprisal theory, participants took longer to read the critical term in scenarios where context and pragmatic constraints biased them to expect a given word than in scenarios where there was no pragmatic expectation for any particular referent.
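
For readers unfamiliar with surprisal-based accounts, the quantity at stake is a word's surprisal, the negative log probability of the word given its context; such accounts predict longer reading times for higher-surprisal words. The snippet below simply computes this quantity for made-up probabilities and is not part of the study's analysis.

```python
import math

def surprisal_bits(p_word_given_context):
    """Surprisal of a word: -log2 P(word | context)."""
    return -math.log2(p_word_given_context)

# Hypothetical probabilities for a pragmatically expected vs. unexpected referent.
print(surprisal_bits(0.8))   # expected word   -> ~0.32 bits
print(surprisal_bits(0.1))   # unexpected word -> ~3.32 bits
```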


2021
Author(s): James Bigelow, Ryan J. Morrill, Timothy Olsen, Stephani N. Bazarini, Andrea R. Hasenstaub

Recent studies have established significant anatomical and functional connections between visual areas and primary auditory cortex (A1), which may be important for perceptual processes such as communication and spatial perception. However, much remains unknown about the microcircuit structure of these interactions, including how visual context may affect different cell types across cortical layers, each with diverse responses to sound. The present study examined activity in putative excitatory and inhibitory neurons across the cortical layers of A1 in awake male and female mice during auditory, visual, and audiovisual stimulation. We observed a subpopulation of A1 neurons responsive to visual stimuli alone, which were overwhelmingly found in the deep cortical layers and included both excitatory and inhibitory cells. Other neurons, whose responses to sound were modulated by visual context, were likewise both excitatory and inhibitory but were less concentrated in the deepest cortical layers. Important distinctions in sensitivity to visual context were observed among different spike rate and timing responses to sound. Spike rate responses were themselves heterogeneous: responses evoked by sound alone were strongest at stimulus onset, whereas sustained firing following the transient onset response showed greater sensitivity to visual context. Minimal overlap was observed between units with visually modulated firing rate responses and spectrotemporal receptive fields (STRFs), which are sensitive to both spike rate and timing changes. Together, our results suggest that visual information in A1 is carried predominantly by deep-layer inputs and influences sound encoding across cortical layers, and that these influences independently affect qualitatively distinct responses to sound.
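
As background on the STRF analysis mentioned above, the following sketch shows one common way an STRF can be estimated, as a spike-triggered average of the stimulus spectrogram. It uses synthetic data and assumed dimensions and is not the paper's actual pipeline, which is likely more sophisticated (e.g., regularized regression).

```python
import numpy as np

rng = np.random.default_rng(1)
n_freqs, n_times, n_lags = 16, 2000, 20
spectrogram = rng.normal(size=(n_freqs, n_times))   # stimulus representation (frequency x time)
spikes = rng.random(n_times) < 0.05                  # synthetic spike train (one bin per time step)

# Spike-triggered average: average the spectrogram window preceding each spike.
sta = np.zeros((n_freqs, n_lags))
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= n_lags]     # keep spikes with a full preceding window
for t in spike_times:
    sta += spectrogram[:, t - n_lags:t]
strf_estimate = sta / len(spike_times)               # frequency x time-lag STRF estimate
```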

