scene object
Recently Published Documents


TOTAL DOCUMENTS: 32 (five years: 11)

H-INDEX: 7 (five years: 1)

2021, Vol 12 (1), pp. 20
Author(s): Nikki-Anne Wilson, Rebekah M. Ahmed, Olivier Piguet, Muireann Irish

Scene construction refers to the process by which humans generate richly detailed and spatially cohesive scenes in the mind’s eye. The cognitive processes that underwrite this capacity remain unclear, particularly when the envisaged scene calls for the integration of various types of contextual information. Here, we explored social and non-social forms of scene construction in Alzheimer’s disease (AD; n = 11) and the behavioural variant of frontotemporal dementia (bvFTD; n = 15) relative to healthy older control participants (n = 16) using a novel adaptation of the scene construction task. Participants mentally constructed detailed scenes in response to scene–object cues that varied in terms of their sociality (social; non-social) and congruence (congruent; incongruent). A significant group × sociality × congruence interaction was found whereby performance on the incongruent social scene condition was significantly disrupted in both patient groups relative to controls. Moreover, bvFTD patients produced significantly less contextual detail in social relative to non-social incongruent scenes. Construction of social and non-social incongruent scenes in the patient groups combined was significantly associated with independent measures of semantic processing and visuospatial memory. Our findings demonstrate the influence of schema-incongruency on scene construction performance and reinforce the importance of episodic–semantic interactions during novel event construction.
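The reported group × sociality × congruence interaction comes from a mixed design with one between-subjects factor (group) and two within-subjects factors (sociality, congruence). As a rough illustration of how such a design can be analysed, the following sketch fits a random-intercept mixed model in Python; the file name and column names are hypothetical stand-ins, not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x condition.
# Assumed columns: participant, group (AD / bvFTD / control),
# sociality (social / non-social), congruence (congruent / incongruent),
# detail_score (contextual detail produced for the scene).
df = pd.read_csv("scene_construction_scores.csv")

# A random intercept per participant approximates the repeated-measures
# structure; the three-way interaction term mirrors the reported
# group x sociality x congruence effect.
model = smf.mixedlm(
    "detail_score ~ group * sociality * congruence",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```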


2021
Author(s): Lixiang Chen, Radoslaw Martin Cichy, Daniel Kaiser

During natural vision, objects rarely appear in isolation, but often within a semantically related scene context. Previous studies reported that semantic consistency between objects and scenes facilitates object perception, and that scene-object consistency is reflected in changes in the N300 and N400 components in EEG recordings. Here, we investigate whether these N300/N400 differences are indicative of changes in the cortical representation of objects. In two experiments, we recorded EEG signals while participants viewed semantically consistent or inconsistent objects within a scene; in Experiment 1, these objects were task-irrelevant, while in Experiment 2, they were directly relevant for behavior. In both experiments, we found reliable and comparable N300/N400 differences between consistent and inconsistent scene-object combinations. To probe the quality of object representations, we performed multivariate classification analyses, in which we decoded the category of the objects contained in the scene. In Experiment 1, in which the objects were not task-relevant, object category could be decoded from around 100 ms after object presentation, but no difference in decoding performance was found between consistent and inconsistent objects. By contrast, when the objects were task-relevant in Experiment 2, we found enhanced decoding of semantically consistent, compared to semantically inconsistent, objects. These results show that differences in N300/N400 components related to scene-object consistency do not index changes in cortical object representations, but rather reflect a generic marker of semantic violations. Further, our findings suggest that facilitatory effects between objects and scenes are task-dependent rather than automatic.
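The time-resolved decoding described above can be sketched with scikit-learn: a classifier is trained and cross-validated independently at each time point of the epoched EEG. The array shapes and file names below are hypothetical stand-ins for the study's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: X has shape (n_trials, n_channels, n_times),
# y holds each trial's object category label.
X = np.load("eeg_epochs.npy")
y = np.load("object_labels.npy")

clf = make_pipeline(StandardScaler(), LinearSVC())

# Decode object category separately at every time point; above-chance
# accuracy from ~100 ms after object onset would mirror the reported result.
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])
```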


2021, Vol 4 (1)
Author(s): Lauren E. Welbourne, Aditya Jonnalagadda, Barry Giesbrecht, Miguel P. Eckstein

To optimize visual search, humans attend to objects with the expected size of the sought target relative to its surrounding scene (object-scene scale consistency). We investigate how the human brain responds to variations in object-scene scale consistency. We use functional magnetic resonance imaging and a voxel-wise feature encoding model to estimate tuning to different object/scene properties. We find that regions involved in scene processing (transverse occipital sulcus) and spatial attention (intraparietal sulcus) show the strongest responsiveness and selectivity to object-scene scale consistency: reduced activity to mis-scaled objects (unusually small or large relative to the scene). The findings show how and where the brain incorporates object-scene size relationships in the processing of scenes. The response properties of these brain areas might explain why, during visual search, humans often miss objects that are salient but of atypical size relative to the surrounding scene.
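A voxel-wise encoding model of the kind used here can be approximated by regularized linear regression from stimulus features to voxel responses, with each voxel's fitted weights read as its tuning profile. The sketch below uses hypothetical file names and a plain ridge penalty, not the authors' exact feature space or fitting procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical inputs: a feature matrix (n_stimuli, n_features) encoding
# object/scene properties, including an object-scene scale-consistency
# term, and measured voxel responses (n_stimuli, n_voxels).
features = np.load("stimulus_features.npy")
voxels = np.load("voxel_responses.npy")

X_tr, X_te, Y_tr, Y_te = train_test_split(
    features, voxels, test_size=0.2, random_state=0)

# One ridge fit over all voxels; each voxel gets its own weight vector
# (its tuning) over the feature space.
enc = Ridge(alpha=1.0).fit(X_tr, Y_tr)

# Voxel-wise accuracy: correlation of predicted vs. held-out responses,
# one value per voxel.
pred = enc.predict(X_te)
r = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1]
              for v in range(Y_te.shape[1])])
```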


Author(s): Victor Debelov, Nikita Dolgov

In the mathematical modeling of optical phenomena, computer calculations are often performed to confirm the conclusions drawn. To do this, a virtual computer model of the optical setup is created in the form of a 3D scene. Virtual scenes are also often used in teaching, when creating presentations. This paper describes the SphL library, which provides convenient specification of spherical lenses and computes the interaction of linearly polarized light rays with them. It is aimed at applications that use ray tracing. It is known that light of any polarization can be represented in terms of linearly polarized components. From a ray incident on a scene object, the library computes the reflected ray and all rays passing through the lens that arise due to internal reflections; the number of internal reflections is set by a parameter. All output rays are computed by applying the Fresnel equations and are characterized by intensity values and polarization parameters. In this version of SphL, the main end-user objects are spherical lenses, since they are most often used in optical setups. They are constructed via the set-theoretic intersection of geometric primitives: a half-space, a sphere, a cone, a cylinder, and their complements with respect to the scene space. An advanced user can build their own objects by analogy, for example, cylindrical lenses.
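The abstract does not show SphL's API, but the core physics it applies at each surface hit, the Fresnel equations, is standard. Below is a minimal, self-contained sketch of the amplitude coefficients for s- and p-polarized light at a dielectric interface; the function name and the simplified handling of total internal reflection are illustrative assumptions, not SphL code.

```python
import numpy as np

def fresnel_coefficients(n1, n2, theta_i):
    """Amplitude reflection (r) and transmission (t) coefficients for
    s- and p-polarized light at an interface between media with
    refractive indices n1 and n2; theta_i in radians (Fresnel equations)."""
    sin_t = n1 / n2 * np.sin(theta_i)
    if abs(sin_t) > 1.0:
        # Total internal reflection: all light is reflected
        # (phase shifts ignored in this simplified sketch).
        return 1.0, 1.0, 0.0, 0.0
    theta_t = np.arcsin(sin_t)
    cos_i, cos_t = np.cos(theta_i), np.cos(theta_t)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    t_s = 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    t_p = 2 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)
    return r_s, r_p, t_s, t_p

# Example: a ray entering a glass lens (n = 1.5) from air at 30 degrees.
r_s, r_p, t_s, t_p = fresnel_coefficients(1.0, 1.5, np.radians(30))
R_s, R_p = r_s**2, r_p**2   # reflected intensity fraction per polarization
```

A tracer in the style described would apply these coefficients recursively at each lens surface, spawning reflected and refracted rays up to the configured internal-reflection depth.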


Author(s): Tao He, Lianli Gao, Jingkuan Song, Jianfei Cai, Yuan-Fang Li

Despite the huge progress in scene graph generation in recent years, the long-tail distribution of object relationships remains a challenging and persistent issue. Existing methods largely rely on either external knowledge or statistical bias information to alleviate this problem. In this paper, we tackle the issue from two further aspects: (1) scene-object interaction, which aims to learn scene-specific knowledge via an additive attention mechanism; and (2) long-tail knowledge transfer, which tries to transfer the rich knowledge learned from the head of the distribution to the tail. Extensive experiments on three tasks on the benchmark Visual Genome dataset demonstrate that our method outperforms current state-of-the-art competitors. Our source code is available at https://github.com/htlsn/issg.
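As a rough illustration of the first ingredient, scene-object interaction via additive attention, here is a Bahdanau-style attention module in PyTorch. The class name and dimensions are hypothetical, not the authors' implementation; their repository above has the real code.

```python
import torch
import torch.nn as nn

class SceneObjectAttention(nn.Module):
    """Additive (Bahdanau-style) attention: a scene representation
    queries a set of object features; hypothetical shapes only."""
    def __init__(self, scene_dim, obj_dim, hidden_dim):
        super().__init__()
        self.w_scene = nn.Linear(scene_dim, hidden_dim, bias=False)
        self.w_obj = nn.Linear(obj_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, scene, objects):
        # scene: (batch, scene_dim); objects: (batch, n_obj, obj_dim)
        scores = self.v(torch.tanh(
            self.w_scene(scene).unsqueeze(1) + self.w_obj(objects)))
        weights = torch.softmax(scores, dim=1)     # (batch, n_obj, 1)
        return (weights * objects).sum(dim=1)      # scene-aware context

attn = SceneObjectAttention(scene_dim=512, obj_dim=256, hidden_dim=128)
context = attn(torch.randn(2, 512), torch.randn(2, 5, 256))
```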


2020, Vol 32 (5), pp. 783-803
Author(s): Cybelle M. Smith, Kara D. Federmeier

Objects are perceived within rich visual contexts, and statistical associations may be exploited to facilitate their rapid recognition. Recent work using natural scene–object associations suggests that scenes can prime the visual form of associated objects, but it remains unknown whether this relies on an extended learning process. We asked participants to learn categorically structured associations between novel objects and scenes in a paired associate memory task while ERPs were recorded. In the test phase, scenes were first presented (2500 msec), followed by objects that matched or mismatched the scene; degree of contextual mismatch was manipulated along visual and categorical dimensions. Matching objects elicited a reduced N300 response, suggesting visuostructural priming based on recently formed associations. Amplitude of an extended positivity (onset ∼200 msec) was sensitive to visual distance between the presented object and the contextually associated target object, most likely indexing visual template matching. Results suggest recent associative memories may be rapidly recruited to facilitate object recognition in a top–down fashion, with clinical implications for populations with impairments in hippocampal-dependent memory and executive function.
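The reported N300 effect is, at its core, a mean-amplitude comparison between match and mismatch conditions in a component time window. A minimal sketch follows, with hypothetical file names, channel indices, and an assumed 250-350 ms window; the abstract does not give the exact bounds or electrode sites.

```python
import numpy as np

# Hypothetical epoched ERP data: (n_trials, n_channels, n_times),
# sampled at 500 Hz with epochs starting at object onset.
erp_match = np.load("epochs_match.npy")
erp_mismatch = np.load("epochs_mismatch.npy")
sfreq = 500.0

# Assumed N300 window (~250-350 ms post object onset) and an assumed
# frontocentral channel subset.
win = slice(int(0.25 * sfreq), int(0.35 * sfreq))
channels = [3, 4, 5]

# Mean amplitude per condition over window and channels, then the
# match-minus-mismatch difference; a less negative mean for matches
# (a positive difference) would correspond to the reported reduced N300.
m_match = erp_match[:, channels, win].mean()
m_mismatch = erp_mismatch[:, channels, win].mean()
n300_effect = m_match - m_mismatch
```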


2020, Vol 6 (1)
Author(s): Laura Maffongelli, Sabine Öhlschläger, Melissa Lê-Hoa Võ

Finding a bottle of milk in the bathroom would probably be quite surprising to most of us. Such a surprised reaction is driven by our strong expectations, learned through experience, that a bottle of milk belongs in the kitchen. Our environment is not randomly organized but governed by regularities that allow us to predict which objects can be found in which types of scene. These scene semantics are thought to play an important role in the recognition of objects. But at what point in development are these semantic predictions established firmly enough that scene-object inconsistencies lead to semantic processing difficulties? Here we investigated how toddlers perceive their environments, and what expectations govern their attention and perception. To this end, we used a purely visual paradigm in an ERP experiment and presented 24-month-olds with familiar scenes in which either a semantically consistent or an inconsistent object would appear. The scene-inconsistency effect has previously been studied in adults by means of the N400, a neural marker responding to semantic inconsistencies across many types of stimuli. Our results show that semantic object-scene inconsistencies indeed elicited an enhanced N400 over the left anterior brain region between 750 and 1150 ms post stimulus onset. This modulation of the N400 marker provides a first indication that, by the age of two, toddlers have already established scene semantics that allow them to detect a purely visual, semantic object-scene inconsistency. Our data suggest the presence of specific semantic knowledge about which objects occur in a given scene category.

