Scene Context
Recently Published Documents


TOTAL DOCUMENTS

139
(FIVE YEARS 32)

H-INDEX

19
(FIVE YEARS 2)

2022 ◽  
pp. 095679762110326
Author(s):  
Eelke Spaak ◽  
Marius V. Peelen ◽  
Floris P. de Lange

Visual scene context is well-known to facilitate the recognition of scene-congruent objects. Interestingly, however, according to predictive-processing accounts of brain function, scene congruency may lead to reduced (rather than enhanced) processing of congruent objects, compared with incongruent ones, because congruent objects elicit reduced prediction-error responses. We tested this counterintuitive hypothesis in two online behavioral experiments with human participants (N = 300). We found clear evidence for impaired perception of congruent objects, both in a change-detection task measuring response times and in a bias-free object-discrimination task measuring accuracy. Congruency costs were related to independent subjective congruency ratings. Finally, we show that the reported effects cannot be explained by low-level stimulus confounds, response biases, or top-down strategy. These results provide convincing evidence for perceptual congruency costs during scene viewing, in line with predictive-processing theory.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Tim Lauer ◽  
Filipp Schmidt ◽  
Melissa L.-H. Võ

Abstract: While scene context is known to facilitate object recognition, little is known about which contextual “ingredients” are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses—markers of semantic violations—for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing—even in the absence of spatial scene structure and object content—suggesting that material is one of the contextual “ingredients” driving scene context effects.


2021 ◽  
Author(s):  
Takasuke Nagai ◽  
Shoichiro Takeda ◽  
Masaaki Matsumura ◽  
Shinya Shimizu ◽  
Susumu Yamamoto

2021 ◽  
Author(s):  
Dian Jin

<div>Flight is a highly dynamic operating process that places heavy attentional demands on pilots, so studying pilots’ visual attention is important. Traditional research methods are mostly based on qualitative analysis or hypothetical models, and seldom incorporate context information. However, the underlying (tacit) knowledge hidden in the different patterns of pilots’ attention allocation is context-dependent and hard for experts to articulate, which makes such methods difficult to use for constructing an interaction system. In this paper, we mine attention patterns together with scene context to quantitatively analyze pilots’ tacit knowledge during flight tasks, using data mining and an attribute graph model to construct visual cognitive graphs that associate the acquired visual information with the flight context. From these graphs, we derive three graph-theoretic quantities that describe the tacit knowledge and can be used directly to build an interaction system: first, key information, shown as the central node of the graph, reveals the most important information in a flight mission within its context; second, relevant information, a set of nodes closely connected to and strongly influencing the central node, reveals the factors affecting the key information; third, bridge information, based on betweenness centrality, identifies the nodes acting as information bridges and reveals the decision-making process. Our work can be applied directly to training novice pilots, guiding interface design, and constructing adaptive interaction systems.</div>
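The three graph-theoretic quantities named in the abstract (key, relevant, and bridge information) can be sketched on a toy attention graph. The node names, edges, and plain-Python shortest-path enumeration below are illustrative assumptions, not the paper’s data or implementation:

```python
from itertools import permutations
from collections import deque

# Hypothetical "visual cognitive graph": nodes are cockpit information
# sources; an edge links items attended in close succession.
edges = [
    ("airspeed", "attitude"), ("attitude", "altimeter"),
    ("attitude", "heading"), ("heading", "nav_display"),
    ("nav_display", "engine"), ("altimeter", "vertical_speed"),
    ("attitude", "vertical_speed"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_paths(src, dst):
    """Enumerate all shortest simple paths from src to dst via BFS."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS order: only longer paths remain
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# 1) Key information: the most central (highest-degree) node.
key = max(graph, key=lambda n: len(graph[n]))

# 2) Relevant information: nodes directly linked to the key node.
relevant = graph[key]

# 3) Bridge information: the node with the highest betweenness
#    centrality, i.e. lying on the most shortest paths between others.
between = {n: 0.0 for n in graph}
for s, t in permutations(graph, 2):
    sps = shortest_paths(s, t)
    for p in sps:
        for n in p[1:-1]:  # interior nodes only
            between[n] += 1 / len(sps)
bridge = max(between, key=between.get)
```

For graphs of realistic size, the brute-force path enumeration would be replaced by Brandes’ betweenness algorithm (available, e.g., in NetworkX).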



2021 ◽  
Author(s):  
Tim Lauer ◽  
Filipp Schmidt ◽  
Melissa L.-H. Võ

While scene context is known to facilitate object recognition, little is known about which contextual “ingredients” are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials (ERPs) and found N300/N400 responses – markers of semantic violations – for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing – even in the absence of spatial scene structure and object content – suggesting that material is one of the contextual “ingredients” driving scene context effects.


Cortex ◽  
2021 ◽  
Author(s):  
Heath E. Matheson ◽  
Frank E. Garcea ◽  
Laurel J. Buxbaum
