Come Together, Right Now: Dynamic Overwriting of an Object's History through Common Fate

2014 ◽  
Vol 26 (8) ◽  
pp. 1819-1828 ◽  
Author(s):  
Roy Luria ◽  
Edward K. Vogel

The objects around us constantly move and interact, and the perceptual system needs to monitor these interactions on-line and update each object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which an object's initial representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm in which the objects either started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), both during the objects' movement and after they disappeared and became working memory representations. The results demonstrated that the objects' representations (as indexed by the CDA amplitude) persisted as separate, even after a Gestalt proximity cue (when the objects “met” and remained stationary in the same position). Only a strong common fate Gestalt cue (when the objects not only met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object's initial representation plays an important role and can override even powerful grouping cues.

Author(s):  
Elise L. Radtke ◽  
Ulla Martens ◽  
Thomas Gruber

We applied high-density EEG to examine steady-state visual evoked potentials (SSVEPs) in a perceptual/semantic stimulus repetition design. SSVEPs are oscillatory cortical responses evoked at the same frequency at which a visual stimulus is flickered. In repetition designs, stimuli are presented twice, with the repetition being task irrelevant; cortical processing of the second stimulus is commonly characterized by decreased neuronal activity (repetition suppression). The behavioral consequences of stimulus repetition were examined in a companion reaction time pre-study using the same experimental design as the EEG study. During the first presentation of a stimulus, we confronted participants with drawings of familiar object images or with object words, respectively. The second stimulus was either a repetition of the same object image (perceptual repetition; PR) or an image depicting the word presented during the first presentation (semantic repetition; SR); all stimuli were flickered at 15 Hz to elicit SSVEPs. The behavioral study revealed priming effects in both experimental conditions (PR and SR). In the EEG, PR was associated with repetition suppression of SSVEP amplitudes at left occipital electrodes and repetition enhancement at left temporal electrodes. In contrast, SR was associated with SSVEP suppression at left occipital and central electrodes, originating in the bilateral postcentral and occipital gyri, the right middle frontal gyrus, and the right temporal gyrus. The conclusion of the present study is twofold. First, SSVEP amplitudes index not only perceptual aspects of incoming sensory information but also semantic aspects of cortical object representation. Second, our electrophysiological findings can be interpreted as neuronal underpinnings of perceptual and semantic priming.


2017 ◽  
Author(s):  
Paola Bressan

The specific gray shades in a visual scene can be derived from relative luminance values only when an anchoring rule is followed. The double-anchoring theory I propose in this article, as a development of the anchoring theory of Gilchrist et al. (1999), assumes that any given region (a) belongs to one or more frameworks, created by Gestalt grouping principles, and (b) is independently anchored, within each framework, to both the highest luminance and the surround luminance. The region's final lightness is a weighted average of the values computed, relative to both anchors, in all frameworks. The new model accounts not only for all lightness illusions that are qualitatively explained by the anchoring theory but also for a number of additional effects, and it does so quantitatively, with the support of mathematical simulations.


2012 ◽  
Vol 12 (9) ◽  
pp. 1313-1313 ◽
Author(s):  
N. R. Twarog ◽  
R. Rosenholtz

Author(s):  
Nicholas J. Wade

It is relatively easy to hide pictorial images, but this is of little value if they remain hidden. Presenting hidden images for visual purposes is a modern preoccupation, and some of the perceptual processes involved in them are described in this chapter. Pictorial images can be concealed in terms of detection or recognition. In both cases there is interplay between the global features of the concealed image and the local elements that carry it. Gestalt grouping principles can hinder as well as help recognition. Examples are shown of images (mostly faces) hidden in geometrical designs, in text, and by means of orientation. Rather than being pictorial puzzles alone, hidden images can reveal aspects of visual processing. This chapter explores these concepts and related ideas such as perceptual portraits and pictorial puzzles.


2002 ◽  
Vol 14 (1) ◽  
pp. 37-47 ◽  
Author(s):  
Michael A. Kraut ◽  
Sarah Kremen ◽  
Lauren R. Moo ◽  
Jessica B. Segal ◽  
Vincent Calhoun ◽  
...  

The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known whether there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture–word). A previous study using word–word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented with picture–word pairs that are features of objects, their task being to decide whether, together, the pair “activated” an object not explicitly presented (e.g., a picture of a candle and the word “icing” activate the internal representation of a “cake”). For picture–word pairs that combine to elicit an object, signal change was detected in the ventral temporo-occipital regions, pre-SMA, left primary somatomotor cortex, both caudate nuclei, and the dorsal thalami bilaterally. These findings suggest that the left thalamus is engaged for either picture or word stimuli, but the right thalamus appears to be involved when picture stimuli are also presented with words in semantic object activation tasks. The somatomotor signal changes are likely secondary to activation of the semantic object representations from multimodal visual stimuli.


Perception ◽  
1994 ◽  
Vol 23 (5) ◽  
pp. 505-515 ◽  
Author(s):  
Emanuel Leeuwenberg ◽  
Peter Van der Helm ◽  
Rob Van Lier

Two models of object perception are compared: recognition by components (RBC), proposed by Biederman, and structural information theory (SIT), initially proposed by Leeuwenberg. According to RBC a complex object is decomposed into predefined elementary objects, called geons. According to SIT, the decomposition is guided by regularities in the object. It is assumed that the simplest of all possible interpretations of any object is perceptually preferred. The comparison deals with two aspects of the models. One is the representation of simple objects—various definitions of object axes are considered. It is shown that the more these definitions account for object regularity and thus the more they agree with SIT, the better the object representations predict object classification. Another topic concerns assumptions underlying the models: the identification of geons is mediated by cues which are supposed to be invariant under varying viewpoints of objects. It is argued that such cues are not based on this invariance but on the regularity of actual objects. The latter conclusion is in line with SIT. An advantage of RBC, however, is that it deals with the perceptual process from stimulus to interpretation, whereas SIT merely concerns the outcome of the process, not the process itself.


2021 ◽  
Vol 8 (3) ◽  
Author(s):  
Barbara Pomiechowska ◽  
Teodora Gliga

To what extent does language shape how we think about the world? Studies suggest that linguistic symbols expressing conceptual categories (‘apple’, ‘squirrel’) make us focus on categorical information (e.g. that you saw a squirrel) and disregard individual information (e.g. whether that squirrel had a long or short tail). Across two experiments with preverbal infants, we demonstrated that it is not language but nonverbal category knowledge that determines what information is packed into object representations. Twelve-month-olds (N = 48) participated in an electroencephalography (EEG) change-detection task involving objects undergoing a brief occlusion. When viewing objects from unfamiliar categories, infants detected both across- and within-category changes, as evidenced by their negative central wave (Nc) event-related potential. Conversely, when viewing objects from familiar categories, they did not respond to within-category changes, which indicates that nonverbal category knowledge interfered with the representation of individual surface features necessary to detect such changes. Furthermore, distinct patterns of γ and α oscillations between familiar and unfamiliar categories were evident before and during occlusion, suggesting that categorization had an influence on the format of recruited object representations. Thus, we show that nonverbal category knowledge has rapid and enduring effects on object representation, and we discuss their functional significance for generic knowledge acquisition in the absence of language.

