Canonical views in object representation and recognition

1994 ◽  
Vol 34 (22) ◽  
pp. 3037-3056 ◽  
Author(s):  
Florin Cutzu ◽  
Shimon Edelman
Author(s):  
Nick Barnes ◽  
Zhi-Qiang Liu

We present a system for vision-guided autonomous circumnavigation, allowing a mobile robot to navigate safely around objects of arbitrary pose and to avoid obstacles. The system performs model-based object recognition from an intensity image. By enabling robots to recognize and navigate with respect to particular objects, the system allows robots to perform deterministic actions on specific objects, rather than the general exploration and navigation emphasized in much of the current literature. This paper describes a fully integrated system and, in particular, introduces canonical-views. Further, we derive a direct algebraic method for finding object pose and position for the four-dimensional case of a ground-based robot with uncalibrated vertical movement of its camera. Vision for mobile robots can be treated as a very different problem from traditional computer vision: mobile robots have a characteristic perspective, and there is a causal relation between robot actions and view changes. Canonical-views are a novel, active object representation designed specifically to exploit the constraints of the robot navigation problem, allowing efficient recognition and navigation.
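The canonical-views recognition step described above amounts to matching the robot's current view against a small set of stored characteristic views. A minimal sketch of that matching idea, assuming hypothetical precomputed feature vectors for each stored view (the feature extraction and the view set here are illustrative, not the paper's method):

```python
import numpy as np

# Hypothetical canonical views: each is a feature vector extracted
# offline from one characteristic perspective of the target object.
canonical_views = {
    "front": np.array([1.0, 0.0, 0.2]),
    "side":  np.array([0.1, 1.0, 0.3]),
    "rear":  np.array([0.0, 0.2, 1.0]),
}

def match_view(observed, views):
    """Return the name of the canonical view whose feature vector is
    most similar (by cosine similarity) to the observed vector."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(views, key=lambda name: cos(observed, views[name]))

print(match_view(np.array([0.9, 0.1, 0.1]), canonical_views))  # front
```

Once the best-matching canonical view is known, it constrains the object's pose relative to the robot, which is what makes the subsequent algebraic pose computation tractable.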


Author(s):  
Elise L. Radtke ◽  
Ulla Martens ◽  
Thomas Gruber

We applied high-density EEG to examine steady-state visual evoked potentials (SSVEPs) during a perceptual/semantic stimulus repetition design. SSVEPs are oscillatory cortical responses evoked at the same frequency as a flickering visual stimulus. In repetition designs, stimuli are presented twice, with the repetition being task-irrelevant. The cortical processing of the second stimulus is commonly characterized by decreased neuronal activity (repetition suppression). The behavioral consequences of stimulus repetition were examined in a companion reaction-time pre-study using the same experimental design as the EEG study. During the first presentation of a stimulus, we confronted participants with drawings of familiar object images or with object words, respectively. The second stimulus was either a repetition of the same object image (perceptual repetition; PR) or an image depicting the word presented during the first presentation (semantic repetition; SR); all stimuli were flickered at 15 Hz to elicit SSVEPs. The behavioral study revealed priming effects in both experimental conditions (PR and SR). In the EEG, PR was associated with repetition suppression of SSVEP amplitudes at left occipital electrodes and repetition enhancement at left temporal electrodes. In contrast, SR was associated with SSVEP suppression at left occipital and central electrodes, originating in the bilateral postcentral and occipital gyri, the right middle frontal gyrus, and the right temporal gyrus. The conclusions of the present study are twofold. First, SSVEP amplitudes index not only perceptual aspects of incoming sensory information but also semantic aspects of cortical object representation. Second, our electrophysiological findings can be interpreted as neuronal underpinnings of perceptual and semantic priming.
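The SSVEP measure used in such designs is, at its core, the spectral amplitude of the EEG at the flicker frequency; repetition suppression then appears as a reduced 15 Hz amplitude on the second presentation. A minimal sketch of that amplitude estimate on synthetic data (the sampling rate and signals are assumed for illustration, not taken from the study):

```python
import numpy as np

FS = 500      # sampling rate in Hz (assumed for this sketch)
F_STIM = 15   # flicker frequency from the design

def ssvep_amplitude(eeg, fs=FS, f_stim=F_STIM):
    """Amplitude of the SSVEP at the stimulation frequency: magnitude
    of the Fourier coefficient at the bin nearest f_stim."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) / n * 2  # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_stim))]

# Two synthetic 2-second trials: 15 Hz oscillations of different
# amplitude, mimicking a first and a suppressed second presentation.
t = np.arange(0, 2, 1 / FS)
first = 2.0 * np.sin(2 * np.pi * F_STIM * t)
second = 1.2 * np.sin(2 * np.pi * F_STIM * t)
print(ssvep_amplitude(first) > ssvep_amplitude(second))  # True
```

In practice the amplitude would be computed per electrode and condition, and the PR/SR contrasts above compare exactly such per-condition amplitudes.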


1998 ◽  
Vol 06 (03) ◽  
pp. 265-279 ◽  
Author(s):  
Shimon Edelman

The paper outlines a computational approach to face representation and recognition, inspired by two major features of biological perceptual systems: graded-profile overlapping receptive fields, and object-specific responses in the higher visual areas. This approach, according to which a face is ultimately represented by its similarities to a number of reference faces, led to the development of a comprehensive theory of object representation in biological vision, and to its subsequent psychophysical exploration and computational modeling.
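The core idea above, representing a face by its similarities to a set of reference faces rather than by raw features, can be sketched in a few lines. The reference vectors and the exponential similarity function here are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical reference faces as raw feature vectors; the
# representation of any face is its vector of similarities to these
# references, not the raw features themselves.
references = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

def similarity_vector(face, refs=references):
    """Represent a face by its similarity (negative exponential of
    Euclidean distance) to each reference face."""
    d = np.linalg.norm(refs - face, axis=1)
    return np.exp(-d)

a = similarity_vector(np.array([0.9, 0.1, 0.0]))
b = similarity_vector(np.array([0.8, 0.2, 0.1]))  # a similar face
c = similarity_vector(np.array([0.0, 0.1, 0.9]))  # a different face
# Similar faces land close together in similarity space:
print(np.linalg.norm(a - b) < np.linalg.norm(a - c))  # True
```

The useful property is that recognition can then operate entirely in the low-dimensional similarity space, which is what links this scheme to the overlapping receptive fields and object-specific responses mentioned above.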


2013 ◽  
Vol 33 (42) ◽  
pp. 16642-16656 ◽  
Author(s):  
T. Sato ◽  
G. Uchida ◽  
M. D. Lescroart ◽  
J. Kitazono ◽  
M. Okada ◽  
...  
