Semantic Object Recognition Based on Qualitative Probabilistic Spatial Relations

Author(s):  
Malgorzata Goldhoorn ◽  
Frank Kirchner

2018 ◽
Vol 29 (7) ◽  
pp. 3023-3033 ◽  
Author(s):  
Johan N Lundström ◽  
Christina Regenbogen ◽  
Kathrin Ohla ◽  
Janina Seubert

Abstract
While matched crossmodal information is known to facilitate object recognition, it is unclear how our perceptual systems encode the more gradual congruency variations that occur in our natural environment. Combining visual objects with odor mixtures to create a gradual increase in semantic object overlap, we demonstrate high behavioral acuity to linear variations of olfactory–visual overlap in a healthy adult population. This effect was paralleled by a linear increase in cortical activation at the intersection of occipital fusiform and lingual gyri, indicating linear encoding of crossmodal semantic overlap in visual object recognition networks. Effective connectivity analyses revealed that this integration of olfactory and visual information was achieved by direct information exchange between olfactory and visual areas. In addition, a parallel pathway through the superior frontal gyrus was increasingly recruited towards the most ambiguous stimuli. These findings demonstrate that cortical structures involved in object formation are inherently crossmodal and encode sensory overlap in a linear manner. The results further demonstrate that prefrontal control of these processes is likely required for ambiguous stimulus combinations, a fact of high ecological relevance that may be inappropriately captured by common task designs juxtaposing congruency and incongruency.
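The study's central claim of linear encoding can be pictured with a simple parametric fit of a response measure against graded congruency levels. The sketch below only illustrates that idea and is not the authors' fMRI pipeline; the overlap levels and response values are hypothetical toy numbers.

```python
# Rough illustration only: testing for a linear relation between graded
# olfactory-visual semantic overlap and a response measure.
# All values are hypothetical; this is not the study's analysis pipeline.
import numpy as np
from scipy import stats

overlap = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # graded congruency levels (toy)
response = np.array([0.10, 0.22, 0.31, 0.45, 0.52])   # e.g., mean activation per level (toy)

fit = stats.linregress(overlap, response)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.3f}, p={fit.pvalue:.4f}")
```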


Perception ◽  
1993 ◽  
Vol 22 (11) ◽  
pp. 1261-1270 ◽  
Author(s):  
John Duncan

Performance often suffers when two visual discriminations must be made concurrently (‘divided attention’). In the modular primate visual system, different cortical areas analyse different kinds of visual information. Especially important is a distinction between an occipitoparietal ‘where?’ system, analysing spatial relations, and an occipitotemporal ‘what?’ system responsible for object recognition. Though such visual subsystems are anatomically parallel, their functional relationship when ‘what?’ and ‘where?’ discriminations are made concurrently is unknown. In the present experiments, human subjects made concurrent discriminations concerning a brief visual display. Discriminations were either similar (two ‘what?’ or two ‘where?’ discriminations) or dissimilar (one of each), and concerned the same or different objects. When discriminations concerned different objects, there was strong interference between them. This was equally severe whether discriminations were similar—and therefore dependent on the same cortical system—or dissimilar. When concurrent ‘what?’ and ‘where?’ discriminations concerned the same object, however, all interference disappeared. Such results suggest that ‘what?’ and ‘where?’ systems are coordinated in visual attention: their separate outputs can be used simultaneously without cost, but only when they concern one object.


Author(s):  
YANG WU ◽  
NANNING ZHENG ◽  
YUANLIU LIU ◽  
ZEJIAN YUAN

This paper presents novel research on improving the performance and enriching the functionality of object recognition. Instead of simply fitting diverse data to a few predefined semantic object categories, we propose to produce results tailored to individual object instances based on their actual visual appearance. These results can include fine-grained, layered categorization along with absolute or relative localization. We present a generic model based on structured prediction and an efficient online learning algorithm to solve it. Experiments on a new benchmark dataset demonstrate the effectiveness of our model and its superiority over traditional recognition methods.
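The abstract does not spell out the model, so the following is only a generic sketch of online learning for structured prediction (a structured-perceptron-style update), not the authors' formulation. The output space, joint feature map, and data stream are hypothetical toys.

```python
# Minimal sketch of online structured prediction (structured-perceptron style).
# Generic illustration, not the model from the paper; the feature map,
# output space, and data below are hypothetical toy examples.
import numpy as np

categories = ["cup", "mug", "bottle"]          # fine-grained labels (toy)
locations = ["left", "center", "right"]        # coarse localization bins (toy)
outputs = [(c, l) for c in categories for l in locations]

def joint_feature(x, y, dim=16):
    """Hash-based joint feature map phi(x, y) -> R^dim (toy stand-in)."""
    rng = np.random.default_rng(abs(hash((tuple(x), y))) % (2**32))
    return rng.standard_normal(dim)

def predict(w, x):
    """Structured inference: argmax over the (small) output space."""
    return max(outputs, key=lambda y: w @ joint_feature(x, y))

def online_update(w, x, y_true, lr=0.1):
    """Perceptron-style update after seeing one (x, y_true) pair."""
    y_hat = predict(w, x)
    if y_hat != y_true:
        w += lr * (joint_feature(x, y_true) - joint_feature(x, y_hat))
    return w

w = np.zeros(16)
stream = [((0.2, 0.7), ("mug", "left")), ((0.9, 0.1), ("bottle", "right"))]
for x, y in stream:
    w = online_update(w, x, y)
print(predict(w, (0.2, 0.7)))
```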


Author(s):  
Pascal Mettes ◽  
William Thong ◽  
Cees G. M. Snoek

Abstract
This work strives for the classification and localization of human actions in videos, without the need for any labeled video training examples. Where existing work relies on transferring global attribute or object information from seen to unseen action videos, we seek to classify and spatio-temporally localize unseen actions in videos from image-based object information only. We propose three spatial object priors, which encode local person and object detectors along with their spatial relations. On top we introduce three semantic object priors, which extend semantic matching through word embeddings with three simple functions that tackle semantic ambiguity, object discrimination, and object naming. A video embedding combines the spatial and semantic object priors. It enables us to introduce a new video retrieval task that retrieves action tubes in video collections based on user-specified objects, spatial relations, and object size. Experimental evaluation on five action datasets shows the importance of spatial and semantic object priors for unseen actions. We find that persons and objects have preferred spatial relations that benefit unseen action localization, while using multiple languages and simple object filtering directly improves semantic matching, leading to state-of-the-art results for both unseen action classification and localization.
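As a loose illustration of the "semantic matching through word embeddings" idea, the sketch below scores an unseen action name against detected object names by embedding similarity. It is not the paper's implementation; the embed function is a hypothetical stand-in for real word vectors (e.g., word2vec or fastText), and the detections are made up.

```python
# Minimal sketch of semantic matching between an action name and detected
# object names via word-embedding similarity (generic illustration only).
# embed() is a hypothetical stand-in for trained word vectors.
import numpy as np

def embed(word, dim=50):
    """Deterministic pseudo-embedding standing in for a real word vector."""
    rng = np.random.default_rng(abs(hash(word)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def action_object_score(action, detected_objects):
    """Score an unseen action by its best-matching detected object (cosine)."""
    a = embed(action)
    return max(float(a @ embed(obj)) for obj in detected_objects)

detections = ["basketball", "person", "hoop"]   # hypothetical detector output
for action in ["dunking", "typing"]:
    print(action, round(action_object_score(action, detections), 3))
```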

