The forest or the trees: preference for global over local image processing is reversed by prior experience in honeybees

2015 ◽  
Vol 282 (1799) ◽  
pp. 20142384 ◽  
Author(s):  
Aurore Avarguès-Weber ◽  
Adrian G. Dyer ◽  
Noha Ferrah ◽  
Martin Giurfa

Traditional models of insect vision have assumed that insects are only capable of low-level analysis of local cues and are incapable of global, holistic perception. However, recent studies on honeybee (Apis mellifera) vision have refuted this view by showing that this insect also processes complex visual information by using spatial configurations or relational rules. In the light of these findings, we asked whether bees prioritize global configurations or local cues by setting these two levels of image analysis in competition. We trained individual free-flying honeybees to discriminate hierarchical visual stimuli within a Y-maze and tested bees with novel stimuli in which local and/or global cues were manipulated. We demonstrate that even when local information is accessible, bees prefer global information, thus relying mainly on the object's spatial configuration rather than on elemental, local information. This preference can be reversed if bees are pre-trained to discriminate isolated local cues. In this case, bees prefer the hierarchical stimuli with the local elements previously primed even if they build an incorrect global configuration. Pre-training with local cues induces a generic attentional bias towards any local elements, as local information is prioritized in the test even if the local cues used in the test are different from the pre-trained ones. Our results thus underline the plasticity of visual processing in insects and provide new insights for the comparative analysis of visual recognition in humans and animals.

Author(s):  
Nicolas Poirel ◽  
Claire Sara Krakowski ◽  
Sabrina Sayah ◽  
Arlette Pineau ◽  
Olivier Houdé ◽  
...  

The visual environment consists of global structures (e.g., a forest) made up of local parts (e.g., trees). When compound stimuli are presented (e.g., large global letters composed of arrangements of small local letters), unattended global information slows responses to local targets. Using a negative priming paradigm, we investigated whether inhibition is required to process hierarchical stimuli when information at the local level is in conflict with that at the global level. The results show that when local and global information is in conflict, global information must be inhibited to process local information, but that the reverse is not true. This finding has potential direct implications for brain models of visual recognition, by suggesting that when local information conflicts with global information, inhibitory control reduces feedback activity from global information (e.g., inhibits the forest), which allows the visual system to process local information (e.g., to focus attention on a particular tree).




2020 ◽  
Vol 30 (12) ◽  
pp. 6391-6404
Author(s):  
Genevieve L Quek ◽  
Marius V Peelen

Abstract Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup–saucer vs. teacup–stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects’ contextual association. This high-level influence of spatial configuration on object identity integration arose ~320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ~130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
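The frequency-tagging logic behind the 0.625 Hz figure can be sketched numerically. All parameters below (sampling rate, duration, the idealized impulse-like response) are assumptions for illustration, not the authors' analysis code: with images at 2.5 Hz and every fourth image a nonassociated pair, any differential response to those pairs is confined to 2.5 / 4 = 0.625 Hz and its harmonics.

```python
import numpy as np

base_rate = 2.5                            # images per second
oddball_rate = base_rate / 4               # nonassociated-pair rate: 0.625 Hz

fs = 100                                   # Hz, sampling rate of the toy signal
duration = 40                              # s, gives 0.025 Hz frequency resolution
t = np.arange(0, duration, 1 / fs)

# Idealized differential response: a unit impulse at each oddball onset.
signal = np.zeros_like(t)
onsets = np.arange(0, duration, 1 / oddball_rate)
signal[np.round(onsets * fs).astype(int)] = 1.0

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

def amp_at(f_hz):
    """Spectral amplitude at the bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

print(oddball_rate)                        # 0.625
print(amp_at(0.625) > 10 * amp_at(0.5))    # oddball frequency stands out: True
```

Energy at 0.625 Hz (and its harmonics) but not at neighbouring frequencies is what licenses reading the 0.625 Hz response as specific to the nonassociated pairs.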


2021 ◽  
Vol 33 (5) ◽  
pp. 799-813
Author(s):  
Carole Peyrin ◽  
Alexia Roux-Sibilon ◽  
Audrey Trouilloud ◽  
Sarah Khazaz ◽  
Malena Joly ◽  
...  

Abstract Theories of visual recognition postulate that our ability to understand our visual environment at a glance is based on extracting the gist of the visual scene, a first global and rudimentary visual representation. Gist perception is thought to rest on the rapid analysis of low spatial frequencies in the visual signal, allowing a coarse categorization of the scene. We aimed to study whether the low spatial resolution information available in peripheral vision could modulate the processing of visual information presented in central vision. We combined behavioral measures (Experiments 1 and 2) and fMRI measures (Experiment 2). Participants categorized a scene presented in central vision (artificial vs. natural categories) while ignoring another scene, either semantically congruent or incongruent, presented in peripheral vision. The two scenes could either share the same physical properties (similar amplitude spectrum and spatial configuration) or not. Categorization of the central scene was impaired by a semantically incongruent peripheral scene, particularly when the two scenes were physically similar. This semantic interference effect was associated with increased activation of the inferior frontal gyrus. When the two scenes were semantically congruent, the dissimilarity of their physical properties impaired categorization of the central scene. This effect was associated with increased activation in occipito-temporal areas. In line with the hypothesis that predictive mechanisms are involved in visual recognition, the results suggest that the semantic and physical properties of information coming from peripheral vision are automatically used to generate predictions that guide the processing of the signal in central vision.
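One common way to quantify the "similar amplitude spectrum" manipulation described above is to correlate the Fourier amplitude spectra of two images. The sketch below uses toy random images, not the authors' stimuli or code; the images, noise level, and similarity index are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def amplitude_spectrum(img):
    """Magnitude of the 2-D Fourier transform of an image."""
    return np.abs(np.fft.fft2(img))

def spectrum_correlation(a, b):
    """Pearson correlation between two images' amplitude spectra."""
    sa = amplitude_spectrum(a).ravel()
    sb = amplitude_spectrum(b).ravel()
    return np.corrcoef(sa, sb)[0, 1]

scene = rng.standard_normal((64, 64))
similar = scene + 0.1 * rng.standard_normal((64, 64))   # shares spectral structure
dissimilar = rng.standard_normal((64, 64))              # independent image

print(spectrum_correlation(scene, similar) >
      spectrum_correlation(scene, dissimilar))          # True
```

A physically similar pair in this sense has highly correlated spectra even when the images differ pixel by pixel, which is the sense in which central and peripheral scenes could "share the same physical properties".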


2020 ◽  
Author(s):  
Bahareh Jozranjbar ◽  
Arni Kristjansson ◽  
Heida Maria Sigurdardottir

While dyslexia is typically described as a phonological deficit, recent evidence suggests that ventral stream regions, important for visual categorization and object recognition, are hypoactive in dyslexic readers who might accordingly show visual recognition deficits. By manipulating featural and configural information of faces and houses, we investigated whether dyslexic readers are disadvantaged at recognizing certain object classes or utilizing particular visual processing mechanisms. Dyslexic readers found it harder to recognize objects (houses), suggesting that visual problems in dyslexia are not completely domain-specific. Mean accuracy for faces was equivalent in the two groups, compatible with domain-specificity in face processing. While face recognition abilities correlated with reading ability, lower house accuracy was nonetheless related to reading difficulties even when accuracy for faces was kept constant, suggesting a specific relationship between visual word recognition and the recognition of non-face objects. Representational similarity analyses (RSA) revealed that featural and configural processes were clearly separable in typical readers, while dyslexic readers appeared to rely on a single process. This occurred for both faces and houses and was not restricted to particular visual categories. We speculate that reading deficits in some dyslexic readers reflect their reliance on a single process for object recognition.
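The representational similarity analysis (RSA) mentioned above can be shown in miniature. Everything below (condition labels, response patterns, the clustering criterion) is invented for illustration; it only demonstrates the mechanics of comparing correlation distances between per-condition response patterns, not the study's actual data or pipeline.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy per-condition response patterns (e.g., accuracies across items).
patterns = {
    "featural_faces":    [0.90, 0.80, 0.70, 0.90],
    "featural_houses":   [0.88, 0.82, 0.72, 0.86],
    "configural_faces":  [0.50, 0.90, 0.90, 0.40],
    "configural_houses": [0.52, 0.88, 0.92, 0.45],
}

# Representational dissimilarity matrix: 1 - correlation per condition pair.
rdm = {(a, b): 1 - pearson(pa, pb)
       for a, pa in patterns.items()
       for b, pb in patterns.items() if a < b}

within = rdm[("featural_faces", "featural_houses")]
between = rdm[("configural_faces", "featural_faces")]
print(within < between)  # two separable clusters: True
```

In this toy RDM the two featural conditions resemble each other but not the configural ones, the two-cluster pattern a "separable processes" reader would show; a "single process" reader would yield uniformly low distances across all condition pairs.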


2016 ◽  
Vol 45 (2) ◽  
pp. 233-252
Author(s):  
Pepijn Viaene ◽  
Alain De Wulf ◽  
Philippe De Maeyer

Landmarks are ideal wayfinding tools to guide a person from A to B, as they allow fast reasoning and efficient communication. However, very few path-finding algorithms start from the availability of landmarks to generate a path. In this paper, which focuses on indoor wayfinding, a landmark-based path-finding algorithm is presented in which the endpoint partition is proposed as the spatial model of the environment. In this model, the indoor environment is divided into convex sub-shapes, called e-spaces, that are stable with respect to the visual information provided by a person’s surroundings (e.g. walls, landmarks). The algorithm itself implements a breadth-first search on a graph in which mutually visible e-spaces suited for wayfinding are connected. The results of a case study, in which the calculated paths were compared with their corresponding shortest paths, show that the proposed algorithm is a valuable alternative to Dijkstra’s shortest-path algorithm. It is able to calculate a path with a minimal number of actions that are linked to landmarks, while the increase in path length is comparable to that observed with other path algorithms that adhere to natural wayfinding behaviour. However, the practicability of the proposed algorithm depends strongly on the availability of landmarks and on the spatial configuration of the building.
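The breadth-first search step described above can be sketched on a toy visibility graph. The graph, node names, and connectivity below are invented for illustration; the paper's endpoint partition and e-space construction are not reproduced. Nodes stand for e-spaces, and an edge means two e-spaces are mutually visible and suited for wayfinding, so the fewest-hop path minimizes the number of landmark-linked actions.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return a fewest-hop path (list of e-spaces) from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Toy indoor visibility graph over e-spaces A..E.
visibility = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(bfs_path(visibility, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because BFS explores the graph layer by layer, the first path reaching the goal has the minimum number of e-space transitions, which is the quantity the paper's algorithm optimizes instead of metric path length.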


2017 ◽  
Vol 372 (1717) ◽  
pp. 20160077 ◽  
Author(s):  
Anna Honkanen ◽  
Esa-Ville Immonen ◽  
Iikka Salmela ◽  
Kyösti Heimonen ◽  
Matti Weckström

Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue ‘Vision in dim light’.


1983 ◽  
Vol 27 (5) ◽  
pp. 354-354
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Because conjoining separable features is an additive visual process, this distinction is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions.
However, for each of the two sets of stimuli we studied, there was one configuration that stood apart from the others in its set in that it yielded significantly faster response times, and in that conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects we have found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
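The flat versus linearly increasing response-time functions discussed above amount to the linear model RT = a + b × N, with slope b near 0 ms/item for parallel (feature) search and clearly positive for serial (conjunction) search. The numbers below are invented, not the study's data; the sketch only shows how the diagnostic slope is estimated.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

array_sizes = [4, 8, 16, 32]
feature_rt = [450, 452, 449, 451]        # ms, flat curve: parallel search
conjunction_rt = [480, 600, 840, 1320]   # ms, 30 ms/item: serial search

_, feature_slope = fit_line(array_sizes, feature_rt)
_, conjunction_slope = fit_line(array_sizes, conjunction_rt)
print(round(feature_slope, 1), round(conjunction_slope, 1))  # 0.0 30.0
```

In this framework, the study's finding of serial search for all configural-feature conditions corresponds to every condition showing a positive slope, with conjunction conditions steeper than feature conditions.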


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. To investigate the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
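The "bits" quoted above are Shannon mutual information between stimulus and neuronal response. The toy computation below uses invented joint probabilities, not the recorded data; it only illustrates how masking-induced response variability shrinks the information, via I(S;R) = Σ p(s,r) log2[p(s,r) / (p(s)p(r))].

```python
import math

def mutual_information(joint):
    """joint[s][r] = p(stimulus=s, response=r); returns bits."""
    ps = [sum(row) for row in joint]                 # stimulus marginal
    pr = [sum(col) for col in zip(*joint)]           # response marginal
    info = 0.0
    for s, row in enumerate(joint):
        for r, p in enumerate(row):
            if p > 0:
                info += p * math.log2(p / (ps[s] * pr[r]))
    return info

# Two stimuli, two response classes. Long SOA: responses reliably
# discriminate the stimuli; short SOA: the mask makes them noisier.
long_soa = [[0.45, 0.05],
            [0.05, 0.45]]
short_soa = [[0.30, 0.20],
             [0.20, 0.30]]
print(round(mutual_information(long_soa), 2),
      round(mutual_information(short_soa), 2))  # 0.53 0.03
```

Bringing the mask closer plays the role of moving from the first joint distribution toward the second: the firing that discriminates stimuli is selectively attenuated, so the information drops faster than the overall firing rate.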


Author(s):  
Anatoly M. Shutyi

Based on the general principle of the unity of the nature of interacting entities and the principle of the relativity of motion, and following the requirement of an indissoluble, mutually conditioning connection between space and time, a model of discrete space-time consisting of identical interacting particles is proposed as the most acceptable one. We consider the consequences of the discreteness of space, such as the occurrence of time quanta, the limiting speed of signal propagation, and the constancy of this speed regardless of the motion of the reference frame. Regularly performed interaction events between particles of space-time (PST) ensure the connectivity of space and set the quantum of time and the maximum speed, the speed of light. In the process of PST communication the particles mix, which ensures the relativity of inertial motion and may also underlie quantum uncertainty. In this picture, elementary particles are spatial configurations of an excited "lattice" of PST, and particles with mass must contain loop structures in their configuration. A new interpretation of quantum mechanics is proposed, according to which the wave function determines the probability of destruction of a spatial configuration (representing a quantum object) in the corresponding region, which leads to the contraction of the entire structure to a given, detectable component. Particle entanglement is explained by the appearance of additional links between the PST: a local coordinate along which the distance between entangled objects does not increase. It is shown that the movement of a body should lead to an asymmetry of the tension of the bonds between the PST, that is, to an asymmetry of its effective gravity, whose detection is one possibility for experimental verification of the proposed model. It is also shown that the constancy of the speed of light in a vacuum and the appearance of relativistic effects stem from maintaining the connectivity of space-time, i.e. from preventing its rupture.

