Feature Integration Theory
Recently Published Documents

Total documents: 40 (past five years: 8)
H-index: 9 (past five years: 2)

2021 · Vol 12
Author(s): Ciara M. Greene, John Broughan, Anthony Hanlon, Seán Keane, Sophia Hanrahan, et al.

Previous research has successfully used feature integration theory to operationalise the predictions of perceptual load theory, while simultaneously testing the predictions of both models. Building on this work, we test the extent to which these models hold up in a 3D world. In two experiments, participants responded to a target stimulus within an array of shapes whose apparent depth was manipulated using a combination of monoscopic and stereoscopic cues. The search task was designed to test the predictions of (a) feature integration theory, as the target was identified by a single feature or a conjunction of features and embedded in search arrays of varying size, and (b) perceptual load theory, as the task included congruent and incongruent distractors presented alongside search tasks imposing high or low perceptual load. Findings from both experiments upheld the predictions of feature integration theory, regardless of 2D/3D condition. Longer search times in conditions with a combination of monoscopic and stereoscopic depth cues suggest that binding features into three-dimensional objects requires greater attentional effort. This additional effort should have implications for perceptual load theory, yet our findings did not uphold its predictions; the effect of incongruent distractors did not differ between conjunction search trials (conceptualised as high perceptual load) and feature search trials (low perceptual load). Individual differences in susceptibility to the effects of perceptual load were evident and likely explain the absence of load effects. Overall, our findings suggest that feature integration theory may be useful for predicting attentional performance in a 3D world.
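
For readers less familiar with how feature integration theory is operationalised in visual search, the sketch below simulates the signature pattern such a task is designed to detect: response times that stay roughly flat across set size for feature targets but grow roughly linearly for conjunction targets. All parameter values (baseline RT, per-item slopes, noise) are illustrative assumptions, not data from this study.

```python
# Minimal simulation sketch of the classic feature integration theory
# prediction: feature search is parallel (near-zero slope over set size),
# conjunction search is serial-like (RT grows with set size).
import numpy as np

rng = np.random.default_rng(0)

def simulate_rt(search_type, set_size, n_trials=200):
    """Return simulated response times in ms under assumed parameters."""
    base = 450.0                                          # assumed baseline RT (ms)
    slope = 2.0 if search_type == "feature" else 25.0     # assumed ms per item
    noise = rng.normal(0.0, 40.0, n_trials)
    return base + slope * set_size + noise

for search_type in ("feature", "conjunction"):
    means = [simulate_rt(search_type, n).mean() for n in (4, 8, 16)]
    print(search_type, [round(m) for m in means])
```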


2021 · Vol 19 (1) · pp. 57-74
Author(s): Marina Folescu

The starting point of this paper is Thomas Reid's anti-skepticism: our knowledge of the external world is justified. The justificatory process, in his view, starts with and relies upon one of the main faculties of the human mind: perception. Reid's theory of perception has been thoroughly studied, but there are some missing links in the explanatory chain offered by the secondary literature. In particular, I will argue that we do not have a complete picture of the mechanism of the perception of bodies. The present paper draws in part on a particular theory in psychology, the feature integration theory of attention, to make a contribution in this regard.


2020 · Vol 2020 · pp. 1-10
Author(s): Kai Chu, Guang-Hai Liu

Feature integration theory can be regarded as a perception theory, but extracting visual features based on such a theory within the CBIR (content-based image retrieval) framework is a challenging problem. To address this problem, we extract color and edge features based on a multi-integration features model and use them for image retrieval. A novel, simple, and efficient visual feature descriptor, namely the multi-integration features histogram, is proposed for image representation and content-based image retrieval. First, a color image is converted from the RGB to the HSV color space, and the color features and color differences are extracted. Then, the color differences are used to extract the edge features via a set of simple integration processes. Finally, the color, edge, and spatial layout features are combined to represent the image content. Experiments show that our method produces results comparable to those of existing, well-known methods on three datasets containing 25,000 natural images. Its performance is significantly better than that of the bag-of-words (BOW) histogram, the local binary pattern histogram, the histogram of oriented gradients, and the multi-texton histogram, and similar to that of the color volume histogram.
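
As a rough illustration of the kind of pipeline the abstract describes (not the authors' published algorithm; the quantization levels, bin counts, and function names below are assumptions), the sketch converts an image from RGB to HSV, builds a quantized color histogram, derives an edge histogram from local color differences, and concatenates the two into a single descriptor compared by histogram intersection.

```python
# Sketch of a simple HSV color + edge histogram descriptor for CBIR.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def descriptor(rgb_image, edge_bins=16):
    """rgb_image: float array with shape (H, W, 3), values in [0, 1]."""
    hsv = rgb_to_hsv(rgb_image)
    # Quantized HSV color histogram (8 hue x 3 saturation x 3 value = 72 bins).
    h = np.minimum((hsv[..., 0] * 8).astype(int), 7)
    s = np.minimum((hsv[..., 1] * 3).astype(int), 2)
    v = np.minimum((hsv[..., 2] * 3).astype(int), 2)
    color_hist = np.bincount((h * 9 + s * 3 + v).ravel(), minlength=72).astype(float)
    # Edge histogram from the magnitude of neighboring-pixel color differences.
    dx = np.diff(hsv, axis=1)[:-1, :, :]
    dy = np.diff(hsv, axis=0)[:, :-1, :]
    edge_mag = np.sqrt((dx ** 2 + dy ** 2).sum(axis=-1)).ravel()
    edge_hist, _ = np.histogram(edge_mag, bins=edge_bins, range=(0.0, float(np.sqrt(3.0))))
    feat = np.concatenate([color_hist, edge_hist.astype(float)])
    return feat / max(feat.sum(), 1e-12)   # L1-normalize

def similarity(f1, f2):
    """Histogram intersection, a common similarity measure in CBIR."""
    return float(np.minimum(f1, f2).sum())
```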


2020 · Vol 82 (2) · pp. 752-774
Author(s): Adam Reichenthal, Ronen Segev, Ohad Ben-Shahar


2019 · Vol 82 (2) · pp. 533-549
Author(s): Josephine Reuther, Ramakrishna Chakravarthi, Amelia R. Hunt

Feature integration theory proposes that visual features, such as shape and color, can only be combined into a unified object when spatial attention is directed to their location in retinotopic maps. Eye movements cause dramatic changes to the images on our retinae and are associated with obligatory shifts in spatial attention. In two experiments, we measured the prevalence of conjunction errors (that is, reporting an object as having an attribute that belonged to another object) for brief stimulus presentations before, during, and after a saccade. Planning and executing a saccade did not itself disrupt feature integration. Motion did disrupt feature integration, leading to an increase in conjunction errors. However, retinal motion of equal extent caused by saccadic eye movements was spared this disruption, showing rates of conjunction errors similar to those in a condition with static stimuli presented to a static eye. The results suggest that extra-retinal signals are able to compensate for the motion caused by saccadic eye movements, thereby preserving the integrity of objects across saccades and preventing their features from mixing or mis-binding.
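
To make the dependent measure concrete, the toy sketch below (hypothetical display and names, not the authors' materials) scores a reported color for a target shape as correct, a conjunction error (the reported color was present in the display but belonged to another object), or some other error.

```python
# Hypothetical scoring of conjunction errors on a single trial.
def score_response(display, target_shape, reported_color):
    """display: dict mapping each shown shape to its color, e.g. {'T': 'red'}."""
    if reported_color == display[target_shape]:
        return "correct"
    if reported_color in display.values():
        return "conjunction_error"   # a feature migrated from another object
    return "other_error"

trial = {"T": "red", "L": "green", "X": "blue"}   # assumed example display
print(score_response(trial, "T", "green"))        # -> conjunction_error
```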


2018
Author(s): Shekoofeh Hedayati, Brad Wyble

To what extent does spatiotopic location accompany the representation of a visual event? Feature integration theory suggests that identifying a multi-feature object requires focus on its spatial location to integrate those features. Moreover, single-unit data from neurons that prefer complex objects indicate that such neurons have retinotopic receptive fields. It can therefore be predicted that identification of complex stimuli is contingent upon localization of their features by attention. To evaluate this, we presented participants with a brief array of characters with instructions to identify and locate the solitary letter. Surprisingly, subjects sometimes identified the target without knowing where it had been presented. However, when targets were marked by a single feature (color), there was no evidence of participants identifying the target without also locating it. These results indicate that consciously accessible representations of visual events can form despite being untethered to spatially specific neural activity in early visual areas.
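
The key pattern reported here is identification without localization, which can be made concrete by cross-tabulating trials on whether identity and location were each reported correctly. The sketch below is a hypothetical tabulation with made-up counts, not the study's data.

```python
# Hypothetical tabulation of identification vs. localization accuracy.
from collections import Counter

def tabulate(trials):
    """trials: iterable of (id_correct, loc_correct) boolean pairs."""
    counts = Counter(trials)
    n = sum(counts.values())
    return {
        "identified_and_located": counts[(True, True)] / n,
        "identified_without_locating": counts[(True, False)] / n,
        "located_without_identifying": counts[(False, True)] / n,
        "neither": counts[(False, False)] / n,
    }

# Assumed toy data only:
example = [(True, True)] * 60 + [(True, False)] * 10 + [(False, True)] * 5 + [(False, False)] * 25
print(tabulate(example))
```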


2018 · Vol 13 (3)
Author(s): Eva Walther, Georg Halbeisen, Katarina Blask

In this paper, we outline the predominant theoretical perspectives on evaluative conditioning (EC)—the changes in liking that are due to the pairing of stimuli—identify their weaknesses, and propose a new framework, the binding perspective on EC, which might help to overcome at least some of these issues. Based on feature integration theory (Treisman & Gelade, 1980, https://doi.org/10.1016/0010-0285(80)90005-5) and the theory of event coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001, https://doi.org/10.1017/S0140525X01000103), we assume that EC depends on a selective integration mechanism that binds relevant CS, US, and action features into an event-file, while simultaneously inhibiting features irrelevant for current goals. This perspective examines hitherto unspecified processes relevant to the encoding of CS-US pairs and their consequences for behavior, which we hope will stimulate further theoretical development. We also present some preliminary evidence for binding in EC and discuss the scope and limitations of this perspective.

