Inhibition of Return
Recently Published Documents


TOTAL DOCUMENTS

708
(FIVE YEARS 71)

H-INDEX

61
(FIVE YEARS 2)

2021 ◽  
Vol 27 (2) ◽  
pp. 293-316
Author(s):  
Jacek Bielas

The crux of the dispute on the mutual relations between attention and consciousness, to which I refer in this paper, lies in the question of what can be attended to in spatial attention, a question that clearly resonates with the phenomenological issue of intentionality (e.g., the noesis-noema structure). The discussion was initiated by Christopher Mole. He began by invoking a commonsense psychology according to which one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. In other words, attention is supposed to be a condition that is sufficient but not necessary for consciousness, i.e., consciousness is a necessary concomitant of attention, but attention is not a necessary concomitant of consciousness. Mole seeks to validate his stance with data from psychology labs. His view is, however, partly challenged by, for instance, Robert Kentridge, Lee de-Wit and Charles Heywood, who used their experimental research on the neurological condition called blindsight as evidence of a dissociation between attention and consciousness, i.e., evidence that visual attention is not a sufficient precondition for visual awareness. In this meta-theoretical state of affairs, I would like to focus on the cognitive phenomenon most often referred to as Inhibition of Return (IOR) and suggest that, by following its micro-dynamics from the perspective of micro-phenomenology, it can be used to showcase all of the options on both sides of the argument. One of my leading goals is also to follow Mole's attempt to link attention with agency; where we differ is that I wish to articulate the matter heuristically in terms of Merleau-Ponty's phenomenological notion of embodied pre-reflective intentionality.


2021 ◽  
Author(s):  
Arindam Bhakta

Humans and many animals can selectively sample important parts of their visual surroundings to carry out daily activities such as foraging or finding prey or mates. Selective attention allows them to use the brain's limited resources efficiently by deploying the sensory apparatus to collect data believed to be pertinent to the organism's current task at hand. Robots and other computational agents operating in dynamic environments are similarly exposed to a wide variety of stimuli, which they must process with limited sensory and computational resources. Developing computational models of visual attention has long been of interest, as such models enable artificial systems to select necessary information from complex and cluttered visual environments, thereby reducing the data-processing burden. Biologically inspired computational saliency models have previously been used to selectively sample a visual scene, but these have a limited capacity to deal with dynamic environments and no capacity to reason about uncertainty when planning their scene-sampling strategy. Such models typically treat contrast in colour, shape or orientation as salient and sample locations of a visual scene in descending order of salience. After each observation, the area around the sampled location is blocked by an inhibition of return mechanism to keep it from being revisited. This thesis generalises the traditional saliency model by using an adaptive Kalman filter estimator to model an agent's understanding of the world, together with a utility-function-based approach to describe what the agent cares about in the visual scene. This allows the agent to adopt a richer set of perceptual strategies than is possible with the classical winner-take-all mechanism of the traditional saliency model. In contrast with the traditional approach, inhibition of return is achieved without implementing an extra mechanism on top of the underlying structure. The thesis demonstrates five utility functions that encapsulate the perceptual state valued by the agent; each utility function thereby produces a distinct perceptual behaviour matched to particular scenarios. The resulting visual attention distributions of the five proposed utility functions are demonstrated on five real-life videos. In most of the experiments, pixel intensity is used as the source of the saliency map; as the proposed approach is independent of the saliency map used, it can be combined with other, more complex existing saliency-map-building models. Moreover, the underlying structure of the model is sufficiently general and flexible that it can serve as the basis for a new range of more sophisticated gaze control systems.
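As a rough illustration of the mechanism this abstract describes, the sketch below (not the thesis code) tracks each location's saliency with a simple per-location Kalman filter and picks the next fixation with a utility function; because sampling a location collapses its posterior uncertainty, the model stops revisiting it, giving an implicit inhibition-of-return effect. The grid size, noise parameters, and the particular utility function are illustrative assumptions.

```python
import numpy as np

# Sketch of utility-driven scene sampling with per-location Kalman filters.
# Sampling a location shrinks its posterior variance, so an uncertainty-seeking
# utility stops revisiting it -- an implicit inhibition-of-return effect.

H, W = 32, 32                      # coarse saliency grid (assumption)
mean = np.zeros((H, W))            # Kalman posterior mean of saliency
var = np.ones((H, W))              # Kalman posterior variance
Q, R = 0.01, 0.05                  # process and observation noise (illustrative)

def utility(mean, var):
    """Example utility: prefer locations that are salient AND uncertain."""
    return mean + 2.0 * np.sqrt(var)

def step(frame):
    """One fixation: predict, choose the highest-utility location, observe it."""
    global mean, var
    var += Q                                   # predict: uncertainty grows everywhere
    u = utility(mean, var)
    y, x = np.unravel_index(np.argmax(u), u.shape)
    z = frame[y, x]                            # observe saliency at the fixation
    k = var[y, x] / (var[y, x] + R)            # Kalman gain
    mean[y, x] += k * (z - mean[y, x])         # update mean
    var[y, x] *= (1.0 - k)                     # update variance (drops sharply)
    return y, x

rng = np.random.default_rng(0)
frame = rng.random((H, W))                     # stand-in for a pixel-intensity map
fixations = [step(frame) for _ in range(10)]
print(fixations)                               # successive fixations avoid repeats
```

Swapping in a different utility function changes the sampling behaviour without touching the underlying filter, which is the flexibility the abstract emphasises.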


2021 ◽  
Author(s):  
Malgorzata Kasprzyk ◽  
Margaret Jackson ◽  
Bert Timmermans

We investigated whether the reward previously associated with initiated joint attention (the experience of having one's gaze followed by someone else; Pfeiffer et al., 2014; Schilbach et al., 2010) can influence gaze behaviour and, similarly to monetary rewards (Blaukopf & DiGirolamo, 2005; Manohar et al., 2017; Milstein & Dorris, 2007), elicit learning effects. To this end, we adapted Milstein and Dorris's (2007) gaze-contingent paradigm so that it required participants to look at an anthropomorphic avatar and then make a saccade towards the left or right peripheral target. If participants were fast enough, they could experience social reward in the form of the avatar looking at the same target as they did and thus engaging with them in joint attention. One side had a higher reward probability than the other (80% vs. 20%); on the remaining fast trials the avatar would simply keep staring ahead. We expected that if participants learned the reward contingency and found the experience of having their gaze followed rewarding, their latency and success rate would improve for saccades to targets on the highly rewarded side. Although our current study did not demonstrate that such social reward has a long-lasting effect on gaze behaviour, we found that latencies became shorter over time and that latencies were longer on congruent trials (the target location was identical to that on the previous trial) than on noncongruent trials (the target location differed from that on the previous trial), which could reflect inhibition of return.
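A minimal sketch of how the trial contingency described above might be implemented; the latency criterion, the counterbalancing of the high-reward side, and all names below are assumptions for illustration, not the authors' experiment code.

```python
import random

# Sketch of the gaze-contingent social-reward contingency described above.
# On a "fast" trial the avatar follows the participant's saccade with high
# probability on one side (80%) and low probability on the other (20%);
# otherwise it keeps staring straight ahead.

REWARD_PROB = {"left": 0.8, "right": 0.2}   # high-reward side counterbalanced (assumption)
LATENCY_CRITERION_MS = 250                  # illustrative speed criterion

def run_trial(target_side, saccade_latency_ms):
    """Return the avatar's response for one trial."""
    fast_enough = saccade_latency_ms <= LATENCY_CRITERION_MS
    if fast_enough and random.random() < REWARD_PROB[target_side]:
        return "avatar_follows_gaze"        # joint attention: the social reward
    return "avatar_keeps_staring"

# Example: simulate a short block with random target sides and latencies
for _ in range(5):
    side = random.choice(["left", "right"])
    latency = random.gauss(240, 30)
    print(side, round(latency), run_trial(side, latency))
```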


Data in Brief ◽  
2021 ◽  
pp. 107565
Author(s):  
Margit Höfler ◽  
Sebastian A. Bauch ◽  
Katrin Liebergesell ◽  
Iain D. Gilchrist ◽  
Anja Ischebeck ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Xing Peng ◽  
Xiaoyu Tang ◽  
Hao Jiang ◽  
Aijun Wang ◽  
Ming Zhang ◽  
...  

Previous behavioral studies have found that inhibition of return decreases audiovisual integration, but the underlying neural mechanisms are unknown. The current work utilized the high temporal resolution of event-related potentials (ERPs) to investigate how audiovisual integration is modulated by inhibition of return. We employed a cue-target paradigm and manipulated target type and cue validity. Participants were required to detect visual (V), auditory (A), or audiovisual (AV) targets presented on the same side as (valid cue) or the opposite side to (invalid cue) the preceding exogenous cue. Neural activity elicited by AV targets was compared with the sum of the activity elicited by the A and V targets, and the difference was taken as the audiovisual integration effect in each cue validity condition (valid, invalid). The ERP results showed a significant super-additive audiovisual integration effect on the P70 (60–90 ms, frontal-central) only under the invalid cue condition. Significant audiovisual integration effects were observed on the N1 and P2 components (N1, 120–180 ms, frontal-central-parietal; P2, 200–260 ms, frontal-central-parietal) in both the valid and invalid cue conditions, and there were no significant differences between the invalid and valid cue conditions on these later components. The results offer the first neural demonstration that inhibition of return modulates the early audiovisual integration process.
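For readers unfamiliar with the additive-model comparison the abstract refers to, the sketch below shows the usual computation: the mean ERP amplitude to AV targets is compared against the summed A and V ERPs within an a priori window such as the P70 (60–90 ms), separately per cue condition. The sampling rate, helper names, and synthetic waveforms are illustrative assumptions, not the study's analysis code.

```python
import numpy as np

# Sketch of the additive-model test: compare ERPs to audiovisual (AV) targets
# against the sum of the unisensory ERPs (A + V) within a latency window.

fs = 500                                   # sampling rate in Hz (assumption)
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch from -100 to 500 ms

def window_mean(erp, t, lo_ms, hi_ms):
    """Mean amplitude of an ERP waveform within a latency window (ms)."""
    mask = (t >= lo_ms / 1000) & (t <= hi_ms / 1000)
    return erp[mask].mean()

def integration_effect(erp_av, erp_a, erp_v, t, lo_ms, hi_ms):
    """Super-additivity index: AV minus (A + V) mean amplitude in the window."""
    return (window_mean(erp_av, t, lo_ms, hi_ms)
            - window_mean(erp_a + erp_v, t, lo_ms, hi_ms))

# Example with synthetic waveforms standing in for condition-average ERPs
rng = np.random.default_rng(1)
erp_a, erp_v, erp_av = (rng.normal(0, 0.5, t.size) for _ in range(3))
print("P70 integration (invalid cue):",
      integration_effect(erp_av, erp_a, erp_v, t, 60, 90))
```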


2021 ◽  
Vol 12 ◽  
Author(s):  
Paige J. Foletta ◽  
Meaghan Clough ◽  
Allison M. McKendrick ◽  
Emma J. Solly ◽  
Owen B. White ◽  
...  

Visual snow syndrome (VSS) is a complex sensory processing disorder. We have previously shown that its visual processing changes manifest in significantly faster eye movements toward a suddenly appearing visual stimulus and in difficulty inhibiting an eye movement toward a non-target visual stimulus. We propose that these changes reflect poor attentional control and occur whether attention is directed exogenously, by a suddenly appearing event, or endogenously, as a function of manipulating expectation surrounding an upcoming event. Irrespective of how attention is captured, competing facilitatory and inhibitory processes prioritise sensory information that is important to us, filtering out that which is irrelevant. A well-known feature of this conflict is the alteration in behaviour that accompanies variation in the temporal relationship between competing sensory events that manipulate facilitatory and inhibitory processes. A classic example is the "Inhibition of Return" (IOR) phenomenon, which describes the relative slowing of a response to a validly cued location compared to an invalidly cued location at longer cue/target intervals. This study explored temporal changes in the allocation of attention using an oculomotor version of Posner's IOR paradigm, manipulating attention exogenously by varying the temporal relationship between a non-predictive visual cue and a target stimulus. Forty participants with VSS (20 with migraine) and 20 controls participated. Saccades were generated to both validly cued and invalidly cued targets with 67, 150, 300, and 500 ms cue/target intervals. VSS participants demonstrated a delayed onset of IOR: unlike controls, who exhibited IOR with 300 and 500 ms cue/target intervals, VSS participants exhibited IOR only with 500 ms cue/target intervals. These findings provide further evidence that attention is impacted in VSS, manifesting in a distinct saccadic behavioural profile and a delayed onset of IOR. Whether IOR is conceived of as the build-up of an inhibitory bias against returning attention to an already inspected location or as a consequence of a stronger attentional orienting response elicited by the cue, our results are consistent with the proposal that in VSS a shift of attention elicits a stronger increase in saccade-related activity than in healthy controls. This work provides a more refined saccadic behavioural profile of VSS that can be interrogated further using sophisticated neuroimaging techniques and may, in combination with other saccadic markers, be used to monitor the efficacy of any future treatments.
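The IOR effect in such a cue-target paradigm is conventionally scored per cue/target interval as the mean latency on validly cued trials minus the mean latency on invalidly cued trials, with positive values indicating IOR. A minimal sketch with synthetic latencies follows; all numbers are illustrative assumptions, not the study's data.

```python
import numpy as np

# Sketch of how an IOR effect is typically scored: for each cue-target interval,
# IOR = mean latency on validly cued trials minus mean latency on invalidly cued
# trials (positive = slower return to the previously cued location).

intervals_ms = [67, 150, 300, 500]
rng = np.random.default_rng(2)

def ior_effect(valid_latencies, invalid_latencies):
    """Positive values indicate inhibition of return."""
    return np.mean(valid_latencies) - np.mean(invalid_latencies)

for soa in intervals_ms:
    # synthetic saccade latencies (ms) standing in for one participant's data
    valid = rng.normal(260 + (5 if soa >= 300 else -5), 20, 40)
    invalid = rng.normal(260, 20, 40)
    print(f"SOA {soa} ms: IOR = {ior_effect(valid, invalid):+.1f} ms")
```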


Author(s):  
Hansol Rheem ◽  
Kelly S. Steelman ◽  
Robert S. Gutzwiller

The SEEV model of visual scanning offers a quick and easy way of evaluating the attentional demands of various tasks and displays. A SEEV model can be developed without relying on complicated mathematical software or background, making the conceptual model highly accessible. Implementation of SEEV modeling can be further improved by easing the process of running simulations and by providing actionable information. In this paper, we showcase the SEEV Modeler, a GUI-based prototype of the computational SEEV model that lowers the technical barriers for human factors practitioners. We also tested the prototype's ability to predict eye movements in dynamic driving scenarios, with an emphasis on the impact of attention-shifting effort and inhibition of return (IOR) on the model's predictive performance. The SEEV Modeler produced model fits comparable to those of previous mathematical modeling approaches but also revealed limitations and practical issues to be addressed in the final version.
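For context, the standard SEEV computation assigns each area of interest (AOI) an attentional weight from its Salience, Effort, Expectancy, and Value, and predicts dwell proportions from the normalized weights. The sketch below is a minimal illustration under assumed coefficients and AOI scores; it is not the SEEV Modeler's implementation.

```python
# Minimal sketch of a SEEV-style attention prediction for driving AOIs.
# Coefficients and AOI scores below are illustrative assumptions.

def seev_weight(salience, effort, expectancy, value,
                s=1.0, ef=1.0, ex=1.0, v=1.0):
    """Attentional weight: the effort of shifting attention counts against an AOI."""
    return s * salience - ef * effort + ex * expectancy + v * value

aois = {
    # AOI: (salience, effort, expectancy, value) on simple ordinal scales
    "roadway":            (2, 0, 3, 3),
    "speedometer":        (1, 1, 1, 2),
    "in_vehicle_display": (1, 2, 2, 1),
}

weights = {name: max(seev_weight(*scores), 0.0) for name, scores in aois.items()}
total = sum(weights.values())
predicted_dwell = {name: w / total for name, w in weights.items()}
print(predicted_dwell)   # compare against observed dwell proportions to assess fit
```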


2021 ◽  
Vol 12 ◽  
Author(s):  
Asenath X. A. Huether ◽  
Linda K. Langley ◽  
Laura E. Thomas

Inhibition of return (IOR) is thought to reflect a cognitive mechanism that biases attention against returning to previously engaged items. While models of cognitive aging have proposed deficits within select inhibitory domains, older adults have demonstrated preserved IOR in previous studies. The present study investigated whether inhibition associated with objects shows the same age patterns as inhibition associated with locations. Young adults (18–22 years) and older adults (60–86 years) were tested in two experiments measuring location- and object-based IOR. With a dynamic paradigm (Experiment 1), both age groups produced significant location-based IOR, but only young adults produced significant object-based IOR, consistent with previous findings. However, with a static paradigm (Experiment 2), young adults and older adults produced both location- and object-based IOR, indicating that object-based IOR is preserved in older adults under some conditions. The findings provide partial support for unique age-related inhibitory patterns associated with attention to objects and locations.

