Object-Guided Spatial Selection in Touch Without Concurrent Changes in the Perceived Location of the Hands

Author(s):  
Helge Gillmeister ◽  
Simona Cantarella ◽  
Ana Ioana Gheorghiu ◽  
Julia Adler

In an endogenous cueing paradigm with central visual cues, observers made speeded responses to tactile targets at the hands, which were either close together or far apart and held either two separate objects or one common object between them. When the hands were far apart, the response time costs associated with attending to the wrong hand were reduced when attention had to be shifted along one object jointly held by both hands compared to when it was shifted over the same distance but across separate objects. Similar reductions in attentional costs were observed when the hands were placed closer together, suggesting that processing at one hand is prioritized less over that at the other when the hands can be “grouped” by virtue of arising from the same spatial location or from the same object. Probes of perceived hand locations throughout the task showed that holding a common object decreased attentional separability without decreasing the perceived separation between the hands. Our findings suggest that tactile events at the hands may be represented in a spatial framework that flexibly adapts to (object-guided) attentional demands, while their relative coordinates are simultaneously preserved.
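As a reading aid, the attentional cost described above is typically quantified as the difference between mean response times on invalidly and validly cued trials, computed separately per condition. The following minimal Python sketch illustrates that computation; it is not the authors' analysis code, and all trial values and names are invented.

```python
# Hypothetical sketch: quantifying endogenous cueing costs as
# invalid-cue RT minus valid-cue RT per experimental condition.
from statistics import mean

# Toy trials: (hands, object, cue_validity, response_time_ms) -- invented values.
trials = [
    ("far", "common", "valid", 412), ("far", "common", "invalid", 448),
    ("far", "separate", "valid", 409), ("far", "separate", "invalid", 471),
    ("near", "common", "valid", 405), ("near", "common", "invalid", 433),
    ("near", "separate", "valid", 402), ("near", "separate", "invalid", 440),
]

def cueing_cost(hands, obj):
    """Mean invalid-cue RT minus mean valid-cue RT for one condition."""
    rt = lambda validity: mean(t[3] for t in trials
                               if t[:3] == (hands, obj, validity))
    return rt("invalid") - rt("valid")

for hands in ("far", "near"):
    for obj in ("common", "separate"):
        print(f"{hands}/{obj}: cost = {cueing_cost(hands, obj)} ms")
```

With the toy numbers above, the cost shrinks in the common-object conditions, mirroring the pattern the abstract reports.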

1976 ◽  
Vol 28 (2) ◽  
pp. 193-202 ◽  
Author(s):  
Philip Merikle

Report of single letters from centrally fixated, seven-letter target rows was probed by either auditory or visual cues. The target rows were presented for 100 ms, and the report cues were single digits indicating the spatial location of a letter. In three separate experiments, report was always better with the auditory cues. The advantage for the auditory cues was maintained both when target rows were masked by a patterned stimulus and when the auditory cues were presented 500 ms later than comparable visual cues. The results indicate that visual cues produce modality-specific interference which operates at a level of processing beyond iconic representation.


Author(s):  
Chong Wang ◽  
Zheng-Jun Zha ◽  
Dong Liu ◽  
Hongtao Xie

High-level semantic knowledge, in addition to low-level visual cues, is crucial for co-saliency detection. This paper proposes a novel end-to-end deep learning approach for robust co-saliency detection that simultaneously learns a high-level group-wise semantic representation and deep visual features of a given image group. Inter-image interaction at the semantic level, as well as the complementarity between group semantics and visual features, is exploited to boost the inference of co-salient regions. Specifically, the proposed approach consists of a co-category learning branch and a co-saliency detection branch. The former learns a group-wise semantic vector using the co-category association of an image group as supervision, while the latter infers precise co-saliency maps based on the ensemble of group semantic knowledge and deep visual cues. The group semantic vector is broadcast to each spatial location of the multi-scale visual feature maps and used as top-down semantic guidance for the bottom-up inference of co-saliency. The co-category learning and co-saliency detection branches are jointly optimized in a multi-task learning manner, further improving the robustness of the approach. Moreover, we construct a new large-scale co-saliency dataset, COCO-SEG, to facilitate research on co-saliency detection. Extensive experimental results on COCO-SEG and the widely used benchmark Cosal2015 demonstrate the superiority of the proposed approach compared to state-of-the-art methods.
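The broadcasting step lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' code, of fusing a group-wise semantic vector with a visual feature map by replicating it over every spatial location; the module name, layer choice, and sizes are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class SemanticGuidedFusion(nn.Module):
    """Hypothetical module: broadcast a group semantic vector to each
    spatial location of a visual feature map and fuse by 1x1 convolution."""
    def __init__(self, feat_channels: int, sem_dim: int):
        super().__init__()
        self.fuse = nn.Conv2d(feat_channels + sem_dim, feat_channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, sem: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) visual features; sem: (B, D) group semantic vector
        b, _, h, w = feat.shape
        sem_map = sem[:, :, None, None].expand(b, -1, h, w)  # tile over H x W
        return self.fuse(torch.cat([feat, sem_map], dim=1))  # top-down guidance

# Toy usage with invented sizes: a 256-channel feature map, a 64-d semantic vector.
fusion = SemanticGuidedFusion(feat_channels=256, sem_dim=64)
guided = fusion(torch.randn(2, 256, 32, 32), torch.randn(2, 64))
print(guided.shape)  # torch.Size([2, 256, 32, 32])
```

In a multi-scale setting, the same vector would be tiled at each scale's spatial resolution before fusion, which is what "broadcast to each spatial location of the multi-scale visual feature maps" describes.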


2015 ◽  
Vol 113 (6) ◽  
pp. 1896-1906 ◽  
Author(s):  
William K. Page ◽  
Nobuya Sato ◽  
Michael T. Froehler ◽  
William Vaughn ◽  
Charles J. Duffy

Navigation relies on the neural processing of sensory cues about observer self-movement and spatial location. Neurons in macaque dorsal medial superior temporal cortex (MSTd) respond to visual and vestibular self-movement cues, potentially contributing to navigation and orientation. We moved monkeys on circular paths around a room while recording the activity of MSTd neurons. MSTd neurons show a variety of sensitivities to the monkey's heading direction, circular path through the room, and place in the room. Changing visual cues alters the relative prevalence of those response properties. Disrupting the continuity of self-movement paths through the environment disrupts path selectivity in a manner linked to the time course of single neuron responses. We hypothesize that sensory cues interact with the spatial and temporal integrative properties of MSTd neurons to derive path selectivity for navigational path integration supporting spatial orientation.


2019 ◽  
Vol 13 (2) ◽  
pp. 86-93 ◽  
Author(s):  
Jose Angelo Barela ◽  
Anselmo A Rocha ◽  
Andrew R Novak ◽  
Job Fransen ◽  
Gabriella A Figueiredo

Background: Many activities require a complex interrelationship between a performer and stimuli available in the environment without explicit perception, but many aspects of developmental changes in the use of implicit cues remain unknown. Aim: To investigate the use of implicit visual precues presented at different time intervals in children, adolescents, and adults. Method: Seventy-two people, male and female, constituted four age groups: 8-, 10-, and 12-year-olds and adults. Participants performed 32 trials of a four-choice reaction-time task across four conditions: no precue, or a 43 ms centralized dot appearing in the stimulus circle 43, 86, or 129 ms prior to the stimulus. Response times were obtained for each trial and pooled within each condition. Results: Response times for 8-year-olds were longer than those for 12-year-olds and adults, and those for 10-year-olds were longer than those for adults. Response times were longer in the no-precue condition than when precues were presented 86 or 129 ms before the stimulus, and longer when the precue was presented 43 ms before the stimulus than when it was presented 129 ms before. Interpretation: Implicit precues reduce response time in children, adolescents, and adults, but young children benefit less from implicit precues than adolescents and adults do.
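The precue benefit reported here can be expressed as the drop in mean response time relative to the no-precue baseline at each precue-to-stimulus interval. Here is a minimal sketch of that comparison, assuming invented pooled response times for a single illustrative participant; none of these numbers come from the study.

```python
# Hypothetical sketch: precue benefit = mean no-precue RT minus mean RT
# at each precue-to-stimulus interval (ms). All values are invented.
from statistics import mean

# Pooled response times (ms) per condition for one illustrative participant.
rts = {
    "none": [520, 533, 518, 541],
    43:     [512, 525, 509, 530],
    86:     [488, 497, 492, 501],
    129:    [470, 481, 476, 485],
}

baseline = mean(rts["none"])
for soa in (43, 86, 129):
    benefit = baseline - mean(rts[soa])
    print(f"precue {soa} ms before stimulus: benefit = {benefit:.1f} ms")
```

Under this toy data, the benefit grows with the interval, matching the reported ordering (129 ms > 43 ms).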


2001 ◽  
Vol 204 (1) ◽  
pp. 15-23 ◽  
Author(s):  
H.R. Campbell

Blowflies, Phaenicia sericata, can be trained to discriminate between visual cues in a learning paradigm in which one of two cues is positively rewarded. Retinotopic matching of a learned visual image to the same retinal location from viewing to viewing has been hypothesized to underlie visual pattern learning and memory in insects. To address the theory of retinotopic matching, a detailed analysis was made of the flies' body orientations during learned discriminations between +45 degrees and −45 degrees gratings. Initial approaches to the positively rewarded visual cue did not originate from the same spatial location within the behavioral arena with respect to the visual cues; thus, individual flies approached the positive cue from a different vantage point from trial to trial. During initial approaches to the rewarded visual cue, the distributions of body angles with respect to the cue were different from trial to trial for each individual. These data suggest that Phaenicia sericata can learn a visual pattern with one eye region and later recognize the same pattern with another eye region. Thus, retinotopic matching is not necessary for the recognition of pattern orientation in the experimental paradigm used here. The average amount of head turning in the yaw plane was too small to compensate for the changes in body orientation exhibited by the flies. Flies view the visual patterns with distinct retinal regions from trial to trial during orientation discrimination.


2015 ◽  
Vol 2 (10) ◽  
pp. 150324 ◽  
Author(s):  
Vivek Nityananda ◽  
Lars Chittka

Attentional demands can prevent humans and other animals from performing multiple tasks simultaneously. Some studies, however, show that tasks presented in different sensory modalities (e.g. visual and auditory) can be processed simultaneously. This suggests that, at least in these cases, attention might be modality-specific and divided differently between tasks presented in the same modality compared with different modalities. We investigated this possibility in bumblebees (Bombus terrestris) using a biologically relevant experimental set-up in which they had to simultaneously choose more rewarding flowers and avoid simulated predatory attacks by robotic ‘spiders’. We found that when both tasks had to be performed using visual cues alone, bees failed to perform them simultaneously. However, when highly rewarding flowers were indicated by olfactory cues and predators were indicated by visual cues, bees managed to perform both tasks successfully. Our results thus provide evidence for modality-specific attention in foraging bees and establish a novel framework for future studies of crossmodal attention in ecologically realistic settings.


2004 ◽  
Vol 4 (8) ◽  
pp. 381-381 ◽  
Author(s):  
L. R. Harris ◽  
M. R. Jenkin ◽  
R. T. Dyde ◽  
H. L. Jenkin
2019 ◽  
Author(s):  
Aaron Blaisdell

We studied object-location binding in pigeons using a sequence learning procedure. A sequence of four objects was presented, one at a time at one of four locations on a touchscreen. A single peck at the object ended the trial, and food reinforcement was delivered intermittently. In Experiment 1, a between-subjects design was used to present objects, locations, or both in a regular sequence or randomly. Response time costs on nonreinforced probe tests on which object order, location order, or both were disrupted revealed sequence learning effects. Pigeons encoded location order when it was consistent, but not object order when it alone was consistent. When both were consistent, pigeons encoded both, and also showed evidence of object-location binding. In Experiment 2, two groups of pigeons received training on sequences where the same object always appeared at the same location. For some pigeons a consistent sequence was used while for others sequence order was randomized. Only when sequence order was consistent was object-location binding found. These experiments are the first demonstrations of strong and lasting feature binding in pigeons.

