Online and offline influences of vision on bimanual tactile spatial perception

2020
Author(s):
Yash R. Wani
Silvia Convento
Jeffrey M Yau

Vision and touch interact in spatial perception. How vision exerts online influences on tactile spatial perception is well appreciated, but far less is known about how vision modulates tactile perception offline. Here, we investigated how visual cues exert both online and offline biases in bimanual tactile spatial perception. In a series of experiments, participants performed a 4-alternative forced-choice tactile detection task in which they reported the perception of peri-threshold taps on the left hand, right hand, both hands, or no touch (LRBN). Participants initially performed the LRBN task in the absence of visual cues. Subsequently, participants performed the LRBN task in blocks in which non-informative visual cues were presented on the left and right hands. To explore the effect of distractor salience on visuo-tactile spatial interactions, we varied the brightness of the visual cues such that visual stimuli associated with one hand were consistently brighter than visual stimuli associated with the other hand. We found that participants performed the tactile detection task in an unbiased manner in the absence of visual distractors. Visual cues biased tactile performance, despite an instruction to ignore vision, and these online effects tended to be larger with brighter distractors. Moreover, tactile performance was biased toward the side of the brighter visual cues even on trials in which no visual cues were presented during the visuo-tactile block. Using a modeling framework based on signal detection theory, we compared a number of alternative models to recapitulate the behavioral results and to link the visual influences on touch to changes in sensitivity and criterion. Our collective results reveal the obligatory and systematic influences of vision on both online and offline tactile spatial perception. The nervous system appears to automatically leverage multiple sensory modalities to build representations and calibrate decision processes for bimanual tactile spatial perception.
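To make the signal detection framework concrete, the sketch below implements a minimal two-channel model of the LRBN task: each hand is an independent Gaussian evidence channel with its own sensitivity (d') and criterion, and a hand-specific criterion shift stands in for the offline visual bias. The parameter names and the independence assumption are ours for illustration, not the authors' fitted model.

```python
# A minimal sketch of a two-channel signal detection model for the
# LRBN task, assuming independent Gaussian evidence channels per hand.
# Parameter names and the independence assumption are illustrative,
# not the authors' fitted model.
from scipy.stats import norm

def lrbn_probs(dprime_l, dprime_r, crit_l, crit_r, stim_l, stim_r):
    """Predicted probabilities of the four LRBN responses.

    Evidence on each hand is Gaussian with mean d' when that hand is
    stimulated (stim = 1) and 0 otherwise; a touch is reported on a
    hand when its evidence exceeds that hand's criterion. A lowered
    criterion on one hand mimics a bias toward that side.
    """
    p_l = norm.sf(crit_l, loc=dprime_l * stim_l)  # P(report left touch)
    p_r = norm.sf(crit_r, loc=dprime_r * stim_r)  # P(report right touch)
    return {"L": p_l * (1 - p_r),
            "R": (1 - p_l) * p_r,
            "B": p_l * p_r,
            "N": (1 - p_l) * (1 - p_r)}

# Example: a right-hand tap only, with the left criterion lowered to
# mimic an offline bias toward a (hypothetically brighter) left cue.
print(lrbn_probs(dprime_l=1.5, dprime_r=1.5,
                 crit_l=0.5, crit_r=1.0, stim_l=0, stim_r=1))
```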

1986
Vol 63 (2)
pp. 537-538
Author(s):  
Sarah Grogan

Lateralisation for tactile-spatial perception was studied in 21 boys, aged 10 to 15 years and of average intelligence, who were underachieving in reading, and in 21 matched controls. Controls showed a significant advantage at recognising shapes on a visual display when the shapes had been actively felt with the left hand. There was no difference between left- and right-hand scores in the poor readers. This supports Witelson's (1977) finding that poor readers are less lateralised for spatial processing than are average readers.


2021
Vol 11 (1)
Author(s):
Stefano Rozzi
Marco Bimbi
Alfonso Gravante
Luciano Simone
Leonardo Fogassi

The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. It has been shown that VLPF neurons can show visual responses during paradigms requiring monkeys to associate arbitrary visual cues with behavioral reactions. Further studies showed that there are also VLPF neurons responding to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to and differentiate between stimuli belonging to different categories, even in the absence of a specific requirement to actively categorize these stimuli or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons of a large sector of VLPF to a wide set of visual stimuli when monkeys simply observe them. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when the objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed or when they become the target of a grasping action. Our results indicate that: (1) part of the visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of a "passive" categorical coding; (2) VLPF neuronal visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response occurring when the object is the target of an action. These data indicate that VLPF performs an early, passive description of several types of visual stimuli that can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.


2021
Author(s):
Judith M. Varkevisser
Ralph Simon
Ezequiel Mendoza
Martin How
Idse van Hijlkema
...  

Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback alone or to audio playback combined with pixelated, time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement, and biologically relevant video content was more salient than colour- and movement-equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction makes it less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.


Author(s):  
Elizabeth Thorpe Davis
Larry F. Hodges

Two fundamental purposes of human spatial perception, in either a real or a virtual 3D environment, are to determine where objects are located in the environment and to distinguish one object from another. Although various sensory inputs, such as haptic and auditory inputs, can provide this spatial information, vision usually provides the most accurate, salient, and useful information (Welch and Warren, 1986). Moreover, of the visual cues available to humans, stereopsis provides an enhanced perception of depth and of three-dimensionality for a visual scene (Yeh and Silverstein, 1992). (Stereopsis, or stereoscopic vision, results from the fusion of the two slightly different views of the external world that our laterally displaced eyes receive (Schor, 1987; Tyler, 1983).) In fact, users often prefer 3D stereoscopic displays (Spain and Holzhausen, 1991) and find that such displays provide more fun and excitement than do simpler monoscopic displays (Wichanski, 1991). Thus, in creating 3D virtual environments or 3D simulated displays, much recent attention has been devoted to visual 3D stereoscopic displays. Yet, given the costs and technical requirements of such displays, we should consider several issues. First, we should consider in which conditions and situations these stereoscopic displays enhance perception and performance. Second, we should consider how binocular geometry and various spatial factors can affect human stereoscopic vision and thus constrain the design and use of stereoscopic displays. Finally, we should consider the modeling geometry of the software, the display geometry of the hardware, and some technological limitations that constrain the design and use of stereoscopic displays by humans. In the following section we consider when 3D stereoscopic displays are useful and why they are useful in some conditions but not others. In the section after that we review some basic concepts about human stereopsis and fusion that are of interest to those who design or use 3D stereoscopic displays; we also point out some spatial factors that limit stereopsis and fusion in human vision, as well as some potential problems that should be considered in designing and using 3D stereoscopic displays. Following that, we discuss some software and hardware issues, such as modeling geometry and display geometry, as well as geometric distortions and other artifacts that can affect human perception.
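Since the chapter turns on binocular geometry, a small worked example may help fix ideas. The sketch below computes on-screen horizontal parallax from interpupillary distance, viewing distance, and simulated object depth using the standard similar-triangles relation; the function and variable names are illustrative and not taken from the chapter.

```python
# A worked example of basic stereoscopic display geometry: on-screen
# horizontal parallax from interpupillary distance (IPD), viewing
# distance, and simulated object depth. Names are illustrative.

def screen_parallax(ipd_mm, screen_mm, object_mm):
    """Horizontal screen parallax in mm for a point at object_mm.

    Similar triangles give p = IPD * (Z - D) / Z, with D the
    viewer-to-screen distance and Z the viewer-to-object distance.
    Positive parallax: the point appears behind the screen (uncrossed
    disparity); negative: in front (crossed); p -> IPD as Z -> infinity.
    """
    return ipd_mm * (object_mm - screen_mm) / object_mm

# Example: 64 mm IPD, screen at 600 mm. Note that uncrossed parallax
# never exceeds the IPD, no matter how distant the simulated point.
for z in (400.0, 600.0, 1200.0, 1e9):
    print(f"Z = {z:>10.0f} mm  ->  parallax = {screen_parallax(64, 600, z):7.2f} mm")
```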


1979
Vol 49 (3)
pp. 867-870
Author(s):  
Allan L. Combs
Dana A. Beezley
Gary M. Prater
Gerald F. Henning
Rhonda F. Cottrell

Among a group of 12 persons selected for the ability to write with ease with either hand, none were found to write using a hooked hand posture with either the right or left hand. Tests of verbal and manipulospatial ability indicated a normal balance of these two types of abilities, usually associated with the left and right hemispheres. Findings are discussed in terms of implications for cerebral organization and hand position in writing.


2014
Vol 281 (1787)
pp. 20140634
Author(s):  
Matthew Collett

Insects such as desert ants learn stereotyped visual routes between their nests and reliable food sites. The studies reported here reveal an important control element for ensuring that route memories are used appropriately: visual route memories can be disengaged, so that they do not provide guidance, even when all appropriate visual cues are present and there are no competing guidance cues. Ants were trained along a simple route dominated by a single isolated landmark. If returning ants were caught just before entering the nest and replaced at the feeder, they often interrupted the recapitulation of their homeward route with a period of apparent confusion during which the route memories were ignored. A series of experiments showed that this confusion occurred in response to the repetition of the route, and that the ants must therefore maintain some kind of memory of their visual experience on the current trip home. A conceptual model of route guidance is offered to explain the results; it proposes how this trip memory might act and suggests a general role for disengagement in regulating route guidance.
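The abstract does not spell out the mechanics of the proposed model, but the disengagement idea can be caricatured as a simple gate: guidance follows route memory until the record of the current trip signals a repetition, at which point route memories are ignored. The sketch below is one hypothetical reading, not Collett's actual model.

```python
# A hypothetical caricature of the disengagement idea: route memories
# drive guidance until the record of the current trip signals that the
# route is being repeated, at which point guidance is disengaged.
# This is one illustrative reading of the abstract, not Collett's model.

class RouteGuidance:
    def __init__(self):
        self.engaged = True
        self.trip_log = []  # landmarks already passed on this trip

    def step(self, landmark):
        if landmark in self.trip_log:
            # Repetition conflicts with the memory of the current trip:
            # disengage and ignore stored route memories for now.
            self.engaged = False
        self.trip_log.append(landmark)
        return "follow route memory" if self.engaged else "apparent confusion"

    def reset_trip(self):
        # Reaching the nest would clear the trip memory and re-engage.
        self.engaged = True
        self.trip_log.clear()

# An ant caught just before the nest and replaced at the feeder sees
# the feeder twice on one trip, triggering disengagement.
ant = RouteGuidance()
for lm in ["feeder", "landmark", "nest-approach", "feeder", "landmark"]:
    print(lm, "->", ant.step(lm))
```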


1989
Vol 68 (3 Suppl)
pp. 1031-1039
Author(s):  
Naohiro Minagawa
Kan Kashu

16 adult subjects performed a tactile recognition task. Based on our 1984 study, half of the subjects were classified as having a left-hemispheric preference for the processing of visual stimuli, and the other half as having a right-hemispheric preference. The present task was conducted according to the S1–S2 matching paradigm. The standard stimulus was a readily recognizable object and was presented tactually to either the left or right hand of each subject. The comparison stimulus was an object-picture and was presented visually by slide in a tachistoscope. The interstimulus interval was 0.05 sec. or 2.5 sec. Analysis indicated that the left-preference group showed right-hand superiority, and the right-preference group showed left-hand superiority. The notion of individual hemisphericity was supported in tactile processing.


1996
Vol 74 (12)
pp. 2248-2253
Author(s):  
Lamar A. Windberg

Individual coyotes (Canis latrans) are infrequently captured within their familiar areas of activity. Current hypotheses are that the differential capture vulnerability may involve neophobia or inattentiveness. To assess the effect of familiarity, I measured coyote responsiveness to sensory cues encountered in familiar and novel settings. Seventy-four captive coyotes were presented with visual and olfactory stimuli in familiar and unfamiliar 1-ha enclosures. The visual stimuli were black or white wooden cubes of three sizes (4, 8, and 16 cm per side). The olfactory stimuli were fatty acid scent, W-U lure (trimethylammonium decanoate plus sulfide additives), and coyote urine and liquefied feces. Overall, coyotes were more responsive to stimuli during exploration in unfamiliar than in familiar enclosures. None of 38 coyotes that responded were neophobic toward the olfactory stimuli. The frequency of coyote response, and the resulting degrees of neophobia, did not differ between the black and white visual stimuli. Regardless of context, the largest visual stimuli were recognized at the greatest distance and evoked the strongest neophobic response. A greater proportion of coyotes were neophobic toward the small and medium-sized stimuli in familiar than in unfamiliar enclosures. This study demonstrated that when encountered in familiar environments, visual cues are more likely to elicit neophobic responses by coyotes than are olfactory stimuli.


1995
Vol 48 (2)
pp. 367-383
Author(s):
Daniel J. Weeks
Robert W. Proctor
Brad Beyak

It has previously been shown that, when stimuli positioned above or below a central fixation point (“up” and “down” stimuli) are assigned to left and right responses, the stimulus–response mapping up-left/down-right is more compatible than the mapping up-right/down-left for responses executed by the left hand in the left hemispace, but this relation is reversed for responses executed by the right hand in the right hemispace. In Experiment 1, each hand responded at locations in both hemispaces to dissociate the influence of hand identity from response location, and response location was found to be the determinant of relative compatibility. In Experiment 2 responses were made at the sagittal midline, and an inactive response switch was placed to the left or right to induce coding of the active switch as right or left, respectively. This manipulation of relative location had an effect similar to, although of lesser magnitude than, that produced by physically changing location of the response switch in Experiment 1. It is argued that these results are counter to predictions of a movement-preference account and consistent with the view that spatial coding underlies compatibility effects for orthogonally oriented stimulus and response sets.


2014
Vol 85 (3)
pp. 408-412
Author(s):
Abraham N. Safer
Peter Homel
David D. Chung

Objective: To assess lateral differences between ossification events and stages of bone development in the hands and wrists utilizing Fishman's skeletal maturation indicators (SMIs). Materials and Methods: The skeletal ages of 125 subjects, aged 8 to 20 years, were determined with left and right hand-wrist radiographs using Fishman's SMI assessment. Each subject was also given the Edinburgh Handedness Questionnaire to assess handedness. The skeletal ages of both hand-wrist radiographs were analyzed against each other, handedness, chronologic age, and gender. Results: There were no significant differences overall in right and left SMI scores (P = .70); 79% of all patients showed no difference in right and left SMI scores, regardless of handedness, gender, or age. However, when patients were categorized based on clinical levels of SMI score for the right hand-wrist, there was a significant difference (P = .01) between the SMI 1-3 group and the SMI 11 group. Subjects in the SMI 1-3 group were more likely to show a left > right SMI score, while subjects in the SMI 11 group were likely to show a right > left SMI score. Conclusion: Although no significant overall lateral differences in SMI scores were noted, it may be advisable to obtain a left hand-wrist radiograph and/or additional diagnostic information to estimate completion of growth in young surgical patients.

