Using HMD Virtual Reality to investigate individual differences in visual processing styles

2021 ◽  
Author(s):  
Sarune Savickaite ◽  
Kimberley McNaughton ◽  
Elisa Gaillard ◽  
Ioanna Amaya ◽  
Neil McDonnell ◽  
...  

Global and local processing is part of human perceptual organisation: global processing enables us to extract the ‘gist’ of the visual information, while local processing helps us to perceive the details. Individual differences in these two types of visual processing have been found in autism and ADHD. Virtual Reality (VR) has become a more widely available research method in the last few decades, yet no previous research has investigated perceptual differences using this technology. The standard Rey–Osterrieth Complex Figure (ROCF) test was used as a baseline task to examine the practicalities of using VR as an experimental platform. Ninety-four participants were tested. The Attention-to-Detail, Attention Switching and Imagination subscales of the AQ questionnaire were found to be predictors of organisational ROCF scores, whereas only the Attention-to-Detail subscale was predictive of perceptual ROCF scores. The current study is an example of how classic psychological paradigms can be transferred into the virtual world. Further investigation of the distinct individual preferences in drawing tasks in VR could lead to a better understanding of how we process visuospatial information, and such findings could extend to industrial applications.
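The subscale analysis described above amounts to a multiple regression of ROCF scores on AQ subscale scores. A minimal sketch follows; the data here are fabricated for illustration only (the true coefficients are not reported in the abstract), and only the sample size (94) and the three predictor names come from the study:

```python
import numpy as np

# Hypothetical illustration: predict organisational ROCF scores from
# three AQ subscales by ordinary least squares (fabricated data).
rng = np.random.default_rng(0)
n = 94  # sample size reported in the study
X = rng.normal(size=(n, 3))  # columns: Attention-to-Detail,
                             # Attention Switching, Imagination
beta_true = np.array([0.5, -0.3, 0.2])        # made-up effect sizes
y = X @ beta_true + rng.normal(scale=0.5, size=n)  # organisational score

# Add an intercept column and fit by least squares
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred = X1 @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(coef, r2)
```

With a simulated effect this size, the fitted coefficients recover the generating values to within sampling error, which is the sense in which a subscale "predicts" the score.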

2022 ◽  
Author(s):  
Sarune Savickaite ◽  
Neil McDonnell ◽  
David Simmons

One approach to characterizing human perceptual organization is to distinguish global and local processing. In visual perception, global processing enables us to extract the ‘gist’ of the visual information and local processing helps us to perceive the details. Individual differences in these two types of visual processing have been found in conditions like autism and ADHD. The Rey-Osterrieth Complex Figure (ROCF) test is commonly used to investigate differences between local and global processing. Whilst Virtual Reality (VR) has become more accessible, cheaper, and widely used in psychological research, no previous study has investigated local vs global perceptual differences using immersive technology. In this study, we investigated individual differences in local and global processing as a function of autistic and ADHD traits. The ROCF was presented in the virtual environment and a standard protocol for using the figure was followed. A novel method of quantitative data extraction was used, which will be described in this paper in greater detail. Whilst some performance differences were found between experimental conditions, no relationship was observed between these differences and participants’ levels of autistic and ADHD traits. Limitations of the study and implications of the novel methodology are discussed.


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Jen-Chun Hsiang ◽  
Keith P Johnson ◽  
Linda Madisen ◽  
Hongkui Zeng ◽  
Daniel Kerschensteiner

Neurons receive synaptic inputs on extensive neurite arbors. How information is organized across arbors and how local processing in neurites contributes to circuit function is mostly unknown. Here, we used two-photon Ca²⁺ imaging to study visual processing in VGluT3-expressing amacrine cells (VG3-ACs) in the mouse retina. Contrast preferences (ON vs. OFF) varied across VG3-AC arbors depending on the laminar position of neurites, with ON responses preferring larger stimuli than OFF responses. Although arbors of neighboring cells overlap extensively, imaging population activity revealed continuous topographic maps of visual space in the VG3-AC plexus. All VG3-AC neurites responded strongly to object motion, but remained silent during global image motion. Thus, VG3-AC arbors limit vertical and lateral integration of contrast and location information, respectively. We propose that this local processing enables the dense VG3-AC plexus to contribute precise object motion signals to diverse targets without distorting target-specific contrast preferences and spatial receptive fields.


2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one of the test conditions, the subjects were cued to one of the two verticals and were instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared and the subjects had to judge which line was longer. The results showed that a line appeared longer when it was aligned with the direction of the vertical currently perceived by the subject. In another test condition, in which the subjects had all perceptual cues available, the influence was even stronger. This study demonstrates that top-down processing influences lower-level visual processing mechanisms.


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


2020 ◽  
Author(s):  
Valentina Cazzato ◽  
Elizabeth Walters ◽  
Cosimo Urgesi

We examined whether visual processing mechanisms of the body of conspecifics are different in women and men and whether these rely on westernised socio-cultural ideals and body image concerns. Twenty-four women and 24 men performed a visual discrimination task of upright or inverted images of female or male bodies and faces (Experiment 1) and objects (Experiment 2). In Experiment 1, both groups of women and men showed comparable abilities in the discrimination of upright and inverted bodies and faces. However, the genders of the human stimuli yielded different effects on participants’ performance, so that male bodies and female faces appeared to be processed less configurally than female bodies and male faces, respectively. Interestingly, altered configural processing for male bodies was significantly predicted by participants’ Body Mass Index (BMI) and their level of internalization of muscularity. Our findings suggest that configural visual processing of bodies and faces in women and men may be linked to a selective attention to detail needed for discriminating salient physical (perhaps sexual) cues of conspecifics. Importantly, BMI and muscularity internalization of beauty ideals may also play a crucial role in this mechanism.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jacques Pesnot Lerousseau ◽  
Gabriel Arnold ◽  
Malika Auvray

Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, the idea that it reflects a mixture of both has emerged over the last decade. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved in sound identification, as participants' performance was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.


1983 ◽  
Vol 27 (5) ◽  
p. 354 ◽  
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that, for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Since the visual process of conjoining separable features is additive, this difference is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions.
However, for each of the two sets of stimuli we studied, there was one configuration that stood apart from the others in its set in that it yielded significantly faster response times, and in that conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects we have found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
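The flat-versus-linear distinction above can be captured by the standard additive model RT = base + slope × N, where the slope is near zero for parallel (feature) search and positive for serial (conjunction) search. A minimal sketch, with illustrative (not the authors') parameter values:

```python
# Minimal sketch of the parallel vs. serial search model: mean response
# time as a linear function of array size N, RT = base + slope * N.
# The base and slope values are illustrative, not taken from the study.
def mean_rt(array_size, base_ms=450.0, slope_ms=0.0):
    """Predicted mean RT in ms; slope_ms = 0 gives the flat curve
    associated with parallel search."""
    return base_ms + slope_ms * array_size

array_sizes = [4, 8, 16, 32]
feature = [mean_rt(n, slope_ms=0.0) for n in array_sizes]       # flat
conjunction = [mean_rt(n, slope_ms=25.0) for n in array_sizes]  # linear

print(feature)      # constant across array sizes
print(conjunction)  # increases linearly with array size
```

A steeper fitted slope for conjunction targets, as reported above, corresponds to a larger `slope_ms` in this model.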


1999 ◽  
Vol 11 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Edmund T. Rolls ◽  
Martin J. Tovée ◽  
Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have previously shown that the response of the neurons is interrupted by the mask. Under conditions when humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
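The "bits" figures above are Shannon mutual information between stimulus identity and the neuronal response. A minimal sketch of the quantity being estimated, using a plug-in estimate over a discretised response and fabricated trial counts (not the authors' data or their exact estimator, which must also correct for limited sampling):

```python
import math
from collections import Counter

# Illustrative sketch: Shannon mutual information I(S;R) in bits between
# stimulus identity S and a discretised neuronal response R, estimated
# from empirical trial frequencies.
def mutual_information(pairs):
    """pairs: list of (stimulus, response) trials; returns I(S;R) in bits."""
    n = len(pairs)
    p_sr = Counter(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_r = Counter(r for _, r in pairs)
    info = 0.0
    for (s, r), c in p_sr.items():
        # p(s,r) * log2( p(s,r) / (p(s) * p(r)) )
        info += (c / n) * math.log2(c * n / (p_s[s] * p_r[r]))
    return info

# Fabricated example: two face stimuli, responses partly informative
# about which face was shown.
trials = [("face1", "high")] * 40 + [("face1", "low")] * 10 \
       + [("face2", "low")] * 40 + [("face2", "high")] * 10
print(round(mutual_information(trials), 3))  # → 0.278
```

A response that perfectly discriminates two equiprobable stimuli would yield 1 bit; the 0.1–0.5 bit values quoted above reflect partial discrimination under masking.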

