The Integration of Tactile and Visual Cues Increases Golf Putting Error in a Mixed-Reality Paradigm.

2021
Author(s):
Caitlin Elisabeth Naylor
David Harris
Samuel James Vine
Jack Brookes
Faisal Mushtaq
et al.

The integration of visual and tactile cues can enhance perception. However, the nature of this integration, and its subsequent benefits for perception and action execution, are context-dependent. Here, we examined how visual-tactile integration can influence performance on a complex motor task using virtual reality. We asked participants to wear a VR head-mounted display while using a tracked physical putter to make golf putts on a VR golf course under two conditions. In the ‘tactile’ condition, putter contact with the virtual golf ball coincided with contact with a physical ball. In the second, ‘no tactile’ condition, no physical ball was present, so the putter contacted only the virtual ball. Contrary to our pre-registered prediction that performance would benefit from the integration of visual and tactile cues, we found that golf putting accuracy was higher in the no tactile condition than in the tactile condition: participants exhibited greater lateral error variance and more over/undershooting when the physical ball was present. These differences between the conditions suggest, first, that tactile cues, when available, were integrated with visual cues, and second, that this integration is not necessarily beneficial to performance. We suggest that the poorer performance with the physical ball present may have been due to minor incongruencies between the virtual visual cues and the physical tactile cues. We discuss the implications of these results for VR sports training and highlight that the absence of matched tactile cues in VR can result in sub-optimal learning and performance.
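As an illustration of the accuracy measures reported here, the sketch below decomposes putt end-points into lateral error and depth (over/undershoot) error. The coordinate conventions, function name, and example values are our own assumptions, not details taken from the study:

```python
import numpy as np

def putt_errors(end_xy, start_xy, hole_xy):
    """Decompose a putt's resting position into lateral and depth error.

    end_xy, start_xy, hole_xy: 2-D positions on the green (metres).
    Returns (lateral, depth): signed distance perpendicular to the
    start-to-hole line, and signed over-(+)/under-(-)shoot along it.
    All conventions here are illustrative assumptions.
    """
    aim = np.asarray(hole_xy, float) - np.asarray(start_xy, float)
    aim /= np.linalg.norm(aim)                 # unit vector towards the hole
    perp = np.array([-aim[1], aim[0]])         # unit lateral direction
    err = np.asarray(end_xy, float) - np.asarray(hole_xy, float)
    return float(err @ perp), float(err @ aim)

# Lateral error variance over a block of putts (made-up end-points):
ends = [(0.1, 3.2), (-0.05, 2.9), (0.2, 3.1)]
lat = [putt_errors(e, (0, 0), (0, 3))[0] for e in ends]
print(np.var(lat))
```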

Author(s):  
Adam F. Werner
Jamie C. Gorman

Objective: This study examines visual, auditory, and combined (bimodal) coupling modes in the performance of a two-person perceptual-motor task, in which one person provides the perceptual inputs and the other the motor inputs.
Background: Parking a plane or landing a helicopter on a mountain top requires one person to provide motor inputs while another provides perceptual inputs, communicated either visually, auditorily, or through both cues.
Methods: One participant drove a remote-controlled car around an obstacle and through a target, while another participant provided auditory, visual, or bimodal cues for steering and acceleration. Difficulty was manipulated via target size. Performance (trial time, path variability), cue rate, and spatial ability were measured.
Results: Visual coupling outperformed auditory coupling. Bimodal performance was best in the most difficult task condition but also high in the easiest condition. Cue rate predicted performance in all coupling modes. Drivers with lower spatial ability required a faster auditory cue rate, whereas drivers with higher ability performed best with a lower rate.
Conclusion: Visual cues yield better performance when only one coupling mode is available. As predicted by multiple resource theory, when both cues are available, performance depends more on auditory cueing; in particular, drivers must be able to transform auditory cues into spatial actions.
Application: Spotters should be trained to provide a cue rate matched to the spatial ability of the driver or pilot. Auditory cues can enhance visual communication when the interpersonal task is visual with spatial outputs.


2021
Vol 11 (1)
Author(s):
Stefano Rozzi
Marco Bimbi
Alfonso Gravante
Luciano Simone
Leonardo Fogassi

The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. VLPF neurons have been shown to respond visually during paradigms that require associating arbitrary visual cues with behavioral reactions. Further studies showed that some VLPF neurons also respond to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to, and differentiate between, stimuli belonging to different categories in the absence of any specific requirement to actively categorize the stimuli or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons across a large sector of VLPF to a wide set of visual stimuli that the monkeys simply observe. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when those objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed and when they become the target of a grasping action. Our results indicate that: (1) some visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of a “passive” categorical coding; (2) VLPF neuronal visual responses to objects are often modulated by the task conditions under which the object is observed, with the strongest response occurring when the object is the target of an action. These data indicate that VLPF performs an early, passive description of several types of visual stimuli, which can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.


2021
Author(s):
Nina Rohrbach
Joachim Hermsdörfer
Lisa-Marie Huber
Annika Thierfelder
Gavin Buckingham

Augmented reality, whereby computer-generated images are overlaid onto the physical environment, is becoming a significant part of the world of education and training. Little is known, however, about how these external images are treated by the user's sensorimotor system: are they fully integrated with external environmental cues, or largely ignored by low-level perceptual and motor processes? Here, we examined this question in the context of the size–weight illusion (SWI). Thirty-two participants alternately lifted and reported the heaviness of two cubes of unequal volume but equal mass. Half of the participants saw semi-transparent, equally sized holographic cubes superimposed onto the physical cubes through a head-mounted display. Fingertip force rates were measured prior to lift-off to determine how the holograms influenced sensorimotor prediction, while verbal reports of heaviness after each lift indicated how the holographic size cues influenced the SWI. As expected, participants who lifted without augmented visual cues applied force to the large object at a higher rate than to the small object on early lifts and experienced a robust SWI across all trials. In contrast, participants who lifted the (apparently equal-sized) augmented cubes used similar force rates for each object. Furthermore, they experienced no SWI during the first lifts of the objects, with an SWI developing only over repeated trials. These results indicate that holographic cues initially dominate physical cues and cognitive knowledge, but are discounted when they conflict with cues from other senses.
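For readers unfamiliar with this measure, here is a minimal sketch of how a peak fingertip force rate prior to lift-off could be computed from a sampled force trace. The sampling rate, lift-off criterion, and names are our assumptions, not details from the study:

```python
import numpy as np

def peak_force_rate(force, fs=1000.0, liftoff_idx=None):
    """Peak rate of force change (N/s) before lift-off, for one trial.

    force:       1-D array of fingertip force samples (N)
    fs:          sampling rate in Hz (assumed value)
    liftoff_idx: sample index of lift-off; if omitted, approximated as
                 the sample of maximum force (force stops rising once
                 the object leaves the surface)
    """
    force = np.asarray(force, dtype=float)
    if liftoff_idx is None:
        liftoff_idx = int(np.argmax(force))
    rate = np.gradient(force[: liftoff_idx + 1]) * fs  # dF/dt in N/s
    return float(rate.max())
```

Comparing mean peak rates for the large versus the small cube on early lifts then indexes what the sensorimotor system predicted about each object's weight.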


2017
Vol 26 (1)
pp. 16-41
Author(s):
Jonny Collins
Holger Regenbrecht
Tobias Langlotz

Virtual and augmented reality, and other forms of mixed reality (MR), have become a focus of attention for companies and researchers. Before they can succeed in the market and in society, MR systems must be able to deliver a convincing, novel experience for users. By definition, the experience of mixed reality relies on the perceptually successful blending of reality and virtuality. Any MR system therefore has to provide a set of stimuli that is coherent across the senses, and in particular visually coherent. Issues with visual coherence, that is, a disrupted experience of an MR environment, must be avoided. While it is very easy for a user to detect issues with visual coherence, it is very difficult to design and implement a system that achieves coherence. This article presents a framework, and an exemplary implementation, for systematically investigating issues with visual coherence and possible solutions to address them. The focus is on head-mounted-display-based systems, although the framework also applies to other types of MR systems. Our framework, together with a systematic discussion of tangible issues and solutions for visual coherence, aims to guide developers of mixed reality systems toward better and more effective user experiences.


2020
Vol 1 (1)
pp. 70-80
Author(s):
Ekerin Oluseye Michael
Heidi Tan Yeen-Ju
Neo Tse Kian

Over the years, educators have adopted a variety of technologies in a bid to improve student engagement, interest, and understanding of abstract topics taught in the classroom. There has been increasing interest in immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). VR's ability to bring ideas to life in three-dimensional space, in a way that makes the subject matter easy for students to understand, makes it one of the most important tools available for education today. A key feature of VR is its ability to provide multi-sensory visuals and virtual interaction to students wearing a Head-Mounted Display, giving them a better learning experience and a stronger connection to the subject matter. Virtual Reality has been used for training in the health sector, the military, and the workplace, as well as for gamification, the exploration of sites, and countless other purposes. Given the potential of virtual technology for visualizing abstract concepts in a realistic virtual world, this paper presents a plan to study the use of situated cognition theory as a learning framework for developing an immersive VR application to train students of Telecommunications Engineering and prepare them for the workplace. The paper reviews the literature on Virtual Reality in education, offers insight into the motivation behind this research, and describes the planned methodology for carrying it out.


2019
Vol 72 (11)
pp. 2705-2716
Author(s):
Supreet Saluja
Richard J Stevenson

Tactile cues are said to be potent elicitors of disgust and reliable markers of disease. Despite this, no previous study had explored the full range of tactile properties that cue disgust, nor how the interpretation of these sensations influences disgust. To answer these questions, participants were asked to touch nine objects, selected to cover the range of tactile properties, and to evaluate their sensory, affective, and risk-based characteristics (primarily how sick they thought the object would make them). Object contact was manipulated in four ways, with participants randomly allocated to corresponding groups: one that could see the objects (the control group) and three that could not (the blind groups). To manipulate disease-risk interpretation of the objects, labelling was applied in the blind groups, with one group receiving Disgust-Labels, one True-Labels, and one no labels. Disgust was strongly associated with sticky and wet textures, and moderately with viscous, cold, and lumpy textures, suggesting that adherence to skin may predict disgust. Participants in the disgust-labelled condition gave the highest disgust ratings, an effect mediated by their increased sickness beliefs and fear of the objects. Object identification was poor when labels and visual cues were absent. Our findings suggest that tactile disgust may reflect a bottom-up sensory component, skin adhesion, moderated by judgements of disease-related threat.


Author(s):  
Yuko Chinone
Hideki Aoyama
Tetsuo Oya

Three-dimensional models (CAD models) are constructed during product design because they are effective for design evaluation using CAE systems and for manufacturing using CAM systems. However, mock-ups or prototypes are still required when evaluating the designability and operability of products, because evaluating the operation of real products is essential. Making prototypes or developing trial products for evaluation is, however, time-consuming and costly. To address this problem, considerable research has been conducted on mixed reality technology that overlays an image of the design model onto a physical model through a Head-Mounted Display (HMD) in order to evaluate the designability and operability of a product. Such technology reduces the need for physical mock-ups (prototypes and trial products), but HMDs have drawbacks such as motion sickness, physical weight, bulkiness of the display, and high cost. In this paper, a projector-based method is proposed to realize mixed reality technology without the drawbacks of HMDs. A mixed reality system was constructed according to the proposed method and applied to evaluating the designability and operability of products without physical mock-ups. In the mixed reality space built by the system, a product can be held in the hand and its functions experienced as if it were a real product.
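The core geometric step in projector-based overlays of this kind is registering the rendered design image to the physical model's surface. The sketch below does this for a planar surface region using a homography; the calibration points, resolutions, and file names are invented for illustration, and a real system would use full projector-camera calibration rather than this manual shortcut:

```python
import cv2
import numpy as np

# Corners of the rendered design image (pixels) ...
img_pts = np.float32([[0, 0], [1280, 0], [1280, 960], [0, 960]])
# ... and where those corners should land on the mock-up surface,
# in projector pixel coordinates (from a one-off manual calibration).
proj_pts = np.float32([[310, 180], [930, 205], [905, 720], [285, 690]])

# Homography mapping the rendered image onto the surface.
H, _ = cv2.findHomography(img_pts, proj_pts)

design = cv2.imread("design_render.png")               # rendered CAD view (assumed file)
frame = cv2.warpPerspective(design, H, (1920, 1080))   # projector resolution (assumed)
cv2.imwrite("projector_frame.png", frame)              # image to send to the projector
```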


Behaviour
2011
Vol 148 (8)
pp. 909-925
Author(s):
Hannele Valkama
Hannu Huuskonen
Jouni Taskinen
Yi-Te Lai
Jukka Kekäläinen
et al.

Sexual displays often involve many different signal components, which may convey information about the same or different mate qualities. We studied the information content of different signals in male minnows (Phoxinus phoxinus) and tested whether females are able to discriminate between males when only olfactory cues are present. We found that females preferred the odour of males with a more saturated (i.e., redder) belly, but only when the females had been in physical contact with the males before the experiments. When unfamiliar males were used, females did not discriminate between male odours, and their overall swimming activity (mate-choice intensity) was significantly lower. More ornamented males had lower numbers of Philometra ovata parasites (indicated by belly saturation) and Neoechinorhynchus rutili parasites (indicated by belly hue) than their less ornamented counterparts. We did not find experimental evidence that female odour preference was linked to belly hue or breeding tubercle number, but in nature these traits were associated with the males' condition factor. Taken together, our results suggest that belly colouration and breeding tubercles give honest information on several aspects of male quality. In addition, females may learn the association between male colouration and olfactory signals, and utilize this information when visual signals are not present.


2017
Author(s):
Diogo Santos-Pata
Alex Escuredo
Zenon Mathews
Paul F.M.J. Verschure

Insects are great explorers, able to navigate long-distance trajectories and successfully find their way back. Their navigational routes cross dynamic environments, suggesting adaptation to novel configurations. Arthropods and vertebrates share neural organizational principles, and rodents have been shown to modulate their neural spatial representations in accordance with environmental changes. However, it is unclear whether insects adapt reflexively to environmental changes or retain memory traces of previously explored situations. We sought to distinguish insect behavior in novel environments from behavior under reconfiguration of a known environment. An immersive mixed-reality setup was built to replicate multi-sensory cues. In this setup, female crickets (Gryllus bimaculatus) were trained to move towards paired auditory and visual cues during primarily phonotaxis-driven behavior. We hypothesized that the insects would be capable of identifying sensory modifications in known environments. Our results show that, regardless of the animals' history, novel conditions compromised neither the animals' performance nor their navigational directionality towards a novel target location. However, in trials where the visual and auditory stimuli were spatially decoupled, the animals' heading variability towards a previously known location significantly increased. These findings show that crickets behaviorally manifest environmental reconfiguration, suggesting that they encode a spatial representation of their environment.
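Heading variability of this kind is commonly quantified with circular statistics. As a sketch (our illustration, not the authors' reported analysis), the circular variance of per-trial headings can be computed as follows:

```python
import numpy as np

def circular_variance(headings_deg):
    """Circular variance of heading angles: 0 = identical headings,
    1 = headings spread uniformly around the circle.

    Computed as 1 - R, where R is the length of the mean resultant
    vector of the unit heading vectors.
    """
    theta = np.deg2rad(np.asarray(headings_deg, dtype=float))
    R = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    return float(1.0 - R)

# Example: tightly clustered vs. widely scattered headings (made-up data)
print(circular_variance([2, -5, 4, 1]))       # small: low variability
print(circular_variance([10, 95, -80, 170]))  # large: high variability
```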

