motion cues
Recently Published Documents

TOTAL DOCUMENTS: 377 (last five years: 58)
H-INDEX: 38 (last five years: 3)

2022 ◽  
Vol 29 (2) ◽  
pp. 1-22
Author(s):  
Andrea Gauthier ◽  
Kaska Porayska-Pomsta ◽  
Iroise Dumontheil ◽  
Sveta Mayer ◽  
Denis Mareschal

The human–computer interaction (HCI) design of educational technologies influences cognitive behaviour, so it is imperative to assess how different HCI strategies support intended behaviour. We developed a neuroscience-inspired game that trains children's use of “stopping-and-thinking” (S&T)—an inhibitory control-related behaviour—in the context of counterintuitive science problems. We tested the efficacy of four HCI features in supporting S&T: (1) a readiness mechanic, (2) motion cues, (3) colour cues, and (4) rewards/penalties. In a randomised eye-tracking trial with 45 7-to-8-year-olds, we found that the readiness mechanic increased S&T duration, that motion and colour cues proved equally effective at promoting S&T, that combining symbolic colour with the readiness mechanic may have a cumulative effect, and that rewards/penalties may have distracted children from S&T. Additionally, S&T duration was related to in-game performance. Our results underscore the importance of interdisciplinary approaches to educational technology research that actively investigates how HCI impacts intended learning behaviours.


2022 ◽  
pp. 1-29
Author(s):  
Andrew R. Wagner ◽  
Megan J. Kobel ◽  
Daniel M. Merfeld

In an effort to characterize the factors influencing the perception of self-motion rotational cues, vestibular self-motion perceptual thresholds were measured in 14 subjects for rotations in the roll and pitch planes, as well as in the planes aligned with the anatomic orientation of the vertical semicircular canals (i.e., left anterior, right posterior, LARP; and right anterior, left posterior, RALP). To determine the multisensory influence of concurrent otolith cues, within each plane of motion, thresholds were measured at four discrete frequencies for rotations about earth-horizontal (i.e., tilts; EH) and earth-vertical (i.e., head positioned in the plane of the rotation; EV) axes. We found that the perception of rotations stimulating primarily the vertical canals was consistent with the behavior of a high-pass filter for all planes of motion, with velocity thresholds increasing at lower frequencies of rotation. In contrast, tilt (i.e., EH rotation) velocity thresholds, which reflect stimulation of both the canals and the otoliths (i.e., multisensory integration), decreased at lower frequencies and were significantly lower than earth-vertical rotation thresholds at each frequency below 2 Hz. These data suggest that multisensory integration of otolithic gravity cues with semicircular canal rotation cues enhances perceptual precision for tilt motions at frequencies below 2 Hz. We also showed that rotation thresholds were, at least partially, dependent on the orientation of the rotation plane relative to the anatomical alignment of the vertical canals. Collectively, these data provide the first comprehensive report of how frequency and axis of rotation influence the perception of rotational self-motion cues stimulating the vertical canals.
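As a back-of-the-envelope illustration of the high-pass behaviour described above, the following Python sketch evaluates a hypothetical first-order model in which velocity thresholds rise as stimulus frequency falls below a corner frequency. The functional form and the parameters t_inf and fc are illustrative assumptions, not the authors' fitted model:

import numpy as np

def velocity_threshold(f_hz, t_inf=1.0, fc=0.2):
    """Hypothetical velocity threshold (deg/s) at frequency f_hz.

    Inverts the gain of a first-order high-pass filter, so thresholds
    approach t_inf at high frequencies and rise steeply below fc.
    t_inf: asymptotic high-frequency threshold (deg/s), assumed value
    fc:    corner frequency (Hz), assumed value
    """
    f = np.asarray(f_hz, dtype=float)
    return t_inf * np.sqrt(1.0 + (fc / f) ** 2)

# Thresholds rise as frequency drops, mirroring the reported pattern.
for f in (0.1, 0.2, 0.5, 1.0, 2.0):
    print(f"{f:4.1f} Hz -> {velocity_threshold(f):.2f} deg/s")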


2021 ◽  
Author(s):  
Jonathan Kelly ◽  
Melynda Hoover ◽  
Taylor Doty ◽  
Alex Renner ◽  
Lucia Cherep ◽  
...  

The wide availability of consumer-oriented virtual reality (VR) equipment has enabled researchers to recruit existing VR owners to participate remotely using their own equipment. Yet, there are many differences between lab environments and home environments, as well as differences between participant samples recruited for lab studies and remote studies. This paper replicates a lab-based experiment on VR locomotion interfaces using a remote sample. Participants completed a triangle-completion task (travel two path legs, then point to the path origin) using their own VR equipment in a remote, unsupervised setting. Locomotion was accomplished using two versions of the teleporting interface varying in availability of rotational self-motion cues. The size of the traveled path and the size of the surrounding virtual environment were also manipulated. Results from remote participants largely mirrored lab results, with overall better performance when rotational self-motion cues were available. Some differences also occurred, including a tendency for remote participants to rely less on nearby landmarks, perhaps due to increased competence with using the teleporting interface to update self-location. This replication study provides insight for VR researchers on aspects of lab studies that may or may not replicate remotely.
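For readers unfamiliar with the task, a minimal sketch of the triangle-completion geometry follows. The function homing_angle and its sign conventions (positive angles to the left) are hypothetical illustrations, not the authors' analysis code:

import math

def homing_angle(leg1, turn_deg, leg2):
    """Egocentric angle (deg) from the traveler's final heading to the
    path origin after walking leg1, turning, and walking leg2.
    Positive values mean the origin lies to the left."""
    # Walk leg1 along +x, then turn by turn_deg and walk leg2.
    heading = math.radians(turn_deg)
    x = leg1 + leg2 * math.cos(heading)
    y = leg2 * math.sin(heading)
    # World-frame direction from the final position back to the origin.
    to_origin = math.atan2(-y, -x)
    # Express relative to the final heading, wrapped to (-180, 180].
    rel = math.degrees(to_origin - heading)
    return (rel + 180.0) % 360.0 - 180.0

print(homing_angle(2.0, 90.0, 2.0))  # 135.0: origin is behind and to the left

Comparing such a correct homing direction against each participant's pointing response yields the angular error typically analyzed in this paradigm.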


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8079
Author(s):  
Jose V. Riera ◽  
Sergio Casas ◽  
Marcos Fernández ◽  
Francisco Alonso ◽  
Sergio A. Useche

Motion platforms have been widely used in Virtual Reality (VR) systems for decades to simulate motion in virtual environments, and they have several applications in emerging fields such as driving assistance systems, vehicle automation and road risk management. Currently, the development of new immersive VR systems faces unique challenges in meeting users' requirements, such as the introduction of high-resolution 360° panoramic images and videos. With this type of visual information, the traditional methods of generating motion cues are much harder to apply, since the motion properties needed to feed the motion cueing algorithms generally cannot be computed. For this reason, this paper presents a new method for generating non-real-time gravito-inertial cues with motion platforms in a system fed both with computer-generated (simulation-based) images and with video imagery. It is a hybrid method: the gravito-inertial cues for which acceleration information is available are generated with a classical approach, by applying physical modeling to a VR scene and using washout filters, while the cues derived from recorded images and video, which carry no acceleration information, are generated ad hoc in a semi-manual way. The resulting motion cues were then refined according to the contributions of different experts using a successive-approximation (Wideband Delphi-inspired) method. A subjective evaluation showed that the motion signals refined with this method were perceived as significantly better than the original non-refined ones. The final system, developed as part of an international road safety education campaign, could be useful for further VR-based applications in key fields such as driving assistance, vehicle automation and road crash prevention.
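As a rough sketch of the classical washout idea mentioned above, the snippet below high-pass filters a sustained acceleration command so that only its onset is passed to the platform, which then drifts back to neutral. The first-order structure and parameter values are illustrative assumptions; real motion-cueing pipelines also include tilt coordination, limiters, and rotational channels:

def washout_highpass(accels, dt=0.01, tau=2.0):
    """Discrete first-order high-pass filter on an acceleration command:
    y[k] = a * (y[k-1] + x[k] - x[k-1]), with a = tau / (tau + dt).
    dt is the sample period (s); tau the washout time constant (s)."""
    a = tau / (tau + dt)
    out, y_prev, x_prev = [], 0.0, accels[0]
    for x in accels:
        y = a * (y_prev + x - x_prev)
        out.append(y)
        y_prev, x_prev = y, x
    return out

# A sustained 1 m/s^2 step is cued at its onset, then washed out.
step = [0.0] * 50 + [1.0] * 1950
cued = washout_highpass(step)
print(round(cued[50], 3), round(cued[-1], 3))  # 0.995 at onset, 0.0 once washed out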


2021 ◽  
pp. 003151252110529
Author(s):  
Eric Hiris ◽  
Sean Conway ◽  
William McLoughlin ◽  
Gaokhia Yang

Recent research has shown that the perception of biological motion may be influenced by aspects of the observer’s personality. In this study, we sought to determine how participant characteristics (including demographics, response inhibition, autism spectrum quotient, empathy, social anxiety, and motion imagery) might influence the use of form and motion to identify the actor’s sex in biological motion displays. We varied the degree of form and motion in biological motion displays and correlated 76 young adult participants’ performance in identifying the actor’s sex in these varied conditions with their individual differences on variables of interest. Differences in the separate use of form and motion cues predicted participant performance generally, with the use of form being most predictive. Female participants relied primarily on form information, while male participants relied primarily on motion information. Participants less able to visualize movement tended to be better at using form information in the biological motion task. Overall, our findings suggest that similar group-level performance across participants in identifying the sex of the actor in a biological motion task may result from quite different individual processing.


Author(s):  
Christina Breil ◽  
Lynn Huestegge ◽  
Anne Böckler

Human attention is strongly attracted by direct gaze and sudden onset motion. The sudden direct-gaze effect refers to the processing advantage for targets appearing on peripheral faces that suddenly establish eye contact. Here, we investigate the necessity of social information for attention capture by (sudden onset) ostensive cues. Six experiments involving 204 participants applied (1) naturalistic faces, (2) arrows, (3) schematic eyes, (4) naturalistic eyes, or schematic facial configurations (5) without or (6) with head turn to an attention-capture paradigm. Trials started with two stimuli oriented towards the observer and two stimuli pointing into the periphery. Simultaneously with target presentation, one direct stimulus changed to averted and one averted stimulus changed to direct, yielding a 2 × 2 factorial design with direction and motion cues being absent or present. We replicated the (sudden) direct-gaze effect for photographic faces, but found no corresponding effects in Experiments 2–6. Hence, a holistic and socially meaningful facial context seems vital for attention capture by direct gaze.

Statement of significance: The present study highlights the significance of context information for social attention. Our findings demonstrate that the direct-gaze effect, that is, the prioritization of direct gaze over averted gaze, critically relies on the presentation of a meaningful holistic and naturalistic facial context. This pattern of results is evidence in favor of early effects of surrounding social information on attention capture by direct gaze.


2021 ◽  
Author(s):  
Jawad Khan

Several recent studies on action recognition have emphasised the importance of explicitly including motion characteristics in the video description. This work shows that properly partitioning visual motion into dominant and residual motions greatly enhances action recognition algorithms, both in terms of extracting space-time trajectories and computing descriptors. Then, using differential motion scalar quantities (divergence, curl, and shear), we create a new motion descriptor, the DCS descriptor. It captures additional information on local motion patterns and thereby improves the results. Finally, adopting the VLAD coding technique recently proposed for image retrieval improves action recognition significantly. On three challenging datasets, namely Hollywood 2, HMDB51, and Olympic Sports, our three contributions are complementary and outperform all reported results by a large margin.
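To make the differential quantities concrete, here is a small Python sketch that computes per-pixel divergence, curl, and shear magnitude from a dense optical-flow field (u, v). The toy flow and grid are illustrative assumptions; this is not the descriptor pipeline from the paper:

import numpy as np

def dcs_maps(u, v):
    """Per-pixel divergence, curl, and shear magnitude of a 2D flow.
    np.gradient returns derivatives along axis 0 (y) then axis 1 (x)."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    div = du_dx + dv_dy              # local expansion/contraction
    curl = dv_dx - du_dy             # local rotation
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)
    return div, curl, shear

# Toy flow: pure rotation about the image centre.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
u, v = -(y - h / 2), (x - w / 2)
div, curl, shear = dcs_maps(u, v)
print(div.mean(), curl.mean(), shear.mean())  # 0.0, 2.0, 0.0 for pure rotation

In a DCS-style descriptor, such per-pixel maps would then be aggregated over space-time cells before encoding, e.g. with VLAD.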


2021 ◽  
Vol 18 (183) ◽  
Author(s):  
Laura A. Ryan ◽  
David J. Slip ◽  
Lucille Chapuis ◽  
Shaun P. Collin ◽  
Enrico Gennari ◽  
...  

Shark bites on humans are rare but are sufficiently frequent to generate substantial public concern, which typically leads to measures to reduce their frequency. Unfortunately, we understand little about why sharks bite humans. One theory for bites occurring at the surface, e.g. on surfers, is that of mistaken identity, whereby sharks mistake humans for their typical prey (pinnipeds in the case of white sharks). This study tests the mistaken identity theory by comparing video footage of pinnipeds, humans swimming and humans paddling surfboards, from the perspective of a white shark viewing these objects from below. Videos were processed to reflect how a shark's retina would detect the visual motion and shape cues. Motion cues of humans swimming, humans paddling surfboards and pinnipeds swimming did not differ significantly. The shape of paddled surfboards and human swimmers was also similar to that of pinnipeds with their flippers abducted. The difference in shape between pinnipeds with abducted versus adducted flippers was bigger than between pinnipeds with flippers abducted and surfboards or human swimmers. From the perspective of a white shark, therefore, neither visual motion nor shape cues allow an unequivocal visual distinction between pinnipeds and humans, supporting the mistaken identity theory behind some bites.


2021 ◽  
Vol 21 (9) ◽  
pp. 1974
Author(s):  
Emma Alexander ◽  
Venkatesh Krishna S. ◽  
Tim C. Hladnik ◽  
Nicholas C. Guilbeault ◽  
Lanya T. Cai ◽  
...  
