Sensorimotor contingency modulates breakthrough of virtual 3D objects during a breaking continuous flash suppression paradigm

2018 ◽  
Author(s):  
Keisuke Suzuki ◽  
David J Schwartzman ◽  
Rafael Augusto ◽  
Anil Seth

To investigate how embodied sensorimotor interactions shape subjective visual experience, we developed a novel combination of Virtual Reality (VR) and Augmented Reality (AR) within an adapted breaking continuous flash suppression (bCFS) paradigm. In a first experiment, participants manipulated novel virtual 3D objects, viewed through a head-mounted display, using three interlocking cogs. This setup allowed us to manipulate the sensorimotor contingencies governing interactions with virtual objects, while characterising the effects on subjective visual experience by measuring breakthrough times from bCFS. We contrasted the effects of the congruency (veridical versus reversed sensorimotor coupling) and contingency (live versus replayed interactions) using a motion discrimination task. The results showed that the contingency but not congruency of sensorimotor coupling affected breakthrough times, with live interactions displaying faster breakthrough times. In a second experiment, we investigated how the contingency of sensorimotor interactions affected object category discrimination within a more naturalistic setting, using a motion tracker that allowed object interactions with increased degrees of freedom. We again found that breakthrough times were faster for live compared to replayed interactions (contingency effect). Together, these data demonstrate that bCFS breakthrough times for unfamiliar 3D virtual objects are modulated by the contingency of the dynamic causal coupling between actions and their visual consequences, in line with theories of perception that emphasise the influence of sensorimotor contingencies on visual experience. The combination of VR/AR and motion tracking technologies with bCFS provides a novel methodology extending the use of binocular suppression paradigms into more dynamic and realistic sensorimotor environments.

2011 ◽  
Vol 10 (3) ◽  
pp. 51-60 ◽  
Author(s):  
Brahim Nini

This work deals with the virtual manipulation of a real object through its images. The results presented in this paper give a movie-based solution to the simulation process. We show how the simulation of infinitely many virtual views of a moving object can be achieved using a finite number of images of the object, stored in an organized way. The basis of this solution is an analytical-geometry method that links the actions explicitly applied by the user, the resulting changes in the object's views, and the stored images that best match those views. This paper presents an overall solution for these three intertwined parts of virtual manipulation, which involves six degrees of freedom. Hence, a user is able to freely manipulate a virtual object in a scene in whatever manner he or she likes. The actions are transformed into rotations and/or translations, which lead to changes in the object's appearance, covered by two viewing features: zoom and/or rotation.
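The core of such a movie-based approach is matching a requested orientation to the closest stored image. A minimal sketch of that nearest-view lookup, not taken from the paper and assuming images are indexed by discrete yaw/pitch capture angles:

```python
# Hypothetical illustration: images captured at discrete yaw/pitch angles are
# stored in a dict keyed by (yaw, pitch) in degrees. A requested user rotation
# is mapped to the stored view whose orientation is closest.

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def best_view(views, yaw, pitch):
    """Return the key of the stored image best matching the requested view."""
    return min(
        views,
        key=lambda k: angular_distance(k[0], yaw) + angular_distance(k[1], pitch),
    )

# Images taken every 30 degrees of yaw at two pitch levels.
views = {(y, p): f"img_{y}_{p}.png" for y in range(0, 360, 30) for p in (0, 30)}
key = best_view(views, yaw=44.0, pitch=10.0)
print(views[key])  # -> img_30_0.png, the image closest to the requested pose
```

A real implementation would also account for translation and zoom, but the same best-match principle applies to each degree of freedom.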


2019 ◽  
Vol 9 (14) ◽  
pp. 2933 ◽  
Author(s):  
Ju Young Oh ◽  
Ji Hyung Park ◽  
Jung-Min Park

This paper proposes an interaction method to conveniently manipulate a virtual object by combining touch interaction and head movements for a head-mounted display (HMD), which provides mobile augmented reality (AR). A user can conveniently manipulate a virtual object with touch interaction, recognized by an inertial measurement unit (IMU) attached to the index finger’s nail, and head movements, tracked by the IMU embedded in the HMD. We design two interactions that combine touch and head movements to manipulate a virtual object on a mobile HMD. Each designed interaction method manipulates virtual objects by controlling ray casting and adjusting widgets. To evaluate the usability of the designed interaction methods, a user evaluation is performed in comparison with hand interaction on the HoloLens. The designed interaction methods received positive feedback, showing that virtual objects can be manipulated easily in a mobile AR environment.
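The ray-casting step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the head pose supplies a gaze ray, and the nearest object the ray hits is selected (a finger-tap would then confirm the selection).

```python
# Hypothetical sketch of head-gaze ray casting: objects are approximated by
# bounding spheres, and the nearest one intersected by the gaze ray is picked.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit ray to a sphere, or None if the ray misses it."""
    oc = [c - o for o, c in zip(origin, center)]
    proj = sum(d * v for d, v in zip(direction, oc))  # projection onto the ray
    if proj < 0:
        return None  # sphere is behind the viewer
    closest_sq = sum(v * v for v in oc) - proj * proj
    if closest_sq > radius * radius:
        return None
    return proj - math.sqrt(radius * radius - closest_sq)

def pick_object(origin, direction, objects):
    """Return the name of the nearest object the gaze ray intersects."""
    hits = []
    for name, (center, radius) in objects.items():
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None:
            hits.append((t, name))
    return min(hits)[1] if hits else None

objects = {"cube": ((0.0, 0.0, 5.0), 0.5), "sphere": ((1.0, 0.0, 3.0), 0.5)}
print(pick_object((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), objects))  # -> cube
```

In the paper's setting, the widgets attached to the selected object would then be adjusted through the combined touch-and-head gestures.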


2001 ◽  
Vol 24 (5) ◽  
pp. 999-999 ◽  
Author(s):  
Zenon W. Pylyshyn

The target article proposes that visual experience arises when sensorimotor contingencies are exploited in perception. This novel analysis of visual experience fares no better than the other proposals that the article rightly dismisses, and for the same reasons. Extracting invariants may be needed for recognition, but it is neither necessary nor sufficient for having a visual experience. While the idea that vision involves the active extraction of sensorimotor invariants has merit, it does not replace the need for perceptual representations. Vision is not just for the immediate controlling of action; it is also for finding out about the world, from which inferences may be drawn and beliefs changed.


2015 ◽  
Vol 75 (4) ◽  
Author(s):  
Ajune Wanis Ismail ◽  
Mark Billinghurst ◽  
Mohd Shahrizal Sunar

In this paper, we describe a new tracking approach for object handling in Augmented Reality (AR). Our approach improves the standard vision-based tracking system during the marker extraction and detection stages. It transforms a unique tracking pattern into a set of vertices that support interactions such as translate, rotate, and copy. This is based on a robust real-time computer vision algorithm that tracks a paddle used by a person for input. A paddle pose pattern is constructed in a one-time calibration process, and through vertex-based calculation of the camera pose relative to the paddle we can show 3D graphics on top of it. This allows the user to look at virtual objects from different viewing angles in the AR interface and perform 3D object manipulation. The approach was implemented using marker-based tracking to improve accuracy and robustness when manipulating 3D objects in real time. We demonstrate our improved tracking system with a sample Tangible AR application, and describe how the system could be improved in the future.


2001 ◽  
Vol 24 (5) ◽  
pp. 979-980 ◽  
Author(s):  
Andy Clark ◽  
Josefa Toribio

While applauding the bulk of the account on offer, we question one apparent implication, namely, that every difference in sensorimotor contingencies corresponds to a difference in conscious visual experience.


2004 ◽  
Vol 27 (6) ◽  
pp. 906-907 ◽  
Author(s):  
Stephen E. Robbins

Bergson, writing in 1896, anticipated “sensorimotor contingencies” under the concept that perception is “virtual action.” But to explain the external image, he embedded this concept in a holographic framework where time-motion is an indivisible and the relation of subject/object is in terms of time. The target article's account of qualitative visual experience falls short for lack of this larger framework.

[Objects] send back, then, to my body, as would a mirror, their eventual influence; they take rank in an order corresponding to the growing or decreasing powers of my body. The objects which surround my body reflect its possible action upon them. – Henri Bergson (1896/1912, pp. 6–7)


Author(s):  
Rasul Fesharakifard ◽  
Maryam Khalili ◽  
Laure Leroy ◽  
Alexis Paljic ◽  
Philippe Fuchs

A grasp exoskeleton actuated by a string-based platform is proposed to provide force feedback for a user’s hand in human-scale virtual environments. The user of this interface has access to seven active degrees of freedom in interaction with virtual objects: three degrees of translation, three of rotation, and one of grasping. The exoskeleton has a light, ergonomic structure and supports the grasp gesture for five fingers. It is actuated by eight strings that form the parallel arms of the platform. Each string is connected to a block comprising a motor, rotary encoder, and force sensor, with a novel design that provides the force and precision the interface requires. A hybrid control method based on the string tension measured by the force sensor is developed to resolve the common problems of string-based interfaces. The blocks can be moved on a cubic frame around the virtual environment. Finally, the results of a preliminary experiment with the interface are presented to show its practical characteristics. The interface is also mounted on an automotive model to demonstrate its industrial adaptability.
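The tension-based control idea can be illustrated with a toy loop. This is a deliberately simplified sketch, not the authors' hybrid controller: each string's motor drives the measured tension (from its force sensor) towards a commanded target, which keeps the string taut without slack.

```python
# Hypothetical per-string tension loop: a proportional correction drives the
# measured tension towards the target. Gains and the plant model are invented
# for illustration only.
def tension_step(measured, target, kp=0.8):
    """Return a correction proportional to the tension error."""
    return kp * (target - measured)

# Toy simulation: assume the tension responds directly to the correction.
tension = 0.0
for _ in range(50):
    tension += tension_step(tension, target=5.0)
print(round(tension, 2))  # settles at the 5.0 N target
```

The actual interface must additionally coordinate all eight strings so that their combined tensions produce the commanded Cartesian force and grasp torque.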


Author(s):  
Monica Bordegoni ◽  
Mario Covarrubias ◽  
Giandomenico Caruso ◽  
Umberto Cugini

This paper presents a novel system that allows product designers to design, experience, and modify new shapes of objects, starting from existing ones. The system allows designers to acquire and reconstruct the 3D model of a real object and to visualize and physically interact with this model. In addition, the system allows the designer to modify the shape through physical manipulation of the 3D model and to eventually print it using 3D printing technology. The system is developed by integrating state-of-the-art technologies from the sectors of reverse engineering, virtual reality, and haptic technology. The 3D model of an object is reconstructed by scanning its shape with a 3D scanning device. The 3D model is then imported into the virtual reality environment, which renders it through an immersive head-mounted display (HMD). The user can physically interact with the 3D model by using the desktop haptic strip for shape design (DHSSD), a six-degrees-of-freedom servo-actuated developable metallic strip that reproduces cross-sectional curves of 3D virtual objects. The DHSSD device is controlled by means of hand gestures recognized by a Leap Motion sensor.


2008 ◽  
Vol 05 (02) ◽  
pp. 161-181 ◽  
Author(s):  
MICHA HERSCH ◽  
ERIC SAUSER ◽  
AUDE BILLARD

We present an algorithm enabling a humanoid robot to visually learn its body schema, knowing only the number of degrees of freedom in each limb. By "body schema" we mean the joint positions and orientations and thus the kinematic function. The learning is performed by visually observing its end-effectors when moving them. With simulations involving a body schema of more than 20 degrees of freedom, results show that the system is scalable to a high number of degrees of freedom. Real robot experiments confirm the practicality of our approach. Our results illustrate how subjective space representation can develop as a result of sensorimotor contingencies.
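A much-reduced sketch of this kind of body-schema learning (not the authors' algorithm, which handles full joint positions and orientations): a simulated robot learns the link lengths of a planar 2-DOF arm by gradient descent on the error between the end-effector position it predicts and the one it "visually" observes.

```python
# Minimal illustration: learn the two link lengths of a planar arm from
# observed end-effector positions, via stochastic gradient descent on the
# squared prediction error of the forward kinematics.
import math, random

def fk(lengths, q):
    """Forward kinematics of a planar 2-link arm: joint angles -> (x, y)."""
    l1, l2 = lengths
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

true_lengths = [0.30, 0.25]   # ground truth, unknown to the "robot"
est = [0.10, 0.10]            # initial guess of the body schema
random.seed(0)
lr = 0.5
for _ in range(2000):
    q = [random.uniform(-math.pi, math.pi) for _ in range(2)]
    ox, oy = fk(true_lengths, q)   # "visual" observation of the hand
    px, py = fk(est, q)            # prediction from the current schema
    ex, ey = px - ox, py - oy
    # Gradient of the squared error with respect to each link length.
    est[0] -= lr * (ex * math.cos(q[0]) + ey * math.sin(q[0]))
    est[1] -= lr * (ex * math.cos(q[0] + q[1]) + ey * math.sin(q[0] + q[1]))

print([round(l, 3) for l in est])  # converges towards [0.3, 0.25]
```

Scaling the same idea to 20+ degrees of freedom with joint orientations is what the paper's algorithm addresses; the sketch only conveys the observe-predict-correct loop.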


2012 ◽  
Vol 24 (05) ◽  
pp. 435-445 ◽  
Author(s):  
Ren-Guey Lee ◽  
Sheng-Chung Tien ◽  
Chun-Chang Chen ◽  
Yu-Ying Chen

In this paper, rehabilitation tools are proposed and implemented to assist patients with stroke and body dysfunction through auxiliary physical activity. By integrating the entertainment of games with the needs of rehabilitation, and using the motor assessment scale (MAS) as building blocks, we propose a game system for the assessment of stroke rehabilitation using augmented reality (AR) technology. Using AR markers and the motion parameters of Wii remotes, various assessment games have been implemented, and vivid scenes are presented to users through a head-mounted display that seamlessly combines the real environment with virtual objects. The game system takes various assessment scales into consideration, and each scale is specifically designed and individually integrated to support the assessment of motor functions. According to the experimental results, the accuracy rate of users successfully following the game steps is 91.2%, and the accuracy rate of the system in assessing the MAS categories is as high as 94.6%, which confirms the feasibility of our proposed and implemented rehabilitation game system.
