A Multi-modal Interface for Control of Omnidirectional Video Playing on Head Mount Display

Author(s):  
Yusi Machidori ◽  
Ko Takayama ◽  
Kaoru Sugita
2021 ◽ 
Vol 11 (7) ◽ 
pp. 2987
Author(s):  
Takumi Okumura ◽  
Yuichi Kurita

Image therapy, which creates illusions with a mirror and a head-mounted display, assists movement relearning in stroke patients. Mirror therapy presents the movement of the unaffected limb in a mirror, creating the illusion of movement of the affected limb. Because visual information alone cannot create a fully immersive experience, we propose a cross-modal strategy that supplements the image with sensory information. When stimuli received from multiple sensory organs interact, the brain complements the missing senses, and the patient experiences a different sense of motion. Our system generates the sensation of stair-climbing in a subject walking on a level floor. The force sensation is presented by a pneumatic gel muscle (PGM). Based on motion analysis of a human lower-limb model and the characteristics of the force exerted by the PGM, we set the appropriate air pressure of the PGM. The effectiveness of the proposed system was evaluated by surface electromyography and a questionnaire. The experimental results showed that by synchronizing the force sensation with the visual information, we could match the motor and perceived sensations at the muscle-activity level, enhancing the sense of stair-climbing, and that the visual condition significantly increased the illusion intensity during stair-climbing.


Author(s):  
Shoichiro Mukai ◽  
Hiroyuki Egi ◽  
Minoru Hattori ◽  
Yusuke Sumi ◽  
Yuichi Kurita ◽  
...  

2016 ◽  
Vol 22 (9) ◽  
pp. 2354-2357
Author(s):  
Mankyu Sung ◽  
Myung Soo Jung ◽  
Yeahyung Moon

Author(s):  
Eric G. Hintz ◽  
Michael D. Jones ◽  
M. Jeannette Lawler ◽  
Nathan Bench ◽  
Fred Mangrubang

Accommodating the planetarium experience for members of the deaf or hard-of-hearing community has often created situations that are either disruptive to the rest of the audience or provide insufficient accommodation. To address this issue, we examined the use of head-mounted displays to deliver an American Sign Language (ASL) sound track to learners in the planetarium. Here we present results from a feasibility study of whether an ASL sound track delivered through a head-mounted display can be understood by deaf students of junior high to senior high school age who are fluent in ASL. We examined the adoption of ASL classifiers used as part of the sound track for a full-dome planetarium show. We found that about 90% of the students in our sample adopted at least one classifier from the show. In addition, those who viewed the sound track in a head-mounted display did at least as well as those who saw the sound track projected directly on the dome. These results suggest that ASL transmitted through head-mounted displays is a promising method for improving learning for those whose primary language is ASL, and it merits further investigation.


Author(s):  
Chi Chung Ko ◽  
Chang Dong Cheng

Our discussions in previous chapters have centered on the creation and interaction of visual objects in a virtual 3D world. The objects and scenes constructed, however, will ultimately have to be shown on appropriate display devices, such as a single PC monitor, a stereoscopic head-mounted display (HMD), or a multi-screen projection system (Salisbury, Farr, & Moore, 1999). In certain applications, we may also need to show different views of the created universe at the same time. Even in the case of a single PC monitor, showing different views of the same objects in different windows can be instructive and informative, and may be essential in some cases. While we have used a single simple view in earlier chapters, Java 3D has inherent capabilities to provide multiple views of the created 3D world, supporting, for example, head-tracking HMD systems that let the user carry out 3D navigation (Yabuki, Machinaka, & Li, 2006). In this chapter, we discuss how multiple views can be readily generated, after outlining the view model and the various components that make up the simple universe view used previously.
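As a rough illustration of the multiple-view idea, the sketch below attaches a second View to a universe created earlier with the SimpleUniverse utility, so the same scene graph can be rendered from a different viewpoint in a separate window. This is a minimal sketch, not code from the chapter: the method and variable names are hypothetical, and it assumes the Java 3D API (javax.media.j3d, the com.sun.j3d.utils packages) is on the classpath.

```java
import javax.media.j3d.*;
import javax.vecmath.Vector3f;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class SecondViewSketch {
    // Attach a second, independently positioned view to an existing
    // universe (the "universe" parameter is assumed to have been built
    // with SimpleUniverse and already hold the content branch graph).
    static Canvas3D addSecondView(SimpleUniverse universe) {
        // A fresh canvas for the extra window.
        Canvas3D canvas2 = new Canvas3D(SimpleUniverse.getPreferredConfiguration());

        // Every View needs its own physical body/environment and at
        // least one canvas to render into.
        View view2 = new View();
        view2.setPhysicalBody(new PhysicalBody());
        view2.setPhysicalEnvironment(new PhysicalEnvironment());
        view2.addCanvas3D(canvas2);

        // Place the second viewpoint 10 m back along +z.
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3f(0f, 0f, 10f));
        TransformGroup tg = new TransformGroup(t);

        // A ViewPlatform ties the View into the scene graph; its
        // enclosing TransformGroup determines where this view looks from.
        ViewPlatform vp = new ViewPlatform();
        tg.addChild(vp);
        view2.attachViewPlatform(vp);

        BranchGroup bg = new BranchGroup();
        bg.addChild(tg);
        universe.addBranchGraph(bg);
        return canvas2;
    }
}
```

The returned Canvas3D would then be placed in a second window; giving each View its own ViewPlatform transform is what produces the distinct simultaneous viewpoints described above.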


2010 ◽  
Vol 68 ◽  
pp. e106
Author(s):  
Kouji Takano ◽  
Naoki Hata ◽  
Yasoichi Nakajima ◽  
Kenji Kansaku
