Multiple Sensorial Media Advances and Applications
Latest Publications


Total documents: 13 (five years: 0)
H-index: 4 (five years: 0)
Published by IGI Global
ISBN: 9781609608217, 9781609608224

Author(s):  
Nikolaos Kaklanis ◽  
Konstantinos Moustakas ◽  
Dimitrios Tsovaras

This chapter describes an interaction technique in which web pages are parsed to automatically generate a corresponding 3D virtual environment with haptic feedback. The automatically created 3D scene is composed of “hapgets” (haptically-enhanced widgets): three-dimensional widgets that mimic the behavior of the original HTML components while adding haptic feedback. Moreover, for each 2D map included in a web page, a corresponding multimodal (haptic-aural) map is automatically generated. The proposed technique enables haptic navigation of the web, as well as haptic exploration of conventional 2D maps, by visually impaired users. A web page rendering engine developed according to the proposed interaction technique is also presented.
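
As a rough illustration of the parsing step, the following Python sketch (not the authors' implementation; the Hapget fields and haptic presets are invented for the example) extracts interactive HTML components and emits one placeholder hapget descriptor per component:

```python
# Minimal sketch: parse a web page and emit one "hapget" descriptor per
# interactive HTML component, carrying placeholder haptic properties.
from dataclasses import dataclass
from html.parser import HTMLParser

@dataclass
class Hapget:
    kind: str        # original HTML tag, e.g. "button"
    label: str       # hypothetical: the component's name attribute
    stiffness: float # hypothetical haptic parameter

# Hypothetical per-widget haptic presets; a real system would tune these.
HAPTIC_PRESETS = {"a": 0.2, "button": 0.8, "input": 0.5, "select": 0.6}

class HapgetExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hapgets = []

    def handle_starttag(self, tag, attrs):
        if tag in HAPTIC_PRESETS:
            label = dict(attrs).get("name", "")
            self.hapgets.append(Hapget(tag, label, HAPTIC_PRESETS[tag]))

extractor = HapgetExtractor()
extractor.feed('<form><input name="q"><button>Search</button></form>')
for h in extractor.hapgets:
    print(h)  # each descriptor would seed a 3D widget in the scene
```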


Author(s):  
Gianluca Mura

Interaction systems require a suitably complex conceptual definition of multisensorial new media. This study analyzes the social and conceptual evolution of digital media and proposes an interactive mixed-space media model that communicates information content and enhances the user experience across the interactive space of physical objects and the online virtual space. Its feedback conveys information about user performance through its multisensorial interfaces. The research extends the author's previous publications by giving a precise definition of a fuzzy-logic cognitive and emotional perception level for the metaplastic multimedia model. It improves the quality of interaction within the model's conceptual media space through an action-making loop and, as a result, produces new information content within its metaplastic metaspace configurations.
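
As a loose illustration of what a fuzzy-logic perception level might look like (the categories, ranges, and membership shapes below are assumptions for the example, not the chapter's model), consider grading a normalised user-performance score into overlapping states:

```python
# Toy fuzzy grading of a user-performance signal into emotional categories.
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical categories over a normalised engagement score in [0, 1].
def perception_level(score):
    return {
        "calm":    triangular(score, -0.5, 0.0, 0.5),
        "engaged": triangular(score,  0.0, 0.5, 1.0),
        "excited": triangular(score,  0.5, 1.0, 1.5),
    }

print(perception_level(0.7))  # partial membership in several states at once
```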


Author(s):  
Rainer Herpers ◽  
David Scherfgen ◽  
Michael Kutz ◽  
Jens Bongartz ◽  
Ulrich Hartmann ◽  
...  

The FIVIS simulator system addresses the classical visual and acoustic cues as well as vestibular and further physiological cues. Sensory feedback from skin, muscles, and joints is integrated within this virtual reality visualization environment, which allows otherwise dangerous traffic situations to be simulated in a controlled laboratory setting. The system has been successfully applied in road safety education for school children, and in further research studies it is used to perform multimedia perception experiments. It has been shown that visual cues dominate depth perception by far in the majority of applications, but the quality of depth perception may depend on the availability of other sensory information. This, however, needs to be investigated in more detail in the future.
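
The dominance of the most reliable cue is often modelled with maximum-likelihood cue combination, in which each sensory estimate is weighted by its inverse variance. The sketch below is a textbook illustration of that idea with invented numbers, not part of the FIVIS system:

```python
# Maximum-likelihood fusion of independent sensory depth estimates:
# each estimate is weighted by its reliability (inverse variance).
def combine_cues(estimates, variances):
    """Returns (fused estimate, fused variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    return fused, 1.0 / total

# Hypothetical numbers: a precise visual estimate, a noisy vestibular one.
depth, var = combine_cues(estimates=[2.0, 2.6], variances=[0.04, 0.50])
print(depth, var)  # the fused estimate sits close to the reliable visual cue
```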


Author(s):  
Simon Hayhoe

The study found that programmers who had been introduced to, and educated using, a range of visual, audio, and/or tactile devices could adapt to produce code with GUIs, whether they were early or late blind; programmers who were educated using only tactile and audio devices, by contrast, preferred to shun visual references in their work.


Author(s):  
Gheorghita Ghinea ◽  
Oluwakemi Ademoye

Olfaction (smell) is one of our most commonly used senses in everyday life. This is not the case in digital media, however, where it remains the exception to the rule: usually only the auditory and visual senses are employed. In mulsemedia, we envisage that olfaction will increasingly be employed alongside these traditional senses, especially as computing power increases and olfactory devices grow more sophisticated. Unsurprisingly, many questions about the use of olfaction in mulsemedia remain unanswered. Accordingly, in this chapter we present the results of an empirical study which explored one such question: does the correct association of scent and content enhance the user experience of multimedia applications?


Author(s):  
Takamichi Nakamoto ◽  
Hiroshi Ishida ◽  
Haruka Matsukura

Olfaction is now becoming available in Multiple Sensorial Media thanks to recent progress in olfactory displays. One of the important functions of an olfactory display is to blend multiple odor components to create a variety of odors. We have developed an olfactory display that blends up to 32 odor components using solenoid valves. High-speed switching of a solenoid valve enables many odors to be blended instantaneously according to any recipe, even though the valve has only two states, ON and OFF. Since the display is compact and easy to use, it has so far been used to present a movie, an animation, and a game with scents. However, a content developer must manually adjust the concentration sequence, because the concentration varies from place to place. A manually determined concentration sequence is not accurate and, moreover, composing a plausible sequence by hand takes much time. It is therefore appropriate to calculate the concentration sequence using CFD (Computational Fluid Dynamics) simulation of the virtual environment. Since the spatial spread of odor is very complicated, isotropic diffusion from the odor source is not a valid assumption. Because the simulated odor distribution resembles the distribution actually measured in a real room, CFD simulation enables us to reproduce the spatial variation in odor intensity that the user would experience in the real world. Most users successfully perceived the intended change in odor intensity when they watched a scented movie in which they approached an odor source obstructed by an obstacle. Presentation of the spatial odor distribution to users was also attempted, with encouraging results.
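
To illustrate the blending principle (the period length and recipe below are invented for the example, not the authors' firmware parameters), a recipe can be turned into per-valve ON durations within each switching period:

```python
# ON/OFF valves approximating an arbitrary blend: within each short
# switching period, a valve stays open for a fraction of the period
# equal to its component's share of the recipe.
PERIOD_MS = 100  # hypothetical switching period, fast enough to smell as a blend

def valve_schedule(recipe):
    """recipe: {component_index: fraction}, fractions summing to <= 1.
    Returns per-valve ON durations (ms) within one switching period."""
    return {idx: frac * PERIOD_MS for idx, frac in recipe.items()}

# Hypothetical recipe over 3 of the 32 components.
print(valve_schedule({4: 0.5, 11: 0.3, 27: 0.2}))
# -> {4: 50.0, 11: 30.0, 27: 20.0}; repeating the period yields the blend
```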


Author(s):  
Maria Chiara Caschera ◽  
Arianna D’Ulizia ◽  
Fernando Ferri ◽  
Patrizia Grifoni

The way people communicate with each other varies across cultures, owing to differing communicative expectations and cultural backgrounds. The development of the Internet has led to increasing use of computer systems by people from different cultures, highlighting the need for interaction systems that adapt to the cultural background of the user. This is one of the reasons for the growing research activity exploring how to account for cultural issues in the design of multimodal interaction systems. This chapter focuses on this challenging topic, proposing a grammatical approach for representing multicultural issues in multimodal languages. The approach is based on a grammar that produces a set of structured sentences, composed of gestural, vocal, audio, graphical, and other symbols, along with the meanings these symbols carry in different cultures. This work contributes to the area of mulsemedia research, as it deals with the integration of input produced by multiple human senses and acquired through multiple sensorial media.
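
As a minimal data-level illustration of the idea, assuming a hypothetical lexicon rather than the chapter's actual grammar, the same multimodal symbol can be paired with culture-indexed meanings:

```python
# Illustrative entries only; a real system would derive these from the grammar.
MULTIMODAL_LEXICON = {
    ("gesture", "nod"):  {"en-US": "yes", "bg-BG": "no"},
    ("speech",  "ciao"): {"it-IT": "hello or goodbye", "en-US": "goodbye"},
}

def interpret(modality, symbol, culture):
    """Resolve a multimodal symbol to its culture-specific meaning."""
    senses = MULTIMODAL_LEXICON.get((modality, symbol), {})
    return senses.get(culture, "unknown in this culture")

print(interpret("gesture", "nod", "bg-BG"))  # -> "no"
```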


Author(s):  
Alberto Gallace ◽  
Mary K. Ngo ◽  
John Sulaitis ◽  
Charles Spence

Perception in the real world is inherently multisensory, often involving visual, auditory, tactile, olfactory, gustatory, and, on occasion, nociceptive (i.e., painful) stimulation. In fact, the vast majority of life’s most enjoyable experiences involve the stimulation of several senses simultaneously. Outside of the entertainment industry, however, most virtual reality (VR) applications thus far have stimulated only one, or at most two, senses: typically vision and audition, and, on occasion, touch/haptics. That said, the research conducted to date has convincingly shown that increasing the number of senses stimulated in a VR simulator can dramatically enhance a user’s ‘sense of presence’, their enjoyment, and even their memory for the encounter/experience. What is more, given that the technology has been improving rapidly and the costs associated with VR systems continue to come down, it seems increasingly likely that truly multisensory VR will be with us soon (albeit some 50 years after Heilig, 1962, originally introduced the Sensorama). However, there are both theoretical and practical limitations to the stimulation of certain senses in VR. In this chapter, after defining the concept of ‘neurally-inspired VR’, we highlight some of the most exciting potential applications associated with engaging more of a user’s senses in a simulated environment. We then review the key technical challenges associated with stimulating multiple senses in a VR setting, focusing on the particular problems associated with stimulating the senses of touch, smell, and taste. We also highlight the problems posed by the limited bandwidth of human sensory perception and the psychological costs incurred when users must divide their attention between multiple sensory modalities simultaneously. Finally, we discuss how findings from the cognitive neurosciences might help to overcome, at least in part, some of the cognitive and technological limitations affecting the development of multisensory VR systems.


Author(s):  
Meng Zhu ◽  
Atta Badii

Digitalised multimedia information today is typically represented in different modalities and distributed through various channels. The use of such a huge amount of data depends heavily on effective and efficient cross-modal labelling, indexing, and retrieval of multimodal information. In this chapter, we focus on combining the primary and collateral modalities of an information resource in an intelligent and effective way, in order to provide better multimodal information understanding, classification, labelling, and retrieval. Image and text are the two modalities we mainly discuss here. A novel framework for semantic-based collaterally cued image labelling has been proposed and implemented, aiming to automatically assign linguistic keywords to regions of interest in an image. A visual vocabulary was constructed from manually labelled image segments. We use Euclidean distance and a Gaussian distribution to map low-level region-based image features to the high-level visual concepts defined in the visual vocabulary. Both collateral content and collateral context knowledge are extracted from the textual modality to bias the mapping process. A semantic-based high-level image feature vector model was constructed from the labelling results, and image retrieval using this feature vector model appears to outperform both content-based and text-based approaches in its capability to combine the perceptual and conceptual similarity of image content.
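
A much-simplified sketch of that mapping step (the concept statistics, prior interface, and numbers below are assumptions for illustration, not the framework itself) could model each visual concept as a diagonal Gaussian over region features and bias the decision with a prior derived from the collateral text:

```python
import math

def log_gaussian(x, mean, var):
    """Diagonal-covariance Gaussian log-density for a feature vector."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def label_region(features, concepts, text_prior):
    """concepts: {label: (mean, var)} from the visual vocabulary;
    text_prior: {label: prob} from collateral text (hypothetical interface).
    Returns the maximum a posteriori label for the region."""
    return max(concepts,
               key=lambda c: log_gaussian(features, *concepts[c])
                             + math.log(text_prior.get(c, 1e-6)))

# Invented two-dimensional concept statistics and text-derived prior.
concepts = {"sky":   ([0.2, 0.8], [0.01, 0.02]),
            "water": ([0.3, 0.7], [0.02, 0.02])}
print(label_region([0.25, 0.75], concepts, {"sky": 0.7, "water": 0.3}))
```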


Author(s):  
Tom A. F. Anderson ◽  
Zhi-Hong Chen ◽  
Yean-Fu Wen ◽  
Marissa Milne ◽  
Adham Atyabi ◽  
...  

The hybrid world provides a framework for creating lessons that teach and test knowledge in a second language. In one of our example grammar and vocabulary lessons, the Thinking Head instructs the student to move a tiger to a lake: a student moving the toy tiger in the real world effectively moves a virtual tiger in the virtual arena. This type of interaction is particularly beneficial for computer-based language learning because we can verify that a student has understood the directions if the tiger is moved to the vicinity of the virtual lake. Physical movement helps the learner internalise the novel relationships in the second language. We also provide additional forms of communication, including dialogue with an embodied conversational agent and writing stories using markers on a whiteboard. In summary, our system provides a natural interface for communicating with the hybrid-reality learning environment.
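
The vicinity check described above could be as simple as a distance threshold; the following sketch uses hypothetical coordinates and an invented radius:

```python
import math

def task_completed(object_pos, target_pos, radius=1.5):
    """True once the tracked object is within `radius` of the target."""
    return math.dist(object_pos, target_pos) <= radius

tiger = (4.2, 0.0, 7.9)   # virtual tiger position, mirroring the real toy
lake  = (5.0, 0.0, 8.0)   # centre of the virtual lake
print(task_completed(tiger, lake))  # -> True: the directions were understood
```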

