New emotional model environment for navigation in a virtual reality

Open Physics ◽  
2020 ◽  
Vol 18 (1) ◽  
pp. 864-870
Author(s):  
Marcin Daszuta ◽  
Dominik Szajerman ◽  
Piotr Napieralski

Emotions are commonly considered the most expressive of everyday human experiences, giving a sense of fullness and reality to life. The ability to recognize human emotions, as a manifestation of higher intelligence, is a desirable feature of computer systems. There are several models of emotional experience that can become the basis for building a universal emotion recognition system. In this article, we verify the correctness of the designed emotional model and evaluate the system's operation through assessment by human observers.
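
To illustrate what one of the models of emotional experience mentioned above can look like in practice, here is a minimal sketch of a valence-arousal circumplex reduced to four quadrant labels. This simplification is an illustration only, not the model designed by the authors.

```python
# A minimal sketch of one common emotional model: the valence-arousal
# circumplex, reduced to four coarse quadrant labels. The quadrant-to-label
# mapping is a textbook simplification assumed here for illustration.
def classify_emotion(valence: float, arousal: float) -> str:
    """Map a point in [-1, 1]^2 to a coarse emotion label."""
    if valence >= 0:
        return "joy" if arousal >= 0 else "contentment"
    return "anger" if arousal >= 0 else "sadness"

# A few sample points in the valence-arousal plane:
for v, a in [(0.7, 0.5), (0.6, -0.4), (-0.8, 0.6), (-0.5, -0.7)]:
    print(f"valence={v:+.1f} arousal={a:+.1f} -> {classify_emotion(v, a)}")
```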

Author(s):  
Giuliana Guazzaroni

Virtual reality (VR), augmented reality (AR), and artificial intelligence (AI) are increasingly being used by educational institutions and museums worldwide. Visitors to museums and art galleries may experience different layers of reality while enjoying works of art augmented with immersive VR. Research points out that this possibility may strongly affect human emotions. Digital technologies may allow forms of hybridization between flesh and technological objects within virtual or real spaces: interactive processes that may contribute to redefining the relationship between identity and technology, and between technology and the body (Mainardi, 2013). Interactive museums and art galleries are real environments amplified through information systems, allowing a shift between reality and electronically manipulated immersive experiences. VR is emotionally engaging, and a VR scenario may enhance emotional experience (Diemer et al., 2015) or induce an emotional change (Wu et al., 2016). The main purpose of this chapter is to verify how art and VR affect emotions.


2015 ◽  
Vol 24 (1) ◽  
pp. 62-73 ◽  
Author(s):  
Jurica Katicic ◽  
Polina Häfner ◽  
Jivka Ovtcharova

The presented six-step methodology suggests a novel, integrated way for customers to assess future products emotionally during the early conceptual design stages. In the first step, the product developers create virtual product concepts. These are evaluated by the lead user group, which is defined by market researchers in the second step. The third step is the design of the immersive emotional assessment environment, which combines compatible technological solutions for virtual reality and measurement of psycho-physiological data. The experiment in the fourth step consists of preparation and calibration sub-steps and of the product case scenario. After the statistical analysis of the experimental results in the fifth step, the variant space is reduced in a customer-centered way. The goal of this methodology is to provide relevant emotional customer feedback during the interactive experience of only virtually existing conceptual product designs at very early development stages, and thus to identify emotionally suitable designs. Its novelty lies in the holistic approach: the structured integration of experts from product development, marketing, virtual reality, and psychology. The presented validation study confirmed the coherence of the methodology. Furthermore, it showed clear preferences for simple interaction gestures, the PA (pleasure-arousal) emotional model, and the measurement of psycho-physiological parameters, especially zygomaticus major muscle activity, as suitable state-of-the-art solutions for capturing relevant customer emotional feedback in an immersive environment.
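
To make the assessment step concrete, the sketch below reduces two psycho-physiological recordings to a single point in the PA plane. Treating zygomaticus major EMG as a pleasure proxy and skin conductance as an arousal proxy is a common convention assumed here for illustration; the signal names, scaling, and test data are not taken from the paper.

```python
# A minimal sketch: normalize two psycho-physiological signal windows into
# one (pleasure, arousal) point in [-1, 1]^2. Signal-to-axis assignment and
# the robust percentile scaling are illustrative assumptions.
import numpy as np

def to_pa_point(emg_uv: np.ndarray, scl_us: np.ndarray) -> tuple[float, float]:
    """Reduce raw signal windows to one (pleasure, arousal) point."""
    def scale(x: np.ndarray) -> float:
        lo, hi = np.percentile(x, [5, 95])  # robust range of the window
        return float(np.clip(2 * (x.mean() - lo) / (hi - lo + 1e-9) - 1, -1, 1))
    return scale(emg_uv), scale(scl_us)

# Compare two hypothetical virtual product concepts by where they land:
rng = np.random.default_rng(0)
for concept in ("variant_a", "variant_b"):
    emg = rng.gamma(2.0, 1.5, 5000)    # stand-in zygomaticus EMG amplitudes (uV)
    scl = rng.normal(4.0, 0.5, 5000)   # stand-in skin conductance level (uS)
    print(concept, to_pa_point(emg, scl))
```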


2021 ◽  
Author(s):  
Robyn Harkness

Within healthcare architecture, there is a void of attention directed towards non-medical spaces: the waiting rooms, hallways, and all 'between moments' where many people spend extended periods of time under acute stress. Nowhere is this more prevalent than in emergency departments, where patients seek care and treatment for real or perceived serious injuries or illnesses. While waiting for medical attention, exposure to high levels of harsh lighting, sterile furnishings, chaotic activity, and cavernous rooms shared with others in distress can cause or heighten anxiety, delirium, and high blood pressure. The emotional experience of such spaces changes with a user's unique sensory conditions and therefore with their individual perception of space. The architectural design tools and devices for exploring these highly charged sensory spaces have historically been limited to technical plans and sections and rendered marketing perspectival images, which do not fully communicate the immersive experience of these spaces when in use. Virtual reality is emerging as a powerful three-dimensional visualisation tool, offering designers the opportunity to comprehend proposed designs more clearly during the planning and design phases, thus enabling a greater influence on design decision making. This research explores the use of VR from a healthcare perspective, adopting a participatory design approach to simulate sensory conditions of blindness, deafness, and autism, and the emotions associated with these conditions within space. This approach diverges from a purely visual method of design towards an understanding of the haptic, exploring the critical phenomenology behind these non-medical spaces. The research finds significant potential for the use of virtual reality as a design tool to simulate the experience of these spaces in early design stages.


2016 ◽  
Vol 7 ◽  
Author(s):  
Manuel Fernández-Alcántara ◽  
Francisco Cruz-Quintana ◽  
M. N. Pérez-Marfil ◽  
Andrés Catena-Martínez ◽  
Miguel Pérez-García ◽  
...  

Author(s):  
Rosalind W. Picard ◽  
Adolfo Plasencia

In this dialogue, the scientist Rosalind W. Picard from the MIT Media Lab begins by explaining why the expression "affective computing" is not an oxymoron, and describes how her laboratory is trying to bridge the gap between information systems and human emotions. She details how they are attempting to give computers and digital machines better abilities so that they can "see" the emotions of their users, and outlines what a machine would have to be like to pass a Turing 'emotions' test. Picard goes on to describe why emotion is part of all communication, even when the communication itself might not explicitly contain emotion, arguing that consciousness also involves feelings that cannot be expressed and that emotional experience is an essential part of the normal functioning of the conscious system. She then outlines her research in affective computing, in which signals were measured using a sensor that responds to human emotions or feelings, and explains how technology can become a sort of 'affective prosthesis' to help people with disabilities, and those with difficulties in understanding and handling emotions.


2021 ◽  
Vol 41 (6) ◽  
pp. 171-178
Author(s):  
Federica Marcolin ◽  
Giulia Wally Scurati ◽  
Luca Ulrich ◽  
Francesca Nonis ◽  
Enrico Vezzetti ◽  
...  

2016 ◽  
Vol 12 (04) ◽  
pp. 37 ◽  
Author(s):  
Bruno Patrão ◽  
Samuel Pedro ◽  
Paulo Menezes

In this paper we present a Virtual Reality based laboratory experience that can be used to demonstrate the effect that emotions may have on our bodies. To this end, a Virtual Reality based system is presented in which three different virtual environments aim to induce specific sensations and emotions in the students participating in a classroom experiment. The objective is for the students to analyze their own physiological data and understand the correlation between data patterns and the experienced situations.
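
A minimal sketch of the kind of analysis the students could run follows: one stand-in physiological trace (heart rate) grouped by virtual environment, with per-environment statistics to relate back to the experienced situation. The environment names, signal choice, and segment layout are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of the classroom analysis: summarize one physiological
# trace per virtual environment so students can link data patterns to the
# situations they experienced. All names and values are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
environments = ["calm_forest", "height_exposure", "crowded_room"]  # hypothetical
baselines = {"calm_forest": 68, "height_exposure": 85, "crowded_room": 78}

# One 60-second segment of 1 Hz heart-rate samples per environment:
trace = {env: rng.normal(baselines[env], 3, 60) for env in environments}

for env in environments:
    hr = trace[env]
    print(f"{env:16s} mean={hr.mean():5.1f} bpm  std={hr.std():4.1f}")
# Students compare these per-environment statistics against how each scene
# felt, connecting signal shifts to the induced sensations and emotions.
```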


2020 ◽  
Vol 27 (2) ◽  
pp. 183-201 ◽  
Author(s):  
Federica Pallavicini ◽  
Alessandro Pepe ◽  
Ambra Ferrari ◽  
Giacomo Garcea ◽  
Andrea Zanacchi ◽  
...  

Scientific knowledge is still limited about the effect of commercial virtual reality content, such as experiences developed for advertising purposes, on individual emotional experience. In addition, even though correlations between emotional responses and the perceived sense of presence in virtual reality have often been reported, the relationship remains unclear. Some studies have suggested an important effect of ease of interaction on both emotions and the sense of presence, but only a few studies have explored this topic scientifically. Within this context, this study aimed to: (a) test the ability of a commercial virtual experience, developed for the promotion of an urban renewal project, to induce positive emotions; (b) investigate the relationship between positive emotions and the perceived sense of presence; and (c) explore the association of the ease of interaction of the virtual experience with the positive emotions and the sense of presence reported by users. Sixty-one participants were recruited from visitors to the 2017 Milan Design Week "Fuorisalone" event. A survey was administered before and after the experience to collect information about users' demographics, positive emotions, sense of presence, and ease of interaction with the virtual content. Results give evidence that: (a) the commercial virtual reality experience was able to induce positive emotions; (b) the positive emotions reported by users were associated with the sense of presence experienced in the virtual environment, with a directional effect from emotion to sense of presence; and (c) the easier the interaction, the greater the sense of presence and the positive emotions reported by users.
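
A minimal sketch of the kind of analysis behind findings (b) and (c) might look as follows: an ordinary least squares regression of reported presence on positive emotions and ease of interaction. The synthetic data and coefficients below are stand-ins for illustration, not the study's survey data or estimates.

```python
# A minimal OLS sketch: does presence vary with positive emotions and ease
# of interaction? Synthetic stand-in data for n = 61 survey respondents.
import numpy as np

rng = np.random.default_rng(42)
n = 61
ease = rng.uniform(1, 7, n)                       # ease-of-interaction ratings
emotion = 0.5 * ease + rng.normal(3, 1, n)        # positive-emotion scores
presence = 0.6 * emotion + 0.3 * ease + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), emotion, ease])  # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, presence, rcond=None)
print(f"intercept={beta[0]:.2f}  "
      f"emotion->presence={beta[1]:.2f}  ease->presence={beta[2]:.2f}")
```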


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Linqin Cai ◽  
Yaxin Hu ◽  
Jiangong Dong ◽  
Sitong Zhou

With the rapid development of social media, single-modal emotion recognition can hardly satisfy the demands of current emotion recognition systems. Aiming to optimize the performance of the emotion recognition system, a multimodal emotion recognition model combining speech and text is proposed in this paper. Considering the complementarity between different modes, a CNN (convolutional neural network) and an LSTM (long short-term memory) network were combined in the form of binary channels to learn acoustic emotion features; meanwhile, an effective Bi-LSTM (bidirectional long short-term memory) network was used to capture the textual features. Furthermore, we applied a deep neural network to learn and classify the fused features. The final emotional state is determined by the output of both the speech and the text emotion analysis. Finally, multimodal fusion experiments were carried out to validate the proposed model on the IEMOCAP database. In comparison with the single modal, the overall recognition accuracy of text increased by 6.70%, and that of speech emotion recognition rose by 13.85%. Experimental results show that the recognition accuracy of our multimodal model is higher than that of the single-modal approaches and that it outperforms other published multimodal models on the test datasets.
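
A minimal PyTorch sketch of the architecture described above follows: a CNN+LSTM channel over acoustic features, a Bi-LSTM channel over word tokens, and a small fusion network on the concatenated features. All layer sizes, feature dimensions, and the four-class label set are illustrative assumptions, not the authors' published configuration.

```python
# A minimal sketch of the speech+text fusion model, assuming MFCC inputs
# for speech and integer word tokens for text; hyperparameters are guesses.
import torch
import torch.nn as nn

class SpeechBranch(nn.Module):
    """CNN + LSTM channel over acoustic features (e.g., MFCC frames)."""
    def __init__(self, n_mfcc=40, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)

    def forward(self, x):          # x: (batch, n_mfcc, time)
        h = self.conv(x)           # (batch, 64, time // 2)
        h = h.transpose(1, 2)      # (batch, time // 2, 64)
        _, (hn, _) = self.lstm(h)  # hn: (1, batch, hidden)
        return hn.squeeze(0)       # (batch, hidden)

class TextBranch(nn.Module):
    """Bi-LSTM channel over word embeddings."""
    def __init__(self, vocab=10000, emb=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)

    def forward(self, tokens):       # tokens: (batch, seq_len)
        e = self.embed(tokens)
        _, (hn, _) = self.bilstm(e)  # hn: (2, batch, hidden)
        return torch.cat([hn[0], hn[1]], dim=1)  # (batch, 2 * hidden)

class FusionModel(nn.Module):
    """Concatenate both channels and classify with a small DNN."""
    def __init__(self, n_classes=4):  # e.g., a 4-class IEMOCAP setup
        super().__init__()
        self.speech = SpeechBranch()
        self.text = TextBranch()
        self.classifier = nn.Sequential(
            nn.Linear(64 + 128, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, mfcc, tokens):
        fused = torch.cat([self.speech(mfcc), self.text(tokens)], dim=1)
        return self.classifier(fused)

# Example forward pass with random stand-in data:
model = FusionModel()
logits = model(torch.randn(8, 40, 200), torch.randint(0, 10000, (8, 30)))
print(logits.shape)  # torch.Size([8, 4])
```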

