Audio-video virtual reality environments in building acoustics: An exemplary study reproducing performance results and subjective ratings of a laboratory listening experiment

2019 ◽  
Vol 146 (3) ◽  
pp. EL310-EL316 ◽  
Author(s):  
Imran Muhammad ◽  
Michael Vorländer ◽  
Sabine J. Schlittmeier


2015 ◽
Vol 6 (1) ◽  
pp. 60-73 ◽  
Author(s):  
Ali Oker ◽  
Matthieu Courgeon ◽  
Elise Prigent ◽  
Victoria Eyharabide ◽  
Nadine Bazin ◽  
...  

Advances in the use of virtual affective agents for therapeutic purposes in mental health have opened a research avenue to improve the way patients interpret others' behavior as helpful rather than menacing. Here, the authors propose an original paradigm based on affective computing and virtual reality technologies requiring the assessment of helping intentions as well as self-monitoring metacognition. Sixteen healthy subjects played a 38-turn card game with a virtual affective agent (MARC), during which they had to guess which of two cards would be color-matched with another card. Their guesses could be guided by the agent's emotional displays. Three subjective ratings on a percentage analog scale were recorded after each trial: helpfulness, self-monitoring, and sympathy. The help-recognition and self-monitoring metacognitive ratings highlight the importance of enhancing both components in therapeutic situations with psychiatric populations. Overall, this study exemplifies the promising use of virtual reality settings for future studies in the medical psychology field.


1977 ◽  
Vol 21 (2) ◽  
pp. 155-158
Author(s):  
Terrence W. Faulkner ◽  
Stanley H. Caplan

An experiment was run to assess two different versions of instructions for clearing stoppages on a KODAK EKTAPRINT Copier-Duplicator. One group of operators used a prose version (P) and a second group used a flowchart version (F). Performance of operators clearing stoppages was measured and subjective ratings of the instructions were obtained. Performance results showed that the F group was quicker, but the P group made fewer errors. Subjective ratings showed that F was preferred to P; no other subjective differences were found. Since neither version was adequate by itself, specific operator errors identified during clearance trials served as basic data for reducing clearance complexity by reallocating some clearance steps from operator to machine. Trade test results showed that the improved approach was satisfactory.


2016 ◽  
Vol 116 (6) ◽  
pp. 2656-2662 ◽  
Author(s):  
M. Fusaro ◽  
G. Tieri ◽  
S. M. Aglioti

Studies have explored behavioral and neural responses to the observation of pain in others. However, much less is known about how taking a physical perspective influences reactivity to the observation of others' pain and pleasure. To explore this issue we devised a novel paradigm in which 24 healthy participants, immersed in a virtual reality scenario, observed the hand of an avatar, seen from a first-person (1PP) or third-person (3PP) perspective, being penetrated by a virtual needle (pain), caressed (pleasure), or touched by a ball (neutral). Subjective ratings and physiological responses [skin conductance responses (SCR) and heart rate (HR)] were collected in each trial. All participants reported strong feelings of ownership of the virtual hand only in 1PP. Subjective measures also showed that pain and pleasure were experienced as more salient than neutral. SCR analysis demonstrated higher reactivity in 1PP than in 3PP. Importantly, vicarious pain induced stronger responses than the other conditions in both perspectives. HR analysis revealed lower activity during both pain and pleasure relative to neutral. SCR may reflect egocentric perspective, whereas HR may merely index general arousal. The results suggest that behavioral and physiological indexes of reactivity to seeing others' pain and pleasure were qualitatively similar in 1PP and 3PP. Our paradigm indicates that virtual reality can be used to study vicarious sensations of pain and pleasure without actually delivering any stimulus to participants' real bodies, and to explore behavioral and physiological reactivity when pain and pleasure are observed from egocentric and allocentric perspectives.


PLoS ONE ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. e0248225
Author(s):  
Natalia Cooper ◽  
Ferdinando Millela ◽  
Iain Cant ◽  
Mark D. White ◽  
Georg Meyer

Virtual reality (VR) can create safe, cost-effective, and engaging learning environments. It is commonly assumed that improvements in simulation fidelity lead to better learning outcomes. Some aspects of real environments, for example vestibular or haptic cues, are difficult to recreate in VR, but VR offers a wealth of opportunities to provide additional sensory cues in arbitrary modalities that convey task-relevant information. The aim of this study was to investigate whether these cues improve user experience and learning outcomes, and, specifically, whether learning with augmented sensory cues translates into performance improvements in real environments. Participants were randomly allocated into three matched groups: Group 1 (control) was asked to perform a real tyre change only. The remaining two groups were trained in VR before performance was evaluated on the same real tyre change task. Group 2 was trained using a conventional VR system, while Group 3 was trained in VR with augmented, task-relevant, multisensory cues. Objective performance (time to completion and number of errors) and subjective ratings of presence, perceived workload, and discomfort were recorded. The results show that both VR training paradigms improved performance on the real task. Providing additional, task-relevant cues during VR training resulted in higher objective performance on the real task. We propose a novel method to quantify the relative performance gains between training paradigms that estimates the relative gain in terms of training time. Subjective ratings mirrored the objective performance measures: workload ratings were comparable across groups, while presence ratings were higher and discomfort ratings lower. These findings further support the use of augmented multisensory cues in VR environments as an efficient method to enhance performance, user experience and, critically, the transfer of training from virtual to real environment scenarios.
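The abstract proposes quantifying gains between training paradigms relative to the untrained control. As a purely hypothetical illustration (the paper's actual metric and all numbers below are assumptions, not taken from the study), one simple way to express such a gain is the fractional reduction in real-task completion time versus the control group:

```python
# Hypothetical sketch, not the authors' published metric: express the benefit
# of a VR training paradigm as the fraction of the control group's real-task
# completion time that the trained group saved.

def relative_gain(control_time: float, trained_time: float) -> float:
    """Fraction of control completion time saved after training."""
    return (control_time - trained_time) / control_time

# Assumed group means (seconds) for the real tyre-change task.
control = 600.0     # Group 1: no VR training
vr_plain = 480.0    # Group 2: conventional VR training
vr_cued = 420.0     # Group 3: VR training with augmented multisensory cues

gain_plain = relative_gain(control, vr_plain)   # 0.20 (20% faster)
gain_cued = relative_gain(control, vr_cued)     # 0.30 (30% faster)
```

With invented numbers like these, the cued paradigm would show a larger time saving, which is the direction of effect the abstract reports; the authors' method additionally normalizes such gains by training time.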


2021 ◽  
Author(s):  
Philipp A Schroeder ◽  
Enrico Collantoni ◽  
Johannes Lohmann ◽  
Martin V Butz ◽  
Christian Plewnia

Purpose: Attractive food elicits approach behavior, which can be assessed directly by combining Virtual Reality (VR) with online motion capture. VR thus enables the assessment of motivated approach and avoidance behavior towards food and non-food cues in controlled laboratory environments. The aim of this study was to test the specificity of a behavioral approach bias for high-calorie food in grasp movements, compared with low-calorie food and neutral objects of different complexity, namely simple balls and geometrically more complex tools. Methods: In a VR setting, healthy participants repeatedly grasped or pushed high-calorie food, low-calorie food, balls, and office tools in randomized order, with 30 repetitions per item. All objects were rated for valence and arousal. Results: High-calorie food was rated as less attractive and more arousing than low-calorie food and neutral objects. Responses to high-calorie food were fastest only in grasp trials, but comparisons with low-calorie food and complex tools were inconclusive. Conclusion: A behavioral bias for food may be specific to high-calorie food objects, but more systematic variation of object fidelity is still needed. This study confirms the utility of VR in assessing approach behavior by exploring manual interactions in a controlled environment.


1998 ◽  
Vol 41 (6) ◽  
pp. 1282-1293 ◽  
Author(s):  
Jane Mertz Garcia ◽  
Paul A. Dagenais

This study examined changes in the sentence intelligibility scores of speakers with dysarthria in association with different signal-independent factors (contextual influences). The investigation focused on the presence or absence of iconic gestures while speaking sentences with low or high semantic predictiveness. The speakers were 4 individuals with dysarthria, who varied from one another in their level of speech intelligibility impairment, gestural abilities, and overall level of motor functioning. Ninety-six inexperienced listeners (24 assigned to each speaker) orthographically transcribed 16 test sentences presented in an audio + video or audio-only format. The sentences had either low or high semantic predictiveness and were spoken by each speaker with and without the corresponding gestures. The effects of the signal-independent factors (presence or absence of iconic gestures, low or high semantic predictiveness, and audio + video or audio-only presentation format) were analyzed for individual speakers. Not all signal-independent information benefited the speakers similarly. Results indicated that the use of gestures and high semantic predictiveness improved sentence intelligibility for 2 speakers; the other 2 speakers benefited from highly predictive messages. The audio + video presentation mode enhanced listener understanding for all speakers, although there were interactions related to specific speaking situations. Overall, the contributions of relevant signal-independent information were greater for the speakers with more severely impaired intelligibility. The results are discussed in terms of understanding the contribution of signal-independent factors to the communicative process.


1997 ◽  
Vol 40 (4) ◽  
pp. 900-911 ◽  
Author(s):  
Marilyn E. Demorest ◽  
Lynne E. Bernstein

Ninety-six participants with normal hearing and 63 with severe-to-profound hearing impairment viewed 100 CID Sentences (Davis & Silverman, 1970) and 100 B-E Sentences (Bernstein & Eberhardt, 1986b). Objective measures included words correct, phonemes correct, and the visual-phonetic distance between stimulus and response. Subjective ratings were made on a 7-point confidence scale. The magnitudes of the validity coefficients ranged from .34 to .76 across materials, measures, and groups. Participants with hearing impairment had higher levels of objective performance, higher subjective ratings, and higher validity coefficients, although there were large individual differences. Regression analyses revealed that subjective ratings are predictable from stimulus length, response length, and objective performance. The ability of speechreaders to make valid performance evaluations was interpreted in terms of contemporary word recognition models.
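The regression analysis described above predicts subjective ratings from stimulus length, response length, and objective performance. As an illustrative sketch only (the data, coefficients, and model specification below are invented for demonstration, not drawn from the study), an ordinary least squares fit of that form can be run as follows:

```python
import numpy as np

# Synthetic stand-ins for the study's variables (all values assumed).
rng = np.random.default_rng(42)
n = 200
stim_len = rng.integers(3, 12, n).astype(float)   # words in the stimulus sentence
resp_len = stim_len + rng.integers(-2, 3, n)      # words in the transcription
objective = rng.uniform(0.0, 1.0, n)              # proportion of words correct

# Generate ratings that depend mainly on objective performance
# (true coefficient 4.0 here, plus small length effects and noise).
rating = (2.0 + 0.05 * stim_len + 0.03 * resp_len
          + 4.0 * objective + rng.normal(0.0, 0.3, n))

# Ordinary least squares: intercept + three predictors.
X = np.column_stack([np.ones(n), stim_len, resp_len, objective])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
```

On data generated this way, the fitted coefficient on `objective` recovers the value used in the simulation, mirroring the finding that ratings are predictable from these three quantities.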

