What Am I Drinking? Vision Modulates the Perceived Flavor of Drinks, but No Evidence of Flavor Altering Color Perception in a Mixed Reality Paradigm

2021 · Vol 12
Author(s): Lorena Stäger, Marte Roel Lesur, Bigna Lenggenhager

It is well established that vision, and in particular color, may modulate our experience of flavor. Such cross-modal correspondences have been argued to be bilateral, in the sense that one modality can modulate the other and vice versa. However, the literature assessing how vision modulates flavor is remarkably larger than that directly assessing how flavor might modulate vision. This imbalance is even more pronounced in the context of cross-modal contrasts (when the expectancy generated in one modality contrasts with the experience delivered through another). Here, using an embodied mixed reality setup in which participants saw a liquid while ingesting a contrasting one, we assessed both how vision might modulate basic dimensions of flavor perception and how the flavor of the ingested liquid might alter the perceived color of the seen drink. We replicated findings showing the modulation of flavor perception by vision but found no evidence of flavor modulating color perception. These results are discussed in regard to recent accounts of multisensory integration in the context of visual modulations of flavor and bilateral cross-modulations. Our findings might be important as a step in understanding bilateral visual and flavor cross-modulations (or the lack of them) and might inform developments using embodied mixed reality technologies.

Author(s): Caterina Paola Venditti, Paolo Mele

Within digital archaeology, an important strand centers on technologies for representing, or replaying, ancient environments. It is a field where the contribution of scientific expertise to content makes a real difference, and the pedagogical repercussions are stimulating. Among extended reality technologies, mixed reality, which lets users experience both static models of individual objects and entire landscapes before their eyes, is increasingly used in archaeological contexts as a display technology for educational, informative, or purely entertainment purposes. This chapter provides a high-level overview of possible orientations and uses of this technology in cultural heritage, and also sketches its use in gaming, within the broader role of gaming in the smart communication of archaeological content and issues.


Author(s): Charles Spence

Abstract. Experimental psychologists, psychophysicists, food/sensory scientists, and marketers have long been interested in, and/or speculated about, what exactly the relationship, if any, might be between color and taste/flavor. While several influential early commentators argued against there being any relationship, a large body of empirical evidence published over the last 80 years or so clearly demonstrates that the hue and saturation, or intensity, of color in food and/or drink often influences multisensory flavor perception. Interestingly, the majority of this research has focused on vision’s influence on the tasting experience rather than looking for any effects in the opposite direction. Recently, however, a separate body of research linking color and taste has emerged from the burgeoning literature on the crossmodal correspondences. Such correspondences, or associations, between attributes or dimensions of experience, are thought to be robustly bidirectional. When talking about the relationship between color and taste/flavor, some commentators would appear to assume that these two distinct literatures describe the same underlying empirical phenomenon. That said, a couple of important differences (in terms of the bidirectionality of the effects and their relative vs. absolute nature) are highlighted, meaning that the findings from one domain may not necessarily always be transferable to the other, as is often seemingly assumed.


2016 · Vol 29 (6-7) · pp. 557-583
Author(s): Emiliano Macaluso, Uta Noppeney, Durk Talsma, Tiziana Vercillo, Jess Hartcher-O’Brien, ...

The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergent viewpoints emerge in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration (MSI) depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.


Author(s): Edoardo Provenzi

Abstract. This is the first half of a two-part paper dealing with the geometry of color perception. Here we analyze in detail the seminal 1974 work by H.L. Resnikoff, who showed that there are only two possible geometric structures and Riemannian metrics on the perceived color space $\mathcal{P}$ compatible with the set of Schrödinger’s axioms completed with the hypothesis of homogeneity. We recast Resnikoff’s model into a more modern colorimetric setting, provide a much simpler proof of the main result of the original paper, and motivate the need for psychophysical experiments to refute or confirm the linearity of background transformations, which act transitively on $\mathcal{P}$. Finally, we show that the Riemannian metrics singled out by Resnikoff through an axiom on invariance under background transformations are not compatible with the crispening effect, thus motivating the need for further research on perceptual color metrics.
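For readers unfamiliar with the 1974 result, the two admissible homogeneous structures on $\mathcal{P}$ are usually stated as below. This is a sketch in our own notation of the standard presentation found in the literature, not a quotation from either paper:

```latex
% Structure 1: the product of three half-lines, with the flat
% (Helmholtz--Stiles-type) metric, where the \alpha_i > 0 are constants:
\mathcal{P}_1 \;\cong\; \mathbb{R}^{+} \times \mathbb{R}^{+} \times \mathbb{R}^{+},
\qquad
ds^2 \;=\; \sum_{i=1}^{3} \alpha_i \left( \frac{dx_i}{x_i} \right)^{2}.

% Structure 2: a half-line (brightness) times the homogeneous space
% SL(2,R)/SO(2), whose second factor carries a metric of constant
% negative curvature:
\mathcal{P}_2 \;\cong\; \mathbb{R}^{+} \times \mathrm{SL}(2,\mathbb{R}) / \mathrm{SO}(2).
```

The second, hyperbolic structure is the one Resnikoff singled out as the novel candidate geometry for perceived colors.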


Author(s): Mark Pegrum

What is it? Augmented Reality (AR) bridges the real and the digital. It is part of the Extended Reality (XR) spectrum of immersive technological interfaces. At one end of the continuum, Virtual Reality (VR) immerses users in fully digital simulations which effectively substitute for the real world. At the other end of the continuum, AR allows users to remain immersed in the real world while superimposing digital overlays on the world. The term mixed reality, meanwhile, is sometimes used as an alternative to AR and sometimes as an alternative to XR.


1986 · Vol 63 (2) · pp. 995-1007
Author(s): Marian-Ortolf Bagley, Margaret Sathre Maxfield

This is a study of the perception of negative afterimages. Surface color samples were viewed under Spectralight. Subjects fixated on 11 Munsell hues mounted on white cards and matched their afterimages with chips from the Munsell Book of Color. Samples were drawn from 125 participants in two groups, one practiced with afterimages, the other unfamiliar with them. No single afterimage or Munsell color chip was reported consistently for any of the stimulus hues. However, most afterimage responses for nine stimulus colors fell within one Munsell hue family. Afterimages reported for the remaining two stimulus colors, purple-blue and yellow-red, spanned two adjacent hue families. The results suggest new alternatives to the traditional subtractive color complements, and new afterimage opposites are provided.


2007 · Vol 97 (5) · pp. 3193-3205
Author(s): Juan Carlos Alvarado, J. William Vaughan, Terrence R. Stanford, Barry E. Stein

The present study suggests that the neural computations used to integrate information from different senses are distinct from those used to integrate information from within the same sense. Using superior colliculus neurons as a model, it was found that multisensory integration of cross-modal stimulus combinations yielded responses that were significantly greater than those evoked by the best component stimulus. In contrast, unisensory integration of within-modal stimulus pairs yielded responses that were similar to or less than those evoked by the best component stimulus. This difference is exemplified by the disproportionate representations of superadditive responses during multisensory integration and the predominance of subadditive responses during unisensory integration. These observations suggest that different rules have evolved for integrating sensory information, one (unisensory) reflecting the inherent characteristics of the individual sense and, the other (multisensory), unique supramodal characteristics designed to enhance the salience of the initiating event.
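The superadditive/subadditive contrast described above is commonly quantified with the multisensory enhancement index of Stein and Meredith, together with a comparison of the paired-stimulus response against the sum of the component responses. A minimal sketch of that convention follows; the response values are hypothetical illustrations, not data from this study:

```python
def enhancement_index(combined: float, best_component: float) -> float:
    """Percent change of the paired-stimulus response relative to the best
    single-stimulus response (positive = enhancement, negative = depression)."""
    if best_component <= 0:
        raise ValueError("best component response must be positive")
    return 100.0 * (combined - best_component) / best_component


def classify_additivity(combined: float, resp_a: float, resp_b: float) -> str:
    """Compare the paired response with the sum of the component responses,
    the usual criterion for super- vs. sub-additivity."""
    total = resp_a + resp_b
    if combined > total:
        return "superadditive"
    if combined < total:
        return "subadditive"
    return "additive"


# Hypothetical mean impulse counts for one neuron:
visual, auditory = 4.0, 6.0
cross_modal_pair = 16.0   # e.g., visual + auditory stimulus pairing
within_modal_pair = 7.0   # e.g., two visual stimuli paired

print(enhancement_index(cross_modal_pair, max(visual, auditory)))
print(classify_additivity(cross_modal_pair, visual, auditory))   # superadditive
print(classify_additivity(within_modal_pair, visual, auditory))  # subadditive
```

On these illustrative numbers, the cross-modal pairing exceeds the best component by well over 100% and the sum of its components, while the within-modal pairing falls below both, mirroring the pattern the study reports.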


2020 · Vol 26 (8) · pp. 972-995
Author(s): Sangsun Han, Kibum Kim, Seonghwan Choi, Mankyu Sung

Video/audio conferencing systems have been used extensively for remote collaboration over many years. Recently, virtual and mixed reality (VR/MR) systems have started to show great potential as communication media for remote collaboration. Prior studies revealed that the creation of common ground between discourse participants is crucial for collaboration and that grounding techniques change with the communication medium. However, it is difficult to find previous research that compares the performance of VR and MR communication systems with video conferencing systems regarding the creation of common ground for collaborative problem solving. On the other hand, prior studies have found that display fidelity and interaction fidelity have significant effects on performance-intensive individual tasks in virtual reality. Fidelity in VR can be defined as the degree of objective accuracy with which the real world is represented by the virtual world. However, to date, fidelity for collaborative tasks in VR/MR has not been defined or studied much. In this paper, we compare five different communication media for the establishment of common ground in collaborative problem-solving tasks: Webcam, headband camera, VR, MR, and audio-only conferencing systems. We analyzed these communication media with respect to collaborative fidelity components which we defined. For the experiments, we utilized two different types of collaborative tasks: a 2D Tangram puzzle and a 3D Soma cube puzzle. The experimental results show that the traditional Webcam performed better than the other media in the 2D task, while the headband camera performed better in the 3D task. In terms of collaboration fidelity, these results were somewhat predictable, although there were small differences between our expectations and the results.


Author(s): Caitlin Elisabeth Naylor, Michael J Proulx, Gavin Buckingham

Abstract. The material-weight illusion (MWI) demonstrates how our past experience with material and weight can create expectations that influence the perceived heaviness of an object. Here we used mixed reality to place touch and vision in conflict, to investigate whether the modality through which materials are presented to a lifter could influence the top-down perceptual processes driving the MWI. University students lifted equally-weighted polystyrene, cork and granite cubes whilst viewing computer-generated images of the cubes in virtual reality (VR). This allowed the visual and tactile material cues to be altered, whilst all other object properties were kept constant. Representation of the objects’ material in VR was manipulated to create four sensory conditions: visual-tactile matched, visual-tactile mismatched, visual differences only and tactile differences only. A robust MWI was induced across all sensory conditions, whereby the polystyrene object felt heavier than the granite object. The strength of the MWI differed across conditions, with tactile material cues having a stronger influence on perceived heaviness than visual material cues. We discuss how these results suggest a mechanism whereby multisensory integration directly impacts how top-down processes shape perception.


2021 · Vol 83 (6) · pp. 377-381
Author(s): Maureen E. Dunbar, Jacqueline J. Shade

In a traditional anatomy and physiology lab, the general senses – temperature, pain, touch, pressure, vibration, and proprioception – and the special senses – olfaction (smell), vision, gustation (taste), hearing, and equilibrium – are typically taught in isolation. In reality, information derived from these individual senses interacts to produce the complex sensory experience that constitutes perception. To introduce students to the concept of multisensory integration, a crossmodal perception lab was developed. In this lab, students explore how vision impacts olfaction and how vision and olfaction interact to impact flavor perception. Students are required to perform a series of multisensory tasks that focus on the interaction of multiple sensory inputs and their impact on flavor and scent perception. Additionally, students develop their own hypothesis as to which sensory modalities they believe will best assist them in correctly identifying the flavor of a candy: taste alone, taste paired with scent, or taste paired with vision. Together these experiments give students an appreciation for multisensory integration while also encouraging them to actively engage in the scientific method. They are then asked to hypothesize the possible outcome of one last experiment after collecting and assessing data from the prior tasks.

