Crossmodal Correspondence
Recently Published Documents


TOTAL DOCUMENTS: 37 (FIVE YEARS: 11)

H-INDEX: 11 (FIVE YEARS: 2)

2021
pp. 1-21
Author(s):
Daniel Gurman
Colin R. McCormick
Raymond M. Klein

Abstract Crossmodal correspondences are defined as associations between crossmodal stimuli based on seemingly irrelevant stimulus features (e.g., bright shapes being associated with high-pitched sounds). There is a large body of research describing auditory crossmodal correspondences involving pitch and volume, but far less involving auditory timbre, the character or quality of a sound. Adeli and colleagues (2014, Front. Hum. Neurosci. 8, 352) found evidence of correspondences between timbre and visual shape. The present study aimed to replicate Adeli et al.’s findings, as well as to identify novel timbre–shape correspondences. Participants were tested using two computerized tasks: an association task, which involved matching shapes to presented sounds based on best perceived fit, and a semantic task, which involved rating shapes and sounds on a number of scales. The analysis of association matches revealed nonrandom selection, with certain stimulus pairs being selected at a much higher frequency. The harsh/jagged and smooth/soft correspondences observed by Adeli et al. were replicated with a high level of consistency. Additionally, the high matching frequency of sounds with unstudied timbre characteristics suggests the existence of novel correspondences. Finally, the ability of the semantic task to supplement existing crossmodal correspondence assessments was demonstrated. Convergent analysis of the semantic and association data showed that the two datasets are significantly correlated (r = −0.36), meaning that stimulus pairs associated with a high level of consensus were more likely to hold similar perceived meaning. The results of this study are discussed in both theoretical and applied contexts.


Author(s):
Aleksandra Ćwiek
Susanne Fuchs
Christoph Draxler
Eva Liina Asu
Dan Dediu
...

The bouba/kiki effect—the association of the nonce word bouba with a round shape and kiki with a spiky shape—is a type of correspondence between speech sounds and visual properties with potentially deep implications for the evolution of spoken language. However, there is debate over the robustness of the effect across cultures and the influence of orthography. We report an online experiment that tested the bouba/kiki effect across speakers of 25 languages representing nine language families and 10 writing systems. Overall, we found strong evidence for the effect across languages, with bouba eliciting more congruent responses than kiki. Participants who spoke languages with Roman scripts were only marginally more likely to show the effect, and analysis of the orthographic shape of the words in different scripts showed that the effect was no stronger for scripts that use rounder forms for bouba and spikier forms for kiki. These results confirm that the bouba/kiki phenomenon is rooted in crossmodal correspondence between aspects of the voice and visual shape, largely independent of orthography. They provide the strongest demonstration to date that the bouba/kiki effect is robust across cultures and writing systems. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.


2021
pp. 1-50
Author(s):
Kelly McCormick
Simon Lacey
Randall Stilla
Lynne C. Nygaard
K. Sathian

Abstract Sound symbolism refers to the association between the sounds of words and their meanings, often studied using the crossmodal correspondence between auditory pseudowords, e.g., ‘takete’ or ‘maluma’, and pointed or rounded visual shapes, respectively. In a functional magnetic resonance imaging study, participants were presented with pseudoword–shape pairs that were sound-symbolically congruent or incongruent. We found no significant congruency effects in the blood oxygenation level-dependent (BOLD) signal when participants were attending to visual shapes. During attention to auditory pseudowords, however, we observed greater BOLD activity for incongruent compared to congruent audiovisual pairs bilaterally in the intraparietal sulcus and supramarginal gyrus, and in the left middle frontal gyrus. We compared this activity to independent functional contrasts designed to test competing explanations of sound symbolism, but found no evidence for mediation via language, and only limited evidence for accounts based on multisensory integration and a general magnitude system. Instead, we suggest that the observed incongruency effects are likely to reflect phonological processing and/or multisensory attention. These findings advance our understanding of sound-to-meaning mapping in the brain.


2021
pp. 1-21
Author(s):
Mick Zeljko
Philip M. Grove
Ada Kritikos

Abstract We examine whether crossmodal correspondences (CMCs) modulate perceptual disambiguation by considering the influence of lightness/pitch congruency on the perceptual resolution of the Rubin face/vase (RFV). We randomly paired a black-and-white RFV (black faces and white vase, or vice versa) with either a high- or low-pitched tone and found that CMC congruency biases the dominant visual percept. The perceptual option that was CMC-congruent with the tone (white/high pitch or black/low pitch) was reported significantly more often than the option that was CMC-incongruent with it (white/low pitch or black/high pitch). However, the effect was observed only for stimuli presented for longer, not shorter, durations, suggesting a perceptual effect rather than a response bias; moreover, we infer an effect on perceptual reversals rather than on initial percepts. We found that the CMC congruency effect for longer-duration stimuli occurred only after several minutes of prior exposure to the stimuli, suggesting that the congruency effect develops over time. These findings extend the observed effects of CMCs from relatively low-level, feature-based effects to higher-level, object-based perceptual effects (specifically, resolving ambiguity) and demonstrate that an entirely new category of crossmodal factors (CMC congruency) influences perceptual disambiguation in bistability.


2020
Author(s):
Irune Fernandez-Prieto
Ferran Pons
Jordi Navarra

Crossmodal correspondences between auditory pitch and spatial elevation have been demonstrated extensively in adults: high- and low-pitched sounds tend to be mapped onto upper and lower spatial positions, respectively. We hypothesised that this crossmodal link could be influenced by the development of spatial and linguistic abilities during childhood. To explore this possibility, 70 children (9–12 years old) divided into three groups (4th, 5th and 6th grade of primary school) completed a crossmodal test evaluating the perceptual correspondence between pure tones and spatial elevation. Additionally, we addressed possible correlations between the students’ performance in this crossmodal task and other auditory, spatial and linguistic measures. The participants’ auditory pitch performance was measured in a frequency classification test. The participants also completed three subtests of the Wechsler Intelligence Scale for Children-IV (WISC-IV): (1) Vocabulary, to assess verbal intelligence; (2) Matrix Reasoning, to measure visuospatial reasoning; and (3) Block Design, to assess visuospatial/motor skills. The results revealed crossmodal effects between pitch and spatial elevation. Additionally, we found a correlation between performance in the Block Design subtest and both the pitch–elevation crossmodal correspondence and the auditory frequency classification test. No correlation was observed between the auditory tasks and the Matrix Reasoning or Vocabulary subtests. This suggests (1) that the crossmodal correspondence between pitch and spatial elevation is already consolidated at the age of 9, and (2) that good performance in a pitch-based auditory task is mildly associated, in childhood, with good performance in visuospatial/motor tasks.


2020
Vol 33 (8)
pp. 805-836
Author(s):
Neta B. Maimon
Dominique Lamy
Zohar Eitan

Abstract Crossmodal correspondences (CMC) systematically associate perceptual dimensions in different sensory modalities (e.g., auditory pitch and visual brightness), and affect perception, cognition, and action. While previous work typically investigated associations between basic perceptual dimensions, here we present a new type of CMC, involving a high-level, quasi-syntactic schema: music tonality. Tonality governs most Western music and regulates stability and tension in melodic and harmonic progressions. Musicians have long associated tonal stability with non-auditory domains, yet such correspondences have hardly been investigated empirically. Here, we investigated CMC between tonal stability and visual brightness, in musicians and in non-musicians, using explicit and implicit measures. On the explicit test, participants heard a tonality-establishing context followed by a probe tone, and matched each probe to one of several circles, varying in brightness. On the implicit test, we applied the Implicit Association Test to auditory (tonally stable or unstable sequences) and visual (bright or dark circles) stimuli. The findings indicate that tonal stability is associated with visual brightness both explicitly and implicitly. They further suggest that this correspondence depends only partially on conceptual musical knowledge, as it also operates through fast, unintentional, and arguably automatic processes in musicians and non-musicians alike. By showing that abstract musical structure can establish concrete connotations to a non-auditory perceptual domain, our results open a hitherto unexplored avenue for research, associating syntactical structure with connotative meaning.


2020
pp. 1-15
Author(s):
Simon Lacey
James Nguyen
Peter Schneider
K. Sathian

Abstract The crossmodal correspondence between auditory pitch and visuospatial elevation (in which high- and low-pitched tones are associated with high and low spatial elevation respectively) has been proposed as the basis for Western musical notation. One implication of this is that music perception engages visuospatial processes and may not be exclusively auditory. Here, we investigated how music perception is influenced by concurrent visual stimuli. Participants listened to unfamiliar five-note musical phrases with four kinds of pitch contour (rising, falling, rising–falling, or falling–rising), accompanied by incidental visual contours that were either congruent (e.g., auditory rising/visual rising) or incongruent (e.g., auditory rising/visual falling) and judged whether the final note of the musical phrase was higher or lower in pitch than the first. Response times for the auditory judgment were significantly slower for incongruent compared to congruent trials, i.e., there was a congruency effect, even though the visual contours were incidental to the auditory task. These results suggest that music perception, although generally regarded as an auditory experience, may actually be multisensory in nature.


Foods
2020
Vol 9 (8)
pp. 966
Author(s):
Jérémy Roque
Jérémie Lafraire
Malika Auvray

Visual and auditory carbonation have been separately documented as sensory markers of perceived freshness in beverages. The aim of the present study was to investigate the crossmodal interactions between these two dimensions of carbonation. Three experiments focused on crossmodal correspondences between bubble size and pouring-sound pitch, which had never been investigated with ecological stimuli. Experiment 1, using an implicit association test (IAT), showed a crossmodal correspondence between bubble size and pouring-sound pitch. Experiment 2 confirmed this pitch–size correspondence effect by means of a Go/No-Go Association Task (GNAT). Experiment 3 investigated the mutual dependence between pitch, size, and spatial elevation, as well as the influence of attentional factors. No dependence was found; however, pitch–size correspondences were obtained only in the condition requiring attentional processes, suggesting that these effects might be driven by top-down influences. These results highlight the robustness of the pitch–size crossmodal correspondence across stimulus contexts varying in complexity. This correspondence might thus be fruitfully used to modulate consumers’ perceptions and expectations about carbonated beverages.


2019
Vol 40 (2)
pp. 85-104
Author(s):
Laura Puigcerver
Sara Rodríguez-Cuadrado
Víctor Gómez-Tapia
Jordi Navarra

Abstract Although the perceptual association between verticality and pitch has been widely studied, the link between loudness and verticality is not yet fully understood. While loud and quiet sounds are assumed to be equally associated crossmodally with spatial elevation, there are perceptual differences between the two types of sounds that may suggest the contrary. For example, loud sounds tend to generate greater activity, both behaviourally and neurally, than quiet sounds. Here we investigated whether this difference percolates into the crossmodal correspondence between loudness and verticality. In an initial phase, participants learned one-to-one arbitrary associations between two tones differing in loudness (82 dB vs. 56 dB) and two coloured rectangles (blue vs. yellow). During the experimental phase, they were presented with the two coloured stimuli (each one located above or below a central “departure” point) together with one of the two tones. Participants had to indicate which of the two coloured rectangles corresponded to the previously associated tone by moving a mouse cursor from the departure point towards the target. The results revealed that participants responded significantly faster to the loud tone when the visual target was located above the departure point (congruent condition) than when it was below it (incongruent condition). For quiet tones, no differences were found between the congruent (quiet-down) and the incongruent (quiet-up) conditions. Overall, this pattern of results suggests that possible differences in the neural activity generated by loud and quiet sounds influence the extent to which loudness and spatial elevation share representational content.


Foods
2019
Vol 8 (3)
pp. 103
Author(s):
Kosuke Motoki
Toshiki Saito
Rui Nouchi
Ryuta Kawashima
Motoaki Sugiura

In retail settings, social perception of other people’s preferences is fundamental to successful interpersonal interactions (e.g., product recommendations, gift-giving). This type of perception must be made with little information, very often based solely on facial cues. Although people are capable of accurately predicting others’ preferences from facial cues, we do not yet know how such inferences are shaped by crossmodal correspondences (arbitrary sensory associations) between facial cues and inferred attributes. The crossmodal correspondence literature implies the existence of sensory associations between shapes and tastes: people consistently match roundness and angularity to sweet and sour foods, respectively. Given that people’s faces vary along dimensions characterized by roundness and angularity, it is plausible that people infer others’ preferences by relying on the correspondence between facial roundness and taste. Based on a crossmodal correspondence framework, this study aimed to reveal the role of shape–taste correspondences in social perception. We investigated whether Japanese participants infer others’ taste (sweet/sour) preferences based on facial shapes (roundness/angularity). The results showed that participants reliably inferred that round-faced (vs. angular-faced) individuals preferred sweet foods (Study 1). Round-faced individuals and sweet foods were well matched, and this matching mediated the inference of others’ preferences (Study 2). An association between facial roundness and the inference of sweet-taste preferences was also observed in more natural faces, and perceived obesity mediated this association (Study 3). These findings advance the applicability of crossmodal correspondences to social perception and imply the pervasiveness of prejudicial bias in the marketplace.

