The effect of music background on the emotional appraisal of film sequences

Psihologija ◽  
2011 ◽  
Vol 44 (1) ◽  
pp. 71-91 ◽  
Author(s):  
Ivanka Pavlovic ◽  
Slobodan Markovic

In this study the effects of musical background on the emotional appraisal of film sequences were investigated. Four pairs of polar emotions defined in Plutchik's model were used as basic emotional qualities: joy-sadness, anticipation-surprise, fear-anger, and trust-disgust. In the preliminary study eight film sequences and eight music themes were selected as the best representatives of all eight of Plutchik's emotions. In the main experiment participants judged the emotional qualities of film-music combinations on eight seven-point scales. Half of the combinations were congruent (e.g. joyful film - joyful music), and half were incongruent (e.g. joyful film - sad music). Results showed that visual information (film) had a greater effect on emotional appraisal than auditory information (music). The modulation effects of the music background depended on emotional quality. In some incongruent combinations (joy-sadness) modulations in the expected direction were obtained (e.g. joyful music reduced the sadness of a sad film), in some cases (anger-fear) no modulation effects were obtained, and in some cases (trust-disgust, anticipation-surprise) the modulation effects were in an unexpected direction (e.g. trustful music increased the appraised disgust of a disgusting film). These results suggest that the appraisal of the conjoint effects of emotions depends on the medium (film masks the music) and on emotional quality (three types of modulation effects).
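To make the factorial design concrete, here is a minimal Python sketch that enumerates one plausible set of congruent and incongruent film-music pairings. The emotion labels and polar axes come from the abstract; the specific pairing scheme (each film paired with music of the same or the polar-opposite emotion) is an assumption for illustration.

```python
from itertools import chain

# The eight Plutchik emotions named in the abstract.
EMOTIONS = ["joy", "sadness", "anticipation", "surprise",
            "fear", "anger", "trust", "disgust"]

# Polar opposites along the four axes named in the abstract.
OPPOSITE = {"joy": "sadness", "anticipation": "surprise",
            "fear": "anger", "trust": "disgust"}
OPPOSITE.update({v: k for k, v in OPPOSITE.items()})

def film_music_pairs():
    """Yield (film_emotion, music_emotion, condition) tuples.

    Congruent: film and music share an emotion. Incongruent: the music
    carries the film emotion's polar opposite. The exact pairing scheme
    used in the experiment is an assumption here.
    """
    for film in EMOTIONS:
        yield (film, film, "congruent")
        yield (film, OPPOSITE[film], "incongruent")

for film, music, cond in film_music_pairs():
    print(f"{film:>12} film + {music:>12} music -> {cond}")
```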

2019 ◽  
Vol 27 ◽  
pp. 165-173
Author(s):  
Jung-Hun Kim ◽  
Ji-Eun Park ◽  
In-Hee Ji ◽  
Chul-Ho Won ◽  
Jong-Min Lee ◽  
...  

2020 ◽  
pp. 002383091989888
Author(s):  
Luma Miranda ◽  
Marc Swerts ◽  
João Moraes ◽  
Albert Rilliard

This paper presents the results of three perceptual experiments investigating the role of auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced a sentence (“Como você sabe”), either as a statement (meaning “As you know.”) or as an echo question (meaning “As you know?”). Experiments were set up using these two intonation contours. Stimuli were presented in conditions with clear and degraded audio as well as with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements and questions both prosodically and visually, with auditory cues being dominant over visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, while it degraded them in conditions where the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, and that this integration extends to prosodic patterns.


1975 ◽  
Vol 69 (5) ◽  
pp. 226-233
Author(s):  
Sally Rogow

The blind child builds his perceptions from tactual (haptic) and auditory information. Assumptions on the part of professionals that tactual and visual data are identical can result in misconceptions that may lead to delayed development and distortions of cognitive process in blind children. A review of research on the perception of form and spatial relationships suggests that differences between tactual and visual information result in differences in perceptual organization. However, studies indicate that blind children reach developmental milestones (e.g., conservation) at approximately the same ages as sighted children.


2012 ◽  
Vol 25 (0) ◽  
pp. 148
Author(s):  
Marcia Grabowecky ◽  
Emmanuel Guzman-Martinez ◽  
Laura Ortega ◽  
Satoru Suzuki

Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words they watched a visual display that presented a video clip of the speaker synchronously speaking the auditorily presented words, or the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials), or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials responses to the tool targets were no faster with the synchronous than asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials responses to the tool targets were significantly faster with the synchronous than asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.
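A minimal sketch of the kind of paired comparison this design implies, assuming per-participant median reaction times for the tool targets; the numbers are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical median RTs (ms) per participant; not the study's data.
rt = {
    "aware":      {"sync":  np.array([612, 598, 640, 575, 630]),
                   "async": np.array([609, 601, 638, 580, 627])},
    "suppressed": {"sync":  np.array([590, 577, 615, 560, 605]),
                   "async": np.array([618, 604, 642, 586, 633])},
}

# Paired t-test within each awareness condition: does synchrony speed RTs?
for condition, data in rt.items():
    t, p = stats.ttest_rel(data["sync"], data["async"])
    print(f"{condition:>10}: t = {t:.2f}, p = {p:.3f}")
```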


2020 ◽  
Vol 31 (01) ◽  
pp. 030-039 ◽  
Author(s):  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Christin Ray

Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual's auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs). Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant's amounts of “visual enhancement” (VE) and “auditory enhancement” (AE) were computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance on VE versus AE was also computed as a VE/AE ratio. The VE/AE ratio was predicted inversely by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio. A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
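The enhancement measures lend themselves to a compact formulation. Below is a minimal sketch assuming the common normalization in which each enhancement is the gain from adding a modality divided by the remaining headroom, matching the abstract's "relative to what could potentially be gained"; the scores are hypothetical proportions correct, not data from the study.

```python
def visual_enhancement(av: float, a_only: float) -> float:
    """VE = (AV - A) / (1 - A): AV gain over audio-only,
    normalized by the room left for improvement."""
    return (av - a_only) / (1.0 - a_only)

def auditory_enhancement(av: float, v_only: float) -> float:
    """AE = (AV - V) / (1 - V): AV gain over visual-only,
    normalized by the room left for improvement."""
    return (av - v_only) / (1.0 - v_only)

# Hypothetical proportion-correct scores, not data from the study.
a, v, av = 0.40, 0.25, 0.70
ve = visual_enhancement(av, a)    # 0.50
ae = auditory_enhancement(av, v)  # 0.60
print(f"VE = {ve:.2f}, AE = {ae:.2f}, VE/AE = {ve / ae:.2f}")
```

A VE/AE ratio above 1 then indicates relatively greater reliance on the visual channel, which is how a ratio measure of this kind separates reliance from overall proficiency.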


2019 ◽  
Vol 32 (2) ◽  
pp. 87-109 ◽  
Author(s):  
Galit Buchs ◽  
Benedetta Heimler ◽  
Amir Amedi

Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although SSDs have proven effective in lab environments, their use has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via SSDs (i.e., shape, color, and conjunction of the two features). Their performance was compared in two separate conditions: a silent baseline, and with irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (i.e., noisy; silent) for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend toward a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be successfully used in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations, with potential impact on the rehabilitation of sensory-deprived individuals.
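For orientation, the sketch below illustrates the general principle of the visual-to-auditory SSD family with a simplified left-to-right column sweep, in which pixel height sets pitch and brightness sets amplitude. This is an assumed toy encoding for illustration, not the EyeMusic's actual algorithm (which reportedly uses musical notes and color-to-instrument mappings).

```python
import numpy as np

SR = 22050       # audio sample rate (Hz)
COL_DUR = 0.05   # seconds of audio per image column

def sonify(image: np.ndarray) -> np.ndarray:
    """Sonify a grayscale image (values in [0, 1], shape rows x cols).

    Columns are scanned left to right; each bright pixel contributes a
    sine tone whose frequency rises with pixel height and whose
    amplitude follows pixel brightness. A simplified sketch of the
    visual-to-auditory SSD principle, not the EyeMusic encoding.
    """
    rows, cols = image.shape
    freqs = np.geomspace(220.0, 1760.0, rows)[::-1]  # top row = highest pitch
    t = np.arange(int(SR * COL_DUR)) / SR
    audio = []
    for c in range(cols):
        col = image[:, c]
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        audio.append((col[:, None] * tones).sum(axis=0))
    out = np.concatenate(audio)
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out

# Example: a diagonal line renders as a monotonic pitch sweep.
img = np.eye(16)
waveform = sonify(img)
```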


Author(s):  
Kostas Giannakis

This chapter investigates the use of visual texture for the visualization of multi-dimensional auditory information. Twenty subjects with a strong musical background performed a series of association tasks between high-level perceptual dimensions of visual texture and steady-state features of auditory timbre. The results indicated strong and intuitive mappings between (a) texture contrast and sharpness, (b) texture coarseness-granularity and compactness, and (c) texture periodicity and sensory dissonance. The findings contribute to setting the necessary groundwork for the application of empirically derived auditory-visual mappings in multimedia environments.
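One of the reported mappings (texture contrast with sharpness) can be illustrated in code: sharpness is commonly approximated by the spectral centroid, and the centroid can be normalized into a [0, 1] contrast parameter. The log-frequency normalization below is an illustrative choice, not the empirically derived mapping from the chapter.

```python
import numpy as np

def spectral_centroid(x: np.ndarray, sr: int) -> float:
    """Spectral centroid (Hz), a common proxy for perceived sharpness."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float((freqs * spectrum).sum() / spectrum.sum())

def centroid_to_contrast(centroid_hz: float,
                         lo: float = 200.0, hi: float = 8000.0) -> float:
    """Map sharpness onto a [0, 1] visual-texture contrast parameter.

    The log-frequency normalization is an illustrative choice, not the
    empirically derived mapping reported in the chapter.
    """
    c = (np.log(centroid_hz) - np.log(lo)) / (np.log(hi) - np.log(lo))
    return float(np.clip(c, 0.0, 1.0))

sr = 22050
t = np.arange(sr) / sr
square = np.sign(np.sin(2 * np.pi * 440 * t))  # rich spectrum: "sharp"
sine = np.sin(2 * np.pi * 440 * t)             # sparse spectrum: "dull"
for name, x in [("square", square), ("sine", sine)]:
    c = spectral_centroid(x, sr)
    print(f"{name:>6}: centroid {c:7.1f} Hz -> "
          f"contrast {centroid_to_contrast(c):.2f}")
```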


i-Perception ◽  
2018 ◽  
Vol 9 (6) ◽  
pp. 204166951881570
Author(s):  
Sachiyo Ueda ◽  
Ayane Mizuguchi ◽  
Reiko Yakushijin ◽  
Akira Ishiguchi

To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is one such statistic: it indexes both the diversity within a category and the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive the variance of visual stimuli; however, to enhance perception of the environment, information about the external world can also be obtained from multiple sensory modalities and integrated. Consequently, this study investigates, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant using an adjustment method. The results showed a linear relationship between size and pitch: the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We found that synchronized presentation of auditory and visual stimuli with the same variance improves the precision of perceived variance in size compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.
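A sketch of the Experiment 1 analysis step, assuming each adjustment trial yields a (pitch, matched size) pair and the fit is an ordinary least-squares line; the data points below are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical adjustment data: pitch (semitones above a reference) vs.
# the circle diameter (deg of visual angle) chosen as the best match.
pitch = np.array([0, 4, 7, 12, 16, 19, 24], dtype=float)
size = np.array([6.1, 5.4, 4.9, 4.0, 3.4, 3.0, 2.2])

# Least-squares line: the abstract reports a linear relationship, with
# higher pitches matched to smaller circles (negative slope).
slope, intercept = np.polyfit(pitch, size, deg=1)
print(f"size ~= {slope:.3f} * pitch + {intercept:.2f}")
```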


2017 ◽  
Vol 29 (2) ◽  
pp. 406-418
Author(s):  
Hiroshi Takahashi

[Figure: Robotic arm operation system]
This paper reports on a study of an intelligent cooperative control system involving human operators. The remote operation of a robotic arm by a human operator is considered as a simplified resilient system. In the experiments, subjects operated a robotic arm to carry out a simple task while observing it through a monitor. The monitor display suddenly disappeared, and the subject continued the task using auditory information alone. By analyzing the relationship between task performance and the type of auditory information through a mathematico-statistical method, it was found that not only auditory information directly related to the position, but also auditory information that allowed the operator to ideate the position of the robotic arm, was effective for task completion.
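The abstract distinguishes sounds directly related to the arm's position from sounds that let the operator ideate it. Below is a minimal sketch of the first kind, mapping a normalized one-dimensional arm coordinate to tone pitch; the frequency range and mapping are assumptions for illustration, not the paper's design.

```python
import numpy as np

SR = 22050  # audio sample rate (Hz)

def position_tone(pos: float, dur: float = 0.1) -> np.ndarray:
    """Map a normalized arm position in [0, 1] to a short sine tone.

    Pitch rises linearly from 200 Hz to 2000 Hz with position; an
    assumed encoding for illustration, not the paper's actual cue.
    """
    freq = 200.0 + 1800.0 * float(np.clip(pos, 0.0, 1.0))
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

# Audio feedback for a trajectory sweeping the arm left to right.
trajectory = np.linspace(0.0, 1.0, 20)
feedback = np.concatenate([position_tone(p) for p in trajectory])
```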


2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.
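The stimulus manipulation reduces to shifting or rescaling a grid of drum onsets against the musical beat. Below is a minimal sketch under assumed parameter values; the beat rate and offsets are illustrative, not those of the study.

```python
import numpy as np

def drum_onsets(beat_period: float, n_beats: int,
                tempo_scale: float = 1.0,
                phase_shift: float = 0.0) -> np.ndarray:
    """Return drum onset times (s) against a regular musical beat.

    tempo_scale != 1 gives the wrong tempo; phase_shift != 0 keeps the
    tempo but lands off the beat. Parameter values are illustrative.
    """
    beats = np.arange(n_beats) * beat_period
    return beats * tempo_scale + phase_shift

period = 0.5  # 120 beats per minute
aligned = drum_onsets(period, 8)
off_phase = drum_onsets(period, 8, phase_shift=0.15)  # constant offset
off_tempo = drum_onsets(period, 8, tempo_scale=1.1)   # drifting offset
print(np.round(off_phase - aligned, 2))  # fixed 0.15 s lag
print(np.round(off_tempo - aligned, 2))  # lag grows with each beat
```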

