The Relative Importance of Aural and Visual Information in the Evaluation of Western Canon Music Performance by Musicians and Nonmusicians

2018 ◽  
Vol 35 (3) ◽  
pp. 364-375 ◽  
Author(s):  
Noola K. Griffiths ◽  
Jonathon L. Reay

Aural and visual information have been shown to affect audience evaluations of music performance (Griffiths, 2010; Juslin, 2000); however, it is not fully understood which modality has the greatest relative impact upon judgements of performance, or whether the evaluator’s musical expertise mediates this effect. An opportunity sample of 34 musicians (8 male, 26 female, Mage = 26.4 years) and 26 nonmusicians (6 male, 20 female, Mage = 44.0 years) rated four video clips for technical proficiency, musicality, and overall performance quality using 7-point Likert scales. Two video performances of Debussy’s Clair de lune (one professional, one amateur) were used to create the four video clips, comprising two clips with congruent modality information and two clips with incongruent modality information. The incongruent clips contained the visual modality of one quality condition with the audio modality of the other. It was therefore possible to determine which modality was most important in participants’ evaluative judgements from the modality of the professional quality condition in the clip that was rated most highly. The current study confirms that both aural and visual information can affect audience members’ experience of musical performance. We provide evidence that visual information has a greater impact than aural information on evaluations of performance quality, as the incongruent clip with amateur audio + professional video was rated significantly higher than that with professional audio + amateur video. Participants’ level of musical expertise had no effect on their judgements of performance quality.

2018 ◽  
Author(s):  
Samuel A Mehr ◽  
Daniel Scannell ◽  
Ellen Winner

Virtuosi impress audiences with their musical expressivity and with their theatrical flair. How do listeners use this auditory and visual information to judge performance quality? Both musicians and laypeople report a belief that sound should trump sight in the judgment of music performance, but surprisingly, their actual judgments reflect the opposite pattern. In a recent study, when presented with 6-second videos of music competition performers, listeners accurately guessed the winners only when the videos were muted. Here, we successfully replicate this finding in a highly powered sample but then demonstrate that the sight-over-sound effect holds only under limited conditions. When using different videos from comparable performances in a forced-choice task, listeners' judgments were at or below chance. And when differences in performance quality were made clearer, listeners' judgments were most accurate when they could hear the music; without audio, performance was at chance. Sight therefore does not necessarily trump sound in the judgment of music performance.


2021 ◽  
pp. 030573562110011
Author(s):  
Olivia Urbaniak ◽  
Helen F Mitchell

Audiences expect music performers to follow tacit dress codes for the concert stage. In classical music performance, audiences favor performers in formal dress over casual dress, but it is unclear what constitutes appropriate formal attire. A perceptual study was designed to test for different interpretations of suitable concert dress. Four female pianists in three contrasting black outfits (long dress, short dress, and suit) were video-recorded performing three musical pieces, and the audio was dubbed throughout for audio consistency. Thirty listener/viewers rated the clips on musicality, technical proficiency, overall performance, and appropriateness of dress. Performances in the long dress were rated significantly higher than those in the short dress or suit. The short dress was consistently rated lowest, whereas the suit received more complex responses. Follow-up interviews confirmed listener/viewers’ unconscious bias against untraditional formal attire and their tendency to objectify the performers. Once unblinded to the purpose of the task, they were able to reflect on the tangible implications of concert dress, stage manner, and physical appearance for their evaluations. Future studies should harness the potential for experiential learning, or “learning by doing,” to expand future music professionals’ critical evaluation skills.


2017 ◽  
Vol 46 (1) ◽  
pp. 66-83 ◽  
Author(s):  
Dianna T. Kenny ◽  
Naomi Halls

This study presents the development, administration and evaluation of two brief group interventions for music performance anxiety (MPA) aimed at reducing anxiety and improving performance quality. A cognitive behavioural therapy (CBT) intervention was developed based on an existing empirically supported treatment, Chilled (Rapee et al., 2006), focusing on cognitive, physiological and behavioural symptoms. The second treatment, anxiety sensitivity reduction, primarily targeted physiological symptoms and included relaxation strategies. Interventions were administered in a one-day workshop format with four intervention sessions, preceded by a pedagogic practice skills session that functioned as a control/placebo intervention. A quasi-experimental group randomization design compared the interventions in a heterogeneous sample of community musicians. Sixty-eight participants completed measures of trait anxiety, anxiety sensitivity, depression, and MPA. Participants performed four times (pre- and post-placebo, post-treatment and at follow-up) and were assessed for state anxiety and performance quality at each performance. Results indicated that both interventions offered moderate gains for the musicians: anxiety was reduced and performance quality improved after each intervention, and changes were maintained at follow-up. Anxiety sensitivity reduction showed a trend to exceed the CBT-based intervention, but a larger, higher-powered study is needed to confirm this advantage.


2021 ◽  
Vol 11 (4) ◽  
pp. 3023-3029
Author(s):  
Muhammad Junaid ◽  
Luqman Shah ◽  
Ali Imran Jehangiri ◽  
Fahad Ali Khan ◽  
Yousaf Saeed ◽  
...  

With each passing day, the resolutions of still-image and video cameras are on the rise. This improvement in resolution makes it possible to extract useful information about the scene facing the photographed subjects from their reflective surfaces. Especially important is the idea of recovering images formed on the eyes of photographed people and animals. The motivation behind this research is to explore the forensic value of images and videos by analyzing reflections of the scene behind the camera. This analysis may include extraction, detection, and recognition of objects that are in front of the subjects but behind the camera. In the national context such videos and photographs are not rare; specifically, an abductee’s video footage at good resolution may give important clues to the identity of the kidnapper. Our aim is to extract visual information formed in human eyes from still images as well as from video clips. After extraction, our next task is to recognize the extracted visual information. Initially our experiments are limited to character extraction and recognition, covering computerized characters of different styles and font sizes as well as handwritten ones. Although a variety of Optical Character Recognition (OCR) tools are available for character extraction and recognition, they only provide reliable results for clear (zoomed) images.
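Since off-the-shelf OCR tools only handle clear (zoomed) images, the extraction stage described above must crop and magnify the eye region before any recognition is attempted. Below is a minimal sketch of that preprocessing step in plain Python, assuming the eye region has already been located; the pixel values, region coordinates, and scale factor are illustrative, not taken from the study:

```python
# Sketch: crop a presumed eye region from a grayscale image (a list of
# rows of pixel intensities) and enlarge it by nearest-neighbor
# interpolation, so a downstream OCR tool receives a "zoomed" input.

def crop(image, top, left, height, width):
    """Extract a rectangular region of interest from a 2-D pixel grid."""
    return [row[left:left + width] for row in image[top:top + height]]

def upscale_nearest(image, factor):
    """Enlarge a region by an integer factor using nearest-neighbor:
    each pixel is repeated `factor` times horizontally and vertically."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))  # copy each row
    return out

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120]]

eye = crop(img, 0, 1, 2, 2)        # -> [[20, 30], [60, 70]]
zoomed = upscale_nearest(eye, 3)   # -> a 6x6 grid
```

In a real pipeline, the nearest-neighbor step would be replaced by a proper super-resolution or bicubic upscaler and the result handed to an OCR engine; the sketch only illustrates why the crop-and-zoom stage precedes recognition.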


Author(s):  
Jingkuan Song ◽  
Lianli Gao ◽  
Zhao Guo ◽  
Wu Liu ◽  
Dongxiang Zhang ◽  
...  

Recent progress has been made in using attention-based encoder-decoder frameworks for video captioning. However, most existing decoders apply the attention mechanism to every generated word, including both visual words (e.g., “gun” and “shooting”) and non-visual words (e.g., “the”, “a”). These non-visual words can easily be predicted by a natural language model without considering visual signals or attention, and imposing an attention mechanism on them can mislead the decoder and decrease the overall performance of video captioning. To address this issue, we propose a hierarchical LSTM with adjusted temporal attention (hLSTMat) approach for video captioning. Specifically, the proposed framework utilizes temporal attention to select specific frames for predicting the related words, while the adjusted temporal attention decides whether to depend on the visual information or on the language context information. In addition, hierarchical LSTMs are designed to simultaneously consider both low-level visual information and deep semantic information to support caption generation. To demonstrate the effectiveness of the proposed framework, we test our method on two prevalent datasets, MSVD and MSR-VTT; experimental results show that our approach outperforms state-of-the-art methods on both datasets.
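The gating idea in the abstract above can be made concrete with a small numerical sketch in plain Python; the function names, the two-frame/two-dimensional setup, and the scalar gate are illustrative simplifications, not the authors' implementation. Temporal attention is a softmax over per-frame relevance scores, and the "adjusted" part is a sigmoid gate that blends the attended visual context with the language context, letting non-visual words lean on the language model:

```python
import math

def softmax(scores):
    """Temporal attention: normalize per-frame relevance scores
    into weights that sum to 1 (max-subtraction for stability)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(frame_features, scores):
    """Weighted sum of frame feature vectors: the visual context."""
    weights = softmax(scores)
    dim = len(frame_features[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_features))
            for d in range(dim)]

def adjusted_context(visual_ctx, language_ctx, gate_logit):
    """Adjusted temporal attention: a sigmoid gate beta blends the
    visual context with the language (hidden-state) context, so a
    non-visual word (beta near 0) relies on the language model while
    a visual word (beta near 1) relies on the attended frames."""
    beta = 1.0 / (1.0 + math.exp(-gate_logit))
    return [beta * v + (1.0 - beta) * h
            for v, h in zip(visual_ctx, language_ctx)]

# Two toy frames with 2-D features; frame 0 scores higher, so the
# visual context is pulled toward its feature vector.
frames = [[1.0, 0.0], [0.0, 1.0]]
visual = attend(frames, [2.0, 0.0])
word_ctx = adjusted_context(visual, [0.5, 0.5], gate_logit=-100.0)
```

In the full model, the gate logit and attention scores would be produced by the hierarchical LSTMs from the current hidden state rather than supplied by hand; the sketch only shows how the gate routes between the two information sources.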


2020 ◽  
Author(s):  
Karola Schlegelmilch ◽  
Annie E. Wertz

Visual processing of a natural environment occurs quickly and effortlessly. Yet, little is known about how young children visually categorize naturalistic structures, since their perceptual abilities are still developing. We addressed this question by asking 76 children (age: 4.1-6.1 years) and 72 adults (age: 18-50 years) first to sort cards with greyscale images depicting vegetation, manmade artifacts, and non-living natural elements (e.g., stones) into groups according to visual similarity. Then, they were asked to choose the images' superordinate categories. We analyzed the relevance of different visual properties to the decisions of the participant groups. Children were well able to interpret complex visual structures. However, children relied on fewer visual properties and, in general, were less likely than adults to base their categorization decisions on properties that afforded the analysis of detailed visual information, suggesting that immaturities of the still-developing visual system affected categorization. Moreover, when sorting according to visual similarity, both groups attended to the images' assumed superordinate categories, in particular to vegetation, in addition to visual properties. Children had a higher relative sensitivity to vegetation than adults did in the classification task when controlling for overall performance differences. Taken together, these findings add to the sparse literature on the role of developing perceptual abilities in processing naturalistic visual input.


2020 ◽  
Vol 10 (8) ◽  
pp. 2794 ◽  
Author(s):  
Uduak Edet ◽  
Daniel Mann

A study to determine the visual requirements for a remote supervisor of an autonomous sprayer was conducted. Observation of a sprayer operator identified 9 distinct “look zones” that occupied his visual attention, with 39% of his time spent viewing the look zone ahead of the sprayer. While observation of the sprayer operator was being completed, additional GoPro cameras were used to record video of the sprayer in operation from 10 distinct perspectives (some look zones were visible from the operator’s seat, but other look zones were selected to display other regions of the sprayer that might be of interest to a sprayer operator). In a subsequent laboratory study, 29 experienced sprayer operators were recruited to view and comment on video clips selected from the video footage collected during the initial ride-along. Only the two views from the perspective of the operator’s seat were rated highly as providing important information even though participants were able to identify relevant information from all ten of the video clips. Generally, participants used the video clips to obtain information about the boom status, the location and movement of the sprayer within the field, the weather conditions (especially the wind), obstacles to be avoided, crop conditions, and field conditions. Sprayer operators with more than 15 years of experience provided more insightful descriptions of the video clips than their less experienced peers. Designers can influence which features the user will perceive by positioning the camera such that those specific features are prominent in the camera’s field of view. Overall, experienced sprayer operators preferred the concept of presenting visual information on an automation interface using live video rather than presenting that same information using some type of graphical display using icons or symbols.


2021 ◽  
pp. 030573562110420
Author(s):  
Xin Zhou ◽  
Ying Wu ◽  
Yingcan Zheng ◽  
Zilun Xiao ◽  
Maoping Zheng

Previous studies on multisensory integration (MSI) of musical emotions have yielded inconsistent results. The distinct features of the music materials and the differing musical expertise of participants may account for this. This study aims to explore the neural mechanism for the audio-visual integration of musical emotions, and to infer the reasons for the inconsistent results of previous studies, by investigating how the type of musical emotion and musical training experience influence that mechanism. This fMRI study used a block-design experiment. Music excerpts were selected to express fear, happiness, and sadness, presented under audio-only (AO) and audio-visual (AV) modality conditions. Participants were divided into two groups: one comprising musicians who had been musically trained for many years, the other non-musicians with no musical expertise. They assessed the type and intensity of the musical emotion after listening to or watching the excerpts. Brain regions related to MSI of emotional information and the default mode network (DMN) were sensitive to changes in sensory modality condition and emotion type. In the AV assessment stage, the non-musician group showed greater activation across more brain regions, with a bilateral distribution; by contrast, the musician group showed activation in fewer regions, with a lateralized right-hemispheric distribution.


2018 ◽  
Vol 48 (4) ◽  
pp. 480-494
Author(s):  
Mădălina Dana Rucsanda ◽  
Ana-Maria Cazan ◽  
Camelia Truța

Emotion is a condition that can either facilitate or inhibit music performance. Our research aimed to explore the emotions of young musicians performing in music competitions. We sought to highlight possible differences in emotions between young singers who obtained prizes in musical competitions and those who did not. Another aim of the study was to explore the relationship between pre-competition emotions and music performance, focusing on the mediating role of singing experience. The sample consisted of 146 participants in international music competitions for young musicians. A nonverbal pictorial assessment technique measuring the valence, arousal and dominance dimensions of emotions was administered just before and immediately after each participant’s performance in the competition. Our study revealed that negative emotions were associated with lower performance quality, while positive emotions, low arousal and increased dominance were associated with higher performance quality. Experienced young singers reported more positive emotions, lower arousal and higher dominance. Our results also revealed that experience in music competitions could mediate the associations between emotions and music performance in competition. These results support the inclusion of psychological/emotional training in the music education of young singers.

