Voice in the Metadiegetic Space of the Motion Picture

2017 ◽  
Vol 9 (3) ◽  
pp. 46-59
Author(s):  
E A Rusinova

This extension of the author's publication cycle "Audiovisual Means of Creating Metadiegetic Space in Cinema" (Vestnik VGIK #1 (31), 2017; #2 (32), 2017) is a historical, artistic, and technological survey of special sound-design techniques that make it possible to use the expressive potential of the human voice in the subjective (metadiegetic) space of a motion picture and, through the voice, to separate the metadiegesis from the sound realism of the diegesis of an audiovisual production.

2017 ◽  
Vol 9 (2) ◽  
pp. 80-87
Author(s):  
Elena A Rusinova

This extension of the author's previous article "Audiovisual Means of Creating Metadiegetic Space in Cinema" (see Vestnik VGIK #1 (31), 2017) is a historical survey of the sound-design techniques which make it possible to use musical expressive means for designating the film's subjective space (metadiegesis) and separating the metadiegesis from the diegesis by means of music.


2018 ◽  
Vol 57 (6) ◽  
pp. 1534-1548 ◽  
Author(s):  
Scotty D. Craig ◽  
Noah L. Schroeder

Technology advances quickly in today’s society. This is particularly true in regard to instructional multimedia. One increasingly important aspect of instructional multimedia design is determining the type of voice that will provide the narration; however, research in the area is dated and limited in scope. Using a randomized pretest–posttest design, we examined the efficacy of learning from an instructional animation where narration was provided by an older text-to-speech engine, a modern text-to-speech engine, or a recorded human voice. In most respects, those who learned from the modern text-to-speech engine were not statistically different in regard to their perceptions, learning outcomes, or cognitive efficiency measures compared with those who learned from the recorded human voice. Our results imply that software technologies may have reached a point where they can credibly and effectively deliver the narration for multimedia learning environments.


The aim of this project is to develop a wheelchair that can be controlled by the user's voice. The system is based on a speech-recognition model, with a smartphone serving as the interface between the user and the machine. Such a framework is particularly valuable to people with disabilities and to the elderly, allowing them to move freely without the assistance of others and giving them the confidence to live independently. The hardware consists of an Arduino kit, a microcontroller, a wheelchair, and DC motors; the DC motors drive the wheelchair, while an ultrasonic sensor detects obstacles in the wheelchair's path.
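Although the abstract gives no implementation details, the control flow can be pictured as a small loop that maps each recognized phrase to motor directions and vetoes forward motion when the ultrasonic sensor reports a nearby obstacle. The Python sketch below is purely illustrative: the command vocabulary, the 30 cm threshold, and the I/O stubs are assumptions, not details of the described project.

```python
# Illustrative sketch only: maps recognized voice commands to wheelchair motor
# actions and blocks forward motion when the ultrasonic sensor reports an
# obstacle. Command names, the 30 cm threshold, and the I/O stubs are
# assumptions, not details of the project described above.

OBSTACLE_THRESHOLD_CM = 30  # stop if an obstacle is closer than this

# Hypothetical mapping from spoken commands to (left, right) motor directions:
# +1 = forward, -1 = reverse, 0 = stop for each DC motor.
COMMANDS = {
    "forward": (1, 1),
    "back": (-1, -1),
    "left": (-1, 1),   # spin left by counter-rotating the wheels
    "right": (1, -1),
    "stop": (0, 0),
}

def read_distance_cm() -> float:
    """Stub for the ultrasonic sensor reading (would query the Arduino)."""
    return 100.0

def drive(left: int, right: int) -> None:
    """Stub for sending motor directions to the Arduino over serial/Bluetooth."""
    print(f"motors: left={left} right={right}")

def handle_command(spoken_text: str) -> None:
    """Turn one recognized phrase from the smartphone into a motor action."""
    action = COMMANDS.get(spoken_text.strip().lower())
    if action is None:
        return  # ignore unrecognized phrases
    if action == COMMANDS["forward"] and read_distance_cm() < OBSTACLE_THRESHOLD_CM:
        action = COMMANDS["stop"]  # obstacle ahead: refuse to move forward
    drive(*action)

if __name__ == "__main__":
    for phrase in ["forward", "left", "stop"]:
        handle_command(phrase)
```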


2021 ◽  
pp. 194084472110428
Author(s):  
Grace O'Grady

One year after beginning a large-scale research inquiry into how young people construct their identities, I became ill and subsequently underwent abdominal surgery, which triggered an early menopause. The process, which was experienced as creatively bruising, called to be written as "Artful Autoethnography," using visual images and poetry to tell a "vulnerable, evocative and therapeutic" story of illness, menopause, and their subject positions in intersecting relations of power. The process, which was experienced as disempowering, called to be performed as an act of resistance and activism. This performance ethnography is in line with the call for qualitative inquirers to move beyond strict methodological boundaries. In particular, the voice of activism in this performance is in the space between data (human voice and visual art pieces) and theory. To this end, and in resisting stratifying institutional/medical discourse, the performance attempts to create a space for a merger of ethnography and activism in public/private life.


2020 ◽  
Vol 117 (21) ◽  
pp. 11364-11367 ◽  
Author(s):  
Wim Pouw ◽  
Alexandra Paxton ◽  
Steven J. Harrison ◽  
James A. Dixon

We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear and not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory–vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.


2019 ◽  
Vol 37 (2) ◽  
pp. 134-146
Author(s):  
Weixia Zhang ◽  
Fang Liu ◽  
Linshu Zhou ◽  
Wanqi Wang ◽  
Hanyuan Jiang ◽  
...  

Timbre is an important factor that affects the perception of emotion in music. To date, little is known about the effects of timbre on neural responses to musical emotion. To address this issue, we used ERPs to investigate whether there are different neural responses to musical emotion when the same melodies are presented in different timbres. With a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent melodies without lyrics presented in the violin, flute, and voice. Results showed a larger P3 and a larger left anterior distributed LPC in response to affectively incongruent versus congruent trials in the voice version. For the flute version, however, only the LPC effect was found, which was distributed over centro-parietal electrodes. Unlike the voice and flute versions, an N400 effect was observed in the violin version. These findings revealed different patterns of neural responses to musical emotion when the same melodies were presented in different timbres, and provide evidence for the hypothesis that there are specialized neural responses to the human voice.


1973 ◽  
Vol 56 (4) ◽  
pp. 944-946
Author(s):  
Ernest W Nash

Abstract The human voice, as an instrument of crime, is used more often than the weapon and the automobile combined. Some crimes are committed by the voice alone; therefore, the ability to identify a speaker by his voice is a very desirable goal in the fight against crime. However, this desire has been somewhat hindered by the lack of technology and instrumentation. The use of spectrograms (voiceprints) to assist the expert in making an objective evaluation of the voices in question is discussed. The scientific reason for accepting the identification of a speaker's voice is the uniqueness of the individual: if a unique person uses unique physiological structures to produce the sounds of speech, it logically follows that the resulting sound will also be unique. By visual examination of the spectrograms, a trained expert is able to compare the unique features of the voices in question.


2018 ◽  
Vol 42 (1) ◽  
pp. 37-59 ◽  
Author(s):  
Stefano Fasciani ◽  
Lonce Wyse

In this article we describe a user-driven adaptive method to control the sonic response of digital musical instruments using information extracted from the timbre of the human voice. The mapping between heterogeneous attributes of the input and output timbres is determined from data collected through machine-listening techniques and then processed by unsupervised machine-learning algorithms. This approach is based on a minimum-loss mapping that hides any synthesizer-specific parameters and that maps the vocal interaction directly to perceptual characteristics of the generated sound. The mapping adapts to the dynamics detected in the voice and maximizes the timbral space covered by the sound synthesizer. The strategies for mapping vocal control to perceptual timbral features and for automating the customization of vocal interfaces for different users and synthesizers, in general, are evaluated through a variety of qualitative and quantitative methods.
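As a rough illustration of this kind of pipeline (and not the authors' actual system), a vocal control mapping can be sketched as: extract frame-wise timbre features from a voice recording, learn a low-dimensional space from them without supervision, and rescale that space so the observed vocal dynamics span the full range of a few synthesizer controls. The Python sketch below uses librosa and scikit-learn; the feature choice (MFCCs), PCA as the unsupervised stage, and the [0, 1] control range are assumptions made for illustration.

```python
# Toy sketch of the general idea (not the authors' method): extract timbre
# features from a voice recording, learn a low-dimensional space without
# supervision, and rescale it so the voice's observed range spans the synth's
# control range. Library usage and the number of controls are illustrative.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def learn_voice_to_synth_mapping(voice_file: str, n_controls: int = 2):
    """Fit an unsupervised mapping from vocal timbre frames to n synth controls."""
    y, sr = librosa.load(voice_file, sr=None)
    # Machine-listening step: MFCCs as a rough per-frame timbre description.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # shape: (frames, 13)
    reducer = PCA(n_components=n_controls).fit(mfcc)
    # Normalize so the observed vocal dynamics cover the full [0, 1] control range.
    scaler = MinMaxScaler().fit(reducer.transform(mfcc))

    def map_frame(frame_mfcc: np.ndarray) -> np.ndarray:
        """Map one 13-dim MFCC frame to synth control values in [0, 1]."""
        reduced = reducer.transform(frame_mfcc.reshape(1, -1))
        return np.clip(scaler.transform(reduced), 0.0, 1.0)[0]

    return map_frame
```

The returned function hides any synthesizer-specific parameters behind a small set of normalized controls, which is the spirit of the mapping strategy the abstract describes, though the real system is considerably more sophisticated.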


2019 ◽  
Vol 34 (1) ◽  
pp. 28-47 ◽  
Author(s):  
Emna Chérif ◽  
Jean-François Lemoine

Virtual assistants are increasingly common on commercial websites. In view of the benefits they offer to businesses for improving navigation and interaction with consumers, researchers and practitioners agree on the value of providing them with anthropomorphic characteristics. This study focuses on the effect of the virtual assistant's voice. Although there are some human-computer interaction studies in this field, no work addresses the topic from a marketing perspective by comparing the effect of a human voice with that of a synthetic voice. Our findings show that consumers who interact with a virtual assistant with a human voice have a stronger impression of social presence than those interacting with a virtual assistant with a synthetic voice. The human voice also builds trust in the virtual assistant and generates stronger behavioural intentions.

