Analysis of vertical sound image control with parametric loudspeakers

2017
Author(s): Shigeaki Aoki, Kazuhiro Shimizu, Kouki Itou

2015 · Vol 70 · pp. 1031-1034
Author(s): Kazuhiro Shimizu, Kouki Itou, Shigeaki Aoki

2013
Author(s): Kumi Maeda, Takanori Nishino, Hiroshi Naruse

2017 · Vol 116 · pp. 164-169
Author(s): Shigeaki Aoki, Kazuhiro Shimizu, Kouki Itou

2012 · Vol 131 (4) · pp. 3217-3217
Author(s): Kumi Maeda, Takanori Nishino, Hiroshi Naruse

2013 · Vol 133 (5) · pp. 3363-3363
Author(s): Kumi Maeda, Takanori Nishino, Hiroshi Naruse

2020 · Vol 11 (1) · pp. 110-113
Author(s): Smilena Smilkova

The proposed material examines a creative task given to students majoring in Social Pedagogy at the University „Prof. Dr. Assen Zlatarov“ in Burgas who study the discipline Art Pedagogy – Part 1 – Music. During the lecture course, students become acquainted with the elements of musical expression as a means of figurative representation and of music's impact, with different techniques for the individual musical activities, and with the endless and diverse opportunities that art pedagogy and music offer to social work teachers.

Verbal interpretation of music is a necessary component of work with children with special educational needs, children at risk, and typically developing children. Taking Tchaikovsky's short and highly figurative piano piece „The Sick Doll“ from his charming „Children's Album“, students express their personal vision, feeling, and transformation of the musical image in the form of a short story, tale, or essay. The aim of the task is to transcribe the sound image into a verbal one, which requires speed, flexibility, and logic in thinking, together with imagination and creativity in its expression. Children love to listen, especially when they are involved. In searching for the right way to resolve problems and situations, future social educators could benefit from converting sound into words, tailored to the needs and deficits of the individual or group.


2010
Author(s): Gabriel Pablo Nava, Keiji Hirata, Yoshinari Shirai

2020 · Vol 164 · pp. 10015
Author(s): Irina Gurtueva, Olga Nagoeva, Inna Pshenokova

This paper proposes the concept of a new approach to the development of speech recognition systems using multi-agent neurocognitive modeling. The approach is grounded in cognitive psychology and neuroscience and in advances in computer science. The purpose of this work is to develop general theoretical principles of sound image recognition by an intelligent robot and, as a consequence, a universal automatic speech recognition system that is robust to speech variability, not only with respect to the individual characteristics of the speaker but also with respect to the diversity of accents. Based on the analysis of experimental data from behavioral studies, as well as theoretical models of the mechanisms of speech recognition informed by psycholinguistics, an accent-robust machine learning algorithm that imitates the formation of human phonemic hearing has been developed.
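The abstract does not give implementation details, so the following Python fragment is only a minimal sketch of one plausible reading of the multi-agent idea: several accent-specialised "agents" each propose a phoneme label with a confidence, and the labels are combined by a weighted vote. All names here (AccentAgent, recognize_phoneme) are hypothetical and are not taken from the paper.

from collections import Counter
from typing import Callable, Sequence, Tuple

class AccentAgent:
    """One agent specialised for a single accent; wraps any acoustic classifier."""
    def __init__(self, accent: str,
                 classifier: Callable[[Sequence[float]], Tuple[str, float]]):
        self.accent = accent
        self.classifier = classifier  # maps acoustic features to (phoneme, confidence)

    def propose(self, features: Sequence[float]) -> Tuple[str, float]:
        return self.classifier(features)

def recognize_phoneme(agents: Sequence[AccentAgent],
                      features: Sequence[float]) -> str:
    """Combine the agents' proposals by a confidence-weighted vote."""
    votes: Counter = Counter()
    for agent in agents:
        phoneme, confidence = agent.propose(features)
        votes[phoneme] += confidence
    return votes.most_common(1)[0][0]

In this reading, robustness to accents would come from the diversity of the agent pool rather than from any single model; the paper's actual neurocognitive architecture may differ.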


2000 · Vol 84 (2) · pp. 1107-1111
Author(s): Jörg Lewald, Hans-Otto Karnath

We investigated the effect of vestibular stimulation on the lateralization of dichotic sound by cold-water irrigation of the external auditory canal in human subjects. Subjects adjusted the interaural level difference of the auditory stimulus to the subjective median plane of the head. In those subjects in whom dizziness and nystagmus indicated sufficient vestibular stimulation, these adjustments were significantly shifted toward the cooled ear compared with the control condition (irrigation with water at body temperature); i.e., vestibular stimulation induced a shift of the sound image toward the nonstimulated side. The mean magnitude of the shift was 7.3 dB immediately after vestibular stimulation and decreased to 2.5 dB after 5 min. As shown by an additional control experiment, this effect cannot be attributed to a unilateral hearing loss induced by cooling of the auditory periphery. The results indicate the involvement of vestibular afferent information in the perception of sound location during movements of the head and/or the whole body. We thus hypothesize that vestibular information is used by central-nervous mechanisms generating a world-centered representation of auditory space.
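For readers unfamiliar with the adjustment procedure, the sketch below shows how an interaural level difference (ILD, in dB) might be applied to a tone to produce a dichotic stimulus of the kind adjusted here. It is an illustration under assumed conventions (symmetric split of the ILD across the two ears, positive values favouring the left ear), not the authors' stimulus code.

import numpy as np

def dichotic_tone(freq_hz: float, ild_db: float,
                  duration_s: float = 0.5, fs: int = 44100) -> np.ndarray:
    """Return an (n_samples, 2) stereo tone carrying the requested ILD."""
    t = np.arange(int(duration_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    # Split the level difference symmetrically: +ILD/2 dB left, -ILD/2 dB right.
    left = tone * 10 ** (+ild_db / 40)
    right = tone * 10 ** (-ild_db / 40)
    return np.column_stack([left, right])

# On this convention, re-centering the sound image immediately after
# vestibular stimulation would correspond to an adjustment of roughly 7.3 dB.
stimulus = dichotic_tone(freq_hz=1000.0, ild_db=7.3)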

