Advances in Multimedia and Interactive Technologies - Digital Tools for Computer Music Production and Distribution

Published by IGI Global
ISBN: 9781522502647, 9781522502654
Total documents: 10 (five years: 0)
H-index: 2 (five years: 0)

Author(s): Dimitrios Margounakis, Ioanna Lappa

The video game industry has grown rapidly during the last decade, and "gaming" has emerged as a stand-alone interdisciplinary field of study. As a result, music in video games, and its production, has become a state-of-the-art research area in computer science. Since game production has reached a very high level of complexity and cost (producing a 3D multiplayer game can cost millions of dollars), the role of the sound engineer, composer, and programmer is crucial. This chapter describes the types of sound found in today's games and the various issues that arise during musical composition. It also analyzes existing systems and techniques for algorithmic music composition.
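Algorithmic composition, which the chapter surveys, is easy to ground with a small example. The sketch below uses a first-order Markov chain over note names, one classic technique in the field; the note set and transition table are invented here for illustration and are not taken from any system the chapter analyzes.

```python
import random

# First-order Markov chain over note names: each note's possible
# successors are listed in an (illustrative, hand-made) table.
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "F"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def generate_melody(start="C", length=8, seed=None):
    """Walk the transition table to produce a note sequence."""
    rng = random.Random(seed)  # seeded for reproducible melodies
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(generate_melody(seed=42))
```

Richer systems condition on longer histories (higher-order chains) or learn the transition table from a corpus, but the control flow is the same.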


Author(s): Panteleimon Chriskos, Orfeas Tsartsianidis

The human senses enable us to perceive and interact with our environment through a set of sensory systems or organs, each mainly dedicated to one sense. Of the five main human senses, hearing plays a critical role in many aspects of our lives. Hearing allows perception not only of the immediate visible environment but also of parts of the environment that are obstructed from view or at a significant distance from the individual. One of the most important and sometimes overlooked aspects of hearing is communication, since most human communication is accomplished through speech and hearing. Hearing conveys not only speech but also more complex messages in the form of music, singing, and storytelling.


Author(s): George Tzanetakis

The playing of a musical instrument is one of the most skilled and complex interactions between a human and an artifact. Professional musicians spend a significant part of their lives first learning their instruments and then perfecting their skills. The production, distribution, and consumption of music have been profoundly transformed by digital technology. Today, music is recorded and mixed using computers, distributed through online stores and streaming services, and heard on smartphones and portable music players. Computers have also been used to synthesize new sounds, generate music, and even create sound acoustically in the field of music robotics. Despite all these advances, the way musicians interact with computers has remained relatively unchanged over the last 20-30 years. Most interaction with computers in the context of music making still occurs either through the familiar mouse/keyboard/screen setup or through special digital musical instruments and controllers such as keyboards, synthesizers, and drum machines. The string, woodwind, and brass families of instruments do not have widely available digital counterparts, and in the few cases where they do, the digital version is nowhere near as expressive as the acoustic one. It is possible to retrofit and augment existing acoustic instruments with digital sensors to create what are termed hyper-instruments. Hyper-instruments let musicians interact naturally with their instrument as they are accustomed to, while at the same time transmitting information about what they are playing to computing systems. This approach requires significant alterations to the acoustic instrument, which many musicians are hesitant to make. In addition, hyper-instruments are typically one-of-a-kind research prototypes, making their wider adoption practically impossible.
In the past few years, researchers have started exploring non-invasive and minimally invasive sensing technologies that address these two limitations by allowing unmodified acoustic instruments to be used directly as digital controllers. This enables natural human-computer interaction with all the rich and delicate control of acoustic instruments, while retaining the wide array of possibilities that digital technology can provide. This chapter provides an overview of these efforts, followed by more detailed case studies from research conducted by the author's group. Such natural interaction blurs the boundary between the virtual and physical worlds, something that will increasingly happen in other aspects of human-computer interaction beyond music. It also opens up new possibilities for computer-assisted music tutoring, cyber-physical ensembles, and assistive music technologies.
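To make the idea of using an unmodified acoustic instrument as a digital controller concrete: a core signal-processing step in many such systems is estimating the pitch being played from the instrument's audio. The sketch below is a minimal, generic autocorrelation pitch estimator, included for illustration only; it is not the sensing method of any particular system described here.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono signal by
    autocorrelation: the lag with the strongest self-similarity
    corresponds to one period of the waveform."""
    n = len(samples)
    best_lag, best_corr = 0, 0.0
    lo = int(sample_rate / fmax)           # shortest period considered
    hi = min(int(sample_rate / fmin), n - 1)  # longest period considered
    for lag in range(lo, hi + 1):
        corr = sum(samples[i] * samples[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Demo on a synthetic 220 Hz sine tone sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(2048)]
pitch = estimate_pitch(tone, sr)  # should land near 220 Hz
```

Real systems refine this with windowing, interpolation between lags, and onset detection, then forward the detected pitch and dynamics to the computer as control data.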


Author(s): Dionysios Politis, Miltiadis Tsalighopoulos

Speech science is a key contributor to music technology, since vocalization plays a predominant role in today's musicality. Physiology, anatomy, psychology, linguistics, physics, and computer science provide tools and methodologies to decipher how motor control can sustain such a wide spectrum of phonological activity. Aural communication, in turn, provides a steady mechanism that not only processes musical signals but also supplies the acoustic feedback that coordinates the complex activity of tuned articulation; it also couples music perception with neurophysiology and psychology, providing, beyond language-related understanding, a better music experience.


Author(s): Pedro Pina

Cloud computing offers internet users the fulfillment of the dream of a Celestial Jukebox, providing music, films, or digital books anywhere and whenever they want. However, some activities carried out in the cloud, especially file-sharing, may infringe copyright law's exclusive rights, such as the right of reproduction or the making-available right. This chapter briefly examines how digital technologies such as P2P systems and cloud computing enable new distribution models and how they allow unauthorized uses of copyright-protected works, and points out solutions to reconcile the interests of rightholders and consumers so that the benefits of digital technology can be enjoyed by all stakeholders in a legal and balanced way.


Author(s): Chrysi Chrysochou, Ioannis Iglezakis

This chapter describes the conflict between employers' legitimate rights and employees' rights to privacy and data protection resulting from the shift in workplace surveillance from non-digital methods to technologically advanced ones. Section 1 describes the transition from non-digital workplace surveillance to an Internet-centred one in which "smart" devices dominate. Section 2 focuses on the legal framework (supranational and national legislation and case law) of workplace surveillance. Section 3 presents a case study on wearable technology and the law, showing that national and European legislation are not adequate to deal with all the issues and ambiguities arising from the use of novel surveillance technology at work. The chapter concludes that sector-specific legislation for employees' protection is necessary, but would be incomplete without a general framework adopting modern instruments of data protection.


Author(s): Eirini Markaki, Ilias Kokkalidis

While many scientific fields rely loosely on coarse depictions of findings and clues, other disciplines demand exact appreciation, consideration, and acknowledgement for an accurate diagnosis of scientific data. But what happens when the examined data have a depth of focus and a degree of complexity beyond the scope of our analysis? Such is the case in the performing arts, where humans demonstrate a surplus of creative potential, intermingled with computer-supported technologies that provide the substrate for advanced audiovisual effects programming. However, human metrics diverge from computer measurements, so a space of convergence must be established, one matching the expressive capacity of musical inventiveness in rhythm, spatial movement, and dance, and the advanced expression of emotion through the harmony and beauty of the accompanying audiovisual form. This chapter demonstrates the new era of audiovisual effects programming, one that leverages massive participation and emotional reaction.


Author(s): Dimitrios Margounakis, Dionysios Politis, Konstantinos Mokos

The evolution of music through the centuries shows composers and performers making increasing use of chromatic variations to enrich melodies and musical sounds. This chapter presents an integrated model that contributes to the calculation of musical chromaticism. The model takes into account both horizontal (melodic) and vertical (harmonic) chromaticism. The proposed qualitative and quantitative measures address music attributes related to the audience's chromatic perception, namely: the musical scale, melodic progress, chromatic intervals, rapidity of melody, direction of melody, loudness, and harmonic relations. This theoretical framework can lead to semantic music visualizations that reveal parts of the music with emotional tension.
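As a toy illustration of horizontal (melodic) chromaticism only, one simple measure scores a melody by how many of its notes fall outside a reference scale. This is an invented simplification for orientation, not the authors' integrated model, which also weighs rapidity, direction, loudness, and harmonic relations.

```python
# Pitch classes of C major (C=0, D=2, E=4, F=5, G=7, A=9, B=11).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def chromaticism_score(melody, scale=C_MAJOR):
    """Fraction of notes (MIDI numbers) whose pitch class lies
    outside the given scale: 0.0 is fully diatonic, 1.0 fully outside."""
    outside = sum(1 for note in melody if note % 12 not in scale)
    return outside / len(melody)

diatonic = [60, 62, 64, 65, 67]   # C D E F G: entirely in-scale
chromatic = [60, 61, 62, 63, 64]  # includes C# (61) and D# (63)
print(chromaticism_score(diatonic))   # fully diatonic melody
print(chromaticism_score(chromatic))  # two of five notes are chromatic
```

A fuller model would weight each out-of-scale note by context (interval size, tempo, loudness) and combine the result with a harmonic measure, as the chapter's framework does.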


Author(s): Georgios Kyriafinis, Panteleimon Chriskos

The typical cochlear implant user undergoes post-surgical treatment that calibrates and adapts, via mapping functions, the acoustic characteristics of the recipient's hearing. As cochlear implant users grow in number and disperse over ever-wider geographic areas, the ability of doctors and audiologists to remotely program their patients' implants becomes a first priority, accommodating users' planned professional and personal activities. As a result, adjustments for activities that need special care, such as sport, swimming, or recreation, can be made remotely, sparing the recipient a trip to the nearest specialized programming center. However, is remote programming safeguarded against hazards?


Author(s): Marios Stavrakas, Georgios Kyriafinis, Miltiadis Tsalighopoulos

Hearing disorders are quite common these days, not only due to congenital causes and environmental factors but also due to the increased rate of diagnosis. Hearing loss is one of the most common reasons to visit an ENT department, both in the clinic and in the acute setting. Approximately 15 percent of American adults (37.5 million) aged 18 and over report some trouble hearing. One in eight people in the United States (13 percent, or 30 million) aged 12 years or older has hearing loss in both ears, based on standard hearing examinations. About 2 percent of adults aged 45 to 54 have disabling hearing loss. The rate increases to 8.5 percent for adults aged 55 to 64. Nearly 25 percent of those aged 65 to 74, and 50 percent of those 75 and older, have disabling hearing loss. These figures illustrate the impact on patients' quality of life and the necessity of early and accurate diagnosis and treatment. It is important to mention that congenital hearing loss and deafness also require early diagnosis and hearing aids so that normal speech can develop. Profound, early-onset deafness is present in 4–11 per 10,000 children and is attributable to genetic causes in at least 50 percent of cases.

