Granular Synthesis: Recently Published Documents

Total documents: 40 (last five years: 3)
H-index: 6 (last five years: 0)

2021 · Vol 9 (2) · pp. 1-27
Author(s): Fernando Souza, Adolfo Maia Jr.

We present a method for granular synthesis composition based on mathematical modeling of musical gesture. Each gesture is drawn as a curve generated from a particular mathematical model (or function) and coded as a MATLAB script. Gestures can be deterministic, defined by mathematical functions of time, drawn freehand, or generated randomly. The parametric information of each gesture is sent as OSC messages to a granular synthesizer (Granular Streamer), which interprets them. The musical composition is then realized with the models (scripts) written in MATLAB and exported to a graphical score (Granular Score). The method also lends itself to statistical analysis of the granular sound streams and of the final composition. Finally, we offer a way to create granular streams based on correlated pairs of grain parameters.
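As a rough illustration of the kind of gesture-to-grain mapping described above (a sketch only, not the authors' MATLAB or Granular Streamer code; the curve, parameter names, and ranges are hypothetical), a deterministic gesture curve can be sampled and converted into per-grain parameters that a granular synthesizer could receive, for example over OSC:

```python
# A sketch only, not the authors' MATLAB/Granular Streamer code: sample a
# deterministic gesture curve over time and map it to per-grain parameters.
import numpy as np

def gesture_curve(t):
    """Hypothetical gesture model: a slow sinusoidal sweep with decay."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * 0.1 * t) * np.exp(-0.05 * t)

duration = 10.0                                # length of the gesture (s)
grain_rate = 20.0                              # grains per second
onsets = np.arange(0.0, duration, 1.0 / grain_rate)

grains = []
for t in onsets:
    g = gesture_curve(t)                       # curve value, roughly 0.1-0.9
    grains.append({
        "onset": round(float(t), 3),           # grain start time (s)
        "pitch": 200.0 + 800.0 * g,            # curve mapped to frequency (Hz)
        "duration": 0.02 + 0.08 * (1.0 - g),   # shorter grains as the curve rises
        "amplitude": 0.2 + 0.6 * g,
    })

# In the setup described above, these parameters would be packed into OSC
# messages addressed to the granular synthesizer; here we just inspect a few.
for grain in grains[:3]:
    print(grain)
```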


2021 · pp. 414-448
Author(s): Victor Lazzarini

The principles of sound design within a computational context are demonstrated through a series of examples and techniques. These include additive synthesis, which is the focus of the earlier part of the chapter, and is followed by source-modifier methods, which are complementary to it. The more advanced approaches of granular synthesis and streaming spectral processing complement the discussion, which is fully illustrated with code examples and spectrogram figures. The chapter concludes with an overview of design approaches.
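To give a concrete flavour of the additive approach that opens the chapter (a generic sketch, independent of the chapter's own code examples), a harmonically rich tone can be built by summing scaled sinusoids at integer multiples of a fundamental:

```python
# Generic additive synthesis sketch: sum harmonically related sinusoids.
import numpy as np

def additive(f0, partial_amps, dur=1.0, sr=44100):
    """Sum of harmonics of f0; partial_amps[k] scales the (k+1)-th harmonic."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for k, amp in enumerate(partial_amps, start=1):
        out += amp * np.sin(2 * np.pi * k * f0 * t)
    return out / max(np.max(np.abs(out)), 1e-9)   # normalise to avoid clipping

# Odd harmonics with 1/n amplitudes give a square-like timbre.
signal = additive(220.0, [1.0, 0.0, 1/3, 0.0, 1/5, 0.0, 1/7])
```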


2020 · Vol 44 (4) · pp. 43-59
Author(s): Isabelle Su, Zhao Qin, Tomás Saraceno, Ally Bisshop, Roland Mühlethaler, ...

Three-dimensional spider webs feature highly intricate fiber architectures, which can be represented via 3-D scanning and modeling. To allow novel interpretations of the key features of a 3-D Cyrtophora citricola spider web, we translate complex 3-D data from the original web model into music, using data sonification. We map the spider web data to audio parameters such as pitch, amplitude, and envelope. Paired with a visual representation, the resulting audio allows a unique and holistic immersion into the web that can describe features of the 3-D architecture (fiber distance, lengths, connectivity, and overall porosity of the structure) as a function of spatial location in the web. Using granular synthesis, we further develop a method to extract musical building blocks from the sonified web, transforming the original representation of the web data into new musical compositions. We build a new virtual, interactive musical instrument in which the physical 3-D web data are used to generate new variations in sound through exploration of different spatial locations and grain-processing parameters. The transformation of sound from grains to musical arrangements (variations of melody, rhythm, harmony, chords, etc.) is analogous to the natural bottom-up processing of proteins, resembling the design of sequence and higher-level hierarchical protein material organization from elementary chemical building blocks. The tools documented here open possibilities for creating virtual instruments based on spider webs for live performances and art installations, suggesting new possibilities for immersion into spider web data, and for exploring similarities between protein folding and assembly, on the one hand, and musical expression, on the other.
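A minimal, hypothetical version of such a mapping (illustrative only; the field names, scalings, and randomly generated stand-in data are not the authors' actual model) might convert each scanned fiber into pitch, amplitude, and envelope parameters:

```python
# Hypothetical sonification mapping in the spirit described above:
# fiber length -> pitch, distance from a listening point -> amplitude.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for scanned web data: each row is a fiber as (x1, y1, z1, x2, y2, z2).
fibers = rng.uniform(-1.0, 1.0, size=(50, 6))
listener = np.array([0.0, 0.0, 0.0])

events = []
for f in fibers:
    p1, p2 = f[:3], f[3:]
    length = np.linalg.norm(p2 - p1)
    midpoint = (p1 + p2) / 2.0
    dist = np.linalg.norm(midpoint - listener)
    events.append({
        "pitch_hz": 110.0 * 2.0 ** (2.0 * (1.0 - length)),  # shorter fibers -> higher pitch
        "amplitude": 1.0 / (1.0 + dist),                    # closer fibers -> louder
        "attack_s": 0.005 + 0.05 * length,                  # longer fibers -> softer onset
    })
print(events[:3])
```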


Author(s): Micael Antunes, Danilo Rossetti, Jonatas Manzolli

This paper discusses a computer-aided musical analysis methodology anchored in psychoacoustic audio descriptors. The musicological aim is to analyze compositions centered on timbre manipulation that explore sound masses and granular synthesis as their building blocks. Our approach uses two psychoacoustic models, (1) critical bandwidths and (2) loudness, and two spectral feature extractors, (1) spectral centroid and (2) spectral spread. A review of the literature, contextualizing the state of the art in audio descriptors, is followed by a definition of the musicological context guiding our analysis and discussion. We then present results of a comparative analysis of two acousmatic pieces: Schall (1995) by Horacio Vaggione and Asperezas (2018) by Micael Antunes. As these are electroacoustic works without scores, segmentation and the subsequent musical analysis are important issues to be addressed. The article concludes by discussing the methodological implications of the computational musicology addressed here.
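The two spectral features mentioned, centroid and spread, can be computed from a windowed FFT frame as in the generic sketch below (standard definitions, not the authors' analysis code): the centroid is the amplitude-weighted mean frequency, and the spread is the square root of the amplitude-weighted variance around the centroid.

```python
# Generic spectral centroid and spread from a single windowed frame.
import numpy as np

def centroid_and_spread(frame, sr=44100):
    """Return (centroid, spread) in Hz for one audio frame."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = np.sum(spectrum) + 1e-12
    centroid = np.sum(freqs * spectrum) / total
    spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * spectrum) / total)
    return centroid, spread

# Check on a 440 Hz sine: centroid sits near 440 Hz, spread near zero.
sr = 44100
t = np.arange(2048) / sr
print(centroid_and_spread(np.sin(2 * np.pi * 440 * t), sr))
```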


2019 · Vol 5 · pp. e205
Author(s): Chris Kiefer

Conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (RNNs), enabling generation of arbitrary patterns based on training data. Conceptors allow interpolation and extrapolation between patterns, and also provide a system of Boolean logic for combining patterns. Generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for applications in computer music but has yet to be explored. Conceptors are untested for the generation of multi-timbre audio patterns, and little testing has been done on their scalability to the longer patterns required for audio. A novel method of sound synthesis based on conceptors is introduced. Conceptular Synthesis is based on granular synthesis: sets of conceptors are trained to recall varying patterns from a single RNN, and a runtime mechanism then switches between them, generating short patterns which are recombined into a longer sound. The quality of sound resynthesis using this technique is experimentally evaluated. Conceptor models are shown to resynthesise audio with quality comparable to a closely equivalent technique using echo state networks with stored patterns and output feedback. Conceptor models are also shown to excel in their malleability and potential for creative sound manipulation, in comparison to echo state network models, which tend to fail when the same manipulations are applied. Examples are given demonstrating creative sonic possibilities by exploiting conceptor pattern morphing, Boolean conceptor logic, and manipulation of RNN dynamics. Limitations of conceptor models are revealed with regard to reproduction quality, and pragmatic limitations are also shown, where rises in computation and memory requirements preclude the use of these models for training with longer sound samples. The techniques presented here represent an initial exploration of the sound synthesis potential of conceptors, demonstrating possible creative applications in sound design; future possibilities and research questions are outlined.
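For readers unfamiliar with conceptors, the core computation (following Jaeger's general formulation, not the Conceptular Synthesis implementation itself) derives a conceptor matrix C = R(R + α⁻²I)⁻¹ from the correlation matrix R of reservoir states collected while the network is driven by a pattern. The toy sketch below uses an arbitrary small reservoir and illustrative weight scalings:

```python
# Toy conceptor computation (after Jaeger's formulation); reservoir size,
# weight scalings, and the driving pattern are illustrative choices only.
import numpy as np

rng = np.random.default_rng(1)
N = 100                                          # reservoir size (arbitrary)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # recurrent weights (toy scaling)
W_in = rng.normal(0.0, 1.0, (N, 1))              # input weights

def collect_states(signal):
    """Drive the reservoir with a 1-D signal and return the N x L state matrix."""
    x = np.zeros((N, 1))
    states = []
    for u in signal:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.ravel().copy())
    return np.array(states).T

pattern = np.sin(2 * np.pi * np.arange(300) / 25.0)       # a short periodic pattern
X = collect_states(pattern)
R = X @ X.T / X.shape[1]                                  # state correlation matrix
aperture = 10.0
C = R @ np.linalg.inv(R + (aperture ** -2) * np.eye(N))   # the conceptor matrix
# At runtime the state update is filtered through C, i.e. x <- C @ tanh(W @ x + ...),
# which confines the dynamics to the state subspace of the trained pattern.
```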


2018
Author(s): Chris Kiefer

Conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (RNNs), enabling generation of arbitrary patterns based on training data. Conceptors allow interpolation and extrapolation between patterns, and also provide a system of Boolean logic for combining patterns. Generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for applications in computer music and procedural audio but has yet to be explored. Two novel methods of sound synthesis based on conceptors are introduced. Conceptular Synthesis is based on granular synthesis: sets of conceptors are trained to recall varying patterns from a single RNN, and a runtime mechanism then switches between them, generating short patterns which are recombined into a longer sound. Conceptillators are trainable, pitch-controlled oscillators for harmonically rich waveforms, commonly used in a variety of sound synthesis applications. Both systems can exploit conceptor pattern morphing, Boolean logic, and manipulation of RNN dynamics, enabling new creative sonic possibilities. Experiments reveal how RNN runtime parameters can be used for pitch-independent timestretching and for precise frequency control of cyclic waveforms. They show how these techniques can create highly malleable sound synthesis models, trainable using short sound samples. Limitations are revealed with regard to reproduction quality, and pragmatic limitations are also shown, where exponential rises in computation and memory requirements preclude the use of these models for training with longer sound samples. The techniques presented here represent an initial exploration of the sound synthesis potential of conceptors; future possibilities and research questions are outlined, including possibilities in generative sound.
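The recombination step, in which short generated patterns are assembled into a longer sound, can be illustrated generically with windowed overlap-add of grains (a sketch in spirit only, using stand-in sine-burst grains rather than RNN output; it is not the Conceptular Synthesis runtime):

```python
# Generic overlap-add recombination of short grains into a longer signal.
import numpy as np

def overlap_add(grains, hop):
    """Window each equal-length grain with a Hann window and sum at a fixed hop."""
    grain_len = len(grains[0])
    window = np.hanning(grain_len)
    out = np.zeros(hop * (len(grains) - 1) + grain_len)
    for i, g in enumerate(grains):
        start = i * hop
        out[start:start + grain_len] += window * g
    return out

sr = 44100
grain_len = 1024
# Stand-in grains: short sine bursts at rising frequencies (in the paper's
# setting these short patterns would come from the conceptor-controlled RNN).
t = np.arange(grain_len) / sr
grains = [np.sin(2 * np.pi * f * t) for f in (220, 247, 262, 294, 330)]
signal = overlap_add(grains, hop=grain_len // 2)
```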


