
The supercollider book

While audio data play an increasingly central role in computer-based music production, interaction with large sound collections in most available music creation and production environments is still very often limited to scrolling through long lists of file names. This paper describes a general framework for devising interactive applications based on the content-based visualization of sound collections. The framework allows for a modular combination of different techniques for sound segmentation, analysis, and dimensionality reduction, using the reduced feature space for interactive applications. We analyze several prototypes presented in the literature and describe their limitations. We propose a more general framework that can be used flexibly to devise music creation interfaces. The proposed approach includes several novel contributions with respect to previously used pipelines, such as unsupervised feature learning, content-based sound icons, and control of the output space layout. We present an implementation of the framework using the SuperCollider computer music language, and three example prototypes demonstrating its use for data-driven music interfaces. Our results demonstrate the potential of unsupervised machine learning and visualization for creative applications in computer music.

In recent years, advances in signal processing and machine learning have improved our ability to interact with large quantities of digital information. Most research has focused on supervised methodologies, where an algorithm learns from some known association of digital data to labels. Supervised approaches are, however, of limited use in the early stages of creative processes such as music-making: in these stages, everything is subject to change, and personal interpretations are often more relevant than established conventions. As an example, a musician can easily create a database of recordings using one particular instrument or device. A model pre-trained on conventional sound categories (say, different musical instruments) would not apply to that situation. By contrast, unsupervised algorithms such as data clustering or dimensionality reduction can be used to reveal groupings of sounds that are particular to the distribution of audio features in those recordings. While the groupings obtained with these techniques may not directly match the user’s expectations, they can be used to suggest new perspectives and creative possibilities with respect to how the sounds in the recordings may relate to each other.

A good deal of research has focused on automatic audio analysis for creating interactive systems based on 2D plots of sound collections. One basic strategy is to use two perceptual sound descriptors (for example, pitch and amplitude) as the two axes of the plot. Several works have implemented this idea in interfaces that allow the user to choose the descriptor for each axis. These systems suffer from several limitations. First, the direct use of sound descriptors often requires an understanding of concepts related to signal processing and psychoacoustics. Moreover, there is no assurance that a given sound collection will have an interesting variation along a given set of descriptors. For example, a pitch descriptor may be irrelevant for a collection obtained from drums or environmental sounds. In general, each collection will most likely have its own sonic dimensions beyond a particular choice of descriptors. Second, such descriptors are typically obtained from a frame-level representation, which means they may vary significantly over time. A single value (typically the average over a sequence of frames) is adequate for a very short sound, but it will not be as useful for longer samples.

In previous work, we have presented basic signal processing and machine learning building blocks for creating arbitrary creative workflows in creative coding environments (CCEs). That framework can be seen as a conceptual abstraction of common data processing pipelines used in music information retrieval (MIR) and general data science, aiming to facilitate experimentation by musicians. In this section, we present the proposed framework for visualization of large sound collections.
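The frame-level limitation discussed above can be illustrated with a short sketch. This is plain Python with NumPy rather than the framework's SuperCollider implementation, and the spectral-centroid descriptor, frame sizes, and synthetic test sweep are illustrative assumptions:

```python
import numpy as np

def spectral_centroid_frames(signal, sr, frame_size=1024, hop=512):
    """Frame-level spectral centroid: one descriptor value per analysis frame."""
    window = np.hanning(frame_size)
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sr)
    centroids = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + frame_size] * window))
        total = mag.sum()
        centroids.append(float((freqs * mag).sum() / total) if total > 0 else 0.0)
    return np.array(centroids)

# A hypothetical sound whose pitch rises over time: the frame-level
# descriptor varies a lot, so collapsing it to a single average hides
# exactly the temporal behavior that matters for longer samples.
sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
sweep = np.sin(2 * np.pi * (220 + 330 * t) * t)  # frequency climbs over 2 s
c = spectral_centroid_frames(sweep, sr)
print(c.mean(), c.std())  # the spread is large: one number is a poor summary
```

The point of the example is the standard deviation: for a short, stationary sound it would be near zero and the mean would be a faithful summary, while for this two-second sweep the trajectory of the descriptor carries most of the information.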

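The unsupervised alternative can be sketched just as briefly: a dimensionality reduction step (here principal component analysis via SVD) maps each sound's descriptor vector to a 2D point for plotting. Again this is Python/NumPy for illustration only, and the 6x4 descriptor matrix is made-up data, not output of the actual framework:

```python
import numpy as np

# Hypothetical feature matrix: 6 sounds x 4 descriptors (e.g. brightness,
# flatness, pitch in Hz, loudness in dB); values invented for illustration.
X = np.array([
    [0.9, 0.1, 440.0, -12.0],
    [0.8, 0.2, 450.0, -11.0],
    [0.1, 0.9,  90.0, -30.0],
    [0.2, 0.8, 100.0, -28.0],
    [0.5, 0.5, 200.0, -20.0],
    [0.6, 0.4, 210.0, -19.0],
])

def pca_2d(features):
    """Project each row to 2D with principal component analysis (via SVD)."""
    # Standardize first, so descriptors with large ranges (Hz) don't dominate.
    Z = (features - features.mean(axis=0)) / features.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:2].T  # (x, y) coordinates for a 2D scatter plot

coords = pca_2d(X)
print(coords.shape)  # one 2D point per sound
```

Unlike hand-picked descriptor axes, the two projected axes follow whatever directions of variation this particular collection actually has, which is why sonically similar rows end up near each other in the plot.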

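Finally, the modular pipeline itself, segmentation followed by per-segment analysis followed by dimensionality reduction, can be sketched as three interchangeable stages. Function names, the fixed-size segmenter, and the toy descriptors below are all assumptions made for this sketch; the actual framework is implemented in SuperCollider and supports richer choices at each stage:

```python
import numpy as np

def segment_fixed(signal, size=2048):
    """Segmentation stage: cut the signal into equal-length slices.
    An onset-based segmenter could be swapped in here unchanged."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def analyze_rms_zcr(segment):
    """Analysis stage: a small descriptor vector per segment
    (RMS level and zero-crossing rate, as stand-in descriptors)."""
    rms = np.sqrt(np.mean(segment ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(segment)))) / 2.0
    return np.array([rms, zcr])

def reduce_center(features):
    """Reduction stage: the features are already 2D here, so just center
    them; PCA or another reducer would plug in the same way."""
    return features - features.mean(axis=0)

def pipeline(signal, segmenter, analyzer, reducer):
    segments = segmenter(signal)
    features = np.stack([analyzer(s) for s in segments])
    return reducer(features)  # reduced space drives the interactive 2D view

rng = np.random.default_rng(0)
points = pipeline(rng.standard_normal(8 * 2048),
                  segment_fixed, analyze_rms_zcr, reduce_center)
print(points.shape)  # one 2D point per segment
```

The design point is that each stage only agrees on its input and output shapes (a signal, a list of segments, a feature matrix), so segmenters, descriptors, and reducers can be recombined freely when devising an interface.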
