Cinder Audio

2013


While working at The Barbarian Group, I took on the redesign of the Cinder audio API (ci::audio), with the goal of creating something powerful yet flexible enough to be combined directly with Cinder's graphics capabilities. We went with a modular design in the vein of Pure Data and Web Audio, maintaining the spirit of providing C++ tools to build the engines you need.

To the right is a basic overview of the modules included, taken from the cinder audio guide.


[Figure: audio_layers, an overview of the ci::audio module layers]

Library Features

Native Device Management

We wanted tight control over audio processing at the OS level, so there is a hardware abstraction layer on each platform. This allows us to minimize dependencies and make tweaks when needed, even during production. We support a number of platforms (Windows, OS X, iOS, Linux, Android), so this was the largest part of the project in terms of development time. Once built, however, it's a great thing to have, especially when real-time, low-latency performance is your target.

Built-in Nodes

Like any good modular audio API, there are useful components available out of the box for building custom audio engines or effects. For sample playback, there is the BufferPlayerNode (in-memory) and FilePlayerNode (file streaming) (see notes here). For waveform generation, there are low-level generator nodes (sine, triangle, phasor, etc.) as well as the band-limited GenOscNode, which has presets for the common waveform types. Nodes for filtering and delay are also included, along with nodes for general math operations, or you can use a C++11 lambda to do custom audio processing. More details here.
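To give a feel for the API, here is a minimal sketch of wiring a few of these nodes into a graph. It follows the documented ci::audio pattern (Context::master(), makeNode(), and the >> connection operator); the specific waveform, frequency, and gain values are just illustrative:

```cpp
#include "cinder/audio/Context.h"
#include "cinder/audio/GainNode.h"
#include "cinder/audio/GenNode.h"

using namespace cinder;

void setupSynth()
{
	auto ctx = audio::Context::master();

	// Band-limited oscillator using the square-wave preset
	auto osc = ctx->makeNode( new audio::GenOscNode( audio::WaveformType::SQUARE, 220 ) );

	// Gain node to keep the output at a comfortable level
	auto gain = ctx->makeNode( new audio::GainNode( 0.3f ) );

	// Connect: osc -> gain -> hardware output
	osc >> gain >> ctx->getOutput();

	osc->enable();
	ctx->enable();
}
```

Because nodes are just shared_ptr-managed C++ objects, you can rearrange or swap parts of the graph at runtime in the same way.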

Nodes in ci::audio can be multi-channel for ease of use (by default they match their inputs, but this can be overridden). ChannelRouterNode lets you both extract individual channels from multi-channel nodes and route mono nodes to specific output channels, for example when designing a spatialized multi-channel audio engine.
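A sketch of routing a mono source into one channel of a multi-channel graph, assuming the documented ChannelRouterNode::route() API (the channel count and indices here are illustrative, and a real setup would need a matching multi-channel output device):

```cpp
#include "cinder/audio/ChannelRouterNode.h"
#include "cinder/audio/Context.h"
#include "cinder/audio/GenNode.h"

using namespace cinder;

void setupRouting()
{
	auto ctx = audio::Context::master();

	// A 4-channel router, e.g. for a quad speaker arrangement
	auto router = ctx->makeNode( new audio::ChannelRouterNode(
		audio::Node::Format().channels( 4 ) ) );

	// Route a mono oscillator (its channel 0) to the router's channel 2
	auto osc = ctx->makeNode( new audio::GenSineNode( 440 ) );
	osc >> router->route( 0, 2 );

	router >> ctx->getOutput();
	osc->enable();
	ctx->enable();
}
```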

Digital Signal Processing

At the core is a DSP library for typical audio math: vector operations, windowing, sample-rate conversion, and the FFT (Fast Fourier Transform). All the other components build on this highly efficient layer, though the cinder::audio::dsp namespace is also meant to be used directly in end projects when needed.

Sample Accurate Scheduling

One very nice feature of the ci::audio API is the ability to schedule events with sub-sample accuracy. This lets you synchronize things running on other threads (commonly visuals, but also networking events, etc.) by specifying a time in the future, usually within the next processing block. Audio parameters are scheduled in a similar fashion, though with finer control, via the Param mechanism; most of the built-in nodes expose their parameters this way where it makes sense. As in the Web Audio API, you can also use other ci::audio::Nodes as inputs to Params.
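A sketch of the Param mechanism in use, scheduling a fade-in on a GainNode (getParam() and applyRamp() are part of the documented Param API; the target value and ramp duration are illustrative):

```cpp
#include "cinder/audio/Context.h"
#include "cinder/audio/GainNode.h"
#include "cinder/audio/GenNode.h"

using namespace cinder;

void setupFadeIn()
{
	auto ctx = audio::Context::master();

	auto osc = ctx->makeNode( new audio::GenSineNode( 440 ) );
	auto gain = ctx->makeNode( new audio::GainNode( 0 ) ); // start silent

	osc >> gain >> ctx->getOutput();
	osc->enable();
	ctx->enable();

	// Schedule a 2-second ramp from the current value to 0.8; the ramp is
	// evaluated on the audio thread with sample accuracy, so it stays smooth
	// regardless of what the UI thread is doing.
	gain->getParam()->applyRamp( 0.8f, 2.0f );
}
```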


Cinder Blocks (Add-ons)

Because of the modular structure and native C++ API, it's easy to extend ci::audio's built-in functionality by adding custom Nodes for synthesis, effects, or custom processing, or by adding other platform-specific backends. Here are some that are public on GitHub.

Blocks extending ci::audio::Nodes:

  • Cinder-Stk - adds support for the Synthesis ToolKit, wrapping many useful tools as Nodes for things like reverb, chorus, synth instruments, etc. Personally, I've used it in many projects for the FreeVerb implementation.
  • Cinder-HISSConvolver - adds a Node for convolution using the HISSTools Impulse Response Toolbox. Commonly used for convolution reverb, or for creating interesting sound designs on real-time audio signals.
  • Cinder-PureDataNode - wraps Pure Data within a Node, allowing you to embed pd patches while using Cinder to handle difficult cross-platform concerns like hardware I/O, file I/O, sample-rate conversion, and other low-level DSP operations. Also mentioned at PdCon16 (proceedings, "libpd: Past, Present, and Future of Embedding Pure Data").

Blocks using ci::audio::Nodes:

  • Cinder-SoundPlayer - from Red Paper Heart, provides the higher-level audio file playback common in user interfaces. The dev branch has some nice improvements (from me) for playback via a buffer pool, which is crucial for things like sound effects that need to overlap.
  • Cinder-poSoundManager - from Potion Design, a similar audio file playback tool with additional features like panning and looping.

Blocks adding extra Hardware Backends:

  • Cinder-PortAudio - allows you to use PortAudio as an additional audio hardware backend, selectable at runtime; notably used for adding ASIO / Dante support. More info in this forum post.

Applications

Beyond run-of-the-mill sample file playback, here are some projects I have worked on using the ci::audio API.

Face Controlled Synth

This was a project I collaborated on with Rare Volume that used face movement and gestures to drive a combination of subtractive synthesis and studio audio compositions. Face joint positions and velocities were mapped to track volumes and to parameters of a custom subtractive synthesis arrangement. The choice of audio tracks was driven by an extracted 'mood', which was mapped to the levels of various tracks.

Falling Gears

This is a sample that ships with Cinder (source code) that demonstrates audio synthesis driven by Box2D physics. Gears fall as you drag your mouse, and collisions between gears and walls or 'islands' trigger sound generators (GenNodes) that are spatially arranged to form musical chords.

Symphonologie

Another Rare Volume project, this one held at the Louvre in Paris; here cinder::audio was used for real-time audio analysis to drive visuals. Five microphones were used to isolate different sections of the symphony, which were analyzed as amplitude envelopes and magnitude frequency spectra. During the project, I added support for MSW low-latency mode, which is key to a tight, highly reactive music visualizer.

Selected Projects

Connections Wall at Northwestern University: multi-touch, multi-user interactive wall at the NWU Visitor Center

Flow Particles: GPU particles with the Intel RealSense depth camera and optical flow.

Glimpse Twitter Visualization: interactive visualization