Autonomous Virtual Instruments II

This is a short documentation excerpt from Autonomous Virtual Instruments as it was presented at the Masters of Aalto exhibition in Helsinki.

Since the last version, the mechanisms the instruments use to control their pendular movement have been greatly improved; their swinging is now much more consistent and more precisely controlled.

The audio signal analysis the instruments use to listen to each other is significantly more sophisticated: they now attend to the timing and pitch information of their neighbours, not just overall audio energy.
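The actual analysis runs in Max with the Zsa.Descriptors objects, but the two features mentioned above can be sketched offline. The following is a minimal illustration (not the project's code): timing is recovered by detecting when frame energy crosses a threshold, and pitch by finding the autocorrelation lag with the strongest self-similarity. The threshold and lag range are arbitrary assumptions.

```python
import math

def detect_onset(frames, threshold=0.1):
    """Crude timing detector: report the index of each frame whose RMS
    energy rises above the threshold after a quiet frame."""
    onsets = []
    prev_loud = False
    for i, frame in enumerate(frames):
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        loud = rms > threshold
        if loud and not prev_loud:
            onsets.append(i)
        prev_loud = loud
    return onsets

def estimate_pitch(frame, sample_rate):
    """Crude pitch detector: the autocorrelation lag with the largest
    correlation gives the period of the fundamental."""
    n = len(frame)
    best_lag, best_corr = 0, 0.0
    # Skip very short lags; assumes pitches below ~1 kHz.
    for lag in range(sample_rate // 1000, n // 2):
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A 440 Hz test tone at 8 kHz: the estimate should land near 440 Hz.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(1024)]
print(round(estimate_pitch(tone, sr)))
```

In the installation itself, analyses like these run continuously on each neighbour's output and feed the behaviour model.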

The instruments are now able to adjust the radius of their bell component and have some additional control over the pendular swing. This gives each instrument some control over its pitch and timing, so it can respond to the information it hears from its neighbours.
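The radius-to-pitch relationship can be sketched with a toy model. For a circular plate or bell, the fundamental falls roughly with the square of the radius; the constant below (and the simple proportional update rule) are assumptions for illustration, not the rule the instruments actually use.

```python
import math

K = 100.0  # hypothetical constant bundling material stiffness and thickness

def bell_pitch(radius):
    """Toy model: the fundamental of a bell/plate scales ~ 1/radius^2."""
    return K / radius ** 2

def radius_for_pitch(pitch):
    """Invert the toy model: the radius that would produce a given pitch."""
    return math.sqrt(K / pitch)

def step_toward(radius, heard_pitch, rate=0.2):
    """One behaviour step: drift the bell radius toward the radius whose
    pitch matches what was heard from a neighbour (proportional update)."""
    return radius + rate * (radius_for_pitch(heard_pitch) - radius)

r = 1.0                      # starting radius, own pitch = 100
for _ in range(20):
    r = step_toward(r, 400)  # a neighbour keeps sounding near 400
print(round(bell_pitch(r)))  # pitch has drifted close to 400
```

A small `rate` keeps the drift gradual, so the instruments glide toward their neighbours rather than snapping to them.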

There are also some cosmetic changes.

Although the system is much more sophisticated overall and the instruments are listening and responding to their neighbours, the effect is still much like wind chimes: ambient and stochastic, with variation but little perceivable organization. I am hopeful that another round of development will allow some emergent patterns or properties to become apparent.

I would like to move away from the orbiting-camera look, perhaps with several viewports and/or free navigation. The audio could also be spatialized so the instruments are aurally placed in the space as well as visually.
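The simplest form of that spatialization would be stereo placement by position. A standard technique (not yet implemented in the piece) is equal-power panning, sketched here:

```python
import math

def equal_power_pan(sample, x):
    """Equal-power stereo pan: x in [-1, 1] maps to left/right gains whose
    squared sum is always 1, so perceived loudness stays constant as an
    instrument moves across the stereo field."""
    angle = (x + 1) * math.pi / 4   # map -1..1 onto 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)
```

Each instrument's horizontal position in the scene would drive `x`, tying the aural placement to the visual one; full 3D placement would need a multichannel or binaural approach instead.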

Autonomous Virtual Instruments

This is a short documentation excerpt from a work in progress.

The work is a generative music and realtime animation project realized in Max with Modalys from IRCAM and the Zsa.Descriptors library by Mikhail Malt and Emmanuel Jourdan.

With this project I am investigating two ideas. One is to use a physics engine, i.e. physics-based animation, to control physical-modelling sound synthesis. The other is to explore how audio feature extraction can serve as the means by which autonomous agents interact with their environment.

These gong-like instruments play themselves, with all dimensions and forces passing from the physics engine to the sound synthesis. They also listen to each other, and what they hear influences their behaviour. The behavioural model is quite simple, and the effect is perhaps not much different from a stochastic process.
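The core idea of passing physics to the synthesis can be sketched outside Max. In the piece, Modalys does the physical modelling; the offline stand-in below uses the textbook form of modal synthesis, a sum of exponentially decaying sinusoids, where the impact velocity from the physics engine scales the excitation. The partial frequencies and decays are made-up, gong-ish values.

```python
import math

def modal_strike(impact_velocity, modes, sr=8000, dur=0.5):
    """Render one strike as a sum of decaying sinusoids (basic modal
    synthesis). `modes` is a list of (freq_hz, decay_per_sec, gain);
    the strike velocity from the physics simulation scales the output."""
    n = int(sr * dur)
    out = [0.0] * n
    for freq, decay, gain in modes:
        for i in range(n):
            t = i / sr
            out[i] += (impact_velocity * gain
                       * math.exp(-decay * t)
                       * math.sin(2 * math.pi * freq * t))
    return out

# Inharmonic partials for a gong-like timbre (illustrative values only).
gong = [(220.0, 3.0, 1.0), (523.0, 5.0, 0.6), (849.0, 8.0, 0.3)]
soft = modal_strike(0.2, gong)  # slow swing, gentle strike
hard = modal_strike(1.0, gong)  # fast swing, loud strike
```

A faster pendular swing thus directly produces a louder strike, which is the kind of physical coupling the project routes through Modalys in real time.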

Future work includes a more sophisticated listening and behaviour model, and new kinds of instruments built around other physical models in the sound synthesis, such as strings or tubes.