

auto things


The sound is triggered initially by midi signals taken from random webcam artifacts / glitches in the image, then put through a series of non-linear processes that produce different expressions of a sound.
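As a rough illustration of the first step, here is a minimal sketch (not the actual patch used here) of how webcam glitch data could become midi note numbers. Frames are stand-ins: flat lists of 8-bit grayscale pixel values rather than real camera input, and the threshold is an arbitrary assumption.

```python
def frame_diff(prev, curr):
    """Per-pixel absolute difference between two frames."""
    return [abs(a - b) for a, b in zip(prev, curr)]

def glitches_to_midi(prev, curr, threshold=32):
    """Treat any pixel that jumps by more than `threshold` as an
    artifact, and fold its magnitude into the 0-127 midi note range."""
    notes = []
    for d in frame_diff(prev, curr):
        if d > threshold:
            notes.append(d % 128)  # keep within valid midi note numbers
    return notes

frame_a = [10, 200, 10, 10]
frame_b = [10, 10, 140, 10]   # two pixels 'glitch' between frames
print(glitches_to_midi(frame_a, frame_b))  # → [62, 2]
```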

I'm not making this noise myself (see the 'how I am using midi' post). A sample is broken down into multiple non-linear parts and triggered according to random variables.
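The slicing-and-triggering idea can be sketched like this, assuming a placeholder list of audio values rather than real audio I/O; the reversal step is only a simple stand-in for a non-linear process.

```python
import random

def slice_sample(sample, n_parts):
    """Split a sample into n roughly equal parts."""
    size = max(1, len(sample) // n_parts)
    return [sample[i:i + size] for i in range(0, len(sample), size)]

def trigger(parts, rng, steps):
    """Pick parts at random; sometimes reverse one
    (a crude stand-in for a non-linear process)."""
    out = []
    for _ in range(steps):
        part = rng.choice(parts)
        if rng.random() < 0.5:
            part = part[::-1]
        out.extend(part)
    return out

rng = random.Random(0)                # seeded so the run is repeatable
parts = slice_sample(list(range(12)), 4)
print(len(parts))                     # → 4 parts of 3 values each
print(len(trigger(parts, rng, 5)))    # → 15 (5 triggered parts of 3)
```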

The aim is eventually to have generative, or emergent, behaviour similar to animal noises (constrained in 'voice' but not in expression), via hardware/software synthesis. An animal has 'rules' such as the threshold of its vocal range, resonance according to the physical shape of its body, and so on, but the sound itself is of course not pre-programmed; it is intrinsically linked to the dynamic environment.
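The 'rules without pre-programmed sound' idea could be sketched as follows: the voice is constrained to a fixed pitch range (like a vocal threshold), but what comes out depends entirely on an environment signal fed in at runtime. All names and ranges here are illustrative assumptions, not anything from the actual system.

```python
LOW, HIGH = 48, 72   # the 'vocal range', as midi note numbers

def voice(env_value):
    """Map an unbounded environment reading into the constrained
    range; the rule is fixed, the sound is not."""
    return LOW + (env_value % (HIGH - LOW))

environment = [3, 100, 57, 999]   # arbitrary dynamic input
print([voice(e) for e in environment])  # → [51, 52, 57, 63]
```

The constraint (the range) is hard-coded, but no note is: every output is a function of whatever the environment supplies.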

The video here demonstrates the artifacts being picked up.