This is a sample of what the user might hear as generated by the app. The structure is not fixed: the app recombines the samples, producing a different track on each use.
The remit of this work was to make some fairly neutral music that would be generated in time with the user’s heart rate. In the app, the heart rate was calculated using the device’s camera and flash, and this data was sent to the audio engine.
The audio engine was written in libpd and triggers a different concatenation of samples on each run, so that the listening experience is different every time the app is launched.
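The mechanism described above can be sketched in a few lines: the heart rate sets the tempo, and the track is assembled from a freshly chosen sequence of interchangeable 4-bar samples. This is a minimal Python illustration, not the actual libpd patch; the sample filenames and function names are hypothetical.

```python
import random

BEATS_PER_BAR = 4
BARS_PER_SAMPLE = 4


def sample_duration_seconds(bpm: float) -> float:
    """How long one 4-bar sample lasts when the tempo follows the heart rate."""
    return BEATS_PER_BAR * BARS_PER_SAMPLE * 60.0 / bpm


def build_track(sample_pool, n_sections, seed=None):
    """Pick a fresh concatenation of interchangeable samples for each run."""
    rng = random.Random(seed)
    return [rng.choice(sample_pool) for _ in range(n_sections)]


# Hypothetical sample pool; in the app each slot is filled by a sample player.
pool = ["pad_a.wav", "pad_b.wav", "keys_a.wav", "drums_a.wav"]
print(build_track(pool, 4))
print(round(sample_duration_seconds(72), 2))  # 16 beats at 72 BPM ≈ 13.33 s
```

Because every sample is the same 4-bar length, any sample can follow any other, which is what makes the random sequencing work.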
This was one of my first projects with this audio engine, and the first time I had to compose a piece entirely out of 4-bar samples! The engine is quite flexible, using text file scores to allow some control over structure.
This means that the samples for each sample player had to be interchangeable. These limitations raise some interesting questions: does the piece still sound good at a fast tempo? It would be great to develop the engine further to include key sensitivity, so that the music could modulate.
This is the promotional video for the company’s product. It shows people’s reaction to using the app on iPads, built into heart-shaped pods around the city of Melbourne: