I am a composer of adaptive music and love exploring its many applications and possibilities. So far I have worked on apps, games and devices that use aspects of the user experience to influence and modify parameters in the music.
My research investigates methods for procedurally generated composition, taking into account tempo variations, temporal ambiguity and the variation of textural and timbral structures, particularly in response to biometric data. Looking ahead, I also plan to incorporate data from machine learning algorithms into my compositions.
In my doctoral work at Royal Holloway, University of London, I built standalone generative and adaptive musical compositions and embedded them in smartphone applications. I am now exploring how the compositional frameworks I developed for mobile apps can be applied to visual novels, web-based pieces and games, translating my dataflow programming experience into industry-standard middleware such as Wwise and FMOD.
I hold a BA (Hons) in music and an MMus in electroacoustic composition from the University of East Anglia, and I enjoyed a career as a broadcast technician, where I could indulge my love for all things technical, working with companies such as the BBC, Talkback Thames and Bloomberg Television.
After starting a family, I rejoined academia, gaining teaching experience as an hourly paid lecturer at Middlesex University and as a postgraduate teaching assistant at Royal Holloway, University of London, where I also qualified as an Associate Fellow of the Higher Education Academy.
I am grateful to have received a Francis Chagrin Award (2018) and commissions from Sound and Music (Discord / Utopia, 2016) and the Sonic Arts Network (Sonimation, 2001).