In an ongoing series exploring how artificial intelligence, machine learning and music-making continue to intersect in 2022, we’re speaking with leading industry figures to find out more about how AI powers their technology and what the future holds for AI in music-making.
Artificial intelligence and machine learning are growing increasingly prevalent in the music production world – in fact, you may already be using them. It seems a new AI-powered plugin pops up in our inbox almost every day, with these intelligent tools claiming to be capable of everything from drum sequencing and tape emulation to mastering and sample organisation.
Most of these tools use AI to do things that were already possible, just faster, more efficiently or more effectively. Some, however, take things a step further, repurposing these advanced technologies in an attempt to create entirely new music-making paradigms.
That’s what CEO Edward Balassanian is hoping to accomplish with Aimi. A generative music platform that seeks to “fundamentally change the way the art form is created, consumed and monetized”, Aimi uses AI to generate endless, immersive compositions constructed from musical material provided by artists. The company purports to offer a radically new kind of listening experience, one that evolves infinitely and adapts in response to the listener’s feedback.
Here’s how it works. A producer presents the software with a library of individual stems, which are analysed, organised and reassembled by the AI to create a generative musical ‘experience’. The software heavily manipulates and restructures the source material in real time according to algorithms determined by Aimi and tweaked by the artist, resulting in a perpetually unfolding composition…
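To make the idea concrete, here is a minimal, purely illustrative sketch of how a generative stem system of this kind might work: stems are grouped into layers, each pass picks one stem per layer with weights nudged by listener feedback, and the result is an endlessly varying sequence of sections. The stem names, layer structure and feedback rule are all assumptions for illustration, not Aimi’s actual implementation.

```python
import random

# Hypothetical stem library, grouped by layer (names are illustrative).
STEMS = {
    "drums": ["drums_a", "drums_b"],
    "bass": ["bass_a", "bass_b"],
    "pads": ["pads_a", "pads_b", "pads_c"],
}

def next_section(weights, rng):
    """Choose one stem per layer, biased by listener-feedback weights."""
    section = {}
    for layer, options in STEMS.items():
        w = [weights.get(stem, 1.0) for stem in options]
        section[layer] = rng.choices(options, weights=w, k=1)[0]
    return section

def apply_feedback(weights, stem, liked):
    """Nudge a stem's selection weight up (liked) or down (disliked)."""
    weights[stem] = max(0.1, weights.get(stem, 1.0) * (1.5 if liked else 0.5))

rng = random.Random(42)  # seeded for a reproducible demo
weights = {}
apply_feedback(weights, "pads_c", liked=True)  # listener liked this pad
sections = [next_section(weights, rng) for _ in range(4)]
for s in sections:
    print(s)
```

In a real system each section would be rendered as audio with beat-matched crossfades rather than printed, but the core loop – analyse, select, recombine, adapt – is the same shape the article describes.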