I put together a little demo session using GrimesAI and some vocals from an unreleased track by my band, lunetta. I made it for a workshop that the London College of Music is running with some local schools; the idea is that you can A/B between the original input and the GrimesAI output, with an instrumental and some vocal FX for context.

Some interesting observations I had working on this demo…

  • The AI seems to struggle with sibilance. This is most obvious in the first section, on the line “and it leaves this feeling”: on the S sound in “this” you can hear the model stutter slightly, like a bad loop. I’m not exactly sure why, but my leading theory is that Grimes’ music is heavily de-essed, so the model hasn’t seen enough natural-sounding sibilance in training. (There’s a rough sketch after this list for putting a number on that difference.)
  • I also find it fascinating that the AI has shifted some of the background vocals by an octave. The original vocalist has quite a low register, and I think the AI has moved those parts to sit better in Grimes’ range.
  • I also found myself pre-processing my original vocals to be more Grimes-esque. I used more auto-tune than I usually would and went for an airier sound, which led to better outputs from the model; I suspect that brought the input closer to the data the AI was trained on. Obviously the backing track I made for this is very much inspired by the kind of music Grimes makes. I did that intentionally, as I felt it suited the GrimesAI vocals more than the original track’s instrumentation, but I also find it really interesting that I subconsciously tried to match Grimes’ vocal production on the original vocals.
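If you wanted to put a rough number on that sibilance difference, something like the sketch below would do it: bandpass both renders around the 5–10 kHz “ess” region and compare how much of the total energy lives there. This is just my own quick idea, not part of the demo or the GrimesAI tooling, and the file names are placeholders for the two bounces.

```python
# Rough sketch: compare sibilant-band energy between the two renders.
# File names below are placeholders, not the actual bounce names.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def sibilance_ratio(path, band=(5000.0, 10000.0)):
    """RMS energy in the sibilant band divided by full-band RMS."""
    audio, sr = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold stereo down to mono
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    sibilant = sosfilt(sos, audio)
    return np.sqrt(np.mean(sibilant ** 2)) / np.sqrt(np.mean(audio ** 2))

print("original:", sibilance_ratio("tell_me_original_vocals.wav"))
print("grimesai:", sibilance_ratio("tell_me_ai_vocals.wav"))
```

If the original vocal came back with a noticeably higher ratio than the GrimesAI render, that would at least be consistent with the de-essing theory.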
[Audio: tmcq · Tell Me - OriginalVocals]
[Audio: tmcq · Tell Me - AIVocals]