Neural networks can now not only parody famous musical performers but also restore the voices of celebrities from the pre-digital era. Andrey, author of the YouTube channel “Ay, guitarist!”, spoke about the art of AI covers and the role of cryptocurrencies in the AI-music market.
ForkLog (FL): Andrey, your AI covers draw millions of views. Is this kind of art in demand now?
Andrey: I think so. I’ve built an audience that comes for nostalgia, for the voices of long-departed performers such as Tsoy and Letov. They’re fans; perhaps not the multimillion following of pop, but still a meaningful number. For me it’s very special.
FL: How did you come to music?
Andrey: I’ve played guitar since I was 13. I started on my father’s old one, made at the Chernihiv furniture factory, and playing stayed with me for life as a hobby. Then came rocker hangouts, the underground. In life I did very different things, but over the past two and a half years I’ve returned to music in earnest.
FL: When did you start doing AI covers, and why do you most often turn to Letov’s voice?
Andrey: I always listened to rock, and Letov and Tsoy are recognised legends of Russian rock. My brother, who was a big fan, introduced me to Letov specifically; he brought cassettes from Moscow.
Once I had learned to play the guitar, the idea came to me of singing in other performers’ voices. Letov’s voice has a certain charm, a kind of hopelessness. If you perform songs from cartoons in his voice, they come out very heartfelt, even heart-rending.
In 2023 I began doing AI covers, and the first voice model I had was Letov’s. Around that time, well after his death, he was enjoying a second wave of popularity among young people.
FL: How do you create a track?
Andrey: I first sing everything and sketch a draft. I choose a song myself or take one subscribers suggest in the comments.
Most of the process is on me. Then I apply the voice model in the RVC neural network, after which I process everything in Ableton. I also use other neural networks for post-processing: ElevenLabs, Adobe Podcast.
My early AI covers used a publicly available voice model; for the past year and a half I’ve used one I assembled myself, trained on specific Letov albums. The delivery and phrasing still come from a live person; a neural net won’t handle that.
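For readers curious what this looks like in practice, here is a minimal Python sketch of the batch workflow Andrey describes: record a dry vocal take, run it through a trained voice model, clean up the result, then finish the mix in a DAW. The apply_voice_model and denoise helpers are hypothetical placeholders standing in for RVC inference and post-processing tools, not their actual APIs.

```python
# Minimal sketch of the AI-cover pipeline described above.
# apply_voice_model() and denoise() are hypothetical placeholders for
# RVC inference and post-processing; they are NOT real library APIs.
import numpy as np
import soundfile as sf


def apply_voice_model(vocal: np.ndarray, sample_rate: int, model_path: str) -> np.ndarray:
    """Placeholder: swap the timbre of a dry vocal using a trained voice model."""
    # Real RVC inference would extract pitch and features here and run the model.
    return vocal  # pass-through stub


def denoise(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Placeholder: clean up conversion artifacts before the final mix."""
    return audio  # pass-through stub


# 1. The performer records a dry vocal take (delivery and phrasing stay human).
vocal, sr = sf.read("dry_vocal_take.wav")

# 2. A voice model, e.g. one trained on specific albums, is applied.
converted = apply_voice_model(vocal, sr, "voice_model.pth")

# 3. Light post-processing, then the result goes into a DAW for the final mix.
sf.write("converted_vocal.wav", denoise(converted, sr), sr)
```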
FL: What’s in your studio?
Andrey: The toolkit is entirely standard. I only recommend buying a good audio interface. I use an iCON Duo 22 Dyna, with inputs for guitar and a condenser microphone. The microphone and headphones are Superlux, inexpensive.
FL: Are AI covers effectively created in real time?
Andrey: RVC applies the voice model in real time. The key is to synchronise guitar and voice, because the graphics card needs about a couple of seconds for processing. I run a local RTX 3090, not the most powerful card. For neural nets, VRAM is what matters; the card has 24 GB, enough for me for a couple more years.
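The synchronisation he mentions comes down to compensating for the conversion delay: the converted vocal arrives a few seconds behind the guitar, so one track has to be shifted to line them up. A toy sketch, with an assumed two-second delay rather than a figure RVC actually reports:

```python
# Toy latency compensation: shift the converted vocal earlier so it lines up
# with the guitar. The 2-second delay is an assumed value, not an RVC figure.
import numpy as np

SAMPLE_RATE = 48_000       # samples per second
PROCESSING_DELAY_S = 2.0   # assumed voice-conversion delay


def align_to_guitar(converted_vocal: np.ndarray,
                    delay_s: float = PROCESSING_DELAY_S,
                    sample_rate: int = SAMPLE_RATE) -> np.ndarray:
    """Drop the leading samples introduced by the processing delay."""
    offset = int(delay_s * sample_rate)
    return converted_vocal[offset:]


# Example: a 10-second converted vocal becomes an 8-second clip aligned to the guitar.
aligned = align_to_guitar(np.zeros(10 * SAMPLE_RATE))
print(len(aligned) / SAMPLE_RATE)  # 8.0
```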
FL: What is happening in the AI-music market now?
Andrey: For over a year since the first Suno models, people have been generating their own music, uploading it to platforms and monetising it. This has triggered a number of big lawsuits between record labels and AI startups. The claim is that, to train their neural nets, Suno and Udio use original tracks whose authors and performers receive no royalties.
In future, I think we’ll end up with musicians receiving royalties—possibly 50/50—for the use of their voices.
In addition, routine studio processes will shift to neural nets. If I were a label, I’d already be rolling out these tools to ease the workload.
FL: You took part in public projects to restore the voices of celebrities from past eras. How did that work?
Andrey: I helped a team in Uzbekistan restore the republic’s golden voices—various announcers and singers. The project had state support. We took recordings from national archives; on their basis I created voice models and improved the sound quality.
Now I’m also involved in restoring Sergei Yesenin’s voice; Russia has dedicated 2025 to him. Only one more or less decent audio recording survives from his lifetime, about four minutes of him reading a poem. Even though only a trace of the voice remains, I tried to pull it out and build a voice model from it. The project will be presented in the autumn.
FL: Do you see ethical problems in using the voices of deceased performers?
Andrey: I’m against crossing the line in how source material is used when creating AI songs. In my work I try to treat late musicians with respect. My songs are mostly reinterpretations.
FL: In Russia there is a proposal to protect a citizen’s voice as an object of intangible rights. Should this area be regulated, and how?
Andrey: That’s where we will inevitably end up. The Russian bill is aimed largely at combating the use of a person’s voice for fraudulent purposes. In that context it’s a very sound proposal.
I don’t do conversational models at all; I create parodic works, write music from scratch and state that everywhere.
Undoubtedly, the law in this area will see amendments. Europe and the US are very concerned about deepfakes. No industry needs chaos.
If this affects my work and I have to obtain permissions, I’ll still keep doing it, albeit with constraints.
FL: Do you follow any AI tokens?
Andrey: One interesting project is io.net. They provide cloud computing, which matters for neural nets when you don’t have proper hardware at home. With tokens you buy their compute and use it to train your models.
Another cool project is ChainGPT. They have various AI agents that help with crypto and check smart contracts.
I’m sure that, going forward, projects similar to ElevenLabs will appear with their own tokens, which can be used to pay for voice or music processing services. It’s very cool when technology and cryptocurrencies work in symbiosis.
FL: What do you think of tokenisation in this area?
Andrey: I’m all for it. I even had an idea to launch my own Telegram bot that would generate music in Suno and take payment in tokens.
I consider tokenisation in AI music very useful, but adoption is not yet broad enough for it to happen at a serious level. I think movement in this direction will come partly thanks to Telegram.
FL: Will artificial intelligence destroy humanity?
Andrey: I don’t think so. With the right approach, on the contrary, it will complement us.