How AI is Changing Music Production for Better and for Worse

Cal Poly student David Fedor live on the mix (Credits: David Fedor)

Neon lights reflect off the faces of the dimmed San Jose crowd as they wait to see their favorite artist, breaking into applause as a 16-year-old girl with teal pigtails is projected onto the stage. 

Hatsune Miku, also known as Character Vocal Series 01 (CV01), is a Japanese pop star created by Crypton Future Media in 2007. Her voice is built from synthesized samples of voice actress Saki Fujita, which were developed into a voice bank for the multi-million-dollar musical persona, accompanied by an animated mascot that has now toured across the globe.


Hatsune Miku’s popularity is a clear display of the rapid technological advancement of recent years and its effect on music production around the world. 


The latest development is the use of artificial intelligence in the music industry, with capabilities ranging from performing simple tasks at a faster rate to generating entire songs from one-sentence prompts on websites like Suno and Udio.


Some members of Cal Poly’s Music Production Union have begun to implement AI into their own musical production process, while others remain completely free of the technology to ensure they have complete creative control. 


David Fedor, a fourth-year materials engineering major, serves as an officer of Cal Poly’s Music Production Union and has been DJing since he was 13. He began implementing AI into his music through Serato Stems, a technology that can isolate the bass, drums, vocals and melody of a track into separate stems to then layer with other tracks. 


“We're doing the same things, but we can just do them way faster, way more efficiently and that allows us to take more creative liberty,” Fedor said. 


Isolating instruments through AI was a core piece of production in the 2023 release of the Beatles’ song “Now and Then,” which used a 1977 demo from John Lennon that was given to the band following his death. Through AI, producers were able to separate Lennon’s vocals from the piano and ultimately release the track more than 50 years after the band’s last album. 


“Now and Then” went on to be nominated for Record of the Year and Best Rock Performance at the 2025 Grammys and was the first song that utilized AI to be nominated for a Grammy Award. 


Many see AI as simply the next step in the evolution of the music industry, similar to the switch from analog to digital production that began in the 1970s.


Local musician Ryan Marienthal, a fourth-year electrical engineering major, got into producing while at Cal Poly and recently started his own recording studio. He prefers to stay closely connected to his own process, using no AI because of the detailed and subjective nature of his work.


“I think most of the value of art still happens for people in communities in real spaces,” Marienthal said. “What makes [art] compelling is other people's connection to it: the communities that it centers, the connection that it has to the artist, the way the artist expresses themselves. I think it's so much more than the actual technical work.”


Amid rising fears over what AI means for the music industry and the people within it, one thing remains constant: people will continue to create and listen to music they enjoy, whether it comes from the virtual pop star Hatsune Miku or local artists like Marienthal and Fedor. 


“AI is definitely the future. It’s here to stay, it’s not going anywhere, and it’s only gonna get better,” Fedor said.