A computer composed its first piece of music on its own back in the 1950s. Today, algorithms write soundtracks for games, generate images in the style of famous artists, and adjust the music in your headphones to suit your mood.
With the help of experienced ML engineers, nearly any task can be optimized, as serokell.io notes. But will artificial intelligence replace the human composer, and is cooperation between them possible?
Hacking the divine code: who uses AI in music, and why
At the end of 2019, Canadian singer and electronic musician Grimes said that we seem to be living in an era of the decline of art – at least human art. “As soon as strong artificial intelligence appears, it will significantly surpass us in the creation of works of art,” the singer said.
A year later, Grimes wrote an endless digital lullaby for the Endel app. More precisely, she provided only the musical basis; the app’s artificial intelligence now continuously varies it, adjusting to the time of day, weather, movement, and heart rate of each user.
“I think there’s something magical about it,” Grimes said of AI in an interview with The New York Times. “It is as if the meaning of the Universe is hidden in it. Sounds absurd, but do you know what I mean? Are we hacking the divine code or something like that?”
Other artists not only use AI in their songs but also give it the right to express itself. The Iranian-British electronic musician Ash Koosha, for instance, created the virtual singer Yona, who writes music and lyrics and then performs them herself.
As Time notes, most of her lyrics are vague and meaningless, but occasionally she produces lines like “The one who loves you sings lonely songs with sad notes.”
According to Yona’s creator, even most people are not capable of such directness and “emotional nakedness.”
But Yona isn’t the only AI musician. In the fall of 2019, she released two albums co-written with other digital artists. Music is also released by the virtual influencer Lil Miquela, who has three million followers on Instagram.
From the 1950s Computer Suite to the Bach-Writing Robot: A Brief History of AI in Music
However, none of this is especially new. The first composition written by artificial intelligence appeared back in 1956. Two professors at the University of Illinois, Lejaren Hiller and Leonard Isaacson, used the university’s ILLIAC computer for the purpose. Hiller and Isaacson laid down rules from which the machine generated code, which was then translated into notes. The experiment resulted in a four-movement piece for strings, which Hiller and Isaacson called the “Illiac Suite.”
In 1965, seventeen-year-old Raymond Kurzweil, the future inventor and futurist, appeared on the American show I’ve Got a Secret, in which a celebrity panel guessed the guests’ secrets. He played a piece on the piano – the secret was that a computer had composed it.
And in the early 1980s, composer David Cope, who was also fond of programming, began work on a program called Experiments in Musical Intelligence (EMI). EMI was meant to analyze existing music and create new works on its basis. The program took seven years to develop, but once the work was done, EMI composed five thousand chorales in the style of Bach in a single day.
In 1997, Cope presented three compositions to an audience. One was written by Bach, another by EMI, and the third by music theory teacher Steve Larson. The audience had to guess who had written each piece. They mistook Larson’s composition for the work of artificial intelligence – and EMI’s music for Bach’s.
Today AI writes music for video games, composes pop songs in the spirit of The Beatles, and even completes unfinished pieces by dead composers. More and more startups are developing AI services for music production (for example, Amper Music, Popgun, and AIVA). But how does musical artificial intelligence actually work?
Learning neural networks and human editors: how AI makes music
Most such systems use deep learning, explains musician and journalist Dani Deahl in an article for The Verge. After analyzing many existing compositions, the system creates its own music.
“You feed the software tons of source material, from dance hits to disco classics, and it tries to find patterns in it,” Deahl writes. “These programs are sensitive to things like chords, tempo, note duration, and how they relate to each other.”
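To make the idea concrete, here is a toy sketch of pattern learning on note sequences. It is not the code of any real product: it uses a simple first-order Markov chain instead of a deep network, and the tiny corpus of note names is invented for illustration. The principle is the same – count which events follow which in the training data, then sample new material from those statistics.

```python
import random
from collections import defaultdict

# Invented toy corpus: melodies as lists of note names (stand-ins for real MIDI data).
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E"],
    ["E", "G", "A", "G", "E", "C"],
]

# Count which note tends to follow which -- the "patterns" the quote refers to.
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start, length, seed=0):
    """Sample a new melody by repeatedly picking a statistically likely next note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: this note never had a continuation in the corpus
            break
        melody.append(rng.choice(options))
    return melody

print(generate("C", 8))
```

A deep-learning system replaces the transition table with a trained network and the note names with rich audio or MIDI features, but the generate-from-learned-statistics loop is the same shape.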
“The problem with this approach is that the resulting music lacks structure,” says Valerio Velardo, a musical AI expert and former CEO of Melodrive, which until recently used AI to create soundtracks for video games. “Music is extremely complex and multidimensional, so you can’t get a computer to write it just by training a neural network on ten thousand or even a million songs.”
Therefore, Velardo says, machine learning must be combined with music theory. If you want to create an algorithm that writes a piece for a full orchestra, for example, you have to think about its structure, melody, and instrumentation. Velardo adds:
“All these factors cannot be handled by a single algorithm.”
AIVA, the Luxembourg-based company that composed the soundtrack for the video game Pixelfield, follows the same approach: in AIVA, different AI systems are responsible for different characteristics of a piece – harmony, tempo, and melody, for example.
“We asked ourselves: ‘What building blocks are needed to create a complete song?’” says Pierre Barreau, head of AIVA. “You can start, for example, with a melody line. Then, based on it, you can build another model that composes the instrumental accompaniment for that melody. When you break the whole thing into parts, everything becomes much easier.”
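The decomposition described above can be sketched in a few lines. This is a hypothetical illustration, not AIVA’s actual architecture: the two “models” are stubs (a fixed phrase and a lookup-table harmonizer), but they show the pipeline shape – one component proposes a melody, a second component is conditioned on that melody to produce accompaniment.

```python
def melody_model(length):
    """Stand-in for a learned melody generator: returns a fixed C-major phrase."""
    phrase = ["C", "D", "E", "G", "E", "D", "C", "C"]
    return phrase[:length]

# Toy harmonization rule: pick a triad that contains the melody note.
CHORD_FOR_NOTE = {
    "C": "C major", "E": "C major", "G": "C major",
    "D": "G major", "B": "G major",
    "F": "F major", "A": "F major",
}

def accompaniment_model(melody):
    """Stand-in for a learned accompaniment model, conditioned on the melody."""
    return [CHORD_FOR_NOTE.get(note, "C major") for note in melody]

# The pipeline: each stage solves one sub-problem, as Barreau describes.
melody = melody_model(4)
chords = accompaniment_model(melody)
print(list(zip(melody, chords)))
```

The design benefit is exactly the one Barreau names: each sub-model handles one well-defined question, so each can be trained, tested, and replaced independently.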
Nevertheless, the logic by which AI creates music still differs from a human’s. Nick Bryan-Kinns, a specialist in machine learning for music, explains:
“The problem is we’re trying to teach the AI to make music that we like, but we don’t allow it to make music that it likes.”
For compositions written by artificial intelligence to sound interesting to humans, AI still often needs human editors – at least for now. “You can feed the AI a whole catalog of Amy Winehouse songs, and you end up with a lot of music. But someone has to go and edit it,” says Bryan-Kinns. “Someone has to decide which parts they like and which ones the AI should keep working on a little longer.”
The future of AI in music
Imagine you are riding in a crowded subway car, angry because you are late for work. A tiny biometric gadget behind your ear notices your anxiety and plays a song by your favorite artist, but modifies it to sound softer and more relaxing. Reading your biometrics as feedback, the gadget tracks how they change and keeps adjusting the composition to make its effect even more soothing.
This is the near future of musical AI, predicts Anmol Saxena, head of Ashva WearTech, a startup that develops smart clothing, such as a device that helps maintain posture. “In my opinion, there is a strong possibility that the streaming industry will try to offer features that read body indicators such as heart rate, stress level, breathing rate, maybe even neurological signals,” says Saxena.
Based on this biometric and psychological information, the AI will change the genre, key, harmony, and other characteristics of the songs in your player to make you feel better. This will be a new era of AI-powered music personalization.
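A minimal sketch of what such a feedback loop might look like. Everything here is an assumption for illustration – the heart-rate thresholds, the playback parameters, and the function name are invented, since no real product’s API is described in the article. The point is only the mapping: a biometric signal in, adjusted playback settings out.

```python
def adapt_playback(heart_rate_bpm, base_tempo_bpm=120):
    """Map a stress proxy (heart rate) to softer or livelier playback settings.

    Thresholds and parameter values are illustrative, not from any real system.
    """
    if heart_rate_bpm > 100:   # elevated: slow and soften the track to calm the listener
        return {"tempo": base_tempo_bpm * 0.8, "mode": "minor-soft", "volume": 0.6}
    if heart_rate_bpm < 60:    # very relaxed: keep the energy gentle
        return {"tempo": base_tempo_bpm * 0.9, "mode": "major-soft", "volume": 0.7}
    return {"tempo": base_tempo_bpm, "mode": "major", "volume": 0.8}  # neutral state

# The subway scenario from above: an anxious commuter with a racing pulse.
print(adapt_playback(110))
```

A real system would re-run this mapping continuously as new sensor readings arrive, closing the feedback loop the scenario describes.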