Since the Industrial Revolution and, later, the invention of the computer in the 20th century, humans have been making scientific advances at a rapid pace. One of the most promising technologies of the 21st century is surely AI. Online translators, which were often unreliable in the early 2010s, have improved to the point of being accurate most of the time. Many people also use ChatGPT, one such AI program, to search for information, to have it write their assignments, or even to recommend a lunch menu. Alongside these advances in AI technology, a topic that has emerged and is still being debated among artists is the aesthetic value of art made with AI programs.
EMI (Experiments in Musical Intelligence), created by Professor David Cope, is a program that composes music in the styles of existing composers. In use from 1981 through the early 2000s, it produced many pieces imitating classical composers from J.S. Bach to I. Stravinsky.
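To give a rough sense of what "composing in the style of an existing composer" can mean in code, here is a minimal toy sketch. It is not EMI's actual method, which relies on a far more elaborate analysis and recombination of a composer's own fragments; this is only a first-order Markov chain over pitches, with made-up training melodies, meant to show the general idea of learning note-to-note tendencies from a corpus and generating new material with similar tendencies.

```python
import random

def train_transitions(corpus):
    """Count which pitch tends to follow which in the training melodies."""
    transitions = {}
    for melody in corpus:
        for current, nxt in zip(melody, melody[1:]):
            transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length=16, seed=None):
    """Walk the transition table to produce a new melody in a similar style."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: fall back to any pitch seen in training
            options = list(transitions.keys())
        melody.append(rng.choice(options))
    return melody

if __name__ == "__main__":
    # Hypothetical training data: two short melodies as MIDI pitch numbers.
    corpus = [
        [60, 62, 64, 65, 67, 65, 64, 62, 60],
        [60, 64, 67, 72, 67, 64, 60],
    ]
    table = train_transitions(corpus)
    print(generate(table, start=60, seed=1))
```

Even a toy like this will echo the contours of its training melodies, which hints at why far more sophisticated systems such as EMI can sound convincingly like their source composers.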
As you can hear, the quality is surprisingly high, and the piece captures the mood and feel of the composer well. Someone who has not studied classical music in depth would probably assume it was actually written by the composer. However, what matters about this piece is not how closely it resembles the composer's work or how good the quality is, but whether it has aesthetic value.
With the invention of the computer in the mid-20th century, many electronic instruments and virtual-instrument programs were created, with the aim of sounding as close as possible to traditional instruments such as the piano or violin. With the technology of the time, electronic instruments sounded mechanical and harsh, and the engineers who built them regarded that sound as a flaw to be eliminated. The public, however, began to be drawn to the sound of electronic instruments, and they became a different kind of instrument altogether, carving out a sound of their own rather than moving toward imitation of traditional instruments.
However, the reason electronic instruments did not replace traditional instruments was the technical limitation of the time. So is AI likewise unable to make the music humans make because of technical limitations? Obviously not. If anything, it can make music that is more polished and more contextually coherent than human music. If people attended only to harmony, melody, and technique when listening, AI music would certainly put musicians out of work. But fortunately we still make music, so it is unlikely that people enjoy only the musical elements of music. Music has been with us for a long time and has evolved throughout human history and the history of art. It has changed with developing technology, with religious and political climates, and with the efforts of musicians to break through its limits. According to Professor Jaeho Chang, most people listen to music in order to perceive and feel, through music, the things that surround them (Chang, 2017). In other words, people value music created by humans because it is shaped by these external factors and by the striving to surpass human limitations, even if it is more imperfect than music created by AI.
In conclusion, although AI music has already developed a great deal and has grown more sophisticated than much human music, I do not think it can replace the music of artists yet, because it does not reflect non-musical factors such as the social landscape. Nevertheless, I believe it can stand as an artistic genre in its own right, and that it carries a certain beauty, albeit of a different texture.
Reference
- Chang, Jaeho. “The Musicality of Artificial Intelligence.” Korea National University of Arts, Korea Institute of Arts, 2017.