Technion Researchers Use Artificial Intelligence to Generate Jazz Solos
The solos are described as personalized, matching human-specific preferences.
Nadav Bhonker (who has since graduated) and Shunit Haviv Hakimi, students at the Technion’s Henry and Marilyn Taub Faculty of Computer Science in Haifa, together with their advisor, Professor Ran El-Yaniv, have shown that it is possible to model and optimize personalized jazz preferences using artificial intelligence.
Their paper on the research, known as the BebopNet project, was published in the Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR).
To people involved in the development of artificial intelligence technologies, this is great news. The idea that something artistic, especially a field as personal as jazz, can be recreated by AI is a major breakthrough.
AI is, after all, a computer program, and the common assumption is that it can only do what it has been programmed to do, nothing as original as composing new and truly artistic-sounding music.
To jazz purists, this is horrible news. They will surely decry the idea that any machine could create new jazz music that is true to the emotions behind the compositions of real jazz musicians.
But to most of the world, jazz does not sound like a form of music that follows any design, so most listeners will probably see no big deal in a computer doing such a thing.
The researchers say that learning to generate music is an ongoing AI challenge. An even more difficult task is the creation of musical pieces that match human-specific preferences.
In the BebopNet project, Bhonker and Haviv Hakimi, both amateur jazz musicians, focused on personalized, symbol-based, monophonic generation of harmony-constrained jazz improvisations.
So how did they do it?
They introduced a pipeline consisting of several steps: supervised learning using a corpus of solos (a language model), high-resolution user preference metric learning, and optimized generation using planning (beam search).
The corpus consisted of hundreds of original jazz solos performed by saxophone giants including Charlie Parker, Stan Getz, Sonny Stitt, and Dexter Gordon.
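To make that pipeline a little more concrete, here is a minimal sketch in Python (using PyTorch) of how such a system might be wired together: a note-level language model proposes continuations, a learned preference model re-ranks them, and beam search keeps the most promising candidates. The class names, model sizes, and the preference_model scorer are illustrative assumptions, not the authors’ published code.

```python
# Hypothetical sketch of a BebopNet-style pipeline: language model + beam
# search guided by a personalized preference score. Names and sizes are
# assumptions for illustration only.

import torch
import torch.nn as nn

class NoteLM(nn.Module):
    """Next-token language model over symbolic note events (pitch, duration, chord)."""
    def __init__(self, vocab_size, emb=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn = nn.LSTM(emb, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        x, _ = self.rnn(self.embed(tokens))
        return self.head(x)  # logits for the next note event

def beam_search(lm, preference_model, prompt, beam_width=8, steps=64):
    """Grow candidate solos token by token, ranking beams by a mix of
    language-model likelihood and a personalized preference score.
    `preference_model` is assumed to map a token sequence to a float."""
    beams = [(prompt, 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            logits = lm(seq.unsqueeze(0))[0, -1]
            logprobs = torch.log_softmax(logits, dim=-1)
            top = torch.topk(logprobs, beam_width)
            for lp, tok in zip(top.values, top.indices):
                new_seq = torch.cat([seq, tok.view(1)])
                candidates.append((new_seq, score + lp.item()))
        # Re-rank candidates by the learned user-preference metric plus likelihood.
        candidates.sort(key=lambda c: preference_model(c[0]) + c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]  # highest-ranked generated solo
```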
The researchers also conducted a plagiarism analysis to ensure that the generated solos were truly new and unique, and not merely chains of linked segments from previously created music.
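One simple way such a plagiarism analysis could be implemented, sketched here as an assumption rather than a description of the paper’s exact procedure, is to measure the longest run of note events in a generated solo that also appears verbatim somewhere in the training corpus.

```python
def longest_copied_run(generated, corpus_solos, max_len=32):
    """Return the length of the longest contiguous run of note events in
    `generated` that also appears verbatim in some training solo
    (checked up to runs of length max_len)."""
    # Index every corpus n-gram from length 1 up to max_len.
    corpus_ngrams = set()
    for solo in corpus_solos:
        for i in range(len(solo)):
            for j in range(i + 1, min(i + max_len, len(solo)) + 1):
                corpus_ngrams.add(tuple(solo[i:j]))
    # Look for progressively longer windows of the generated solo in the index.
    longest = 0
    for length in range(1, min(max_len, len(generated)) + 1):
        for start in range(len(generated) - length + 1):
            if tuple(generated[start:start + length]) in corpus_ngrams:
                longest = length
                break  # a copied run of this length exists; try a longer one
    return longest
```

Under this kind of check, a solo whose longest copied run is short relative to its total length would count as original rather than as a chain of memorized segments.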
“While our computer-generated solos are locally coherent and often interesting or pleasing, they lack the qualities of professional jazz solos related to general structure such as motif development and variations,” said the authors.
Prof. El-Yaniv hopes to overcome this challenge in future research. Preliminary models trained on a smaller dataset were substantially weaker, and it is possible that a larger dataset would yield a substantially better model. To obtain such a large corpus, it might be necessary to abandon the symbolic approach and rely instead on audio recordings, which can be gathered in much larger quantities.
“Perhaps one of the main bottlenecks in AI art generation, including jazz improvisation, is how to evaluate quality meaningfully. Our work emphasizes the need to develop effective methodologies and techniques to extract and distill noisy human feedback that will be required for effective quality evaluation of personalized AI art. Such techniques are key to developing many cool applications,” noted Prof. El-Yaniv.
So is this good news or not?
Well, that would depend on your outlook. If you see this as a stage in the development of technology that will be able to solve all manner of scientific problems, from physics to mathematics, or even cure diseases on its own, then this is of course great news. That is what artificial intelligence was created to accomplish.
But if you are a music purist, or a purist of any art form, then the idea that a machine could someday create original compositions or paintings indistinguishable from those made by people is anathema.