Christophe Robert is a permanent member of the G-Art group. He’s a musician and a specialist in algorithmic music. His website (in French), musiquealgorithmique.fr, is a great introduction to algorithmic music and offers many detailed explanations on the subject. In 2023, we had many discussions, some excerpts of which are presented here in the form of an interview.
Jean-Louis Paquelin [jlp] — Hello Christophe. Today we’ve prepared an interview of sorts. But before we start, could you introduce yourself and talk about your practice?
Christophe Robert [cr] — My compositional practice focuses mainly on electroacoustic repertoires, and I’m interested in generative and algorithmic music. I’m also a doctoral student, working on the relationship between generative music and electroacoustic music, and on the question of form. I approach certain radical forms of generative music, more often found in the world of installation and sound art, from a musician’s perspective.
[jlp] — How did you become interested in generative music?
[cr] — This is a subject I really discovered at the Conservatoire de Nice, in the Pure Data music programming course run by Gaël Navard who, even before getting us to work on sound synthesis, showed us how semi-random sequences of notes, rhythms and so on could be generated. I was immediately fascinated by the possibility of not composing a fixed musical phrase, but deciding on a certain number of parameters, setting the range in which the values will develop, and letting the virtualities play a full part in the creation. The idea of generating a virtually infinite flow of music, too: like many people who are interested in these subjects, at the outset there’s a slightly childlike wonder at the automaton.
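The principle is easy to make concrete. Here is a minimal Python sketch of the idea (the original course exercises were in Pure Data, and the ranges chosen here are arbitrary): the composer fixes an ambitus and a set of rhythmic values, and the program draws the actual notes.

```python
import random

def generate_phrase(n_notes, pitch_range=(60, 72), durations=(0.25, 0.5, 1.0)):
    """Semi-random phrase generation: the composer decides the ranges,
    the program draws the values inside them."""
    phrase = []
    for _ in range(n_notes):
        pitch = random.randint(*pitch_range)   # MIDI pitch within the chosen ambitus
        duration = random.choice(durations)    # rhythmic value from an allowed set
        phrase.append((pitch, duration))
    return phrase

# Every call produces a different phrase: a virtually infinite flow.
print(generate_phrase(8))
```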
On the question of origin, we’re a bit on the therapist’s couch here… I’ve done a lot of interviews on my website (https://musiquealgorithmique.fr/entretiens/) with musicians working on these subjects, and it’s a question I ask them all the time: “where does it come from?”. It’s a fascinating question, but quite a difficult one, because the answer is always a bit speculative. So, as I was thinking about your questions in advance, several things came to mind. First of all, there’s a phrase that has always bothered me when I hear certain artists (writers, visual artists, musicians) say that what matters in the first place is “having something to say”. I’ve always had a problem with it, because I’ve never recognized myself in it. Even as a teenager, when creating comes as naturally as breathing, I never felt I had “something to say”. I’ve always had the impression that creation doesn’t necessarily have to do with this idea of individual expression, and that limiting oneself to this point of view misses an important part of the creative phenomenon. Generative music places a great deal of emphasis on the self-referential side of art, and also on a certain disengagement from the self, from myself as the subject of creation, and this is undoubtedly one of the reasons why the subject interests me.
Besides that (and this has more to do with the history of my ear), the sounds and types of music that have fascinated me since childhood, and that structure my musical taste to some extent, are: on the one hand, bells (long after childhood I discovered the combinatorics perfected by bell ringers, known as change ringing), and on the other, everything that is imitative counterpoint, fugues and canons. The word “canon” actually means “rule”, as this is music that can be summed up in its rule. So imitative counterpoint has a very strong automatic and procedural dimension.
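For readers unfamiliar with change ringing, its combinatorics can be sketched in a few lines of Python. Below is the simplest pattern, the plain hunt, in which each row is obtained from the previous one by swapping adjacent bells; real ringing methods are considerably richer.

```python
def plain_hunt(n_bells):
    """Rows of a plain hunt: starting from 'rounds' (1, 2, ..., n),
    adjacent pairs are swapped, alternating the starting position,
    until the sequence returns to rounds."""
    row = list(range(1, n_bells + 1))
    rows = [row[:]]
    for i in range(2 * n_bells):
        start = 0 if i % 2 == 0 else 1          # alternate which pairs swap
        for j in range(start, n_bells - 1, 2):
            row[j], row[j + 1] = row[j + 1], row[j]
        rows.append(row[:])
    return rows

for r in plain_hunt(4):
    print(r)   # 1234, 2143, 2413, 4231, 4321, 3412, 3142, 1324, 1234
```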
[jlp] — Before getting down to the nitty-gritty of generative music, can you give us a definition?
[cr] — The question of definition is a difficult one, and it would be better to use the plural and talk about generative musics. To begin with, there are two terms I use, algorithmic music and generative music, which historically have distinct origins.
By convention, the history of this question begins at the end of the 1950s. The first work is often said to be the Illiac Suite by Lejaren Hiller and Leonard Isaacson, in 1957: roughly speaking, the first computer-generated music. But there is a whole prehistory of algorithmic and generative repertoires, a “prehistory” precisely because of this conventional date. In Arabic-speaking countries of the Middle East, in particular, there’s a tradition of musical automata dating back to the Middle Ages. In the Baroque period, there’s also a whole tradition of combinatorics, for example with the figure of Athanasius Kircher, a polymath Jesuit who devised a didactic tool for automatically generating musical sequences in simple counterpoint. Then there’s change ringing in 17th-century England, which I’ve just mentioned, or the tradition of musical dice games in the Age of Enlightenment…
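The mechanism of those dice games is itself a tiny algorithm, and a hedged sketch fits in a few lines. Here the table of pre-composed measures is filled with placeholder indices; the historical games (such as the minuet game attributed to Mozart) used tables pointing to real bars of music.

```python
import random

N_BARS = 8

# Placeholder lookup table: measure_table[roll] lists, for each bar of the
# piece, the index of a pre-composed measure. Historical tables held real music.
measure_table = {roll: [random.randrange(100) for _ in range(N_BARS)]
                 for roll in range(2, 13)}

def roll_piece():
    """Assemble a piece by rolling two dice once per bar."""
    return [measure_table[random.randint(1, 6) + random.randint(1, 6)][bar]
            for bar in range(N_BARS)]

print(roll_piece())   # e.g. [42, 7, 93, ...]: a sequence of measure numbers
```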
So there are these two terms, algorithmic and generative music, which don’t emphasize the same thing, and each has its own limits and historical connotations. Generative music comes from Brian Eno, an unclassifiable British pop musician who is also known as the inventor of the concept of ambient music. There are also conceptual links between these two subjects, notably on the question of reception. Brian Eno was the first to release music not as a fixed recording, but as a computer program. At the time, the medium was a floppy disk containing a program that was executed each time with a different result. He thus emphasized that his work is not one of the program’s outputs, but the program itself. This is the most radical yet narrowest definition of the concept of generative music.
I would tend to take a broader view of the subject, including, for example, the stochastic music of Xenakis, i.e. the idea of using algorithmic tools to generate musical sequences and then reworking them freely. The term “algorithmic music” corresponds better to this other approach. It comes more from the French-speaking sphere, coined by a now largely forgotten composer, Pierre Barbaud, who was very interested in modeling music mathematically and computationally.
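As a loose illustration of that stochastic approach (a sketch of the general spirit, not Xenakis’s actual formulas), one can draw note onsets from an exponential distribution and pitches from a Gaussian, then rework the result by hand:

```python
import random

def stochastic_cloud(n_events, density=4.0, pitch_mean=60, pitch_spread=6.0):
    """Draw a cloud of note events from probability distributions,
    in the general spirit of stochastic music (not Xenakis's own models)."""
    t = 0.0
    events = []
    for _ in range(n_events):
        t += random.expovariate(density)                       # exponential inter-onset times
        pitch = round(random.gauss(pitch_mean, pitch_spread))  # Gaussian pitch scatter
        events.append((round(t, 3), pitch))
    return events

# The output is raw material, meant to be reworked freely afterwards.
print(stochastic_cloud(10))
```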
[jlp] — If we change the time scale, we can draw a parallel between melody and timbre. Is there any research into generative timbres as there is in generative music?
[cr] — Yes, absolutely. Historically, it came later: for obvious technological reasons, we started by generating sequences of notes and rhythms. That’s what Hiller and Isaacson did. I mentioned Xenakis: with Gendy 3 (1991), he was one of the first to create a piece in which the timbre itself is generative. We could also mention other figures, such as Roland Kayn, a musician who mobilized concepts from cybernetics in his music, using modular analog synthesizers to produce very, very long pieces, which can last up to ten hours, with this idea of inter-modulation between many oscillators, creating music that transforms itself through feedback over extremely stretched durations.
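The idea behind Gendy can be suggested with a deliberately loose sketch: a waveform is stored as a polygon of breakpoints, and the breakpoints themselves drift by random walk from one period to the next, so the timbre is generated rather than fixed. This is only the general principle; Xenakis’s actual algorithm also varies the breakpoint durations and uses elastic barriers.

```python
import random

def dynamic_stochastic_sketch(n_points=12, n_periods=200, step=0.05):
    """One waveform period is a list of breakpoint amplitudes; each new
    period, every amplitude takes a small random-walk step, so the
    spectrum slowly and endlessly transforms."""
    amps = [random.uniform(-1.0, 1.0) for _ in range(n_points)]
    samples = []
    for _ in range(n_periods):
        for i in range(n_points):
            amps[i] += random.uniform(-step, step)     # random-walk step
            amps[i] = max(-1.0, min(1.0, amps[i]))     # keep within [-1, 1]
        samples.extend(amps)                           # append the evolving period
    return samples

wave = dynamic_stochastic_sketch()   # raw samples; write to a WAV file to listen
```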
I think this generativity through “sound” is today the most active area of research in experimental music, particularly in combining AI and audio descriptors. And certain consumer tools based on machine learning, such as Riffusion, which works on a spectrogram representation, are also concerned with generating sounds rather than notes. This raises fascinating and difficult questions about the formalization and representation of sound.
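To make the representation question concrete: once a sound is reduced to a magnitude spectrogram (the image-like representation a tool such as Riffusion manipulates), the phase is lost and must be estimated on the way back to audio. A minimal round trip, assuming the librosa library is available and using a placeholder file name:

```python
import numpy as np
import librosa

# "example.wav" is a placeholder path for any audio file.
y, sr = librosa.load("example.wav", sr=22050)

S = np.abs(librosa.stft(y))        # magnitude spectrogram: the "image" of the sound
y_rebuilt = librosa.griffinlim(S)  # Griffin-Lim estimates the missing phase

# y_rebuilt is only an approximation of y: the spectrogram representation
# is lossy, which is part of what makes these questions difficult.
```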
[jlp] — In a document you sent me in preparation for this discussion, you wrote that a negative feeling could arise in the informed listener of generated musical content, because he or she knows how much creative work is normally invested in it. Implicitly, the fact that an algorithm is substituted for the musician would devalue the production. And yet visual artists often call on craftsmen to produce their work, and we don’t consider that this devalues it. I say this because I’m a visual artist. As a musician and researcher yourself, what can you say?
[cr] — First of all, I’m not really interested in whether music generated by algorithms or AI is as good, or worse, or better than music composed in the “traditional” way. I just assume that these tools exist and are used. I’m interested in the effects they produce. I’m not projecting myself at all into a discourse where these tools would replace something that was there before.
That said, I’m very interested in the question of reception, because I think it’s part of what defines generative art. I think that if we try to characterize what makes these types of works singular and will never be completely comparable to, say, Mozart’s Sonata in C major, it’s notably in the effects of reception, because, particularly when we listen to music generated in real time by a program, there really is a very specific reception. Until I find a better term, I call it the kaleidoscope effect. What you look at in the kaleidoscope, once you make the slightest movement, is lost forever. In music generated in real time, there’s a particular concentration, because you tell yourself that what’s happening there, this combination of timbres, of sounds, will never be reproduced identically, and this produces a kind of acuity of listening, a very strong effect of presence, as if the music were for you alone, that you were the only one listening to it, forever.
And at the same time, after a while, there’s a sort of nausea, because things could go on forever. There’s a “generative fatigue” that defuses attention. So there’s a double tension. That’s why I was talking earlier about Brian Eno and ambient music: ambient music also plays on this ambivalence to some extent, and Brian Eno defines it as music that you can listen to fully or just perceive as background music while doing something else at the same time.
Concerning your other question, you were talking about craftsmen, and why it doesn’t bother anyone that Damien Hirst has his works made by an army of assistants while it’s more shocking in music. I don’t really have an answer. You probably know Johannes Kreidler, since you work on Pure Data and he wrote a manual on the subject. Kreidler did a piece in which he literally delegated musical creation to composers based in China and India, in order to make a political statement about the offshoring of labor. What I’ve noticed, having spent a lot of time in conservatories and art schools, is that music hasn’t completely undergone its conceptual revolution. A certain number of things are taken for granted in art schools, notably the primacy of the idea. This is undoubtedly also linked to the difference between teaching music and teaching the visual arts. In conservatories, there’s a very strong emphasis on technical learning and craftsmanship: first you learn counterpoint and writing in the style of Fauré or Ravel to train your ear, and these are technically demanding studies, before you get to composition. This conditions a representation of creation as a highly individual, highly artisanal activity.
[jlp] — Artificial intelligence, as it is generally presented today, seems to be all about imitation. We’ve all seen the images produced by the artificial intelligences circulating on the web: imitations, photographs transformed in the manner of Van Gogh. Can we do the same for music?
[cr] — Yes, totally. It’s even one of the main dimensions of algorithmic and generative music, including in the most recent research. In machine learning, there’s something called style transfer, which consists of imitating a style by trying to capture what constitutes it across a set of works, but this is actually a fairly old question. I mentioned Lejaren Hiller and Leonard Isaacson earlier: in their first book, Experimental Music, they were already discussing the problems this raises. And imitation is also the basis of the Illiac Suite, which isn’t really a piece to begin with, but a scientific experiment. The hypothesis of this experiment is literally: “can we use the computer to generate music that will sound like music created by a classical composer?” In one of the experiments, they input all the rules of Fux’s counterpoint into the machine and ask: if we ask a machine to produce a random score that respects all the rules of counterpoint as laid down in the 18th century by Fux, will it sound like 18th-century music?
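A toy version of that experiment is easy to state in code: draw random lines and keep one that satisfies a handful of rules. The rules below are a drastically simplified subset, for illustration only, not Hiller and Isaacson’s actual rule set.

```python
import random

CONSONANCES = [0, 3, 4, 7, 8, 9, 12]   # consonant intervals in semitones (simplified)

def counterpoint_above(cantus, tries=10000):
    """Generate-and-test: random candidate lines are drawn until one
    respects the (toy) rules, echoing the Illiac Suite's hypothesis."""
    for _ in range(tries):
        # Rule 1: every note consonant with the cantus firmus.
        line = [c + random.choice(CONSONANCES) for c in cantus]
        # Rule 2: melodic leaps of at most a fourth.
        if all(abs(a - b) <= 5 for a, b in zip(line, line[1:])):
            return line
    return None

cantus_firmus = [60, 62, 64, 62, 60]   # a toy cantus in MIDI note numbers
print(counterpoint_above(cantus_firmus))
```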
Right from the start, there’s this idea of imitation. There’s also David Cope, one of the most important figures in generative music, whose entire work with EMI (Experiments in Musical Intelligence) consists of trying to imitate the style of Vivaldi, Rachmaninov, Mozart and so on. This framing is also predominant in the scientific literature, and it’s induced by the scientific method itself. In articles on generative music, the question is mostly not “how to make music with algorithms” (a question that is scientifically difficult to evaluate, as it requires a definition of what “music” is), but rather, more often than not, “how to make music that imitates existing music in an indistinguishable way”. And that’s much easier to measure.
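The most elementary form of such imitation, far simpler than Cope’s recombination techniques, is a Markov chain trained on an existing melody: the generated line reuses only transitions that occur in the source, so it “sounds like” the source without ever being it. A minimal sketch with a stand-in melody:

```python
import random
from collections import defaultdict

def train_markov(melody):
    """Record the first-order pitch transitions found in the source melody."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def imitate(table, start, length):
    """Walk the transition table to produce a new, source-flavored line."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(table[note]) if table[note] else start
        out.append(note)
    return out

source = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]   # stand-in for a real corpus
print(imitate(train_markov(source), 60, 12))
```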
I was referring to the long-standing problems raised by this question of imitation. First of all, what does a style mean? The example given by Hiller and Isaacson, if my memory serves, is Haydn. What are we talking about when we speak of Haydn’s style? Is Haydn’s style in his symphonies the same as in his string quartets, or his sonatas, or his oratorios? Let’s concentrate solely on his symphonies: he composed around a hundred of them. Can we say that it’s the same style for all of them? It might seem more reasonable to talk about the style of his so-called “Parisian” symphonies, or his “Sturm und Drang” symphonies. And if we continue with this logic, in the end the style will be one movement of one symphony, because the style of an adagio cantabile is not necessarily comparable to a minuet or an allegro vivace. If there is a recognizable style, it’s always in the middle ground. When we try to define the notion of imitation, we always end up introducing the idea of a deviation considered acceptable.
In fact, there’s a whole field of research going on right now in AI-generated music, about how algorithms can figure out for themselves to what extent what they’re producing is a new work or pure plagiarism. The question also arises on a legal level. Even in the most trivially imitative things (but we both know there’s nothing trivial about the underlying technology…), there’s always this idea of measuring some kind of acceptable gap between the new and the imitation of the old.
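One crude way to quantify that gap, offered here only as an illustration of the principle, is to measure how many short runs of the generated sequence already exist verbatim in the corpus; production systems use far more refined similarity measures.

```python
def ngram_overlap(generated, corpus, n=4):
    """Fraction of the generated sequence's n-grams found verbatim in
    the corpus: 0.0 suggests novelty, 1.0 suggests outright copying."""
    corpus_ngrams = {tuple(corpus[i:i + n]) for i in range(len(corpus) - n + 1)}
    gen_ngrams = [tuple(generated[i:i + n]) for i in range(len(generated) - n + 1)]
    if not gen_ngrams:
        return 0.0
    return sum(g in corpus_ngrams for g in gen_ngrams) / len(gen_ngrams)

# Every 4-note run of this "generated" line already exists in the corpus: 1.0.
print(ngram_overlap([60, 62, 64, 62, 60], [60, 62, 64, 62, 60, 59, 57]))
```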
There is also a highly experimental approach to algorithmic music. I mentioned Xenakis, whose idea was to take concepts from mathematics or particle physics, and bring them into the musical field to generate unheard-of sound morphologies that he wouldn’t have found on his own. In a way, I can’t see any other way of looking at an algorithm when you’re a musician: either it helps you model pre-existing musical logics, or it brings its own heuristics and helps you find things that are, if not new, at least out of the ordinary.
[jlp] — When you quote Xenakis and say how he was looking for new timbres, you leave the field of artificial intelligence and answer more broadly. But let’s come back to it. I’ve got one last question, perhaps a slightly forward-looking one: what can we expect or hope for from artificial intelligence in the field of music? Or, put another way: in your opinion, in what direction should progress be made?
[cr] — I’m not going to answer with a prescription in mind, but I can put forward some hypotheses, to get away from the reactionary attitude, from the great fear of algorithms that will kill artists, and so on. There are two main avenues. One is that of tools: artists have always used various technologies in their creations, for example painters who used a camera obscura, a camera lucida, a perspectograph, etc., and now we can consider that Stable Diffusion or DALL-E will be part of their toolbox in the image-construction process, among other technologies. This will only shift the field of creativity a little towards other things, while some tasks will be more willingly delegated to the algorithm.
And then there’s another way, very hypothetical at the moment, which would be to think that things will happen a bit as they did with photography. In the beginning, photography was appreciated and practiced in the same way as painting: people made portraits and landscapes. Then it became a medium in its own right, appreciated on its own terms rather than like a painting. At some point, will AI-generated images, and perhaps later AI-generated music, be appreciated as an art form in their own right? Will we end up developing a particular sensibility for these kinds of creations? Will they give rise to genuinely specific forms?
In this respect, one might well ask whether it would not be better to abandon the use of the word “music” to designate these creations.