
Markov Chains in Semi-Realtime Beat-Based Electronic Music Performance

In this project, I will create generative tools in Max/MSP that are flexible and reliable within the musical themes of an electronic music album. I will also design an interface to moderate the algorithmic output; together, these techniques will form a generative electronic music performance and album. By developing tools that let future chains of musical events be auditioned while a sparse introduction plays, the user will be able to ‘select’ the most convincing future outputs, moderating the performance in real time with human ears. I will begin by applying this idea to Markov chain interfaces, and then investigate whether it is necessary to explore other options such as genetic algorithms or Bayesian filtering. Ultimately, I intend the sound content to be convincing and exciting in the context of the electronic music being released today. I will therefore not limit myself to sound design within Max/MSP, as I have before, and will consider compatibility with VSTs, live vocals and instruments.

Although my past generative music projects in Max/MSP were successful to an extent, they often felt unconvincing, for several reasons. At that point I was beginning to explore Markov chains, creating generative chains of probability for the melodic, rhythmic and sonic elements of the music. For example, I created Markov chains to vary the pitches in a bass line. By studying the conventional use of minor scales in techno, I hoped that my melodic chains would sound as convincing as those found on the records being released today. It was important to include unique, outlier possibilities with lower chances of occurring in order to make room for variety; however, these outliers often worked against the piece, appearing at moments where they simply didn’t sound effective. I have concluded that the user has two options: either build exhaustively complex, high-order probability chains to reach a greater level of depth and realism, or moderate the output of simpler chains in a humanizing, selective fashion. Both approaches will be explored, though it may be necessary to consider alternative algorithms as the project progresses.
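
To make this concrete, below is a minimal sketch of the kind of first-order pitch chain described above, written in Python rather than Max/MSP for readability. The A minor transition table and its weights are illustrative assumptions, not the values from my original patches; note the small weights given to the outlier jumps.

```python
import random

# First-order Markov chain over MIDI bass pitches in A minor (illustrative
# values only). Common stepwise moves get high weights; rare 'outlier'
# jumps get small ones, to make room for variety.
TRANSITIONS = {
    45: {45: 0.50, 47: 0.25, 48: 0.15, 52: 0.08, 57: 0.02},  # A1
    47: {45: 0.40, 48: 0.40, 50: 0.15, 55: 0.05},            # B1
    48: {45: 0.35, 47: 0.30, 50: 0.30, 53: 0.05},            # C2
    50: {48: 0.45, 52: 0.35, 45: 0.15, 57: 0.05},            # D2
    52: {50: 0.40, 48: 0.30, 45: 0.25, 57: 0.05},            # E2
    53: {52: 0.50, 50: 0.40, 45: 0.10},                      # F2
    55: {53: 0.45, 52: 0.40, 47: 0.15},                      # G2
    57: {55: 0.45, 52: 0.35, 45: 0.20},                      # A2
}

def next_pitch(current: int) -> int:
    """Sample the next bass note from the weighted transition row."""
    row = TRANSITIONS[current]
    pitches, weights = zip(*row.items())
    return random.choices(pitches, weights=weights, k=1)[0]

def generate_line(start: int = 45, length: int = 16) -> list[int]:
    """Generate one bar of 16th-note bass pitches from the chain."""
    line = [start]
    for _ in range(length - 1):
        line.append(next_pitch(line[-1]))
    return line

if __name__ == "__main__":
    print(generate_line())
```

Raising the chain to a higher order would condition each row on the previous two or three pitches rather than one, which is exactly where the table grows exhaustively complex, hence the appeal of the second, selective approach.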

Much generative music requires no human performer beyond starting the software. As already mentioned, this method has problems: the realism of the output is limited by the analytical skills of the programmer. One way to address this is a continuous flow of generated music that can be heard ahead of time, on headphones or as MIDI content, while the performance continues. This works in the same fashion that DJs use their headphones, skipping ahead to the cue point of the next track they are going to transition into. For example, during the sparse introductory stages of the performance, the performer will be thinking forward into the piece, sorting the convincing from the unconvincing, thus organizing an electronic music performance in real time and humanizing the output of the computer.
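
As a rough illustration of that ‘cue ahead’ workflow, the Python sketch below pre-computes several candidate futures and lets the performer audition and choose one. The audition callback stands in for routing a candidate to a headphone cue bus, and the inline generate_line is a stand-in for the Markov sampler in the previous sketch; the names and values are assumptions for illustration.

```python
import random

# Stand-in for the Markov sampler from the previous sketch: any function
# returning a list of MIDI pitches will do here.
def generate_line(length: int = 16) -> list[int]:
    return [random.choice([45, 47, 48, 50, 52]) for _ in range(length)]

def generate_candidates(sampler, n: int = 4, length: int = 16):
    """Pre-compute several possible futures while the current section plays."""
    return [sampler(length=length) for _ in range(n)]

def select_future(candidates, audition):
    """Audition each candidate future and let the performer pick one.

    `audition` stands in for routing a candidate to the headphone cue bus
    while the main output continues; here it simply prints the line.
    """
    for i, line in enumerate(candidates):
        audition(i, line)
    choice = int(input("Queue which candidate next? "))
    return candidates[choice]

if __name__ == "__main__":
    futures = generate_candidates(generate_line)
    chosen = select_future(futures, lambda i, line: print(f"[{i}] {line}"))
    print("Queued:", chosen)
```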

Max/MSP is a highly accessible programming language for representing music as data, but designing convincing sounds for electronic music in it requires substantial understanding; the musical output of a generative system built in Max/MSP is therefore limited by its creator’s knowledge of sound design. Consequently, I often had to use samples downloaded from the internet to compensate for the lack of depth in the sounds I was creating, which paradoxically felt less authentic because the sounds were not my own. I will combat this problem by using Max/MSP as a platform for the tools that generate the musical data, while the sound content is produced in Ableton, for example through the Serum VST. This separates the mathematical nature of generative music from the sounds produced by VSTs within Ableton. Being more experienced in designing sounds within VSTs and DAWs, I will have far greater control over the sounds I produce, whilst maintaining a mathematical interface through which the structure and distribution of events is controlled.
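
As a sketch of that separation, the following Python uses the mido library (my choice here for illustration; the actual tools will use Max/MSP’s own MIDI objects) to emit note data only, with all synthesis happening in Serum on the receiving Ableton track. The port name is an assumption and should match whatever virtual MIDI bus (an IAC bus on macOS, a loopMIDI port on Windows) is routed into Ableton.

```python
import time
import mido

# Name of a virtual MIDI bus routed into an Ableton track hosting Serum
# (an assumption; substitute your own bus name).
PORT_NAME = "IAC Driver Bus 1"

def play_line(pitches, bpm: float = 130.0, gate: float = 0.5):
    """Send generated pitches as 16th notes; synthesis happens in Serum."""
    step = 60.0 / bpm / 4.0          # duration of one 16th note in seconds
    with mido.open_output(PORT_NAME) as port:
        for note in pitches:
            port.send(mido.Message('note_on', note=note, velocity=100))
            time.sleep(step * gate)  # hold for the gated part of the step
            port.send(mido.Message('note_off', note=note))
            time.sleep(step * (1.0 - gate))

if __name__ == "__main__":
    play_line([45, 45, 48, 45, 50, 48, 45, 52])
```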

Considering that this project will result in a generative musical performance as well as a fixed piece, I believe the generative elements will act as an augmentation of the performer. As a point of reference, I worked on a project in my second year of undergraduate studies, my only experience of performance-based Max/MSP work so far. First, I created a structural backbone in the same way that I plan to here. Then, by using Max/MSP to map hardware MIDI controllers to musical parameters, I was able to influence the direction of the music and add improvisations where they were needed. Not only did that piece feel more convincing musically, but the performance aspects gave it energy and fluidity, and it was generally far more effective than my ‘one-click’ Max/MSP pieces. I will assess techniques found in other generative performances as well as my own, and integrate them within my performance. Done correctly, this should help capture the audience’s attention and prevent the work from being accessible only to an academic audience.
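
A minimal sketch of that kind of controller mapping, again in Python with mido rather than Max/MSP’s ctlin object; the CC number and the ‘outlier weight’ parameter it drives are illustrative assumptions:

```python
import mido

CC_OUTLIER = 21          # assumed knob: scales the weight of rare transitions
outlier_amount = 0.5     # 0.0 = only common moves, 1.0 = full outlier weight

def handle_cc(msg):
    """Map an incoming controller knob (0-127) onto a chain parameter."""
    global outlier_amount
    if msg.type == 'control_change' and msg.control == CC_OUTLIER:
        outlier_amount = msg.value / 127.0
        print(f"outlier weight -> {outlier_amount:.2f}")

if __name__ == "__main__":
    # Listen on the first available controller (name is machine-specific).
    with mido.open_input(mido.get_input_names()[0]) as port:
        for msg in port:
            handle_cc(msg)
```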

Overall, this project will focus on researching effective algorithms, and on building interfaces to those algorithms for musical performance. It will cover different methods of using these pieces of software in structural, algorithmic and improvised ways. My ultimate goal, however, is to maintain a balanced and effective relationship between the code and the sonic outcome. Too often I have seen myself and others focus on making a conceptually complex patch that is ultimately inaccessible to music lovers outside academia. This is of course open to interpretation and personal taste, but I believe that if I organize the sound content first and work the code around those themes later, I can ensure a cohesive relationship between the audio content and the algorithms producing it. The result will be a dynamic, high-energy, musical experience, regardless of the methods behind it.

