ScreenPlay Demo

Between 2013 and 2016 I was working solely towards my PhD thesis at the University of Salford, the focus of which was human-computer interaction (HCI) in music. Since then, however, amidst finding and starting a new job and ongoing teaching responsibilities, I have struggled to find the time to post about the results of my research over the course of those three years.

The focal point of my research has been the design and development of a unique and innovative interactive computer music system (ICMS), ScreenPlay.

There are three main approaches to the design of ICMSs: sequenced, generative, and transformative. Sequenced systems are usually tailored towards a lone user and afford the opportunity to orchestrate/arrange a predetermined composition from its constituent parts/loops. These systems are excellent for novice users due to the coherency of their musical output and their often simple and engaging user interfaces (UIs). However, they often deny the user any meaningful influence over the musical output generated by the computer. Incredibox is an excellent example of a sequenced ICMS.

Generative systems rely on an underlying algorithmic framework to generate musical responses to the input of the user. They are better suited to supporting multiple users simultaneously, but their musical output is often stylistically ambient, and there can be little discernible connection between the control actions/gestures of the user(s) and the resulting musical output – thus limiting the scope for user(s) to exert a tangible influence over the music. As a result, generative systems can struggle to engage users with a higher level of musical proficiency. Examples of generative systems include NodeBeat and Brian Eno’s Bloom and Reflection apps.

In most instances transformative systems are designed to respond to the incoming audio signal from a live instrument and transform the sound of the instrument through various means of manipulation. Many early ICMSs were transformative in nature, with the process of design and implementation often explicitly aligned with the composition and performance of a specific musical work. As a result, such systems are known as “score followers”; Pluton by Philippe Manoury is a prime example. The reliance of transformative systems on a level of instrumental proficiency means that contemporary examples are scarce, particularly in the context of electronic music.

A common trait among the three approaches to ICMS design outlined above is that the resulting systems often grant the user(s) influential control over only one or two distinct musical parameters/characteristics, while ignoring the immense creative possibilities offered by the multitude of others. The systems mentioned above are just a few examples of ICMSs that each exhibit the characteristics of only one of the three overarching design approaches: sequenced, generative or transformative. ScreenPlay seeks to combat this exclusivity of focus, which hinders the vast majority of ICMSs, by encapsulating and evolving the fundamental principles behind all three design models in a novel approach to ICMS design. It also introduces new and unique concepts to HCI in music: a bespoke topic-theory-inspired transformative algorithm, applied alongside Markovian generative algorithms, which breaks routine in collaborative improvisatory performance, generates new musical ideas in composition, and provides new and additional dimensions of expressivity in both composition and performance. Another of ScreenPlay’s unique design features is its multifunctionality: it exists as both a multi-user-and-computer interactive performance system and a single-user-and-computer studio compositional tool, affording a dedicated GUI to each individual involved in collaborative, improvisatory performance when in multi mode and hosting up to sixteen separate users through a single instance of Ableton Live in order to achieve this. The primary goal throughout ScreenPlay’s development cycle has been that the convergence of all these facets of its design should culminate in an ICMS that excels in providing users/performers of all levels of musical proficiency, and all levels of experience with ICMSs, with an intuitive, engaging and complete interactive musical experience.

As previously mentioned, one of the most novel inclusions in the design of ScreenPlay within the context of HCI in music is the topic-theory-inspired transformative algorithm. Topic theory, which was particularly prevalent during the Classical and Romantic periods, is a compositional tool whereby the composer employs specific musical identifiers – known as “topics” – in order to evoke certain emotional responses and cultural/contextual associations in the audience. In ScreenPlay the concept of topic theory is implemented in reverse: textual descriptors presented to the user(s) via the GUI describe the audible transformative effects that a variety of “topical oppositions” have upon the musical output of the system.

In total ScreenPlay affords the user(s) a choice of four “topical oppositions”, each of which is presented on the GUI as two opposing effectors at either end of a horizontal slider; the position of the slider between the two poles dictates the transformative effect each opposition has on the musical output of the system. The four oppositions are “joy-lament”, “light-dark”, “open-close”, and “stability-destruction”. The first acts by altering the melodic and rhythmic contour of a musical phrase/loop to imbue the musical output of the system with a more joyous or melancholic sentiment respectively, while the three remaining oppositions transform the textural/timbral characteristics of the music in various ways. To achieve the desired effect, as indicated by the position of the corresponding slider at the moment the transformation is triggered by the user, the algorithm underpinning the “joy-lament” opposition performs a number of probability-based calculations whose weightings change in accordance with the position of the slider, thereby altering the relative transformative effect. These calculations govern the application of specific intervals between successive pitches in a melodic line, with certain intervals favoured more heavily depending on the position of the slider; the increase/decrease in the number of notes within a melody, note duration and speed of movement between notes; and the overall directional shape of the melody – whether favouring upward or downward movement. The respective slider positions for the three textural/timbral oppositions – “light-dark”, “open-close” and “stability-destruction” – inform the parameter settings of a number of macro controls on Ableton Effect Racks, each of which is mapped to multiple parameters across numerous effects.
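The real weightings and intervals live inside the Max for Live patch, but the general idea behind the “joy-lament” calculations – weighted random interval selection, with the weights interpolated according to the slider position – can be sketched in a few lines of Python. All names, intervals and weight values below are illustrative assumptions, not ScreenPlay’s actual figures:

```python
import random

# Illustrative weight tables only; ScreenPlay's actual values differ.
# Intervals are in semitones; positive = upward movement.
JOY_WEIGHTS    = {4: 3.0, 3: 1.0, 7: 2.0, 2: 1.5, -2: 0.5, 5: 1.0}
LAMENT_WEIGHTS = {-1: 3.0, -2: 2.0, 3: 1.5, -3: 1.0, 1: 0.5, -5: 1.0}

def next_interval(slider):
    """Pick a melodic interval, weighted by the joy-lament slider.

    slider: 0.0 = full 'lament', 1.0 = full 'joy'. The two weight
    tables are linearly interpolated, so intermediate slider
    positions blend the two tendencies.
    """
    intervals = sorted(set(JOY_WEIGHTS) | set(LAMENT_WEIGHTS))
    weights = [slider * JOY_WEIGHTS.get(i, 0.0)
               + (1.0 - slider) * LAMENT_WEIGHTS.get(i, 0.0)
               for i in intervals]
    return random.choices(intervals, weights=weights, k=1)[0]

def transform(phrase, slider):
    """Re-pitch a phrase note by note, keeping its length."""
    out = [phrase[0]]
    for _ in phrase[1:]:
        out.append(out[-1] + next_interval(slider))
    return out

print(transform([60, 62, 64, 65, 67], slider=0.9))  # tends to rise
print(transform([60, 62, 64, 65, 67], slider=0.1))  # tends to fall
```

In the same spirit, the note-count, duration and directional-shape calculations would each be further weighted draws conditioned on the slider value.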

ScreenPlay’s GUI is currently designed as a TouchOSC template, with the playing surface mimicking that of an Ableton Push. When in multi mode, each user (up to sixteen in total) interacts with the system through a single Ableton Live set via an individual GUI, which grants them influential control over a specific part within the arrangement of the system’s musical output. In single mode, the user can control up to sixteen distinct parts from a single GUI, with the interface updating in real time to display the status of the currently selected part. To achieve this, large swathes of the three Max for Live devices that constitute the system are dedicated to facilitating the two-way transmission of OSC messages between Ableton Live and TouchOSC. The assignment of pitches to the “pads” on the playing surface of the GUI, according to the user-defined key signature/scale selection, is also currently undertaken by one of the three Max for Live devices. As such, I plan to develop a dedicated GUI in Lemur, which can process most of these tasks internally, thus reducing the demand on the CPU of the central computer system. Lemur would also facilitate a wired connection between the GUI and the central system – if so desired by the user(s)/performer(s) – thus negating the impact of weak/fluctuating WiFi on the reliability and fluidity of the interactive experience. While it is already possible to bypass the playing surface displayed on the GUI and use any MIDI controller to play and record notes into the system, it is still necessary to use the TouchOSC GUI to control the generative and transformative algorithms. Providing the option to bypass the GUI entirely, by affording control over these aspects of the system directly through the Max for Live devices themselves, is another intended development of ScreenPlay.
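To give a flavour of the two-way traffic involved, here is a minimal OSC sketch using the python-osc library. The addresses, IPs and port numbers are invented for illustration; ScreenPlay’s actual OSC namespace (and its Max-based implementation) differs:

```python
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Send state out to TouchOSC on an iPad (address and ports invented).
client = SimpleUDPClient("192.168.1.20", 9000)
client.send_message("/part/3/pad/12/colour", 1)  # e.g. light a pad on part 3's GUI

# Receive pad presses coming back from TouchOSC.
def on_pad(address, velocity):
    print(f"{address} pressed, velocity {velocity}")

dispatcher = Dispatcher()
dispatcher.map("/part/*/pad/*", on_pad)  # glob-style address matching
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```

Every pad press, slider move and interface update crosses the network like this, which is why moving the pad-to-pitch logic into the GUI itself (and offering a wired link) is attractive.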

ScreenPlay will undoubtedly be made publicly available at some point, although I am not yet sure when or in what form. As already pointed out, there are definite improvements to the design and functionality of the system that can be made – some more pressing than others – and I would like to take the time to refine these aspects of the system before releasing it.


Daft Punk | Tron: Legacy (End Titles)

This is a studio performance of Tron: Legacy (End Titles) by Daft Punk that I recorded back at the end of 2015 but, having been preoccupied with other things since then, have only just gotten round to uploading. In it I’m using a MicroBrute for the bass, a Volca Beats for the drums, a combination of six different patches on the Virus for the two lead sounds, and an Ableton Push and APC40 as controllers.

Volca CCtrl & MIDI Note Send/Receive Max for Live devices


Recently I performed a live set at an electronic music night in Manchester called Sound Visionaries – a fantastic audiovisual event showcasing an eclectic range of experimental and popular electroacoustic/electronic music from a host of great musicians.

I decided to take a simple setup consisting only of my laptop, APC40, Midi Fighter and Volca Beats, and thought it would be useful to make a couple of fairly basic Max for Live MIDI devices to give me more creative freedom and control when using the Volca.

The first of these was designed to facilitate the transmission of single MIDI notes from one track to another in Ableton, allowing me to alter the pattern of the kick drum as I was performing while an instance of Cableguys Volume Shaper 4 performed a sidechain-ducking action on the sub track that responded accurately to the changes I made in real time. It consists of two devices – one to send the chosen MIDI notes and the other to receive them – and up to 15 separate MIDI notes can be sent from one track to another. Multiple instances of both send and receive devices can coexist happily within a Live Set, provided no two send devices share the same active transmission channels – unless the intention is to have multiple MIDI triggers from different sources activating the same sidechain-enabled device.
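The devices themselves are Max patches, but the underlying routing logic is simple enough to sketch in Python. The class names, note numbers and channel scheme here are purely illustrative:

```python
# Illustrative sketch of the send/receive logic; the real devices are Max patches.
class NoteSender:
    def __init__(self, channels):
        # Map each MIDI note to be forwarded onto one of 15 transmission channels,
        # e.g. {36: 1} forwards the kick (C1) on channel 1.
        self.channels = channels
        self.receivers = []

    def on_note(self, note, velocity):
        if note in self.channels:
            for r in self.receivers:
                r.receive(self.channels[note], note, velocity)

class NoteReceiver:
    def __init__(self, sender, listen_channel):
        self.listen_channel = listen_channel
        sender.receivers.append(self)

    def receive(self, channel, note, velocity):
        # React only to our own transmission channel, so several sender/receiver
        # pairs can coexist in one Live Set without clashing.
        if channel == self.listen_channel:
            print(f"trigger ducking device: note {note}, velocity {velocity}")

kick_sender = NoteSender({36: 1})
sub_ducker = NoteReceiver(kick_sender, listen_channel=1)
kick_sender.on_note(36, 110)  # kick hit -> ducking device on the sub track fires
```

The per-channel filtering in `receive` is what allows multiple send devices to share a Live Set safely – or, deliberately, to share one channel and drive the same sidechain-enabled device from several sources.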

The second device allowed me to assign up to three MIDI CCs to each of eight macro controls before outputting them to the Volca, thereby enabling me to control a variety of its parameters from the APC40. I’ve since added further functionality that facilitates full velocity sensitivity for the Volca FM when played from an external MIDI controller.
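As a rough illustration of the mapping (the macro and CC numbers below are placeholders, not the Volca’s actual CC assignments):

```python
# Up to three MIDI CCs per macro; CC numbers here are placeholders only.
MACRO_CC_MAP = {
    1: [40],            # macro 1 -> one CC
    2: [41, 42],        # macro 2 -> two CCs at once
    3: [43, 44, 45],    # macro 3 -> three CCs at once
}

def macro_changed(macro, value, send_cc):
    """Fan a macro move (0-127) out to its assigned CCs on the Volca."""
    for cc in MACRO_CC_MAP.get(macro, []):
        send_cc(cc, value)

macro_changed(3, 96, send_cc=lambda cc, v: print(f"CC {cc} = {v}"))
```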

I’ve tidied both devices up a bit and they are now available to download for free from maxforlive.com, here:

Volca CCtrl

MIDI Note Send/Receive

Midi Fighter Multi-Sample on DJ TechTools

Midi Fighter Multi-Sample screenshot

You can now read about my Midi Fighter Multi-Sample Max for Live patch in an article I have written over on the DJ TechTools website.

The article provides easy-to-follow, step-by-step instructions detailing how to set up and use the patch in conjunction with the Drum Rack in Ableton Live, and the patch is now also available to download for free direct from the DJ TechTools site (provided you have a user account).

Alternatively, the patch is still available to download for free from maxforlive.com, with the most recent update including clearer, more straightforward setup instructions.

Midi Fighter Multi-Sample Max for Live patch

Some time ago a friend showed me this video, in which multiple samples are assigned to a single trigger button on a Midi Fighter. I’m not exactly sure how it works in this context, but the Note Length MIDI effect is present in the Drum Rack and is clearly responsible. Interestingly, if you listen for the moment when the camera-shot sample is triggered, the beginning of one of the other samples assigned to that trigger button can also be heard. In any case, I figured I could throw together a small Max for Live patch to perform a similar job while at the same time removing the audible overlapping of samples identified in relation to the camera shot.

Having finally found a bit of spare time over the summer, I designed a patch which allows up to three separate samples to be assigned to any of the trigger buttons on the Midi Fighter by applying a different output velocity to the triggered MIDI note depending on the duration for which the button is held. The velocity zone editor in a Live Rack’s chain list can then be used to set different samples to respond to specific output velocities from the Midi Fighter. All three velocities are customisable, meaning that the patch can also be used to trigger the same sample at different velocities by bypassing the use of the velocity zone editor in a chain list; the response speed for the triggering of each of the three samples is also customisable, with available options of fast and slow.
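The core trick – turning hold duration into output velocity – can be sketched like this in Python. The threshold times and velocities are assumptions standing in for the patch’s customisable settings:

```python
import time

# Illustrative thresholds; the patch's customisable velocities and its
# 'fast'/'slow' response speeds would correspond to different values here.
THRESHOLDS = [(0.15, 100), (0.40, 64)]  # (max hold time in seconds, output velocity)
LONG_HOLD_VELOCITY = 30

press_times = {}

def button_down(note):
    press_times[note] = time.monotonic()

def button_up(note, send_note):
    held = time.monotonic() - press_times.pop(note)
    for max_hold, velocity in THRESHOLDS:
        if held <= max_hold:
            break
    else:
        velocity = LONG_HOLD_VELOCITY  # held past every threshold
    # The Drum Rack's velocity zones then pick which sample answers this velocity.
    send_note(note, velocity)
```

One inherent trade-off, visible in the sketch, is that the note can only fire on release, once the hold duration is known – which is why the response speed of the triggering matters enough to be user-selectable.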

The patch can be downloaded here.

Any feedback would be greatly appreciated and I hope it will come in handy for some people.

ScreenPlay Test

A recent test of ScreenPlay, the interactive computer music system I have been developing as part of my PhD research into human-computer interaction in music at the University of Salford.

The TouchOSC graphical user interfaces hosted on the four iPads communicate with Ableton Live 9 via a series of Max for Live patches and function in much the same way as the Ableton Push MIDI controller: the button-matrix-style, grid-based playing surface can be locked to a specific key signature/scale or played chromatically, and any standard triad within the selected key/scale can be formed using the same hand-shape in any position on the grid.
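A simplified Python sketch of that kind of grid layout – each row offset by a fourth within the selected scale, as in the Push’s in-key mode, so the same hand-shape yields the same chord quality anywhere on the grid. The scale tables and offset are assumptions for illustration:

```python
# Scale intervals in semitones from the tonic; a small assumed subset.
SCALES = {
    "major":         [0, 2, 4, 5, 7, 9, 11],
    "natural minor": [0, 2, 3, 5, 7, 8, 10],
}

def pad_to_pitch(row, col, root=60, scale="major", row_offset=3):
    """Map a pad at (row, col) on an 8x8 grid to a MIDI pitch.

    Each row starts a fourth higher (row_offset=3 scale degrees) than
    the row below, mirroring the Push's in-key layout.
    """
    degrees = SCALES[scale]
    degree = col + row * row_offset          # position within the scale
    octave, step = divmod(degree, len(degrees))
    return root + 12 * octave + degrees[step]

# The same triad shape works anywhere on the grid:
print([pad_to_pitch(0, c) for c in (0, 2, 4)])  # C major: [60, 64, 67]
print([pad_to_pitch(2, c) for c in (0, 2, 4)])  # same shape, two rows up
```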

The next stage in the development of the system will be the introduction of both generative and transformative algorithmic procedures: the former reliant upon second-order Markovian processes, the latter altering the rhythmic and textural/timbral characteristics, as well as the pitch-classes, of the source material.
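For context, a bare-bones Python sketch of a second-order Markov process of the kind described: a transition table is built from the recorded source material, then sampled to generate new phrases. This shows the general technique only, not ScreenPlay’s eventual implementation:

```python
import random
from collections import defaultdict

def build_table(pitches):
    """Second-order Markov table: (prev2, prev1) -> possible next pitches."""
    table = defaultdict(list)
    for a, b, c in zip(pitches, pitches[1:], pitches[2:]):
        table[(a, b)].append(c)
    return table

def generate(table, seed, length):
    out = list(seed)  # seed = two consecutive pitches from the source loop
    for _ in range(length - 2):
        choices = table.get((out[-2], out[-1]))
        if not choices:  # dead end: restart from a random known state
            choices = table[random.choice(list(table))]
        out.append(random.choice(choices))
    return out

source = [60, 62, 64, 62, 60, 62, 64, 67, 64, 62, 60]
table = build_table(source)
print(generate(table, seed=source[:2], length=16))
```

Because each next pitch is conditioned on the previous two, the output inherits local figures from the source material while still wandering into new phrases.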