Writing

Audio-Visual Media Performance
Technology, Aesthetics and Challenges of a Hybrid Art Form

A paper presented at the Creating and Performing Musical Media UnConference
Anglia Ruskin University
June 16, 2012

Ben Neill
Associate Professor of Music Industry and Production
Ramapo College
Office BC 324
505 Ramapo Valley Road
Mahwah, NJ  07430
917 301 5603


Hybridity has become a term commonly used in cultural studies to describe conditions in contact zones where different cultures connect, merge, intersect and eventually transform… In the case of digital environments, we must also address communicative interaction in the convergence of real and virtual spaces. Digital hybridity works across and integrates a diverse range of modes of representation, such as image, text, sound, space and bodily modes of expression.  (Spielmann and Bolter 2006)

Multiplicity is induced by two processes: the instantiation of particular compositional elements and the establishment of transversal relations between them.  The media ecology is synthesized by the broken-up combination of parts. (Fuller 2005)

I.  INTRO

My creative approach revolves around the hybrid and the notion of synthesis.  Hybridity and multiplicity infuse both my technical and aesthetic sensibilities, and have resulted in the development of my electro-acoustic instrument, the mutantrumpet, as well as a large body of musical and interdisciplinary works.  Over the years my approaches to composition and performance have developed in conjunction with new technological discoveries.  This process can be described as a self-amplifying loop that feeds back on itself.   When new technological possibilities emerge, creative expression is influenced by those developments, and at the same time new aesthetic ideas influence the invention and exploration of new technological paradigms.  The mutantrumpet is an example of a hybrid, an amalgam of trumpet parts, electronic software and hardware assembled into a new entity.

Another consistent manifestation of hybridity in my work has been the use of visual media in conjunction with musical composition and performance.  My earliest pieces for mutantrumpet and analog synthesizer were made in the early 1980’s with artist/lighting designer Jim Conti.  The pieces were performed in a light environment created by glass-bottomed reflecting tanks which projected dynamic patterns on the walls of the performance space.  Even in these early works I had a strong interest in the formal connections between sound and visual media.

Contexts and Definitions

The idea of a hybrid visual/musical art form is not new.  Sir Isaac Newton formulated correspondences between the Western diatonic scale and the primary colors in the 17th century.  Scriabin and Kandinsky both explored synaesthetic experiences in their works during the early 20th century.  The term multimedia came into use in the 1960s to describe works that combined music with visual media in new ways.  Many of the early forms of multimedia utilized some type of live manipulation or improvisation.  As technology continued to develop, the capability of playing visual media using technologies initially developed for musical performance became more widespread.  This has led to a new art form that can be described as Audio-Visual Media Performance (AVMP).

AVMP is truly a hybrid form that does not fit into any pre-existing genre or medium.  A primary place in which this new art form has developed is the club cultures of electronic dance music.  The visual material is always presented in conjunction with music, usually by a DJ and a VJ.  In these cases the VJ may create visual material that is affected by the music, but there is no guarantee that the artistic agendas of the two artists will coincide.

While the idea of AVMP is not new, its history is overshadowed by the development of music in theater and in cinema.  Sound became an important part of cinema in the early 20th century.  Michel Chion’s widely used text on cinema sound, Audio-Vision, describes the relationship of the visual and aural in this way:

“Cinema is ‘a place of images, plus sounds.’  We classify sounds in relation to what we see.”

When combined with visual imagery, sound recedes to a secondary role.  This powerful archetype, which informs the interface of time-based visual media with sound, is a challenge to AVMP, which inherently relies on a more equal balance between sound and image.  Instead of narrative elements, AVMP frequently utilizes synchresis, “the spontaneous and irresistible mental fusion, completely free of any logic, that happens between a sound and a visual when these occur at exactly the same time” (Chion).

By nature, AVMP emphasizes synchresis in its execution, exploring a synaesthetic relationship of audio and visual media in which it becomes possible to see sound or hear color.  This is often what makes the performative aspect of it comprehensible to the audience.  However, this kind of approach can also come very close to what is known in film sound as “mickey-mousing,” where sound is used to reinforce a visual action by mimicking its rhythm exactly.

This approach is common in animated cartoons. If Mickey Mouse falls downstairs, the music will underline and exaggerate his comic fall, articulating every part of the event. Frequently used in the 1930s and 1940s, especially by Max Steiner, it is discredited today because of overuse. A typical example is found in Steiner’s score for Of Human Bondage (1934). Philip Carey, the hero, has a clubfoot. His limp is portrayed by a dragging, uneven rhythm on a descending second. Each time he walks with difficulty, it is to the accompaniment of that rhythm. Cartoons by Tex Avery offer another example of the mickey-mousing effect, in which every movement on the screen is reflected by an accompanying sound.

II.  Process vs. Product, the issue of perceptibility, and some challenges

Synchresis and synaesthesia are important in AVMP as a way of articulating a vocabulary that cuts across the boundaries of sound and vision.  One of the challenges with all interactive computer performance is to enable the audience to comprehend the interactivity while also making a satisfying artistic product that is not overly didactic or mickey-mouse in nature.  If there is not a direct connection between sound and image in AVMP, the performative element may go unnoticed.

Another major challenge in the field of live AVMP is that the tools are severely limited in comparison to the software and hardware used in technologically based visual media that does not require real-time control.  Unless the audience is engaged with the live interaction in a way that is similar to a musical performance, they may find the material lacking the refinement of production that pre-recorded video and motion graphics routinely possess.  This is another reason why perceptibility is crucial in the AVMP medium.  Indeed, one of the reasons AVMP has appealed to me is that it helps accentuate the interactive audio elements of my performances, making the relationship of acoustic/physical to electronic/virtual more perceptible.   All of these factors are combining to drive the emergence of a new type of intermedia performance practice for the 21st century, one that takes these constraints and challenges into consideration.

III.  Examples of my work with AVMP

My artistic output involving live audio visual performance has been quite diverse.  In the late 1980’s I created ITSOFOMO, a large scale music/video/text work, with the late artist and writer David Wojnarowicz.  This piece combined 4 video projections, spoken text and live acoustic/electronic music for mutantrumpet, percussion battery and interactive electronics.  ITSOFOMO used the phenomenon of acceleration as its primary structural device on many different micro and macro levels.  While the work did not use interactive visual material, we worked extensively on the relationship between the visual and audio elements, developing a vocabulary that cut across disciplines.  We created a score with narrative, music and images that employed complex ratiometric structures along with improvisation.  The work had a very angry, political edge that alternated with moments of deep reflection and sensitivity.

ITSOFOMO

In the 1990’s I continued to develop different approaches to working with visual elements in my musical performances.  I had found a new outlet for my work in the DJ/VJ culture of the early 1990’s, and the scene was very conducive to my pursuit.  It has been my goal to create works in which the composition and performance synthesize sonic and visual elements on multiple levels through technological implementation.
My first work that incorporated interactive visual material was Green Machine, an installation and performance piece originally presented at Paula Cooper Gallery and later released as a CD on Astralwerks, the premiere electronica label of the time.  In Green Machine I used the harmonic relationships of whole-number ratios to create all aspects of the music; the idea was to model processes in nature that have been examined by complexity and chaos theorists.  My interest in tuning systems and mathematical ratiometric structures had grown out of my work with minimalist La Monte Young and numerous other downtown composers.  The newly capable mutantrumpet created at STEIM in 1992 gave me the possibility of introducing randomness and dynamics into the programmed elements, making each performance unique while still maintaining the overall artistic shape.  For the visual element, I worked with engineers to build a MIDI-controlled slide projection interface in which brightness and the advancing of slides were controlled by MIDI commands.  Working again with Jim Conti, we used the imagery of artist Chrysanne Stathacos, who at the time was making direct prints of plants.  Slides of Stathacos’ images were created and implemented in a multi-projector setup controlled entirely from MIDI.  In the gallery installation, foot pedals were provided for visitors to introduce random elements in a way that mirrored my control from the mutantrumpet.

As for the programming of the visual material, there was no emphasis on synchronization.  The same numerical relationships and permutations of 6, 7, 8, and 9 were employed, but not in direct sync with the music.  The idea was to juxtapose different numerical patterns and loops to create a kind of audio-visual counterpoint with a loose structure, similar to patterning in nature.
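
The interlocking loops this kind of patterning produces can be sketched in a few lines of Python. This is a minimal illustration of the general principle, not the actual Green Machine programming: each integer defines a loop period, the layers drift in and out of phase, and they only fully realign after the least common multiple of the periods.

```python
from math import lcm

def pattern_events(period, length):
    """Onset times for a loop that repeats every `period` beats."""
    return [t for t in range(length) if t % period == 0]

periods = [6, 7, 8, 9]           # the integer series used in Green Machine
total = lcm(*periods)            # beats before all loops realign: 504
layers = {p: pattern_events(p, total) for p in periods}

# Moments where every layer coincides: only at the start of the full cycle,
# which is what gives the counterpoint its loose, slowly shifting structure.
tutti = [t for t in range(total) if all(t % p == 0 for p in periods)]
```

Running the four loops against each other, no two layers repeat their combined relationship for 504 beats, so the texture stays in constant flux while remaining strictly determined by the ratios.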

Documentation was a major issue with my work in AVMP until the past few years.  The only way to document these works was by using a video camera to record the projected images, which did not do justice to the 35mm quality of the slide projections.  With new digital software it is possible to record the performance internally as it happens, which is a great advantage.

Following on the Green Machine project, I began working with visual artist Bill Jones on a new set of pieces that incorporated MIDI controlled slide projections.  These pieces were created for my Triptycal album on Verve Records and the subsequent Sci-Fi Lounge tour with DJ Spooky and Emergency Broadcast Network VJ Gardner Post.  In these works we began exploring more direct control of the imagery by the trumpet, animating still images of light objects in real time.

Partially due to my desire to explore a more synchronized approach to sound and visual media, we also began working with MIDI controlled lighting devices which made it possible to use MIDI notes and dynamics to control brightness and articulation of incandescent lights.  This made it possible to explore the effects of synchronous sound and light composition directly in a way that did not utilize pictorial imagery.

I chose the series of integers 4, 6, 7, 8 for this piece, and the composition is purely algorithmic.  Long, gradual sonic crescendos mirrored by the brightness of the light objects were used to accentuate the way one data byte could manifest itself as both a sonic and visual event.  The result was the Pulse series of sound/light sculptural installations which were exhibited widely, including at Kettle’s Yard in Cambridge in 2000.  I also developed performances in the installation during which I would manipulate and improvise with the sequenced material, introducing chance or random elements based on my playing and control.
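
The idea of one data byte manifesting as both a sonic and a visual event can be made concrete with a short sketch. This is a hypothetical mapping for illustration, not the actual control code used in the Pulse installations: a single MIDI velocity byte (0–127) is scaled simultaneously into an audio gain and a dimmer level.

```python
def route_velocity(velocity):
    """Map one MIDI velocity byte (0-127) to both domains at once."""
    amplitude = velocity / 127.0               # audio gain, 0.0-1.0
    brightness = round(velocity * 255 / 127)   # light dimmer level, 0-255
    return amplitude, brightness

# A long crescendo is then just a ramp of velocities driving both outputs:
crescendo = [route_velocity(v) for v in range(0, 128, 16)]
```

Because both parameters derive from the same byte, a gradual crescendo in the music is mirrored exactly by the brightening of the light objects, with no separate synchronization layer.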

It was around this time that software applications began to appear which allowed MIDI control of digital video.  Jones and I began using Image/ine software from STEIM, and later the Modul8 VJ software.  The appeal of using interactive visual elements in performance increased as the technologies became more advanced.

The question then became, how do you play pictures and movies?  At this point my approach became more focused on the manipulation of images from the notes, dynamics and controllers of the mutantrumpet.  This is a composition from 1986 that was recently revisited with a live video component. Each bell of the instrument is associated with a different pitch set, timbre and visual element.  It’s a very basic and direct approach to AVMP.

For the next body of work the source material was a series of television advertisements for Volkswagen for which I created original music. In addition to expanding the 30 second audio clips into full songs, Bill Jones and I performed video remixes of the commercials as well. Automotive, a 2002 CD on Six Degrees Records with support from Volkswagen, showed that this kind of experimental artistic agenda could interface with advertising and branding.  This clip shows an excerpt of a show at the House of Blues, Chicago.  Bringing advanced technologies into the realm of popular music has been a theme of my work from the outset.

The work with commercials, and a desire to apply the interactive approach to a more narrative-based project, resulted in Palladio, a VJ movie with live music.   Based on the novel of the same name by New York Times writer Jonathan Dee, Palladio mixed samples of commercials with live-action footage, all manipulated in real time, to tell the story of an idealistic but ultimately ill-fated advertising agency/artist colony.

The centerpiece of Palladio is the Color Bar Remix, which used video color bars both as its visual material and as the structural basis for its sound, derived from the bars’ luminance values.  The video was played live by Jones from a MIDI keyboard.

As I have continued to refine my approaches to AVMP, a primary concern has been creating a more expressive approach to working with complex media systems.  This idea led me to produce a body of work which utilizes sampling of 19th century music in conjunction with new technologies, culminating in Persephone, a music theater piece presented at the BAM Next Wave Festival in 2010.  Persephone did not utilize interactive media performance, although video by Bill Morrison did play a big part in its presentation.

Posthorn, created with Bill Jones, is an exploration of the expressive possibilities of interactive performance.  It is a work that we developed continually for several years, and it became a vehicle for many experiments with programming.  It is based on the “posthorn solo” from Gustav Mahler’s Symphony No. 3, a choice made in order to emphasize an expressive, dynamic vocabulary.  The solo in the symphony is a pastoral meditation in which the movement of the piece stops and a distant, offstage trumpet is heard playing a simple, taps-like horn call.  Conductors and scholars have written extensively about this rather extreme section of the symphony, emphasizing its strong associations with memory, longing, and detachment.  In this work there are no pre-sequenced elements; the unfolding of both sonic and visual material is directly controlled by my performance.

For me, expressivity has continued to lead toward improvisation, which plays an important role in Posthorn. In many instances, interactivity seems to imply improvisation. The degree to which it is employed can vary widely, but some element of spontaneity is demanded by the new interactive interfaces that are being designed today in order to fully explore their potential.  With the advancement of new live performance software applications it is possible for artists to create structures that utilize a variety of media that are varied, enhanced and modified through improvisation. A composition becomes a set of possibilities rather than a fixed set of sounds and images. Posthorn employs this kind of open structure, while still adhering loosely to the melodic shape of the Mahler excerpt. The ideas of multiplicity and hybridity are central to the composition and performance of Posthorn. The MIDI information that is generated from my physical performance is applied to both the sonic and visual realms simultaneously in real time, with one set of data being interpreted by several different computer programs at once.  The feedback from the system pushes the piece into new areas each time it is performed.
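
The routing described above, in which one stream of performance data is interpreted by several programs at once, can be sketched as a simple broadcast bus. The names here are hypothetical, chosen only for illustration; the actual Posthorn system runs across multiple applications rather than one script.

```python
class EventBus:
    """Broadcast each incoming MIDI-like event to every registered interpreter."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def emit(self, event):
        # Every interpreter sees the same data and renders it in its own domain.
        return [h(event) for h in self.handlers]

bus = EventBus()
bus.subscribe(lambda e: ("synth", e["note"]))                     # sonic reading
bus.subscribe(lambda e: ("video_opacity", e["velocity"] / 127))   # visual reading

results = bus.emit({"note": 60, "velocity": 100})
```

The point of the design is that neither interpreter owns the data: a single physical gesture on the instrument propagates to sound and image simultaneously, and feedback between the resulting layers can then steer the performer’s next choice.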

The last clips I’ll show are from my 2009 CD Night Science, an album that was heavily influenced by my involvement with the emerging dubstep scene in New York City in the mid-2000’s.  Night Science is more in the realm of Green Machine, Triptycal, Goldbug and Automotive, combining improvisations on the mutantrumpet with electronic beats and basslines.  The video was created for presentation in clubs and music venues in conjunction with live performance.  The dynamics of the mutant are used to directly modulate the video, and the beats are programmed to trigger other visual events in synchronization with the music.  The connections are quite direct between sound and image.  In all of these later works the interaction with the video also impacts my choices of what to play sonically.  This is the kind of synthesis that I am always looking to create in which the audio and visual realms are completely merged.

In summary, while AVMP is a field that engages an international community of VJs and other live media artists, it faces significant challenges and questions about its future.  As Matthew Fuller states in Media Ecologies:

“The only way to find things out about what happens when complex objects such as media systems interact is to carry out such interactions – it has to be done live, with no control sample.” (Fuller 2005)

The process of collaboration across media and software platforms requires a tremendous amount of experimentation and tweaking of programming in order to perfect the integration of the disparate media.  As with all programming-based creative projects, a great deal of patience is required in order to get the various elements talking to each other in an effective way that is robust enough for live performance situations.

My experience with this medium leads to the following four points, which seem crucial to the future development of AVMP:

1.  Hybridization; defying norms of cinema and video art
In cinema, music has a secondary role to narrative and visual elements.  In video art, sound is sometimes elevated to a stronger role and narrative is less important, but the primarily visual context of museums and galleries naturally tends to put more focus on the visual.  These archetypes of cinema sound are very difficult to break, as they represent patterns of expression that have persisted for decades.  New aesthetic paradigms must be asserted in which audio and visual elements are treated as equals.

2.  Defining context
What is the context for AVMP?  How is it best represented as a media artifact?  As a document?  An interactive iPhone app?  The non-narrative, abstract, ambient quality of much AVMP seems to work better in a club or other social atmosphere than as something to be passively viewed.  Will this ultimately doom the art form or will audiences evolve new ways of experiencing art?

The popularity of electronic music events today is promising, and AVMP is an essential part of these large-scale electronica events, providing the visual component to DJ-based performances.

3.  Perceptibility and improvisation
There needs to be some element of visible connection between the visual medium and either a physical performance gesture or some other type of control, such as automatic synchronization with music.  This is the crux of the art form, and it is what distinguishes it from studio video and film production.  Interactivity implies improvisation by nature, so an improvisational approach needs to be considered.

4.  More complex interactions
Multi-layered, complex interactions that can accommodate dynamics and subtle gestural control need to be developed to enrich the vocabulary of AV performance.  This will ensure that the medium continues to develop and engage audiences rather than relying on oversimplified, predictable interaction.  Artists must forge new vocabularies of collaboration across media, using structural systems that have not yet been invented.

REFERENCES

Cooke, Grayson.  2010.  “Start Making Sense: Live Audio-Visual Media Performance”.  International Journal of Performance Arts and Digital Media 6(2): 193–208.
Collins, Nick.  2003.  “Generative Music and Laptop Performance”.  Contemporary Music Review 22(4): 67–79.
Cott, Jonathan.  1974.  Stockhausen: Conversations with the Composer.  London: Picador, 120.
Fuller, Matthew.  2005.  Media Ecologies.  Cambridge: MIT Press, 1, 16.
Luening, Otto.  1968.  “An Unfinished History of Electronic Music”.  Music Educators Journal 55(3): 48.
Spielmann, Yvonne, and Jay David Bolter.  2006.  “Special Section Introduction: Hybridity: Arts, Sciences and Cultural Effects”.  Leonardo 39(2): 106–107.

Inevitable Improvisations - article in Vague Terrain, an online journal

INEVITABLE IMPROVISATIONS

Submitted by Ben Neill on Sun, 10/11/2009 – 12:00

[Ben Neill & LEMUR / The Stone, New York City, 2008 / Photo: Joy Garnett]

Over my years of experience as a composer and performer I have often felt that the discourse of digital/electronic art has been too strongly focused on technical issues. While digital music will always be at least somewhat defined by the parameters of the systems being utilized, my attention has been more geared toward the aesthetics of digital art and particularly the relationship of those aesthetics to technology. The feedback loop is a good metaphor for this relationship; as new technological possibilities emerge, artists are influenced by those developments, while at the same time new aesthetic ideas influence the creation of new technologies. The question for me has been: what are the most important aesthetic tendencies to emerge out of the recent landscape of digital music and media?

While early electronic music was informed by the aesthetics of the classical avant-garde, the invention of synthesizers by Moog and Buchla opened the door to the broader realms of jazz and popular music, which frequently utilized improvisation. Even Columbia-Princeton composers Otto Luening and Vladimir Ussachevsky relied on improvisation in their early works:

‘Equipped with earphones and a flute, I began developing my first tape-recorder composition. Both of us were fluent improvisors and the medium fired our imaginations.’1

Luening describes their appearance on the NBC Today Show:

‘I improvised some [flute] sequences for the tape recorder. Ussachevsky then and there put them through electronic transformations.’2

The improvisatory approach to electronic music was utilized by a broad spectrum of musicians from the jazz (Joe Zawinul, Sun Ra, Miles Davis, Herbie Hancock), rock (Keith Emerson, Pink Floyd, Pete Townshend, Soft Machine) and contemporary classical fields (Stockhausen, David Tudor, Musica Elettronica Viva, Sonic Arts Union). These artists all employed improvisation in their work with synthesizers and electronic processing.

[Ben Neill / mutantrumpet / photo: Eric Calvi]

In my formative years as a composer/performer I explored improvisation, but did not feel compelled to make it the central focus of my work. Since I had developed a high level of virtuosity as an instrumentalist, I felt that in improvisatory settings there was a tendency to fall back on familiar techniques rather than developing totally new approaches to performance, which was my goal in developing my hybrid acoustic-electronic instrument, the mutantrumpet. I was very attracted by the systematic musical structures of minimalist music, and I embraced a more conceptual attitude to composition, relying on various processes to take me away from playing what I wanted to play at any given moment. This was also due to my work with Petr Kotik’s S.E.M. Ensemble performing the music of John Cage, often with the composer’s supervision. With Cage’s music, it was all about not doing what you want to do, and following a predetermined directive that was specifically designed to take you away from your own desire-based choices about what to play at any given time.

Despite my propensity for notated pieces, improvisation did creep into my works from the very beginning. The reason for this is the subject of this paper. In my experience, improvisation is inherently the most compelling approach for performing live with interactive systems. The degree to which an artist improvises can vary widely, ranging from DJs whose improvisation consists of choosing which song to play next, to interactive systems such as Wii-controlled music that generates more random musical materials based on gestures. There are as many approaches to improvisation with electronic/digital media as there are artists employing it, and certainly predetermined composition can co-exist with music that is created on the fly.

Why does improvisation seem to make so much sense in this style of musical performance? Here are several reasons:

  1. Audience awareness: The improvisatory action has an intentionality that is often more readable by an audience than the interpretive act of performing a notated piece; the impulse of the performer is directly translated into the musical material, and this can be made more perceptible through gestures. It becomes more theatrical or dramatic when it is spontaneous. Silence is also crucial in improvisatory performance; it creates the articulation. The process of discovery in front of an audience is at the heart of the experience of interactive performance, and helps to break down the barrier between audience and performer.
  2. The nature of electronic performance: The processes of interactive performance require improvisation for full investigation and exploration. Only by playing with a new instrument can one truly find and explore its potential, not by executing prescribed material. Notation of interactive music is also problematic, limiting the possibilities that a performer might otherwise discover through spontaneous choices. Digital music systems have advanced to the point that responses can be highly complex, so improvisation with the machine creates a new form of narrative in which human and computer truly exist in a feedback loop, feeding off of each other in a one-to-one relationship.
  3. Freedom in music: With the advancement of new live performance software applications such as Ableton Live, Max/MSP, and SuperCollider, it is possible for a composer to create structures that are varied, enhanced and modified through improvisation. This approach seems to be the most evolved and contemporary way to create live electronic music, one that takes into consideration the musical developments of the last several decades. After a century of jazz and other improvised musics which have freed musicians from the tyranny of a conventionally notated score and rigid organizations such as orchestras with conductors, incorporating freedom into the live electronic medium makes total sense as a methodology. Perhaps because digital media has promoted freedom in so many other ways, it is logical that an element of freedom would also be demanded in live performance.

Certainly others have expressed important ideas about improvisation in electronic music. The work of George Lewis in particular has been pioneering in this area, and I am grateful to him as well as to my collaborators David Behrman, Nicolas Collins, Eric Calvi, and David Rothenberg for helping to point me in this direction.