The VJacket: Enabling the VJ as Performer with Rhythmic Wearable Interfaces

The VJacket is a wearable controller for live video performance. Worn by the performer or visual artist, it uses the output of its integrated sensor system to control video effects and transitions, trigger clips, and scrub frames. The sensors detect body movements such as bending, touching, and hitting, and can send OpenSoundControl or MIDI messages wirelessly to various VJ programs, bringing the rhythmic movement of dance to computer interaction.


Introduction to VJing

A visuals jockey, or VJ, is the visual analog of the disc jockey, or DJ. As the DJ selects, combines, and mixes music, the VJ uses tools to dynamically change the visual appearance of a space, often in concert with the music and the crowd. Performers who are more affiliated with the art world than the electronic music scene may label themselves visualists or live cinema artists rather than VJs. VJing can be seen as an evolving practice that offers a fertile platform for experimentation in material, form, content, presentation formats and displays. [1] Although several festivals dedicate themselves to VJing culture, clubs and nightlife remain its primary platform; in the club setting, VJs use video, film projections, lights and even smell to accompany a DJ’s music set and to interact with their audience. [2]

Often the VJ uses short clips from disparate sources like archival films, photography, and computer-generated animations, which can be looped, remixed, combined, and arranged in countless ways for unlimited visual possibilities. Like the DJ, the VJ operates live, manipulating different media formats according to content mapping, visual expression, or temporal and spatial montage. [3]

Several software packages are available that can handle a wide variety of visual material. VJs often use MIDI controllers to trigger their clips, presets or effect layers more easily. These typical setups bind the VJ to the computer, restricting him mostly to a seated position behind one or more screens.

Enabling the VJ as Performer

VJing is a rhythmic manipulation of moving video and animation. A mouse and keyboard are not ideal rhythmic controllers because they are rigid and unforgiving. A VJ commonly struggles to scrub a video rhythmically with a mouse: the mouse is too precise in position and too slow in response for rhythmic gestures, which are innately imprecise in position but very precise in tempo and response. Knobs, sliders and touchpads are more responsive than a mouse, but they still require tiny movements for grand rhythmic gestures and demand precise positioning and attention (most knobs can only be turned with two fingers at once: a precise movement).

A more natural controller for such tempo-centric manipulation would mimic the movements of dancing, since dancing is the ultimate form of rhythmic expression. We see this concept in several new experimental VJ interfaces, including the maraca-like interface 'Rhythmism,' [4] and the 'WiiJ Video' interface, which uses the Nintendo Wii remote to control video through sweeping, rhythmic hand gestures.

We sought to extend this gestural concept beyond the hands to the whole body, enabling a VJ to control video simply by dancing. We embedded sensors into clothing to ensure minimal encumbrance from the interface. The VJacket allows for wide, imprecise movements with a precise rhythmic response. The VJ no longer has to fumble for knobs and buttons or look at the screen to be sure he is clicking on the right thing; he is free to control the video using his body movements alone.

Because the VJacket is wireless, the VJ is free to interact with the audience and musicians on stage, or even to walk through the crowd – something most hermit-like VJs rarely experience, since they are often relegated to the back corner of the club behind the video inputs and lighting controls. With a wireless system, a VJ becomes not just an engineer behind the curtain, but an actual live performer – one whose movements are directly connected to the video projections. The audience can see the VJ's gestures in connection with the video, and thus becomes more interested in the performance itself.

Reeves et al. explain the importance of “expressive latitude” in a performance: the stylistic gestures a performer makes “that are not directly sensed by the instrument.” [5] Wearing the VJacket, the performer transforms these superfluous gestures into direct manipulations of the instrument – indeed reducing the expressive latitude, yet simultaneously merging that expression into the movements of the video performance. The bend of the arm, the slap on the stomach, and the wave of the hand are no longer purely showmanship but are given meaning in the visual context provided by the on-screen projections, synthesizing the performance of the body with the performance of the technology. “Such gestures are an essential element of deliberately performing interactions for others to see and appreciate, expressing skill and control and introducing an aesthetic component to the use of technology.” [2]

It is our ultimate goal to connect the VJ with the audience, since doing so will create a more legitimate space for VJing within the performance community, and we hope to encourage more people to try VJing themselves. With consumer video production becoming ubiquitous and projectors becoming cheaper and smaller – even integrated into our cell phones and cameras – soon every rock band, DJ and karaoke bar will have its own VJ. If they don't, someone with a pocket projector and mobile VJ setup will guerrilla VJ anyway. This is the future we envision, and we are trying to shape it with the VJacket.

Free the people

In recent years, several research projects have focused on the controllability of live performance, such as the multitouch surfaces ‘reacTable’ by Sergi Jordà [6] and ‘VPlay’ by Stuart Taylor. [7] However, other projects spotlight the performative quality of the devices by freeing the artist from his ‘behind the screen’ position and allowing unfettered physical movement on a stage or within the audience. Representative examples make use of neural interfaces, like the ‘Nervixxx’ by Tokuhisa, [8] or camera-based motion tracking, like the ‘Kinect + Visual Synth’ project by VJ Fader.

We combined ideas from percussion-based devices, such as the maraca-based ‘Rhythmism,’ [4] which transforms a simple percussion instrument into a VJ performance system, and from more precise expressive instruments, such as the ‘Djammer,’ a handheld device used for ‘air-scratching’ that lets the DJ leave his turntables and move around the nightclub. [2] The combination gives the VJ a wide range of control possibilities: traditional scratching and fading alongside percussive elements for triggering effects or clips.

The Dutch performance artist Eboman created the ‘SenSorSuit,’ a suit that he uses for staged body performances, mixing live video and a network of sensors spread over his body. The SenSorSuit is full of bulky sensors and other equipment which must be taped onto the body. In contrast, we tried to make the VJacket as unobtrusive as possible, enabling the performer to just slip it on and switch it on with very little preparation time, making spontaneous performance more feasible and more comfortable. Unlike the SenSorSuit, which is a complete performance instrument, the VJacket is meant to be used in conjunction with other interfaces: it can provide basic functions that the VJ uses most often (e.g. global effects, clip loading, mixing and scratching) and keep them easily accessible, leaving the more complicated, lesser-used functions to other controllers, such as a piano keyboard, sequencer, or even the bane of live performance: the mouse/keyboard on the laptop.

Hardware Considerations

The VJacket uses a Bluetooth Arduino microcontroller board to wirelessly relay sensor data to the computer. Just as the printing press enabled the masses to experience art, the Arduino and other DIY open-source movements have enabled the masses to make their own cybernetic art and personalized technology – creating “products for a market of one person... You don't need personal fabrication in the home to buy what you can buy because you can buy it. You need it for what makes you unique, just like personalization,” as Neil Gershenfeld said in his 2006 TED Talk. We wanted to see where other people could take the VJacket idea, so we made the hardware and software open source and wrote tutorials on how to make your own VJacket – from building the circuits to mounting the sensors on clothing to using the software to control any aspect of a performance – and posted them to various DIY communities. We also gave workshops to urban youths to empower them as creative entrepreneurs in their communities. [9]

To encourage this extreme personalization, we made the VJacket hardware modular, with the capability to choose exactly which kinds of sensors to use. The original version had bend sensors in the elbows to control visuals by bending the arms, a linear (ribbon) potentiometer in the lapel to offer finer finger control, and photoresistors and piezo hit sensors for more percussive movements, but the circuits and software were designed to allow any combination of sensors.

For example, in a later iteration of the VJacket designed for step dancer Geoffrey Frimpong, we used only percussive piezo sensors, so he could slap different parts of his jacket during the dance for a dominantly rhythmic, staccato audiovisual performance. Kevin Brito’s dance style is smoother and more flowing, so he preferred the bend sensors and slide sensors during his performance. With such customizations, each sensor in a person’s VJacket is an extension of that person’s style: just as an expensive suit is tailored to follow the contours of an individual’s body, the VJacket’s sensors are placed to capture and accentuate the performer’s natural style, creating a highly personalized instrument.
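To make the data flow concrete, a minimal Arduino-style firmware sketch could read one bend sensor and one piezo hit sensor and stream the readings to the computer over the board's Bluetooth serial link. This is an illustrative sketch, not the original VJacket firmware: the pin assignments, baud rate and threshold are placeholder values.

// Illustrative VJacket-style firmware: one continuous channel (bend sensor)
// and one percussive channel (piezo), streamed as "channel value" lines.
const int BEND_PIN  = 0;          // analog input: bend-sensor voltage divider
const int PIEZO_PIN = 1;          // analog input: piezo hit sensor
const int PIEZO_THRESHOLD = 80;   // ignore readings below the sensor noise floor

void setup() {
  Serial.begin(57600);            // the Bluetooth link appears as a serial port
}

void loop() {
  int bend  = analogRead(BEND_PIN);    // 0..1023
  int piezo = analogRead(PIEZO_PIN);   // 0..1023

  // Continuous channel: always report the bend value.
  Serial.print("0 ");
  Serial.println(bend);

  // Percussive channel: only report hits above the noise floor.
  if (piezo > PIEZO_THRESHOLD) {
    Serial.print("1 ");
    Serial.println(piezo);
  }

  delay(10);                      // roughly 100 readings per second
}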

Slayden et al. found during user testing with experienced DJs that “the interface had to be operated in a manner where one could develop good proprioceptive sense for and development of muscle memory, much like any musical instrument.” [2] With the VJacket, the body becomes the instrument, bridging the proprioceptive gap and shortening the learning curve. We found during user testing that this learning curve shortens even further when the feedback is clear and expressive. Kids who tried on the VJacket picked up the interface quickly; we noticed that when the sounds were off, the players had a harder time discerning which movements triggered a visual effect. With the instant and obvious feedback of sound combined with visuals, however, they quickly mastered the VJacket.

Software Considerations

Once you have the sensor data from the hardware, you need a way to control your performance software with it. Many projects convert Arduino sensor information into messages that control various software programs, including Maxuino, Funnel and the NETLab Toolkit, among others. However, these are mostly libraries that require programming experience, expensive development tools (e.g. Max 5 for Maxuino and Adobe Flash for the NETLab Toolkit), and a good deal of development and testing time to use in a project. Consequently, many beginning users face a steep learning curve before they can start creating.

To make Arduino projects easier and faster to realize, we designed the Arduino2OSC software to accept and process multiple types of sensor input, using live sensor readouts to adjust the behavior on the fly. The basics are covered: scaling input and output values, adding cutoff thresholds, filtering to smooth errant sensor data, and envelope triggers to automate gestures. Once all the sensor inputs are configured to produce the desired data, the output can be routed as Open Sound Control or MIDI messages, two standards supported almost universally by VJ and audio software, such as “Arkaos Grand VJ, Max/MSP/Jitter, Reaktor, Ableton Live, Propellerheads Reason, Supercollider, Kyma, Processing, OpenFrameworks, etc. Using OSC, you can even send the messages over the LAN for networked performances.” [9]
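On the host side, the per-channel processing described above can be sketched as a simple chain: scale the raw reading, gate it below a cutoff threshold, smooth it, and emit the result as an OSC message. The following C++ fragment is a minimal illustration of that chain using the open-source oscpack library, not the actual Arduino2OSC code; the OSC address, port and constants are assumed for the example.

// Illustrative host-side chain in the spirit of Arduino2OSC:
// scale, threshold, smooth, then send the value as an OSC message.
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

const char* HOST = "127.0.0.1";   // machine running the VJ software (example)
const int   PORT = 9000;          // OSC port the VJ software listens on (example)
const int   BUFFER_SIZE = 1024;

// Map a raw 0..1023 reading to 0..1, silence it below a cutoff threshold,
// and low-pass filter it to remove sensor jitter.
float process(int raw, float threshold, float alpha, float& smoothed) {
    float scaled = raw / 1023.0f;
    if (scaled < threshold) scaled = 0.0f;     // cutoff threshold
    smoothed += alpha * (scaled - smoothed);   // exponential smoothing
    return smoothed;
}

void sendOsc(UdpTransmitSocket& socket, const char* address, float value) {
    char buffer[BUFFER_SIZE];
    osc::OutboundPacketStream p(buffer, BUFFER_SIZE);
    p << osc::BeginMessage(address) << value << osc::EndMessage;
    socket.Send(p.Data(), p.Size());
}

int main() {
    UdpTransmitSocket socket(IpEndpointName(HOST, PORT));
    float smoothed = 0.0f;
    int raw = 612;                             // stand-in for one serial reading
    float v = process(raw, 0.05f, 0.2f, smoothed);
    sendOsc(socket, "/vjacket/bend/left", v);  // example address mapping
    return 0;
}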

Another important aspect of the software is the ability to adjust the sensor values on the fly, while preparing for or even during a performance. Sensor data can change slightly from performance to performance (variables include the strength of the battery, bend sensors slowly becoming permanently bent, and ambient light in a venue affecting the light sensors), and the desired behavior can change too (the act could suddenly call for a “bleep” sound where there was a “bloop,” a cut where there was a fade, etc.), so the performer needs a way to adjust for unforeseen changes without opening the development environment and changing code. We built the interface to be completely customizable while it is running, with customizable presets that can be loaded for different sections of a performance.
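In this spirit, a preset can be as simple as a per-channel record of range, threshold, smoothing and OSC address that is reloaded between sections. The field names below are illustrative, not the actual Arduino2OSC preset format.

#include <string>
#include <vector>

// Illustrative preset data: one mapping record per sensor channel,
// swappable at runtime without touching any code.
struct ChannelPreset {
    std::string oscAddress;    // e.g. "/vjacket/piezo/chest" (example address)
    float inputMin, inputMax;  // raw sensor range observed at this venue
    float threshold;           // cutoff below which the channel stays silent
    float smoothing;           // 0 = raw data, closer to 1 = heavier filtering
};

struct Preset {
    std::string name;                     // e.g. "opening act", "encore"
    std::vector<ChannelPreset> channels;  // one entry per sensor input
};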

Outlook

The VJacket was designed to be a completely mobile, wearable instrument that empowers performers and expands the performing space beyond traditional music venues, galleries and museums, extending the context for VJing from the performance space into the audience space. No longer is the VJ relegated to the back of the room or confined to the stage; he can also be a participatory audience member projecting his own performance as a highly individualized interpretation of the music – an extension of dance enabled by fashion.

For this reason, we are developing a new version of the Arduino2OSC software, called Sensorizer, which will be available on mobile platforms (Android) as well as standard desktop platforms. Sensorizer will connect the VJacket with a mobile phone or tablet wirelessly, and use the same OSC/MIDI bridge to turn the phone into a completely mobile performance instrument: either projecting visuals via a pocket projector, or creating sounds and sequencing music. Since Sensorizer is based on the well-defined OSC protocol, other mobile developers can write audiovisual performance apps to create new uses for the VJacket and contribute to a growing resource for “guerrilla VJs.”

Developers are already beginning to write games and performance software that use OSC to control all aspects of their functionality. Games like ‘Miserable Pile of Secrets’ and ‘Savestate’ [10] open a new avenue of interface exploration by connecting unrelated systems, or codebending: “video games can play other video games, and become generative art. Oscillators can drive web browsers. Poetry can write music.” [10] Indeed, the VJacket can also be used as an input to these systems, enabling the wearer to write generative poetry by hitting the jacket to change the word choices, or to play Tetris by bending his arms. We wait excitedly to see what other people will choose to control with their own VJackets, and hope that one day personalized fashion technology and mobile performance computing will permeate our society in a community-driven collaboration on stage, in the streets, and at home.

References and Notes: 
  1. A. Dekker, “Synaesthetic Performance in the Club Scene,” Conference paper, Computational Semiotics, University of Teesside, Great Britain (2003).
  2. A. Slayden et al., “The DJammer: ‘Air-Scratching’ and Freeing the DJ to Join the Party,” CHI 2005, Portland, Oregon (2005).
  3. A. Engström, M. Esbjörnsson, and O. Juhlin, “Mobile Collaborative Live Video Mixing,” MobileHCI 2008 (ACM, 978-1-59593-952-4/08/09, 2008).
  4. S. Tokuhisa, Y. Iwata, and M. Inakage, “Rhythmism: a VJ performance system with maracas based devices,” ACE '07 (ACM, New York, NY, USA, 2007), 204-207, doi:10.1145/1255047.1255089.
  5. S. Reeves, S. Benford, C. O’Malley, and M. Fraser, “Designing the Spectator Experience,” CHI 2005 (ACM, 2005), 741-750.
  6. S. Jordà, et al., “The reacTable: Exploring the Synergy between Live Music Performance and Tabletop Tangible Interfaces,” TEI '07 (Baton Rouge, Louisiana, USA, 2007).
  7. S. Taylor, et al., “Turning the Tables: An Interactive Surface for VJing,” CHI 2009 (Boston, MA, USA, April 8th, 2009).
  8. S. Tokuhisa, “Nervixxx: A Video Performance System with Neural Interfaces,” ACHI '09 (2009), 156-163.
  9. A. Nigten, ed., Real Projects for Real People (Rotterdam: V2_ Publishing, 2010), 193-197.
  10. Paperkettle Web Site, http://www.paperkettle.com/ (accessed May 25, 2011).