
HOW?:

Performance Publicas are realized in a space through the audience performing with a system.

If you want, you can explore the novelties of I3.0.

 

Technology employed:

Iteration 3.0 uses a system programmed by Ruben Bass called "Publica", based on MAX/MSP. It combines video grabbing, a Game of Life circuit that uses a logic equation developed by Orhun Caglidil, constant matrix pixel analysis, dependent randomness generators with extensively tested conditional rules (resolving mathematical issues and reducing black screens by 80%), a clocking system with two different cycles, pixel change detection, a circular buffer of varying length, dynamic volume fading and limiting, real-time conversion of spectrogram data to 3D position data, Ircam's Spatialiser, real-time interpolation of virtual sound sources, generation-dependent buffer cropping, normalization to highlight and control sounds, a harmonic delay using regional change as chordal data, and many more data processing circuits. Its resolution is now variable up to 4096x4096 following a major GPU redesign to increase performance.
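
To give a rough idea of the kind of Game of Life circuit with conditional rules listed above, here is a minimal Python sketch; the actual Publica patch is built in MAX/MSP, the real logic equation is not reproduced here, and the re-seeding rule and threshold below are purely illustrative assumptions:

    import numpy as np

    def life_step(grid: np.ndarray, reseed_threshold: float = 0.02) -> np.ndarray:
        """One Conway-style generation with a guard against all-black frames."""
        # Count the eight neighbours of every cell (with wrap-around edges).
        neighbours = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        born = (grid == 0) & (neighbours == 3)
        survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
        new = np.where(born | survive, 1, 0)

        # Conditional rule (assumed): if almost everything has died,
        # re-seed sparsely instead of letting the screen go black.
        if new.mean() < reseed_threshold:
            new |= (np.random.random(new.shape) < reseed_threshold).astype(new.dtype)
        return new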

 

This results in a mutual dependency between the system and the audience; still, if someone decides not to move, they create an input of position data, which basically means it is impossible not to perform in front of the Publica system. Very broad outcomes are possible, and the limit mostly lies on the performer's side.

Overview: the four main sections of Iteration 3.0:

 

The visual section receives the camera input, analyses the image for the changes happening, sends this information to the other sections, and finally transforms the image depending on changing rules.
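
As an illustration of what "analysing the image for changes" can mean, here is a simple frame-differencing sketch in Python; the threshold and the grayscale assumption are illustrative, not Publica's actual values:

    import numpy as np

    def change_map(prev_frame: np.ndarray, cur_frame: np.ndarray,
                   threshold: int = 24) -> np.ndarray:
        """Boolean mask of pixels that changed noticeably (grayscale frames assumed)."""
        diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff > threshold

    def change_amount(mask: np.ndarray) -> float:
        """Fraction of the frame that changed; the value other sections receive."""
        return float(mask.mean())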

 

The audio section receives the microphone input of what is happening in the space, recording it by overdubbing it into a circular buffer, which loops the recorded material. Further, different circuits apply pitch shifting and delay to it, and the sound is spatialized in a 30-by-30-meter virtual space.
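
A conceptual sketch of such a circular buffer with overdubbing might look like this in Python (the real system works on MSP audio buffers; the feedback amount here is an assumed placeholder):

    import numpy as np

    class CircularOverdubBuffer:
        def __init__(self, length_samples: int, feedback: float = 0.6):
            self.buf = np.zeros(length_samples, dtype=np.float32)
            self.pos = 0
            self.feedback = feedback  # how much of the old loop survives each pass

        def process(self, block: np.ndarray) -> np.ndarray:
            """Play back the loop at the current position while overdubbing new input."""
            out = np.empty_like(block)
            for i, sample in enumerate(block):
                out[i] = self.buf[self.pos]
                # overdub: mix the new input with what is already in the loop
                self.buf[self.pos] = self.buf[self.pos] * self.feedback + sample
                self.pos = (self.pos + 1) % len(self.buf)
            return out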

 

The clocking section has two cycles: recording and generation. These cycles change the behavior of all other parts. During the recording cycle, the audience can see themselves while their input is recorded into the system. During the generation cycle, the system generates with the rules determined in the Conway section and expands the image and sound with its own generations. The length of both cycles varies according to the movements and sounds produced in the space.
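
A minimal sketch of such a two-cycle clock, assuming a simple mapping from activity in the room to cycle length (the actual mapping in Publica is not documented here):

    class TwoCycleClock:
        RECORDING, GENERATION = "recording", "generation"

        def __init__(self, base_seconds: float = 20.0):
            self.state = self.RECORDING
            self.base = base_seconds

        def next_cycle(self, visual_change: float, loudness: float) -> tuple[str, float]:
            """Flip between the cycles; more activity in the room means a longer cycle."""
            self.state = (self.GENERATION if self.state == self.RECORDING
                          else self.RECORDING)
            duration = self.base * (1.0 + visual_change + loudness)
            return self.state, duration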

 

The data processors/scalers section is not one coherent part but rather many small circuits with essential functions such as "cleaning" the data, audience-dependent randomness generation, interpolation, etc. There are too many different ones to list here.
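
As one representative example of these small circuits, a smoothing helper and a range-scaling helper could look like this (names and factors are illustrative only):

    def smooth(previous: float, new_value: float, factor: float = 0.1) -> float:
        """One-pole smoothing: move a fraction of the way toward the new value."""
        return previous + factor * (new_value - previous)

    def scale(x: float, in_lo: float, in_hi: float,
              out_lo: float, out_hi: float) -> float:
        """Rescale x from one range to another, similar to a [scale] object in Max."""
        t = (x - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)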

Link to the conceptual promo shown at the planetarium, Berlin (I. 1.2) (can be watched with some VR devices)

Overview of the signal flow of Iteration 3.0:

 

The feed from the (specialised: svpro, ultra-low-light, machine-vision) camera is now sent to the visual part. The Conway logic section determines, through changing rules, whether a pixel will remain on screen or turn to black (0). The image is mirrored, but this does not mean the image is symmetric: since the rules are applied to everything as one plane, new structures grow at the intersection of both halves. The result: broadly varying generative patterns are formed. The random generators which determine the numbers of the Conway logic equation are influenced by the sound and image present in the room at the time. This collective image of the audience, space, and system (now entirely GPU-based!) is then analyzed to determine at which positions the changes are occurring. These positions are then formatted around the ±XYZ axes (Z is now constantly generated from a sphere-warped FFT) and sent to Ircam's Spat, which simulates a virtual acoustic space in which speakers are placed. In venues with spatial sound systems, this makes it possible for the audience and the system to move the sound through the space simultaneously, in co-creation.
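
One plausible way to turn the detected changes into a ±XYZ position for the spatializer is to take the centroid of the changed pixels and map it into the 30-meter virtual space mentioned above; the sketch below is an assumption about that mapping, and the sphere-warped FFT that drives Z is only passed through as a value:

    import numpy as np

    def change_to_position(mask: np.ndarray, z_from_fft: float,
                           room_size: float = 30.0) -> tuple[float, float, float]:
        """Centroid of changed pixels, mapped to +-room_size/2 on X and Y."""
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return 0.0, 0.0, z_from_fft  # no change detected this frame
        h, w = mask.shape
        x = (xs.mean() / w - 0.5) * room_size  # left/right
        y = (ys.mean() / h - 0.5) * room_size  # front/back
        return float(x), float(y), float(z_from_fft)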

 

At the same time, the available microphones record into a circular buffer loop. The buffer's length now depends on the varying recording time, which itself depends on the change happening visually inside the room and on the volume of sounds, which depends on the rule generators, which finally depend on the audience... (+more dependencies). The buffer is now cleared, limited and normalized under conditions just as cross-dependent as its length, which reduces the possibility of simply creating extremely loud noise and gives you the possibility of a "fresh start" every other (dependent) minute.
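
A sketch of the clearing, limiting and normalizing pass over the loop buffer; the ceiling value and the clear flag stand in for the cross-dependent conditions described above:

    import numpy as np

    def condition_buffer(buf: np.ndarray, ceiling: float = 0.9,
                         clear: bool = False) -> np.ndarray:
        if clear:
            return np.zeros_like(buf)             # the "fresh start"
        peak = float(np.max(np.abs(buf)))
        if peak > 0.0:
            buf = buf * (ceiling / peak)          # normalise to the ceiling
        return np.clip(buf, -ceiling, ceiling)    # hard limit as a safety net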

 

During this process, many other circuits analyze the change of sound and image and generate dependent, dynamic timing for the clocking circuit, seeds for the randomness generators, "chordal information" for the harmonic delay, spectral analysis, volume limiting, dynamic volume fading… (+countless others). Most of the circuits are connected to at least two other circuits, ensuring cross-dependency within the system.
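
The mapping from regional change to "chordal information" could conceptually work like this; the regions, chords and tap ratios below are invented for illustration and are not Publica's actual tables:

    CHORDS = {                       # semitone offsets, purely illustrative
        "top_left": (0, 4, 7),       # major triad
        "top_right": (0, 3, 7),      # minor triad
        "bottom_left": (0, 5, 7),    # suspended
        "bottom_right": (0, 4, 7, 11),
    }

    def chord_for_change(change_per_region: dict[str, float]) -> tuple[int, ...]:
        """Pick the interval set of the region with the most visual change."""
        busiest = max(change_per_region, key=change_per_region.get)
        return CHORDS[busiest]

    def delay_pitch_ratios(chord: tuple[int, ...]) -> list[float]:
        """Convert semitone offsets to pitch-shift ratios for the delay taps."""
        return [2.0 ** (st / 12.0) for st in chord]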

As mentioned, there are two cycles of the system, determined by the clocking circuit. The recording cycle is the time when audio and video pass into the buffer and into the Conway logic circuit. When the recording cycle ends, the generation cycle starts: the Conway part generates new changes to be seen on the screens, the harmonic delay generates different pitch allocations to be heard, and Spat dynamically changes the positions of the sounds heard according to the changes seen on the screens.

Iteration 2 had an add-on that column-scanned the input image and converted it into MIDI data sent to internal or external MIDI instruments. This functionality had to be removed to improve performance; please get in touch if you are interested in integrating it into version 3.0 as well.
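
For anyone curious, the removed Iteration 2 add-on could be sketched roughly like this; the note and velocity mapping is an assumption, not the original implementation:

    import numpy as np

    def column_to_notes(column: np.ndarray, threshold: int = 200,
                        low_note: int = 36, high_note: int = 96) -> list[tuple[int, int]]:
        """Turn bright pixels in one image column into (note, velocity) pairs."""
        notes = []
        h = len(column)
        for y, value in enumerate(column):
            if value > threshold:
                # higher pixels map to higher notes; brightness maps to velocity
                note = low_note + int((1 - y / h) * (high_note - low_note))
                velocity = int(value / 255 * 127)
                notes.append((note, velocity))
        return notes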
