Performance Publica
DOCUMENTATION PORTAL PROJECT:
1 The initial plan and the “reality”:
The initial plan was to expand and improve Performance Publica Iteration 2.0.
Because the planetarium failed “at the last second”, it was not possible to make Performance Publica happen. The planetarium could suddenly no longer handle any live visual input. As the piece is entirely based on generating in real time from the audience, a conversion to fixed media that merely gives the illusion of influence would be entirely against the concept. In order not to cancel (because the venue itself carries real worth for the CV of the performance), an edit of Performance Publica (which was performed in 2022 at the House of Music in Budapest) was prepared to at least present the concept in some form.
2 The concept:
Short description (what):
Performance Publica is a type/format of performance in which the audience becomes the performers. It is about and from the audience. The research process started from the conviction that the collective subjective processes of the beholders, added together, become more relevant than the artwork itself. To establish the audience as performers, an auto-generative, dynamic feedback system is employed that constantly co-creates with them. The combined image of the people and the space, and the sounds they emit, change the system, which in turn expands them on its own as well. Conway’s Game of Life became especially interesting when looking for ways to keep the rules of the system minimal and leave the people as much freedom as possible. With a cellular automaton at its base, the system is expanded by varying rules that depend on the people’s inputs and by matrix scanning to spatialise sounds through the space in real time. Through its technical nature, Performance Publica enhances the beholder’s capabilities. The environment becomes performable with space, light, sound, and the co-creation of the system.
Short why? The individual processes of the people taking part are the most important element.
The research began in the summer of 2022 with the artist statement:
“Art is not about the entity created by artists but first the reactive process of each individual who can perceive the artwork and finally the combined processes of each of these observers.” Ruben Bass
Or elaborately:
“Art is about the collective added metaphysical value to an entity by its beholders. The processes had by and with the entity inside the beholders are at some point greater than what the artists could ever initially imagine; as the collective thoughts, reactions, associations, felt emotions, and memories re-experienced, at some point become greater in amount and in each of the criteria’s factors than the creation process of the entity and the entity itself.” Ruben Bass
As a piece of art is observed by many individuals, the value and reason why it would be art become far greater than any “genius-individual” could ever imagine, as the collective experience will always be more significant in volume and factors than the individual one.
If the personally added metaphysical value is seen as the main factor and core difference between art and other fields, it becomes of utmost strategic relevance. To create metaphysical value, as many personal subjective processes as possible must be enabled. Therefore, the main objective becomes to create as much openness for personal experience as possible.
It could be argued that the most relevant factor of art is the process of the beholders (“Betrachter”) rather than the product of the artist. From this perspective on art, the need for a different form can be derived: a form that enables the beholders to take an immediate part in the very core process, and thus the identity, of art itself, and that makes this process more accessible. A moment where you are in dialogue with the process itself and able to impact and change what you experience.
If you are interested in a deeper conceptual and theoretical exploration of Performance Publica, you can read the paper on it here: https://www.rubenbass.com/performancepublicapaper
And the elaboration on why to pursue a specific form of interactive and generative art here: https://www.rubenbass.com/cce-paper
How?
Performance Publica is realized in a space with the audience performing with a system. The system, called "Publica", is built in Max/MSP and combines video grabbing, a Game of Life circuit that uses a logic equation developed by Orhun Caglidil, matrix analysis, column scanning, dependent randomness generators, a clocking system with two different cycles, pixel change detection, a circular buffer of varying length, dynamic volume fading and limiting, spectrogram-to-3D position data, IRCAM's Spat, a harmonic delay using pixel change as chordal data, and many more data processing circuits.
This results in a mutual dependency between the system and the audience; even someone who decides not to move still creates an input of position data, which means it is basically impossible not to perform in front of the Publica system. Very broad outcomes are possible, and their limits mostly lie in the performers' will to collaborate.
The visual part receives the camera input, analyses the image for changes, sends this change data to the other sections, and finally transforms the image according to changing rules.
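To illustrate the kind of change detection meant here (a minimal Python/NumPy sketch, not the actual Max/MSP circuit; the grayscale frame format and the threshold value are assumptions):

```python
import numpy as np

def frame_change(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.05):
    """Compare two grayscale frames (values 0..1) and report where they differ.

    Returns a boolean mask of changed pixels plus the overall amount of change,
    which downstream sections could use to adjust their rules.
    """
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    mask = diff > threshold        # pixels considered "changed"
    amount = float(mask.mean())    # 0.0 = static image, 1.0 = everything changed
    return mask, amount
```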
The audio part receives the microphone input of what is happening in the space, recording it by overdubbing into a circular buffer, which loops the recorded material. Further circuits apply pitch shifting and delay to it, and the sound is spatialized in a 30-by-30-meter virtual space.
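The overdubbing idea can be modelled roughly as follows (a simplified Python sketch, assuming mono audio as NumPy arrays; the pitch-shifting and delay stages of the real patch are omitted and the gain value is an assumption):

```python
import numpy as np

class CircularOverdubBuffer:
    """Loops its contents and layers new input on top of what is already there."""

    def __init__(self, length_samples: int):
        self.buffer = np.zeros(length_samples, dtype=np.float32)
        self.pos = 0

    def process(self, block: np.ndarray, overdub_gain: float = 0.7) -> np.ndarray:
        out = np.empty_like(block)
        for i, sample in enumerate(block):
            out[i] = self.buffer[self.pos]  # play back the loop
            # mix the incoming sample with the existing material (overdub)
            self.buffer[self.pos] = self.buffer[self.pos] * overdub_gain + sample
            self.pos = (self.pos + 1) % len(self.buffer)  # wrap around: circular
        return out
```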
The clocking system has two cycles: recording and generation. These cycles change the behavior of all other parts. During the recording cycle, the audience can see themselves while recording input into the system. During the generation cycle, the system generates with the rules determined in the Conway section and expands the image and sound with its own generation. The length of both cycles varies according to the movements and sounds produced in the space.
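In rough pseudocode, such a clock could look like this (an illustrative Python sketch; the mapping from room activity to cycle length is an assumption, not the patch's actual formula):

```python
def next_cycle_lengths(movement: float, sound_level: float,
                       base_record: float = 20.0, base_generate: float = 40.0):
    """Return (record_seconds, generate_seconds) for the next pair of cycles.

    movement and sound_level are normalized 0..1 measures of activity in the room;
    in this sketch more activity shortens recording and lengthens generation.
    """
    activity = 0.5 * (movement + sound_level)
    record_len = base_record * (1.5 - activity)      # 0.5x .. 1.5x the base length
    generate_len = base_generate * (0.5 + activity)  # 0.5x .. 1.5x the base length
    return record_len, generate_len
```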
The data processors/scalers are not one coherent system but rather many small circuits with essential functions such as cleaning data or generating randomness dependent on the audience. There are too many different ones to list here.
The image of the camera filming everything is mirrored (because of the image warping, so that the whole surface of the dome shows the audience moving in the space) and is sent to the Conway logic circuit, which then determines through changing rules whether a pixel will remain on screen or turn to black (0), resulting in broadly varying generative patterns. The random generators that determine the numbers of the Conway logic equation are influenced by the sound and image present in the room at the time. This collective image of the audience, space, and system is then analyzed for the positions at which changes are occurring. These positions are formatted around the ±XYZ axes and sent to IRCAM's Spat, which simulates a virtual acoustic space with speakers placed exactly as in the real Sound Dome of the House of Music. This makes it possible for the audience and the system to move the sound through the Sound Dome simultaneously, in co-creation.
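To give an idea of the mechanism (a compact Python/NumPy sketch; the actual rule equation by Orhun Caglidil and the Spat message format are not reproduced here, so the rule sets and the coordinate scaling below are assumptions):

```python
import numpy as np

def life_step(grid: np.ndarray, birth={3}, survive={2, 3}) -> np.ndarray:
    """One generation of a Game-of-Life-style rule on a binary grid.

    birth/survive are the variable rule sets; changing them per cycle
    (here they would be driven by sound and image input) yields very
    different generative patterns.
    """
    # Count the 8 neighbours of every cell by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(n, list(birth))
    stays = (grid == 1) & np.isin(n, list(survive))
    return (born | stays).astype(grid.dtype)

def changes_to_positions(old: np.ndarray, new: np.ndarray, size_m: float = 30.0):
    """Map changed cells to XYZ positions in a virtual room of size_m metres."""
    ys, xs = np.nonzero(old != new)
    h, w = old.shape
    x = (xs / w - 0.5) * size_m   # left/right
    y = (ys / h - 0.5) * size_m   # front/back
    z = np.zeros_like(x)          # height kept flat in this sketch
    return np.stack([x, y, z], axis=1)
```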
At the same time, the microphone records into a circular buffer loop whose length varies depending on the number of generation cycles already completed by the system. In this way, the audience can layer sounds further and further, with the loop increasing in length until the buffer is cleared, so that the constant recording of the space does not turn into an extremely crowded sound and new ideas can be developed together with the system.
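As a rough model of that length schedule (illustrative Python; the growth factor and the clearing condition are assumptions, not the values used in the patch):

```python
def buffer_length(cycles_completed: int, base_seconds: float = 8.0,
                  growth: float = 1.5, max_seconds: float = 60.0) -> float:
    """Buffer length for the next recording cycle.

    The loop grows with each completed generation cycle so layers can accumulate,
    and resets to the base length once it would exceed max_seconds, standing in
    for clearing the buffer before the sound becomes too crowded.
    """
    length = base_seconds * (growth ** cycles_completed)
    if length > max_seconds:
        return base_seconds  # cleared: start layering again from a short loop
    return length
```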
During this process, many other circuits analyze the changes in sound and image and generate dynamic timing for the clocking circuit, seeds for the randomness generators, chordal information for the harmonic delay, spectral analysis, volume limiting, dynamic volume fading, and more. Most of the circuits are connected to at least two other circuits, so there is a lot of codependency within the system itself.
As mentioned, there are two cycles of the system, determined by the clocking circuit. The recording cycle is the time when audio and video pass into the buffer and into the Conway logic circuit. When the recording cycle ends, the generation cycle starts: the Conway part generates new changes to be seen on the screens, the harmonic delay generates different pitch allocations to be heard, and Spat dynamically changes the positions of the sounds according to the changes seen on the screens.
3 The process:
3.1 Development towards version “3.0”:
The process of creating the next iteration of Publica (3.0 Zeiss) started with investigations into 360° camera integration. The idea was to work with equirectangular 360° video instead of the input of conventional framed cameras, in order to deliver an image of the whole room to the system, following the concept of creating a collective image of the room and the people inside it.
The brightness of the room was not a problem in previous functional versions, which took place in the House of Music, Budapest. The dome there employs cinema projectors, which are so bright that they illuminate the whole room. Thus the image grabbed from an in-house security camera yielded clear and clean results even at minimal brightness, and it was possible to film the whole space from the top (at least all the people).
Based on this experience, 360° camera integration seemed like the logical next step to be able to work with different areas of the collective image.
Unreal Engine experiments, the philosophy of a far horizon, and what this means for the image:
Following the idea of integrating 360° vision, several experiments inside Unreal Engine were conducted. When thinking about how 3-dimensional worlds work inside a dome, it became apparent that many contemporary approaches treat it like a VR environment where VR glasses are worn, which provides a visual experience very close to the recipient. Displayed in a dome, such things yield drastically different results: the image is quite far away from a recipient, who no longer has this closeness but rather the feeling of watching a horizon that starts quite far from them. In this sense the 3-dimensional screen of the planetarium loses part of its 3D quality, as things are seen further away and therefore appear flatter. Furthermore, the concept itself does not need a 3-dimensional world either. It seemed interesting to treat the screen differently and use it in a way that also makes use of its specific shape, rather than pretending it would be just like a VR experience.
Bochum Hackathon: lessons learned
Dante worked without problems with all speakers. Live video input was possible in several ways; the most convenient was scaling the output image accordingly and sending it via NDI over Ethernet.
Getting off the Unreal horse:
The problem with using Unreal quickly proved to be the impossibility of integrating the complete system with all its functions: it quickly became apparent that it would not yet be possible to integrate all the parts of the “Publica” system in Unreal, because its audio processing is still rudimentary when it comes to specific parts the system needs.
Furthermore, the use of Unreal seemed less useful because the test in Bochum revealed that light levels are a severe problem. The Zeiss Velvet projectors mostly produce below 900 lux. Therefore, the 360° cameras that would otherwise still have been feasible, such as the Insta360 One R or similar models (which had been researched before), no longer made sense to use. This led back to using a more traditional camera that could handle the light situation, and to using the dome as a surface to augment together with everyone inside it, rather than creating a simulated space.
The difficult camera question:
With this, the month-long hunt for an ultra-low-light camera began. After comparing 20+ different sensors, 3 lens-mount types, and 15+ lenses for these mounts, the choice fell on the Sony STARVIS line. Communications were started with e-con Systems, one of the largest camera integrators.
After extensive conversations, I learned that the camera ideally needed a global shutter, and with that a Sony STARVIS IMX 700+, to achieve the best result in ultra-low light. The problem is that such a combination does not offer a suitable 4k stream. Therefore I looked at cameras with the IMX 415 sensor, with a rolling shutter, able to shoot 4k at 30 fps, with extreme light sensitivity down to 0.2 lux. Many different models were compared. Night-vision cameras were also considered but are not useful because of their monochrome image, as were thermal cameras, which do not yet produce enough image resolution. Since e-con does mostly B2B and little B2C business, it was not possible to buy exactly the right camera from them, as that would have required buying several units at 250€ each. This led to another extensive search for a company offering a camera that actually met the requirements, which ended with SVPRO and their USB/HDMI camera for factory automation and machine vision, which was still somewhat affordable. The camera hosts the IMX 415 in the right configuration and an automatic infrared filter, to greatly improve low-light performance.
When the camera finally arrived, I ran tests in my room in full darkness and with very little light. As the results were not satisfactory out of the box, I researched UVC-compliant applications with which you can control the camera. After fiddling around with 3 different applications, I found the “camera-controller”, which lets you make precise adjustments, and was thus able to record video of my room in “complete darkness”, the only light source being the screen of my MacBook. Obviously the image is not as crisp as in daylight, but it sufficed to be used as an input for the Publica 3.0 system.
Technical advancements:
Working with 4k:
Iteration 2.0 processed images based on jit.matrix objects; this made sense, as image analysis is best done on the CPU for several reasons. The CPU load was also not as heavy in previous iterations, as they did not yet process every pixel of a 4096x4096 matrix in real time. For reference, most discussions on dedicated Jitter forums already report CPU problems when working with matrix analysis above 1000x1000 px.
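For a rough sense of the scale involved: a 1000x1000 matrix holds 1,000,000 cells, while 4096x4096 holds 16,777,216, roughly 17 times as many; at 30 frames per second that is over 500 million cell values to process every second.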
This led to a weeks-long full redesign of the visual signal chain, using gl objects and thereby shifting the heavy processing to the GPU, completely rerouting the image processing. This new graphics-card-based chain went through 20+ versions to test and improve its performance with 4k input and 4096x4096 output.
Feedback from showing it at Zwitschermaschine:
The system was installed at Zwitschermaschine during November for further field tests on how people interact with it.
A smart remark was that it would be great if the recording cycle were clearly indicated, so that people understand which cycle they are currently in. I was about to build an Arduino-based light device that would indicate the cycles, but stopped when the message arrived that live input would be impossible.
Tests over 12+ hours with people performing with the system had shown that some mathematical errors in the Conway equation (determining each neighbouring pixel's state, e.g. 7-0-0) could lead to white or black screens with certain variables. Both scenarios are aesthetically extreme and possibly disorienting for an audience only able to experience the piece for about 7 minutes. Thus a new luma-keying circuit was developed to ensure a more prominent visual experience while still keeping the possibilities open.
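To illustrate the kind of safeguard meant here (a simplified Python/NumPy sketch, not the actual Jitter circuit; the thresholds and the fallback of mixing the camera image back in are assumptions):

```python
import numpy as np

def luma_key_guard(generated: np.ndarray, camera: np.ndarray,
                   low: float = 0.05, high: float = 0.95) -> np.ndarray:
    """Keep the output visually readable when the generation collapses.

    generated and camera are grayscale frames with values 0..1.
    Pixels whose luma falls outside [low, high] (near-black or near-white)
    are keyed out and replaced by the camera image, so an all-white or
    all-black screen never reaches the dome.
    """
    keyed_out = (generated < low) | (generated > high)
    out = generated.copy()
    out[keyed_out] = camera[keyed_out]
    return out
```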
Per personal judgment, it seemed useful to integrate further automatic exposure adjustments to make dark corners visible, which previously did not give much input.
Further adjustments to the interpolation and to the speed at which the sources move through the space were deemed useful as well.
3.2 WHAT SHOULD HAVE BEEN:
Without the technical failure on the planetarium's side, the system would have been installed with the SVPRO camera on a mount at a specific angle, so that the whole floor of the planetarium could be seen (tests had been made). There would have been 4-8 microphones feeding into the main buffer (and other sub-buffers, whose development also halted after the message that no live input was possible, as they make no sense in the standalone version). The Arduino indicator would have been placed in the middle, indicating recording and generation cycles with respective colors. Video input would have been 4k, mirrored and warped into 4096x4096 output, streamed via various options (namely NDI, as used by other domes and planetariums).
3.3 The compromise, what was shown at the planetarium:
In a fixed-media version, the influence of the audience would be fake, which would go entirely against the concept; therefore no adaptations to fake such influence were made. Instead, it made sense to render documentation of Iteration 2.0, which worked in Budapest at the House of Music and in Bochum at the planetarium (with the exception of the mentioned problematic light conditions). In this way at least the idea can be illustrated, by showing how an earlier iteration of the piece can work in venues that are technically capable.
3.4 To understand what should have been
Obviously the documentation of Iteration 2.0 shown here cannot, on its own, fully represent the tremendous effort and the hundreds of hours spent on Iteration 3.0 over 6 months… Therefore it makes sense to download Iteration 3.0 yourself and try it out. This will of course not give you the same experience as it should have been in the planetarium, but maybe you can get a feel for it.
Version history:
The version history is meant to give a quick overview of the countless changes and additions that have been made in order to make 3.0 ready for the Zeiss planetarium.
Bochum:
IRCAM generative music prize
sounds about:
final Zeiss versions:
README / instructions for use:
Note: Spat from IRCAM has to be installed! If the jit.ndi package and the Amateras Dome Player are installed, it is possible to output via NDI to the dome player. This makes it possible to preview and get a feel for how it should have looked and scaled in the dome.
A README is included in the Max patches below (double-click in the top left corner!)
Find the downloadable Max patch of Iteration 3.0, the piece as it would have been, below.
Make sure to use the normal version if you don't want to use NDI (better for your CPU ;). The NDI version can be used to preview the image "as in a dome" through the Amateras Dome Player. It requires about 30% more CPU usage and might heavily challenge an ordinary computer (this is due to how jit.ndi was developed).