Creating Black Boxes: Emergence in Interactive Art

In the context of interactive art, emergence can be understood through a close analogy with an unpredictable black box. Cariani’s emergence–relative–to–a–model and his notions of combinatoric and creative emergence can serve as guidelines for analyzing the presence of emergent phenomena in interactive art in general. A thorough understanding of these phenomena should allow for the creation of pieces that exhibit emergent interactive behavior.

Joan Soler-Adillon

Introduction

A black box is typically understood as any device or system (or part of one) that remains opaque to whoever tries to understand how it works. One can only know what goes into it and what comes out of it, but not what happens inside. The idea was first developed in engineering, but it was later generalized, mainly under the influence of systems theory discourse in early cybernetics.

In the first paragraphs of what is usually considered the founding text of the discipline, Arturo Rosenblueth, Norbert Wiener and Julian Bigelow defined their object of study, a behavioral approach to knowledge, in terms of a black box–like entity: “Given any object, relatively abstracted from its surroundings for study, the behavioristic approach consists in the examination of the output of the object and of the relations of this output to the input (…) omitting the specific structure and the intrinsic organization of the object.” [1]

This was further elaborated by W. Ross Ashby, who generalized the idea with examples such as a child trying to open a door while being unable to examine the connection between the handle and the latch, which remains hidden inside the opening mechanism. The point is that, in fact, we are interacting with black boxes all the time: “In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box.” [2]

These methods consist, in a nutshell, first in defining the inputs and outputs of the system under examination, and then in experimenting with them in order to establish the relations among them (the protocol, in Ashby’s terms). The goal is to find the regularities and repetitions in behavior that will inform the experimenter about the inner workings of the box.

Unpredictable Black Boxes

Both in their original engineering context and in general, black boxes need to be fundamentally reducible and predictable. Their interior will remain unexplored, but in theory it can be understood by analyzing its parts and how they are connected, and the relationship between inputs and outputs has to remain the same over time. Otherwise the task of the experimenter testing the box would be an impossible one.

But there is another kind of black box which is of interest here: a much less predictable one. That is, a system in which the relation of inputs to outputs is not fully foreseeable, and the inside of which is not only unknown but unknowable, not reducible to the analysis of its parts and connections upon an eventual opening of the box (or zooming into the system).

These black boxes are found at the heart of cybernetic theory, in the form of adaptive devices. Certainly not all black boxes and adaptive devices in cybernetic discourse are of this kind. In the example above, the child doesn’t know the insides of the door, but these are knowable once the handle–latch system is inspected. The black boxes that Ashby found everywhere can be of both types. What is important here is that the black box, in both forms, can be viewed as a central idea in the ontology of cybernetics.

According to Andrew Pickering, this ontology allows cybernetics to propose an image of the world that is performative rather than representational: a theory of knowledge largely built up through a performative relationship with black boxes, many of which are of the unpredictable kind. Rather than about control in a classical sense, “the entire task of cybernetics was to figure out how to get along in a world that was not enframable, that could not be subjugated to human designs – how to build machines and construct systems that could adapt performatively to whatever happened to come their way.” [3]

This performative knowledge is the behaviorist approach of early cybernetics mentioned above, or what Ashby himself calls an “ultimate practical purpose” [4] of his black box methodology.

The Designer’s Point of View

Another characteristic of the cybernetic approach to the black box is that it is not only about dealing with these systems, but also about creating them. Indeed, this is something that some cyberneticians did, as in W. Grey Walter’s Tortoises, W. Ross Ashby’s Homeostat or Gordon Pask’s Musicolour Machine. [5]

The idea here is to create something that will appear as a black box to its own creator; that is, a device that will surprise its designer in its behavior and in the relationships between the inputs it receives and the outputs it produces. Even though she has designed and programmed it, the relationship between what the system or the piece perceives (the inputs) and how it responds (the outputs) becomes unexpected.

This is precisely where the ideas of the unpredictable black box and emergence can be linked. As will be explained below, emergence implies fundamental novelty, i.e. that the system creates something that was not explicitly built into it by its designer.

This is not something that a designer of a conventional computational system would desire, but it can be the case in digital art practices. In generative art, and especially in Artificial Life (ALife) Art, artists often seek to create systems or processes that exceed their expectations. The idea is to do so not through blind trial and error, but through emergent phenomena (or self–organization, in cybernetics discourse): “The basic principle of emergence is that organization (behavior/order/meaning) can arise from the agglomeration of small component units which do not individually exhibit those characteristics”. [6]

What is Emergence?

Emergence, in its many forms and contexts, is always related to fundamental novelty. It is often explained with the idea of a whole being ‘more’ than just the sum of its parts; that is, of being irreducible to the analysis of its constituent elements in isolation.

These explanations are usually articulated in terms of different levels of complexity, in which the lower or micro levels (the parts) generate processes that appear at the upper or macro levels (the whole) as emergent, i.e. not explainable through a classic cause–effect relationship.

This idea questions the traditional reductionism of science, since it implies that not everything is explained by studying smaller and smaller parts of whatever system is under analysis. Whenever emergence is present, reductionism is brought into question. A classic generic example would be to ask, within the succession of orders of knowledge physics–chemistry–biology–psychology, whether each one is fully reducible to the previous or whether, instead, emergence occurs as the level of complexity increases.

Emergence did not become a concern in academic discourse until the mid–nineteenth century, when John Stuart Mill used the concept (not the term, which was introduced later by George Henry Lewes) to distinguish different types of causation, but even then it remained a marginal concept. In Newtonian science emergence was unknown and unknowable. In fact, it is by definition inconsistent with a science that aims to reduce all possible phenomena to simple facts and laws, in which reductionism is an indisputable method.

It was not until the second half of the twentieth century that the work of some rather unorthodox scientists started to prepare the context for it to appear in its contemporary form. By the end of the century, it was already a central concern in the Complexity Sciences (ALife, dynamical systems theory, neural networks, etc.). [7]

Typical examples used to describe emergence include ant or termite colonies and their social complexity, the human mind understood as a product of the interconnectivity of neurons in the brain, chemical clocks in non–equilibrium thermodynamics, or the complexity generated from the simple rules of cellular automata.

Emergent Interactive Behavior

In the context of digital art, emergence has mostly been experimented with in ALife Art. ALife Art is the artistic arm of scientific Artificial Life, a discipline which comprises “a range of (mostly) computer based research practices which sought, among other things, alternatives to conventional Artificial Intelligence methods as a source of (quasi–) intelligent behavior in technological systems and artifacts. These practices included reactive and bottom–up robotics, computational systems which simulated evolutionary and genetic processes, and a range of other activities informed by biology and complexity theory.” [8]

In this context, the idea of emergent interactive behavior is to create systems that respond and behave not in a predetermined way, reading responses from a database – or responding as if they did – but generating these responses through emergence: “Emergent interactive behavior would not be derived from a set of pre–determined alternatives. Rather, behaviors might arise through a contingent and unconnected chain of triggers.” [9]

This mid–nineties ideal has rarely, if ever, been completely achieved in interactive artworks. In fact, whether it has been achieved depends on how we choose to understand emergence (see below).

Two of the most often mentioned examples of emergence in Artificial Life are Craig Reynolds’ Boids and John Conway’s Game of Life. In both, a very simple set of rules produces astonishingly complex results when the systems are simulated. In art, examples are scarce; Simon Penny’s 1995 Sympathetic Sentience is one of the most cited.
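
To make the contrast between rule simplicity and behavioral complexity concrete, here is a minimal Python sketch of the Game of Life. The two rules are Conway’s; the function names and the glider test are merely illustrative.

```python
def neighbors(cell):
    """The eight cells surrounding a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: Conway's two rules applied to the set of live cells."""
    candidates = live | {n for c in live for n in neighbors(c)}
    new_live = set()
    for cell in candidates:
        count = len(neighbors(cell) & live)
        # A dead cell with exactly 3 live neighbors is born;
        # a live cell with 2 or 3 live neighbors survives.
        if count == 3 or (count == 2 and cell in live):
            new_live.add(cell)
    return new_live

# A glider: this five-cell pattern travels diagonally across the grid,
# a behavior that appears nowhere in the two rules above.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    print(sorted(pattern))
    pattern = step(pattern)
```

Nothing in the two rules mentions motion, yet after four generations the glider has displaced itself diagonally by one cell: the traveling pattern belongs to the simulated whole, not to any cell or rule in isolation.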

Works involving Genetic Algorithms are usually related to emergence, too. Examples would be Christa Sommerer and Laurent Mignonneau’s A–Volve, Ken Rinaldo’s Autopoiesis or Ruairi Glynn's Performative Ecologies, to name a few.

Creative Emergence

Just as Ashby, in defining the black box, called for a clear delimitation of the inputs and outputs to be analyzed, if we are to understand how emergence really occurs in interactive art we need a method that delimits what to observe and judge, so that it can be used as a tool first for analysis and later for creation.

The method to be examined here finds its context in the literature of Artificial Life and, more generally, in cybernetics. It is Peter Cariani’s analysis of ‘percept–action systems’ (autonomous artificial systems and devices which perceive and act on their environment) and of how they might exhibit emergent properties that would lead to changes (improvements) in their performance. [10]

Cariani’s approach is known as emergence–relative–to–a–model. Unlike other approaches to emergence, Cariani does not accept switching among levels of complexity (e.g. from the molecule to the pattern) in order to describe emergent phenomena: “For the purposes of judging whether an emergent event has occurred, we need to be careful not to shift frames of reference in these situations, from talking in terms of microstates and pixel states before and ‘higher level’ features afterwards. If we start to observe the device in terms of individual pixels, we must continue to do so in those terms throughout. If we wish to include complex pixel patterns (e.g., cycles, waves, moving patterns which look to us like a horse galloping), they need to be in our state descriptions from the start, or they will remain in the realm of tacit, private observation, unrecognized by our public model.” [11]

He labels his approach an “epistemological, observer–relative conception of emergence.” [12] This approach is similar to Ashby’s concerning the black box problem, and it fits perfectly with the aim of finding a use of emergence for the designer of an artistic interactive system. The concern, as in early cybernetics and also in interactive art, is with how interactants relate to the system performatively.

In order to discern whether or not emergence occurs in a system, Cariani proposes the construction of a model of it, much as Ashby’s experimenter constructs a protocol by examining the relations among the inputs and outputs of the box.

This model is built after observation of the system, simulating it if necessary. It must contain all observable system states and transitions, according to a predefined set of observable variables that explain the system’s behavior. Once this is done, more observation is performed. It is in this observation, relative to an observational frame, that emergence either occurs or does not.
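
As a way of fixing ideas, the following Python sketch illustrates this two–phase procedure under stated assumptions: the system’s behavior is reduced to a sequence of discrete, named states, and the ‘model’ is simply the set of states and transitions recorded during a first observational run. The class and state names are hypothetical, not Cariani’s own formalism.

```python
class ObservationalFrame:
    """A fixed frame of observation: states and transitions seen so far."""

    def __init__(self):
        self.states = set()
        self.transitions = set()

    def build_model(self, run):
        """Phase one: observe the system and record all state transitions."""
        for state, nxt in zip(run, run[1:]):
            self.states.update((state, nxt))
            self.transitions.add((state, nxt))

    def check(self, run):
        """Phase two: report any behavior the model cannot account for."""
        return [(state, nxt) for state, nxt in zip(run, run[1:])
                if (state, nxt) not in self.transitions]

# Hypothetical observable states for a percept-action device.
frame = ObservationalFrame()
frame.build_model(["rest", "seek", "feed", "rest", "seek", "rest"])

# An empty result means the behavior stays within the model;
# anything else is emergent relative to this observational frame.
print(frame.check(["rest", "seek", "feed", "rest"]))   # []
print(frame.check(["rest", "seek", "swarm", "rest"]))  # [('seek', 'swarm'), ('swarm', 'rest')]
```

Note that ‘swarm’ registers as novel only because the frame was fixed in advance; had we changed our descriptive vocabulary mid–observation, as Cariani warns, the judgment would be meaningless.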

Once more, this is contrary to many descriptions of emergence, which use precisely the moment of simulation of the system to account for emergence. In those descriptions, emergence occurs when the result of the simulation differs from the expectations one would have after examining the individual elements and the rules to be simulated, as happens with Reynolds’ Boids or Conway’s Game of Life when one sees them for the first time.

Cariani’s model is based on a semiotic framework of syntactic, semantic and pragmatic operations, which correspond to computations, measurements and evaluations, respectively. [13]

The syntactic operations exist in the symbolic realm. They are logically necessary and governed by conventions (rules). In these operations, the system relates symbols to one another, generating the state transitions and, therefore, changing the system’s state through computations.

Semantic operations involve measurements and actions, that is, relations of the system or device with its environment. They are empirically contingent and materially governed. The possible measurements for the device (through its sensors) determine its epistemic capabilities: the variables in the world that the system can perceive, and how it measures them, are determined by the configuration of the sensors. This is where the symbolic part of the system is in contact with its non–symbolic environment.

Finally, there are the pragmatic operations, which evaluate the sensor readings of the semantic level and the computations of the syntactic level vis–à–vis the system’s goals.

When emergence does happen, it can be either combinatoric or creative. Combinatoric emergence consists in changes in the syntactic or computational operations. In this case, the set of primitives (building blocks) with which the device works doesn’t change, but the combinations of these primitives do, allowing novelty to arise. Devices that fall into this category are computationally autonomous.

These systems are closed in the sense that their search space, no matter how big, is always finite. If the primitives are defined beforehand, the possible combinations among them are also predefined.

Creative emergence is a much more fundamental way of creating novelty. It consists in the introduction of new primitives into the computations, through changes in the semantic operations of the system (semantic adaptation). This happens when a new sensor is added to the system (through evolution). It could also take the form of an increase in the system’s sign–states (the equivalent of creating new concepts in a human mind). Devices that fall into this category are epistemically autonomous.

As opposed to the closedness of combinatoric emergent systems, these are open–ended systems and devices, as their search space is ill–defined. If new primitives can be added through the introduction of new sensor capabilities, the creation of new behaviors (transitions between system states) is theoretically unlimited.
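
The difference between the two kinds can be sketched in a few lines of Python. The sensor names and the pairing of primitives into ‘behaviors’ are assumptions made purely for illustration.

```python
from itertools import product

# Combinatoric emergence: the primitive set is fixed, so however novel
# a combination may look, it was always one of a finite set of options.
primitives = ["light", "touch"]
behaviors = set(product(primitives, repeat=2))     # 2**2 = 4 possibilities
assert ("touch", "light") in behaviors             # 'new' but pre-contained

# Creative emergence: a new primitive (e.g. a newly evolved sensor)
# enters the system, and the space of possible behaviors itself grows.
primitives.append("sound")                         # semantic adaptation
expanded = set(product(primitives, repeat=2))      # now 3**2 = 9
print(len(behaviors), "->", len(expanded))         # 4 -> 9
```

The point of the sketch is the asymmetry: the first kind of novelty can be enumerated in advance, while the second redefines what there is to enumerate.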

Emergence and Interactive Art

According to Cariani’s methodology, we can understand how combinatoric emergence might occur in ALife simulations. The key is to generate a large number of interactions among the system’s fundamental elements (Cariani’s primitives), which in certain cases will produce emergent phenomena observable through simulation. This is, for instance, the case with genetic algorithms.
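
A minimal genetic–algorithm loop, sketched below in Python, shows why such systems are combinatoric in Cariani’s sense: selection, crossover and mutation only ever recombine a fixed set of genes. The fitness function and parameters here are toy assumptions, not the workings of any particular artwork.

```python
import random

def fitness(genome):
    """Toy objective: prefer genomes whose genes sum close to 10."""
    return -abs(sum(genome) - 10)

def evolve(pop_size=20, genes=5, generations=50):
    population = [[random.uniform(0, 5) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # selection
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                 # occasional mutation
                child[random.randrange(genes)] = random.uniform(0, 5)
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

print(evolve())  # a genome whose genes sum close to 10
```

However surprising the evolved behavior, every genome remains a point in the same predefined search space; the novelty is combinatoric, not creative.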

My own 2005 installation Digital Babylon, which used genetic algorithms, can serve as an example here. The piece represented a simple ecosystem with two species and a food element. Once all the rules were set and the simulation was running, patterns of behavior appeared in some of the species and in the system as a whole. Some of these would be emergent under some descriptions of the phenomenon, but not under Cariani’s model.

What would fit his combinatoric emergence, though, were the changes in behavior that appeared after a rather long simulation of one of the prototypes of the piece. In it, one of the species had significantly changed its behavior due to the recombination of the characteristics of its individuals: they moved in small circles and very close to each other, which differed significantly from their initial behavior. [14]

Accounting for creative emergence is much more difficult. The most evident way to do so would be for the device under analysis to evolve new sensors and so expand its epistemic capabilities (its abilities to perceive its environment). But this, which has happened in natural systems, is extremely rare in artificial ones.

Another way would be to create autonomous objects capable of evolving new primitives through the interaction of cybernetic devices with humans. If biological examples are the most clearly emergent, mixed artificial–biological systems should have possibilities of their own.

Human beings, from a systemic point of view, can facilitate the creation of novel ideas (adding new primitives that expand their sign–state sets), allowing creative emergence to occur. And this can be a way to open up the closedness of the computer’s formal system: “Human–machine combinations can be open–ended systems that generate new primitives.” [15]

Thus, a door is opened here to the creation of fundamental novelty in the context of interactive art. With Cariani’s methodology, a system can be described in detail and designed so that the system–interactant relationship creates new primitives and, therefore, novelty in the system or in both system and interactant, by amplifying the possible sign–states.
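
A hypothetical sketch of what this could mean computationally: a system whose sign–state set is allowed to grow through interaction, so that percepts the designer never specified are adopted as new primitives rather than discarded. Everything here (class, method and sign names) is an assumption for illustration, not a recipe drawn from Cariani.

```python
class OpenEndedSystem:
    """An interactive system whose sign-state set can grow at runtime."""

    def __init__(self, signs):
        self.signs = set(signs)   # the current set of sign-states

    def respond(self, percept):
        if percept in self.signs:
            # Known sign: respond by recombining existing states
            # (at best, combinatoric emergence).
            return f"known sign '{percept}': recombining existing states"
        # Unknown sign introduced by the interactant: adopt it as a
        # new primitive, expanding the sign-state set (creative emergence).
        self.signs.add(percept)
        return f"adopted new sign '{percept}'; set is now {sorted(self.signs)}"

system = OpenEndedSystem({"gesture", "voice"})
print(system.respond("gesture"))
print(system.respond("proximity"))  # a percept the designer never specified
```

The human side of the loop supplies what the closed formal system cannot: percepts outside its initial alphabet.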

Conclusions

If carefully defined and used, emergence can be a powerful concept in the creation of interactive art systems. From the artist’s point of view, it can be understood through the idea of the unpredictable black box. When creating an explicitly emergent system, the artist will know what its inputs will be, but hopefully the outputs will deviate from her expectations, precisely because the system is designed in such a way that its inner workings cannot be fully specified.

There is a clearly paradoxical component in designing a device that should be emergent, since emergence is precisely the opposite of specification. But by following a methodology like Cariani’s, and by allowing interactivity into the system, we should be able to generate devices and systems which exhibit emergent interactive behavior.

References and Notes: 

  1. Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, “Behavior, Purpose and Teleology,” in Philosophy of Science 10, no. 1 (1943): 18-24.
  2. W. Ross Ashby, An Introduction to Cybernetics (New York: Wiley, 1956), 86.
  3. Andrew Pickering, The Cybernetic Brain (Chicago, IL: University of Chicago Press, 2010), 30-31.
  4. W. Ross Ashby, An Introduction to Cybernetics (New York: Wiley, 1956), 106.
  5. Andrew Pickering, The Cybernetic Brain (Chicago, IL: University of Chicago Press, 2010), 30-31.
  6. Simon Penny, “The Darwin Machine: Artificial Life and Interactive Art,” in New Formations, no. 29 (1996): 59–68.  
  7. Limited space precludes me from delving more deeply into the matter.
  8. Simon Penny, “Twenty Years of Artificial Life Art,” in Digital Creativity 21, no. 3 (2010): 197-204.
  9. Simon Penny, “The Darwin Machine: Artificial Life and Interactive Art,” in New Formations, no. 29 (1996): 59-68.
  10. Peter Cariani, “The Semiotics of Percept–Action Systems,” in International Journal of Signs and Semiotic Systems 1, no. 1 (2011): 1-17.
  11. Peter Cariani, “Emergence and Artificial Life,” in Artificial Life II, eds. C. Langton, C. Taylor, J. D. Farmer and S. Rasmussen, 775-789 (Redwood City, CA: Addison-Wesley, 1992).
  12. Peter Cariani, “The Semiotics of Percept–Action Systems,” in International Journal of Signs and Semiotic Systems 1, no. 1 (2011): 1-17.
  13. Peter Cariani, “Emergence and Artificial Life,” in Artificial Life II, eds. C. Langton, C. Taylor, J. D. Farmer and S. Rasmussen, 775-789 (Redwood City, CA: Addison-Wesley, 1992).
  14. Joan Soler-Adillon, “Digital Babylon,” Joan.cat, http://joan.cat/project.php?id=1 (accessed July 8, 2011).
  15. Peter Cariani, “Strategies for Creating New Informational Primitives in Minds and Machines,” Dagstuhl Research Online Publication Server, 2009, http://drops.dagstuhl.de/volltexte/2009/2192/pdf/09291.CarianiPeter.Paper.2192.pdf (accessed July 8, 2011).