Responsive Illuminated Architecture
This paper presents and discusses two academic projects that employ 2D and 3D projection mapping techniques responding to real-time environmental sensors or interactive user input. We briefly summarise the technical background and then focus on the implications of using these techniques in the context of architecture education.
Illumination of buildings with projectors or media facades has become a popular means of visual communication: in the art context, many festivals and curators around the world exhibit pieces such as (Lozano-Hemmer, 2008), and a growing number of applications can be observed in the commercial context (Starcom Amsterdam, 2010). As technology advances, the scale of illuminated surfaces increases, the visual perception of objects can be altered in real-time, and the use of sensors or smartphones enables interaction between visuals and the environment. Parameters such as temperature, movement, or gestures drive the visuals and allow people to perceive their body and the environment in a new way.
In this paper, we present two academic projects, “Sensitive Tapestry” (Wipfli and Schneider, 2009) and “Projected Realities” (Schneider, 2010), which aimed at integrating the above techniques into the architecture curriculum and at leveraging the architect’s knowledge for a more seamless integration of interaction and visualisation. The goal of both projects was to create a novel experience that arises from architecture augmented with digital information, and from architecture that acts as a user interface to reveal information about the building, its occupants and its environment.
The foundation of both projects is formed by a mapped projection that is overlaid on an existing structure, such as a façade or a 3D object. One of the main challenges is the proper calibration of the projected image with the static surface. A practical approach employs interactive assignment of dedicated virtual points to real points in space and allows for precise calibration within a few minutes. From there, the camera and projection parameters can be calculated using the methods described by Bimber and Raskar (2005, Section 5.2 and Appendix A). In our approach, we avoided computational surface estimation by using a 3D surface whose parametric form was known in advance. For example, for Projected Realities, we used a plaster model of ca. 20x20x5 cm, with a parametric surface given by
f(x, y) = 10.2 * sin(0.04 * x) + 10.2 * cos(0.04 * y).
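To illustrate the two ingredients described above, the following minimal Java sketch combines the known parametric height function with a four-point homography estimate, the simplest 2D instance of the point-correspondence calibration step. The class and method names are our own, the code assumes calibration points in general position, and the full 3D camera/projector calibration of Bimber and Raskar is not shown:

```java
public class MappingCalibration {

    // Known height (in mm) of the Projected Realities plaster model surface
    static double surfaceHeight(double x, double y) {
        return 10.2 * Math.sin(0.04 * x) + 10.2 * Math.cos(0.04 * y);
    }

    // Estimate a 3x3 planar homography h (row-major, h[8] fixed to 1) from
    // exactly four point correspondences src -> dst via the direct linear
    // transform; assumes the four points are in general position.
    static double[] estimate(double[][] src, double[][] dst) {
        double[][] a = new double[8][9]; // 8 equations, augmented column
        for (int i = 0; i < 4; i++) {
            double x = src[i][0], y = src[i][1];
            double u = dst[i][0], v = dst[i][1];
            a[2 * i]     = new double[] { x, y, 1, 0, 0, 0, -u * x, -u * y, u };
            a[2 * i + 1] = new double[] { 0, 0, 0, x, y, 1, -v * x, -v * y, v };
        }
        // Gauss-Jordan elimination with partial pivoting
        for (int col = 0; col < 8; col++) {
            int piv = col;
            for (int r = col + 1; r < 8; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[piv][col])) piv = r;
            double[] tmp = a[col]; a[col] = a[piv]; a[piv] = tmp;
            for (int r = 0; r < 8; r++) {
                if (r == col || a[r][col] == 0) continue;
                double f = a[r][col] / a[col][col];
                for (int c = col; c < 9; c++) a[r][c] -= f * a[col][c];
            }
        }
        double[] h = new double[9];
        for (int i = 0; i < 8; i++) h[i] = a[i][8] / a[i][i];
        h[8] = 1;
        return h;
    }

    // Apply the homography to a 2D point
    static double[] apply(double[] h, double x, double y) {
        double w = h[6] * x + h[7] * y + h[8];
        return new double[] { (h[0] * x + h[1] * y + h[2]) / w,
                              (h[3] * x + h[4] * y + h[5]) / w };
    }

    public static void main(String[] args) {
        // A pure scaling as a sanity check: unit square -> 2x2 square
        double[][] src = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };
        double[][] dst = { {0, 0}, {2, 0}, {0, 2}, {2, 2} };
        double[] h = estimate(src, dst);
        double[] p = apply(h, 0.5, 0.5);
        System.out.printf("centre maps to (%.2f, %.2f)%n", p[0], p[1]);
    }
}
```

In practice, once such a mapping is established for each projector, the interactively assigned correspondences remain valid as long as neither projector nor surface moves.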
Once calibration is completed, the objects can easily be augmented with content using standard 3D drawing techniques, e.g. using OpenGL. In our context, we used the Processing environment due to its low entry barrier for students without programming expertise. The rendering core is then enhanced with real-time sensor input: in the case of the Sensitive Tapestry, we used a thermal imager manufactured by Testo AG. The imager delivers a video signal containing thermal information. The signal was processed with image-processing methods and was either mapped and projected directly, or used to extract features that drove specific drawing routines.
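The two processing paths just described, direct mapping of the thermal signal and feature extraction, can be sketched as follows. This is a simplified illustration with hypothetical names; it assumes the thermal video frame has already been decoded into a normalised intensity array, and the Testo imager's actual interface is not shown:

```java
public class ThermalMapper {

    // Direct mapping path: turn a normalised thermal intensity (0..1)
    // into a false-colour RGB triple (a simple blue-to-red ramp; the
    // actual palette in the installation was a design choice)
    static int[] falseColour(double t) {
        int r = (int) Math.round(255 * t);
        int b = (int) Math.round(255 * (1 - t));
        return new int[] { r, 0, b };
    }

    // Feature-extraction path: centroid of "warm" pixels above a
    // threshold, a minimal stand-in for the features that drove
    // specific drawing routines; returns null if no pixel qualifies
    static double[] warmCentroid(double[][] frame, double threshold) {
        double sx = 0, sy = 0;
        int n = 0;
        for (int y = 0; y < frame.length; y++)
            for (int x = 0; x < frame[y].length; x++)
                if (frame[y][x] > threshold) { sx += x; sy += y; n++; }
        return n == 0 ? null : new double[] { sx / n, sy / n };
    }

    public static void main(String[] args) {
        // A toy 3x3 frame with one warm pixel in the centre
        double[][] frame = { {0, 0, 0}, {0, 0.9, 0}, {0, 0, 0} };
        double[] c = warmCentroid(frame, 0.5);
        System.out.printf("warm centroid at (%.1f, %.1f)%n", c[0], c[1]);
    }
}
```

In a Processing sketch, such per-pixel mapping and per-frame feature extraction would run inside `draw()`, with the resulting colours or centroid positions driving the projected geometry.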
Realisation within the Architecture Curriculum
Both projects, the Sensitive Tapestry and Projected Realities, were carried out as elective courses for undergraduate students at the Department of Architecture at ETH Zurich. The goal of the courses was to explore the possibilities of projected illusions, both from a technical and an architectural viewpoint, and especially in the combination of the two.
The technical implications are quite obvious: First, students are confronted with the difficult task of adjusting a projected image to a physical surface, a task that seems straightforward at first but cannot be properly achieved by just playing around. Thus, it requires a good understanding of projection setups, virtual camera parameters, and how techniques such as those described above are applied. Second, hardware issues and limitations of both sensing and projection devices need to be understood; for example, even current high-quality projectors exhibit considerable colour issues under unfavourable lighting conditions. Third, this understanding needs to be transferred into real code, which must integrate different subsystems and operate in real-time – a non-trivial task for architecture students, but a task that turns out to be very rewarding as soon as the first visible results are achieved.
From an architecture perspective, the main goal is to expand the basic understanding of a static built structure with function and dynamic behaviour: How can these usually invisible, but inherently important properties be made visible? How can they be communicated? And in particular, how can connections between structure, function and behaviour be made visually accessible? Addressing these questions requires studying relationships between various specialisations within architecture, e.g. between building design and building technology. This helps students obtain a more complete picture of the many facets that today’s buildings comprise. Ultimately, such an experience can be fed back into concrete applications in the design process and may also inform future directions of building design.
The deeper study and understanding of architectural relationships also quickly results in a dedicated exploration of artistic possibilities. We believe that such a combination allows for the creation of artworks that go beyond the mere technical demonstrations of 3D projection mapping that are so often shown. The inclusion of architectural knowledge allows for the emergence of visual content that does not misuse the built structure as a mere projection surface, but augments it with usually invisible features that are, in turn, supported by this structure. Using these techniques therefore aims at a convergence of the physical and the virtual by focussing on the connections between the two.
Conclusions and Future Work
The two courses received a very positive response and were very satisfying in terms of student dedication and results. In particular, the opportunity to present the works to a broader audience in public space provided an additional boost.
For future work, we currently see two areas: First, it would be useful to establish a coding framework that incorporates the basic projection mapping and calibration functionality, allowing students to focus either on experimenting with more advanced mapping techniques or on actual content and more complex interaction techniques. Second, we consider tapping into building information systems and feeding this information directly into the projected content.
We thank Ramesh Raskar of the MIT Media Lab for permission to use their projector calibration source code in the scope of the Projected Realities installation. In addition, we thank the students who attended the elective course Projected Realities in spring 2010 for their contribution to this work. We also thank Urs Schneider and Testo AG Switzerland for offering their thermal imager for several months.
- Oliver Bimber and Ramesh Raskar, Spatial Augmented Reality: Merging Real and Virtual Worlds (A K Peters: 2005).
- Casey Reas and Ben Fry, Getting Started with Processing (O'Reilly Media: 2010).
- Rafael Lozano-Hemmer, Under Scan, a Large-scale Public Art Installation, Trafalgar Square, London (2008).
- Christian Schneider, Projected Realities, a 3D Projection Mapping Installation, Digital Art Weeks Xi’an (2010).
- Starcom Amsterdam, Outdoor 3D Projection Mapping Commercial for Samsung LED 3D TV, Amsterdam (2010). http://www.youtube.com/Samsung3devent (Accessed: 2011-09-05).
- Sandra Wipfli and Christian Schneider, “The Sensitive Tapestry: Built Architecture as a Platform for Information Visualization and Interaction,” 13th International Conference on Information Visualisation (2009): 486-489.