Augmented Movement Vision : Moving, Seeing and Sensing

The embodied Augmented Reality (AR) screen has the potential to alter and augment the dimensionality of our perceptual field through the form and content of the overlaid image. Such augmentation would affect the way our body habitually moves and navigates. This paper explores AR-expanded spatiality and our body’s plasticity, or flexibility, to refigure and adapt to movement in space with augmented movement vision.




The embodiment of the virtual screen presents a situation in which information has to be organized in relation to the moving body. Conventionally, the mobile virtual space is employed as an infospace structured around egocentric and/or allocentric spatial frameworks in relation to the body, without the need to be in a continuous field of Cartesian space. [1] The virtual space, in this way, functions as a presence or absence feature that is aligned with the structures of the physical but is not constrained by location and physical continuity. It is contended here, further, that structures or features in the virtual space need not be aligned within the logic of the Cartesian coordinate system at all – that is, the virtual image can serve as a direct extension that transforms or augments the dimensionality of the actual space. The potentiality of the virtual screen lies, in part, in the fact that it is a null-space without the necessary constraints of physical laws. Its materiality is that of a medium through which contents and meanings are projected. The malleable virtual contents can function as simulation, representation, presence, mirror, and so forth. Therefore, the embodied virtual space could extend not only spatiality but also, more radically, the user’s body frame. This implies that there is more potential within such trans-spatiality between the actual and the virtual than the conventional spatial habits and expectations of our body allow. Such spaces do not just present new forms of spatiality; they challenge both the body’s plasticity, or flexibility, to re-adapt and our conventional body-space-time notions of directionality, positioning and orientation in spatial traversing.


Philosopher Elizabeth Grosz commented that rather than refiguring embodiment, virtual space is often employed in a manner that reaffirms the Cartesian mind/body division. [2] There is good reason why the (embodied) screen space, in spite of the potentiality of its materiality, is not deployed in more radical ways of interaction to push the capabilities of the body. The kind of augmentation proposed above challenges deeply ingrained habitual ways of being, and is therefore physically discomforting to the body when implemented. Fractures made to the linearity of the physical space can be made coherent and manageable when the image space is seemingly external to the body, which means that the multiplication of image spaces does not break the singularity of the body’s perceptual frame. However, when augmentation starts to encroach onto the embodied space of the moving body and is deployed as a direct re-structuring of spatial dimensionality, or as a re-embodiment of the body, it becomes highly unmanageable for the body. That is, there is a disparity of functional requirements between movement vision – vision for movement and locomotion – and vision for more inactive or stationary activities like reading or simply visually scanning the environment.

Enactive theories expound that perceptual representation is derived from actions. Here, however, there is the reversed scenario, in which perception precedes the possibility for action. Intuitively, one can perceive and analyse the augmented scene more readily, from an external standpoint, than one can learn to re-coordinate one’s body to move fluently across the augmented embodied space. Just as, when lost, we stop and refer to our street map, bringing action down to a minimal level and using our cognitive skills to re-orient ourselves in space. This does not denounce the possibility that higher-level off-line perception is inherently rooted in former sensorimotor experience and its memories. [3] It is argued in this paper that the challenges of an augmentation of movement vision could be better overcome through the development of both cognitive and movement strategies for the body to re-learn, refigure and rehabituate, when the body’s usual perceptual relationship with space is augmented either from within or without.


Whilst the embodied screen has the capacity to simulate all kinds of scenarios and configurations, it possesses certain characteristics that are unique to its medium, in its actual relationship to the body and to the extended space. These could be directly translated into a kind of spatio-temporality that it can configure for embodied experience and that is not shared by other set-ups. Integrated with the perceptual field of the body, its content, in fact, has no fixed locality within extended space and no inherent situatedness – other than with its host, the body.


To explore such hybrid dimensionality, this author is currently developing a series of augmented reality art projects, with the working premise of using AR strategies to present ways of perceiving and navigating through space that expand beyond the circumscription of our physical make-up.

In the “Mpov : xTread” (Fig. 1, Left) series, the user moves around a site with the ability to concurrently control an additional moving point in AR space – which functions as an autonomous doubling of her presence and movement in space, such that she is navigating from two positions at once. This project begins with the idea of an expansion of perception beyond the persistent, single frontedness of the human bipedal body, and investigates the navigation of space with an additional viewpoint. The body (through the multiplied viewpoint) creates a space of active geometry as it moves. In this work the body’s centredness and directionality are disrupted, in that moving forward is not necessarily going forward, but backwards, leftwards and so on.
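The doubled viewpoint can be pictured as a remapping of the user’s egocentric movement vector into the frame of the virtual double. The sketch below is purely illustrative – the function name, the planar 2D simplification and the fixed angular offset are assumptions for exposition, not the project’s actual implementation:

```python
import math

def remap_step(step, offset_deg):
    """Map a movement step from the body's egocentric frame into
    the frame of the virtual double, whose heading is rotated by
    offset_deg relative to the body's. (Illustrative names; the
    planar treatment is a deliberate simplification.)"""
    a = math.radians(offset_deg)
    dx, dy = step
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

# A forward step for the body, (0, 1), moves the double leftwards
# when the double's heading is offset by 90 degrees.
dx, dy = remap_step((0.0, 1.0), 90.0)
print(round(dx, 6), round(dy, 6))  # -1.0 0.0
```

In this toy frame, “moving forward” for the body becomes a leftward displacement for the double, which is the disruption of directionality the project stages.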

The “IsoThread” (Fig. 1, Right) series works with virtual forms that depart from the regularity of directionality and orientation that our body experiences when it traverses across the stable structures of the flat ground. The user navigates the actual space through the virtual topological reality as augmentation. The IsoThread project presents a situation in which the body is invited to reconcile the translation of its position and orientation between the physical environment and the form of the virtual model. In traversing the virtual and actual spaces concurrently, the mapping of the virtual form onto the physical space is devoid of any fixed location and orientation in actuality. By mapping the topological onto the flat plane, going forward loops back on a twisted axis. There is no going forward, backward, left or right.
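The twisted-axis behaviour can be sketched as a Möbius-style coordinate wrap: walking straight ahead for one full circuit returns the walker to the same longitudinal position but on the opposite lateral side. This is a minimal sketch with illustrative names and parametrization, not the project’s code:

```python
def advance(u, v, du, circumference=1.0):
    """Advance a walker by du along a Moebius-style loop.
    u is the distance along the loop's centreline, v the lateral
    offset from it. Each completed circuit applies the half-twist,
    flipping the walker to the other side of the centreline."""
    loops, u = divmod(u + du, circumference)
    if int(loops) % 2 == 1:  # odd number of circuits: side flips
        v = -v
    return u, v

# Going 'straight ahead' for one full circuit loops back to the
# same point on the centreline, but mirrored laterally.
print(advance(0.0, 0.25, 1.0))  # (0.0, -0.25)
```

Two circuits restore the original side, which is the sense in which forward motion on the twisted axis undoes the usual opposition of forward and backward.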


The projects above are designed to explore the character of dynamism that can be brought about with the embodied screen through the layering of realities between the virtual content and the actual space. Volumes of space could be nested and juxtaposed, dynamically re-sized and morphed, becoming simulation or doubling the actual as re-presentation – all of these configuring space in a non-Euclidean manner. The situatedness of the body within the extended space becomes extended-relational (xRelational) in such trans-spatiality. This is because the state of the moving body described here is not so much relational to other bodies or structures in space; rather, it has to be ready to extend from its embodied situatedness and adopt (or embody) a multiplication of positions, viewpoints and spatial reference frames – in the process rendering space as folded, heterogeneous, multiplied and informatic. Incongruous spaces and views inter-join and split apart, configure and reconfigure. The flipping in and out of viewpoints and perspectives forms an inter-crossing of perspectives (xPerspectival), creating ‘any-spaces-whatever’.

Such configuration produces active geometries, where the experiential space does not have the regularity of flow but consists of interpenetrating volumes which form sub-regularities of orientation, directionality and positioning as the body’s perceptual reference frame is decentered. Such virtualization of embodied space virtualizes the body: it makes the body’s borders contingent, dissolves the supposed boundary between its exocentric and egocentric spatial references, and alters the spatio-temporal logic of its movement in space. Subverting Euclidean linearity, the qualitative and differential take on space-structuring and geometricizing functions, with the body creating and configuring space as it moves.


The Augmentation of Movement Vision identified here occurs in two manners. Firstly, in embodied augmented reality, visually led movement creates a disparity between the perceptual information received through the virtual screen and the information received through the other modalities of the body. Secondly, the augmented reality vision, in this case, forms a multiplication of spatial references and a fracturing of the body’s supposed singular egocentric frame. The augmented vision forms an excess that results in the body’s need to re-learn how it can represent and organize new forms of spatial information to facilitate coordination of its collective parts for action. This is in contrast to what Brian Massumi terms movement-vision, which describes a proprioceptive state with no division of subject and object. [4] Augmented Movement Vision, here, describes a level of experience closer to the state of internal representation, in which a lack of singularity, or an incongruency, of movement in space registered against the single-directional vision of the moving body would break down the body’s capacity to manage movement.

Unlike the phenomenological notion of consciousness as defining experience, Bergson argues that consciousness is derived from the multiplicity of information the body receives, and that our conscious perception is a ‘necessary poverty’ (or diminution) of our image of matter. [5] For Bergson, conscious representation of matter suppresses and filters away the received information that is of no interest to our bodily functions. However, the body is inherently plastic and flexible. The functions of the body are not set in stone; they are open to change brought on by the necessity and demand for new forms of action in heterogeneous environments. This plasticity of function underlies evolutionary theories of phylogeny and ontogeny (namely, the development found in a species and in an individual over time). Neuroscientist Daphne Bavelier’s research on the effects of multi-tasking in gaming found that gamers who had to contend with split-screen action scenarios for extended periods started to adapt, developing new mental and visual speed and skills that enabled the smooth and skilful management of the split-screen environment. [6] Bergson’s notion of a center of indetermination suggests such openness of the body. Information received is at first unextended in the body; through training it becomes localized, and thus what we experience is memory.

It is inferred here that perhaps the above suggests that visual information in seeming excess of the body’s usual functions can be re-embodied with the whole body through the implementation of interaction between the image and the body’s movement. That is, to refigure the body’s movement to accommodate this excess. The idea for this hypothesis is drawn from experiments undertaken in the field of neuroscience. It has been shown experimentally that the Body Schema – our representation of our own body – shows qualities of plasticity. Neuroscientist Angelo Maravita and his colleagues found that multi-sensory integration of visual, tactile and proprioceptive information in the primate brain enables it to construct various body-part-centered representations of space, and that this representation shows plasticity for change, as active tool-use extends the reachable space and modifies the representation of peri-personal space, the space within arm’s length of the body. [7] Separately, it is known that we can dislocate or project our bodily actions onto the video screen and maintain the integrity of our coordination just by following our actions on the screen – as is commonly performed by surgeons. It follows that our body schema has the same potential to couple with the screen space as part of its own peri-personal space, and that ‘virtual tools’ could be employed in a similar manner to extend our body space with the virtual space.


In the AR project “Mpov” introduced above, the perceptual field of one of the eyes is partly overlaid with the space of the virtual – such that the eye is looking at two spaces at once. This may sound as though the overlaid image would occlude much of the perceptual field. In practice, however, our stereoscopic vision naturally merges the virtual image onto the actual with some degree of transparency.

In order for the body to move efficiently in such trans-spatiality, the body has to be coupled with the virtual space in some manner, such that it can find new means of co-ordinated movement. When the virtual space stays outside of the movement space of the body, it puts a cognitive load on the brain. Embodying the virtual has the advantage of off-loading mental processes that would otherwise be needed to make sense of the hybrid space. Through refiguring the body’s movement, the new movement patterning derived will off-load this work into physical processes. Some neuroscientists would agree that mental and physical processes are not distinct but have integrative roles to play in our thought processes.

Further, from the neuroscience concepts of bodily path structures and subspaces, it is inferred that a possible method of implementation is to engage a certain part of the body (one of the arms, for instance) to operate and interact with the virtual space. In this manner, the body space is segmented into two frames of reality, and the user can learn to reconcile the hybrid space through movement and sensing.

Path structures are the geometric rules by which our bodies describe spatial structure; they determine the distance and direction trajectories for movement. [8] Each movable part of the body has its own path structure, and collectively there is a hierarchy of different path structures, in which some belong in the subspaces of others. The rotation of the eyes is a path structure that is considered a subspace of the movement of the head. Subspaces working collectively together produce greater degrees of freedom of movement. The plurality and division of sensorimotor spaces suggests the extent to which the body is open to refiguration for more complex scenarios and to the modification of its internal representation of the extended space it maps. When one arm is tapped to control and interact with the virtual reality, the body is able to physically sense the virtual, as an extension, within the degrees of freedom of movement the arm allows.
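The nesting of path structures can be pictured as a chain of composed frames: the eye’s rotation is expressed within the head’s frame, which is itself expressed within the body’s. A minimal planar sketch, with illustrative names and an angle-only composition (a full treatment would compose rotation matrices):

```python
def gaze_heading(body_heading, head_offset, eye_offset):
    """Compose a hierarchy of path structures in the plane:
    the eye rotates within the head's frame, and the head within
    the body's, so the resulting gaze direction is the nested sum.
    Angles are in degrees; names are illustrative only."""
    return (body_heading + head_offset + eye_offset) % 360.0

# Each subspace contributes its own range of motion; together they
# widen the directions the gaze can cover without moving the feet.
print(gaze_heading(90.0, 30.0, -10.0))  # 110.0
```

The same nesting logic is what lets a single arm, treated as its own subspace, carry the interaction with the virtual layer while the rest of the body keeps its habitual frame.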


Cognitive scientist Andy Clark points out that there is a robust finding that mental rehearsal can actually improve sports skills, engaging the part of the brain called the cerebellum, commonly known as the motor area. [9] He notes that proprioceptive feedback takes 200-500 milliseconds to arrive and facilitate smooth skilled reaching, in contrast to the mere 70 milliseconds within which the body can actually perform the same action; this discrepancy suggests that neural circuitry that has learnt the pathways involved in the act can trigger the same pathways on cue. [10] This shows that internal representation does play a role in our actions.

If this raises the case that, indeed, language and concepts aid our body’s movement, then the next question is: are concepts developed from bodily experiences, from our internal representations, or are they independent products? The Deleuzian ontogenetic notion of concepts argues that concepts do not arise from experiences without creative and productive conditions. Concepts do not merely state the conditions in which identities are formed; rather, they produce real knowledge that is creative analysis, rather than facts that are representations of the world.

The challenge, then, is in creating new concepts and language that can more adequately assist us to habituate to and navigate the increasing complexities of the digital ecology.

References and Notes: 
  1. Frank Biocca, Arthur Tang, Charles Owen, Weimin Mou and Xiao Fan, “Mobile Infospaces: Personal and Egocentric Space as Psychological Frames for Information Organization in Augmented Reality Environments,” in Foundations of Augmented Cognition, ed. Dylan D. Schmorrow, 154-163 (New Jersey: Lawrence Erlbaum Associates, 2005).
  2. Elizabeth Grosz, Architecture from the Outside: Essays on Virtual and Real Space (Cambridge, MA: The MIT Press, 2001), 85.
  3. Jacques Paillard, “Knowing Where and How to Get There,” in Brain and Space, ed. Jacques Paillard, 461-481 (Oxford: Oxford University Press, 1991).
  4. Henri Bergson, Matter and Memory, trans. N. M. Paul and W. S. Palmer (New York: Zone Books, 1994), 188.
  5. Ibid., 171.
  6. Michelle Trudeau, “Video Games Boost Brain Power, Multitasking Skills,” December 20, 2010 (accessed May 28, 2012).
  7. Angelo Maravita, Charles Spence and Jon Driver, “Multisensory Integration and the Body Schema: Close to Hand and Within Reach,” in Current Biology 13, no. 13 (2003): 531-539.
  8. Jacques Paillard, “Motor and Representational Framing of Space,” in Brain and Space, ed. Jacques Paillard, 163-182 (Oxford: Oxford University Press, 1991).
  9. Andy Clark, “Embodiment and the Philosophy of the Mind,” in Current Issues in Philosophy of the Mind: Royal Institute of Philosophy Supplement, Volume 43, ed. A. O’Hear, 35-52 (Cambridge: Cambridge University Press, 1998).
  10. Ibid., 22.