Touch Interfaces – Between Hyperrealism and Invisibility

This paper analyses current trends in mainstream multitouch interface design. Based on a critical review of Apple's iOS Human Interface Guidelines and on experience from teaching several multitouch design seminars, it derives recommendations for design practice and some forward-looking statements concerning hyperrealistic interface metaphors.



Multitouch technology has existed for several years now. [1] While big multitouch tables have mostly been found in public places like exhibitions, small-screen devices like multitouch smartphones have become an everyday phenomenon. In both cases the context of use differs from that of a desktop computer. Multitouch table systems are often designed for specific content, a specific location and a fixed context of use. In contrast, smartphone applications must work in any context, owing to their mobility. With the emergence of medium-sized multitouch devices like the iPad, more and more digital products known from a work-related desktop context are being redesigned for multitouch use. But just as the invention of the computer mouse was a prerequisite and an activator for the invention of graphical user interfaces and new software genres, [2] multitouch interaction is a prerequisite and activator for novel interfaces and for the emergence of media formats and applications that are specific and typical for medium-sized multitouch devices.

Two trends in multitouch interface design are already apparent today: photorealistic real-life metaphors such as wooden bookshelves on the one hand, and direct, touch-based interaction with content such as maps, without any visible buttons or handles, on the other hand; beyond that, sensor-based interaction.

Realism and Learnability

The real-world metaphor approach already has a tradition in human-computer interaction. The very first graphical user interfaces of the late 1970s were based on a visible real-world metaphor. But due to technical limitations the visual style and the iconography of the desktop interface were quite abstract: black-and-white pixels only, in low resolution. This relatively high level of abstraction helped users forget the original meaning of these metaphors once the learning phase was over and the meaning of the interface elements had been internalised. When seeing "menus" in a software application today, no one thinks of a restaurant's list of dishes. The idea of a restaurant menu was helpful in the early years of the GUI, but today's computer users would be rather distracted or even confused by a photorealistic imitation of a restaurant menu card with "cut", "copy" and "paste" listed on it.

A second wave of more realistic real-world metaphors hit the interface design discipline in the early 1990s when "interactive multimedia" became popular. Abstract and text-based interface elements like menus, buttons and drop-down lists were replaced by depictions of everyday objects in everyday environments. These were again real-life metaphors, now with a higher level of detail, showing greater similarity between the real-world objects and their visual representation. In spite of significant usability problems, the naive realism of these interfaces was a success in so-called "edutainment" CD-ROM applications. Attempts to transfer this approach to standard software (like Microsoft BOB) at the time failed completely. [3]

Hyperrealism in Multitouch Interfaces

Today applications on an Apple iPad again look like real objects. E-books look like real books; software calendar apps mimic paper sheets, leather covers and even chrome-plated spiral binding. Compared to 1990s multimedia, the level of photorealism and the aesthetic quality are obviously superior, but the concept is the very same. And the theory behind real-world metaphors is still the same: they should help the user understand and learn how to use virtual artefacts by transferring knowledge from real-world interaction to the computer world. In their Human Interface Guidelines for iPhone and iPad, Apple therefore recommend the use of real-world metaphors as standard practice: "When virtual objects and actions in an application are metaphors for objects and actions in the real world, users quickly grasp how to use the app." [4] Addressing possible limitations of such an approach, Apple worry only about shortcomings of the real-world counterpart's functionality: "The most appropriate metaphors suggest a usage or experience without enforcing the limitations of the real-world object or action on which they’re based. For example, people can fill software folders with much more content than would fit in a physical folder." [4] This is a rather one-sided view, focussing only on the limitations of the real-world object and disregarding the limitations of the virtual object. When we see a book in the real world we know exactly what we can do with it, how we handle and navigate it, and we also know what we are not able to do with it. With a photorealistic representation of a book on a screen this is different. Of course the resemblance to a book gives the user some clues about how one might interact with the interface, but it is quite clear that the user can only interact in ways that have been anticipated and implemented by the creator of the software. Probably it is possible to "flip pages". But there are several ways to flip a real book's pages: where does one have to touch the page, and what kind of movement is expected? Can users write annotations, and how? Can pages be marked with dog-ears, and if not, why not? Is it possible to rip out pages?

Based on everyday experience we know how we can interact with our environment and what we can do with the objects surrounding us. Due to this everyday experience we are even able to anticipate possible uses of and interactions with artefacts we have never seen or touched before, just by looking at them. [5] We immediately know how our body relates to an object, for instance whether we can sit on it or where we can put our fingers into it. And we successfully anticipate possible handling and mechanical constraints of objects. Well-designed artefacts stimulate these expectations through indices, visual cues communicating their handling, and thereby make a product self-explanatory and easy to use. "Which parts move, which are fixed? Where should the object be grasped, what part is to be manipulated? […] What kind of movement is possible: pushing, pulling, turning, rotating, touching, stroking?" [6] Needless to say, the expectations thus induced should ultimately be met: elements that look moveable should be moveable in the expected way.

Can Interfaces Be Natural?

Most of this kind of everyday knowledge is not "natural" today but deeply rooted in a technology-driven culture. Interaction with light switches, bicycles or books may feel natural to us, but it is artificial. In any case there is not much difference between figuring out how to climb a tree (natural) and how to use a knife (artificial). Both are based on experience, which implies that it has to be learned in the first place, no matter whether natural or artificial.

The same is true for virtual interfaces. We make assumptions about how they can be operated and controlled based on experience. Today this is primarily experience with other virtual interfaces, and only secondarily knowledge acquired while interacting with physical everyday objects. When test users of a gestural interface were asked what kind of gesture they would expect for accessing a selected item, the majority proposed pointing at it twice: a double tap in the air. [7] This is clearly not a natural gesture, but it has been internalised over years of performing double clicks in standard desktop interfaces. With more and more people growing up with digital media, the distinction between knowledge from the analogue world and knowledge from the digital domain seems antiquated and obsolete. For so-called "digital natives" a double click is more familiar and feels more natural than cracking a nut or peeling an orange.

When everyday objects are used as interface metaphors, some interaction techniques will be anticipated and expected, but the intersecting set of possible interactions shared by real and virtual artefacts is actually rather small and is determined entirely by the software design. So there are two gulfs to bridge in order to use such an interface effectively. One is the difference between what the real object allows or affords and what the virtual one does not. The second gulf is the difference between what the virtual interface allows or affords and what the real thing does not (see figure 1).

Invisibility and Intuition

Actually the problem is not the difference between the two sets of interaction possibilities but the lack of knowledge about it. In interfaces with a hyperrealistic reproduction of everyday objects this lack of knowledge is mainly caused by a lack of visibility. The interface lacks visual cues of what is operable and what is not.
Despite this conflict between real-life metaphors and visibility, Apple also recommend paying attention to readily identifiable interactive elements: "Controls should look tappable. iOS controls, such as buttons, pickers, and sliders, have contours and gradients that invite touches." [4] Even a superficial analysis of iOS applications shows that this works fine in abstract interfaces, where clickable elements are clearly discernible by visibility and by convention. But real-life metaphors often lead to inconsistencies. The shape and materiality of virtual objects "invite touches" where touching has no effect. Conversely, clickable and movable objects are not identifiable by eye: paper pages do not look scrollable, telephone numbers do not look clickable.
For decades mobile device interaction lagged behind desktop software, mainly due to hardware limitations. Since the introduction of the iPhone in 2007 it has been the other way round: interaction techniques from mobile devices drive innovation in standard desktop interaction. Apple continue to bring multitouch gestures, which were developed for mobile touchscreens, to classic input devices like the trackpad and the "Magic Mouse", a mouse with a multitouch area on its upper surface. In the tradition of the "direct manipulation" interaction paradigm, this is said to make interaction more intuitive: "New Multi-Touch gestures […] let you interact directly with content on the screen for a more intuitive way to use your Mac." [8]

Several different definitions of intuition exist in philosophy and psychology. It is probably easier to agree on what intuition is not: it is not a discursive or conscious process of reasoning. It is rather a way of judging and making decisions without analytical reflection, based mainly on tacit knowledge. Tacit knowledge is indeed unconscious. But it is also, like knowledge in general, based on experience, which means it has to be learned. For instance, there is no "natural" way of interacting with a map, because using maps is already a cultural technique. Once we have learned how to work with real maps, this knowledge can help us work with digital maps as well. Touching and moving maps around does work intuitively, but Apple offers more: "New gestures include momentum scrolling, tapping or pinching your fingers to zoom in on a web page or image, and swiping left or right to turn a page or switch between full screen apps." [8]
The popular two-finger "pinch" gesture to zoom maps, images and websites is not intuitive at all: neither does the interface show any sign that would indicate "pinchability", nor does the idea of a real map or photograph suggest "zoomability". Again the problem is that the virtual artefact does not actively communicate what kinds of interaction are possible beyond our tacit knowledge from the real world. The pinch gesture is successful not because it is so intuitive – it simply isn't. It is merely easy to learn and easy to remember. It is not even based on a real-life metaphor: in the physical world it is hard to find any example where objects can be scaled by simply moving two fingers. But it is still learned and remembered easily because of the simple analogy it is based on: the change in distance between the two fingertips is proportional to the change in size of the touched object. Accompanied by direct visual feedback, the logic of this interaction method is understood immediately.
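The proportionality underlying the pinch gesture can be sketched in a few lines of Python. This is an illustrative model only, not code from any real gesture-recognition API; the function names are hypothetical:

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points given as (x, y) tuples."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_a, start_b, current_a, current_b):
    """Scale factor implied by a two-finger pinch: the ratio of the
    current fingertip distance to the distance at gesture start.
    Spreading the fingers yields a factor > 1 (zoom in),
    bringing them together a factor < 1 (zoom out)."""
    return distance(current_a, current_b) / distance(start_a, start_b)

# Hypothetical example: fingers start 100 px apart and spread to 200 px
# apart, so the touched object should be rendered at twice its size.
scale = pinch_scale((100, 200), (200, 200), (50, 200), (250, 200))
print(scale)  # → 2.0
```

A real implementation would apply this factor continuously, frame by frame, so that the object appears glued to the fingertips – which is exactly the direct visual feedback that makes the gesture so easy to grasp.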

But without knowing that one can "pinch" a map or a photograph, hardly anyone would try. This simple fact does not attract much attention because seeing the gesture just once, in one of Apple's TV commercials for the iPhone or the iPad, suffices to understand and remember it. This leads to the conclusion that interaction does not need to be intuitive, but it has to be learnable.


Merely copying reality does not necessarily lead to understandable interfaces. When using real-life metaphors, designers have to be very conscious of the interaction disparities between real and virtual objects.
Much more important than intuition is a good balance of learnability and effectiveness. What counts as a good balance of course depends strongly on the type of user and the context. Especially in professional software, intuitive use and learnability do not have to be the top priority. In the long run, ease of use and effectiveness are crucial. For software that is used on a daily basis and for years, some learning effort for the sake of effectiveness will be worthwhile.

The terms "simple", "easy" and "intuitive" seem to work perfectly as marketing phrases. As general and universal goals in interaction design they should be rejected. Artefacts that are easy to use often do not have much potential and power. Just compare a violin with a triangle (the percussion instrument) and consider their learnability and their potential – probably not everything in life should be about ease.

References and Notes: 
  1. Bill Buxton, "Multi-Touch Systems that I Have Known and Loved," March 2011 (accessed June 26, 2011).
  2. Bill Moggridge, Designing Interactions (Cambridge, MA: The MIT Press, 2007), 27-29.
  3. Knight-Ridder/Tribune, "Microsoft Bob Still Lives, At Least In Certain Spirit," Chicago Tribune, February 2, 1997 (accessed June 26, 2011).
  4. Apple Computer Inc., ed., iOS Human Interface Guidelines (San Francisco, CA: Apple Computer Inc., 2011).
  5. James Jerome Gibson, "The Theory of Affordances," in Perceiving, Acting, and Knowing: Toward an Ecological Psychology, ed. R. Shaw and J. Bransford, 67-82 (Mahwah, NJ: Lawrence Erlbaum, 1977). 
  6. Donald Norman, The Psychology of Everyday Things (Jackson, TN: Basic Books, 1988), 99, 197.
  7. Dema El Masri, "New Dimensions for compArt" (master's thesis, Bremen University, 2011).
  8. Apple Computer Inc., "Mac OS X Lion With 250 New Features Available in July From Mac App Store," Apple's official Web Site, June 6, 2011 (accessed June 26, 2011).