Serendipity is Dead… Long Live Serendipity

Serendipity, from its role in research and knowledge acquisition to its exchange within art, design and science, has inspired new developments, products, technologies and practices on the web. From traditional information systems to digital culture, we explore emerging technologies in digital knowledge acquisition, and the future of design in transforming and presenting information and ideas.

 

Author(s)

Art in Serendipity and Network Culture

Mel Woods

Many scientific and artistic innovations have been attributed to serendipity, the faculty of making and recognising fortunate and unexpected discoveries by accident. However, while there is understanding of the definition, serendipity is by its very nature ephemeral and highly subjective. There are no cohesive theories that explain the phenomenon, and there is disagreement as to whether digital technologies promote or stifle serendipity; it is not surprising, then, that there is little understanding of the technologies that may facilitate it. This does not, however, prevent the widespread acknowledgement, almost celebration, of serendipity as a major contributor to innovation, and there is a current zeitgeist that brings together interest across art, computing, HCI, research, information discovery, and archival and library systems, to investigate the design of interactive systems as well as to promote innovation in business.

This paper references research to date from the interdisciplinary RCUK-funded project SerenA: Chance Encounters in the Space of Ideas. It briefly provides an overview of current understanding of serendipity and of the design of systems, technologies and spaces to promote or support it.

The SerenA project focuses on: 1) building an understanding of serendipity through empirical studies, specifically within information discovery and research; 2) developing a system to support and promote connections between people, information and ideas; and 3) implementing and evaluating technologies with novel approaches in digital and physical spaces.

The technological transformation of recent decades, the rapid development and expansion of digital information systems and communication networks, has profoundly altered society, dramatically changing our world and the way in which we discover information. New information technologies have triggered economic and societal changes that have led to a radical reorganization of work and production processes, as well as of space and communication.

The development of new interactive systems, products and services is moving towards an increasingly personalized digital experience, and many search and discovery systems such as Google lay claim to a holy grail of ‘giving us what we don’t know we need to know’. Recent interactive systems for mobile devices have looked to the key strategy of chance, and some have been influenced by artists’ investigations of the procedure in practice, using methods that seek to engineer serendipity or chance encounters. The Surrealists and Dadaists sought to juxtapose objects in new and unexpected ways. Guy Debord and the Situationists developed theories of the dérive, or wander, with improvisation theory and the gap between intention and outcome seen as crucial to the meaning of serendipity and chance in art. Recent mobile systems such as ‘Serendipitor’ and ‘Situationist’ draw explicitly on these manifestos and theories from art movements in history. They rely on the user’s insight and perception of value to create serendipitous encounters.

Conclusions

However, we need a better understanding of the phenomenon, the serendipity space, in order to address the question of how our understanding of the concepts of chance, insight and value can inform the design of digital information resources. The big question is whether, in designing to encourage serendipity, as soon as we ‘engineer’ it into a system the result is no longer considered to be serendipitous at all.

Future work

Ongoing work in the SerenA project is refining the understanding of serendipity, prototyping novel interfaces to support chance encounters, and developing the underpinning technologies needed to identify promising connections without overloading people with information. Engagement with practitioners and the public around the SerenA model of serendipity will take place in ‘Serendipity Salons’, and linked open data will be developed with partners and cultural stakeholders. An iterative process of development and evaluation is underway, integrating human- and technology-centred approaches to deliver innovative serendipitous encounters mediated by technology.


 

Trapped in the puzzle

Clive Gillman

This presentation aims to explore the challenges we face in attempting to colonise the analogue world with digital tools. It problematises the intriguing attempts to map all unknowns, and asserts the need to accept that, despite the relentless inevitability of Moore’s Law, there will always be an unknown that is resistant to digitalisation, and that this unknown is closer than we realise.

Abstract

Digital forms of technology such as wifi and smartphones can be said to be materials in the architecture of our contemporary realm, but they do not define it. An understanding of this contemporary technology can be a factor in helping to clarify the form of this realm, but relying on the marvels of technology to provide further meaning can obscure our ability to be aware of the limitations of our technological selves. In his book The Victorian Internet, which discusses the development of the technology of the telegraph in the 19th century, Tom Standage warns us of the dangers of ‘chronocentricity’, “the egotism that one’s own generation is poised on the very cusp of history”.

At this moment in time we are too often shaping our dreams of the future through an aspiration that our technology will resolve our relationship with this architecture of the unknown. We work with the hope that a concept such as serendipity can be subjected to a technological solution that will give us empirical power over such a poetic or lyrical concept. But while we remain enraptured by the challenge of connecting the contemporary realm with the untechnologicalised world, we should allow space to learn to respect and celebrate its inability to be bounded by our analysis. Without this respect we can easily find ourselves in danger of being trapped in the puzzle, locating both the known and the unknown where we want them to be, solely in order to satisfy our own desire to generate resolution. But ultimately it is an analogue world, and the values we seek are analogue in nature; our attempts to map and harness the resolutely analogue, and closet it within the digital zoo we have built, are always likely to be futile. Analogue values are perceived, not counted, and if we assume that they can exist anywhere in a binary absolute in which they can be captured, catalogued and put to service, then we simply engage in a futile resistance of our own complexity.

 

The Serendipity Engine

Aleks Krotoski and Katrina Jungnickel

Overview

The Serendipity Engine is a research project that describes the implications of existing web technologies for innovation and discovery, and seeks to unpack the ingredients that are involved in creating technological solutions for serendipitous outcomes.

Serendipity is important in the processes driving innovation and discovery (Johnson, 2010), but it is also an essential tool in the functions of services that aim to curate pathways of information across the World Wide Web. Connecting people with relevant information that they have not discovered before, at the moment when it is most useful, is one of the most important selling points for services like Google, and has emerged as one of the most important secondary functions of social networks like Facebook and Twitter.

Author Steven Johnson describes the Web as the ultimate serendipity engine, connecting people with tangential information at the click of a button. The key is the connections that take the user there. And the existence of these connections is a result of technological algorithms, human curation or - because of social graphs like Facebook and Twitter - processes of social influence.

But increasingly, theorists like Eli Pariser (author, The Filter Bubble) and Ethan Zuckerman (Global Voices, Berkman Centre for Internet & Society) are voicing their concerns about technologically-driven discovery solutions. They, like legal scholar Cass Sunstein (University of Chicago), information scholar Lada Adamic (University of Michigan) and researchers like Boston University's Marshall Van Alstyne and MIT's Erik Brynjolfsson who have studied the patterns of connections between people and ideas for almost two decades, recognise that the vast ocean of information online is increasingly navigated by packs of like-minded individuals: political liberals and conservatives flock together, talking in echo chambers that are pierced by unexpected social realities when real life outcomes reflect opposing viewpoints (Van Alstyne & Brynjolfsson, 2005; Adamic 2005, 2010; Sunstein, 2001); and people talk within national and linguistic borders, preferring local news to international stories rather than engaging in the global conversation predicted by the Web's ideological forefathers (Zuckerman, 2011, Media Standards Trust, 2011). This can lead to dangerous social outcomes beyond innovation stagnation: social psychological research over six decades indicates that inward-looking groups will have less tolerance for other groups, resulting in antagonism, balkanisation and the breakdown of communication. In other words, the way we currently traverse the online world may result in social division instead of social cohesion.

The consumer web is a motivator for generating this filter bubble: rather than open new ideas and opportunities up to consumers, commercially-motivated services like Amazon's Recommendation Engine or Google's search results reduce the consumer's interests to fields in a spreadsheet; Amazon uses historical behaviour to make almost certain connections between a buyer and a new product, while Google uses its database of intentions to serve relevance. Both are extraordinary marriages between technology and human psychology, but they also undermine the human by reducing him or her to binary points in a system.

These services’ current answer is to collect more data, in order to create a better picture and serve more personalised outcomes. Yet this tightens the filter even further, delivering only what the customer wants rather than facilitating serendipitous discovery. The answer to creating a true serendipity engine is not to learn more about the person seeking it, but to understand the process of serendipitous discovery itself.

Aleks and Kat propose to undertake a programme of social research to generate an understanding of the human and social processes that result in serendipitous discovery - from personal motivation and social influence to the design and construction of a technology that serves both enquiry and outcome - in order to enlighten and inform the development of the next generation of Web technologies that combine the commercial interests of online services with global social progress.

 

Serendipity, Creativity and Cybernetics

The making of 21st-century creative systems

Geraint A. Wiggins

Historically, creative humans have generally been individuals—until, that is, the philosophy of science demonstrated how at least scientific creativity can be shared, and, according to its own values, be strengthened by that sharing. In general, creative individuals are somewhat self-contained: they may use tools, or express themselves through instruments, but the locus of responsibility for creative behaviour lies squarely with the poet, painter or pianist.

The arrival of computers powerful enough to play music, and to perform advanced graphical operations, has opened up a broad new range of creative possibilities, especially for individuals who do not have the benefit of specific training or who happen to be in a social milieu in which creative activity is not highly valued. The stereotypical “kid composer in the back bedroom” became much more real with the advent of computer sequencers and samplers, simply because the software was generally free of charge, and could be run on a general purpose computer. Here, though, the computer is still not much more than a tool—albeit a very powerful, knowledgeable and sometimes almost intelligent tool.

The arrival of the internet, now approaching ubiquity in first world societies, and hugely enabling elsewhere, has brought with it a range of opportunities which we are only just beginning to explore. Mere connectivity is itself hugely powerful, allowing communication between humans as well as between computers, as we have seen in the Arab world in 2011. But communication between humans and computers remains limited and specific: the affordances of a software tool or operating system are all that is available to the vast majority of computer users, and these are often restrictive of creativity, rather than encouraging it: the software does what its designer imagined, and is often difficult to subvert to the purposes of the user. The result of this can be that creativity of users is limited, or channelled in particular ways, resulting in a kind of feedback that leads to the restriction of ideas. I claim that this effect can be seen in the striking narrowing in style of popular music from the mid-1990s onwards (though the desire of large companies to “monetize” music is a culprit here too): people learn from music technology, and the next wave of technology is designed by the people who learned from the last, and thus a rut is deepened.

The key issue separating a human from a computer in terms of communication is meaning. Humans assign meaning wherever they can—sometimes in ways that can be misleading. Computers, on the other hand, confer no meaning at all on the symbols that they shuffle around: it is the human who does this, when the symbols are transduced as images of numbers or alien invaders or as audible drum beats.

While no way is yet known of having a silicon-based computer “understand” the meaning of words, mathematical formalisms exist which can be used to make it behave, to some degree, as though it did. What this requires is a well-defined set of terms, denoting context, some standard logical connectives such as “and”, “or”, “not” and (crucially) “implies”, and some rules, expressed in terms of these connectives. An example would be the classic syllogism about Socrates: “Socrates is a man” AND “All men are mortal” IMPLIES “Socrates is mortal”. These basic reasoning tools can be enhanced in various ways to admit more human-like, qualitative reasoning. Significant amounts of time and effort were applied to solving these kinds of problems in the “Good Old-Fashioned Artificial Intelligence” (GOFAI) of the 1970s and 1980s. But reasoning, no matter how powerful, is useless without a solid and extensive basis of knowledge from which to reason, and this did not exist in the 1980s; this is why the catastrophic and ill-informed Lighthill Report condemned logic-based Artificial Intelligence to the dustbin of history.
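The style of rule-based inference described above can be sketched in a few lines. This is an illustrative toy, not part of the SerenA system: facts are plain strings, and “All men are mortal” is pre-instantiated as a single rule (a full GOFAI engine would use variables and unification rather than fixed strings).

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion)
    until no new facts can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known and
            # its conclusion is not yet in the fact base.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# "Socrates is a man" plus an instance of "All men are mortal".
facts = {"man(Socrates)"}
rules = [({"man(Socrates)"}, "mortal(Socrates)")]
derived = forward_chain(facts, rules)
```

Because rule firing only ever adds facts, the loop is guaranteed to terminate once the fact base stops growing.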

In the past decade, a seismic shift has occurred in this situation. Tim Berners-Lee’s Semantic Web has provided a computational method whereby human-readable web pages (and related information) can be encoded so that their meaning is accessible to the computers that serve them, using the technology cited above. The key feature of the Semantic Web design is that it allows (in principle) every computer user on the planet to collaborate, seamlessly, in the construction of an arbitrarily large network of knowledge. As this network grows, computers can use it to infer their own information using GOFAI techniques, which can then be added to the network (perhaps after redaction by knowledgeable humans, à la Wikipedia). And thus a self-sustaining computational quasi-intelligence is created.
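A minimal sketch of this “infer and add back to the network” loop, assuming knowledge is stored as subject–predicate–object triples in the Semantic Web style. The predicate names and the SerenA example triple are hypothetical, and the single entailment rule shown (class membership propagates up a subclass hierarchy, in the spirit of RDFS) stands in for the richer GOFAI machinery the text describes:

```python
def infer_types(triples):
    """Derive (x, "type", B) from (x, "type", A) and
    (A, "subClassOf", B), iterating until no new triples appear."""
    triples = set(triples)
    changed = True
    while changed:
        new = set()
        for (x, p1, a) in triples:
            if p1 == "type":
                for (a2, p2, b) in triples:
                    if p2 == "subClassOf" and a2 == a:
                        new.add((x, "type", b))
        fresh = new - triples
        changed = bool(fresh)
        triples |= fresh          # inferred triples rejoin the network
    return triples

# A tiny knowledge network (illustrative names only).
kb = {
    ("SerenA", "type", "ResearchProject"),
    ("ResearchProject", "subClassOf", "Project"),
    ("Project", "subClassOf", "Activity"),
}
inferred = infer_types(kb)
```

Two passes of the loop add ("SerenA", "type", "Project") and then ("SerenA", "type", "Activity"): facts no human entered explicitly, derived mechanically from the network.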

The key issue here is the collaborative nature of the development of the Semantic Web—alongside human efforts. The Semantic Web would have nothing to say without humans encoding knowledge, but humans can be supported, helped, delighted—and surprised—by the Semantic Web’s inferences, given the right software interfaces. So it is now reasonable to imagine a Semantic Web system which is capable of identifying information that humans have not noticed, and presenting it to them for evaluation, leading to an output which would be deemed creative if the human had done it alone. While it might be hard to argue that the computer system was itself creative here (for true creativity surely requires the ability to select one’s successful outputs), it is equally hard to argue that the creativity is not inherent in the hybrid system formed from the combination of the computer and its human.

Add in to this mix a logical representation of what the human knows and wants to know, and we are left with a computer system which is capable of identifying interesting outcomes for its human. But the computer system, at least in principle, has access to indefinitely more information than the human; it is capable of making reasoned inferences where the human could only win by chance action—stumbling on a result out of the human’s comfort zone. As such, its reasoning and knowledge base encodes the potential for events which are, from the human’s perspective, serendipitous. Equally, since the computer system cannot know the meaning of anything, and is capable only of stepwise symbolic reasoning, the human’s interactions with it give scope for the identification of helpful coincidence that it would otherwise miss.

Both of these events constitute a 21st century kind of serendipity, in a hybrid cybernetic system. I suggest that, in general terms of capacity for discovery and creativity, the hybrid is likely to be greater than the sum of its constituent parts. In the EPSRC-funded SerenA project, we aim to design and build the computational substrate that will enable the development of this new kind of hybrid creativity.

References and Notes: 

Johnson, S. (2010). Where Good Ideas Come From: The Natural History of Innovation. Allen Lane: London.

 

Adamic, L. A. and Glance, N. (2005). The Political Blogosphere and the 2004 U.S. Election: Divided They Blog. LinkKDD-2005, Chicago, IL.

 

Moore, M. (2010, Nov). Shrinking World: The decline of international reporting in the British Press. Media Standards Trust: London.

 

Pariser, E. (2011). The Filter Bubble: What the Internet is Hiding from You. Viking: London.

 

Sunstein, C. (2001). Echo Chambers: Bush v. Gore, Impeachment, and Beyond. Princeton University Press: Princeton, NJ.

 

van Alstyne, M. and Brynjolfsson, E. (1996). Electronic Communities: Global Villages or Cyberbalkanization? ICIS 1996 Proceedings. http://web.mit.edu/marshall/www/papers/CyberBalkans.pdf [Retrieved 3 March 2009].

 

 

Zuckerman, E. (2011). CHI Keynote: Desperately Seeking Serendipity. My Heart's in Accra: http://www.ethanzuckerman.com/blog/2011/05/12/chi-keynote-desperately-seeking-serendipity/ [Retrieved 5 May 2011].

 

A full transcript of the panel and questions is available by request from Mel Woods