Are social media, social? (Part Four)

Heidi May has made some important comments on the previous entries of Are Social Media, Social? May suggested a link to Network, A Networked Book about Network Art, a fascinating example of the extensions that are possible when communities of interest establish a context in which to work together and collaborate. May also asks about the Diaspora project, which will attempt to build an open source version of Facebook. I wish them luck. This is an essential move to broaden the scope of our expectations about the role and usage of social networks, about privacy and, most importantly, about controlling the very code that governs how we relate within virtual spaces.

A good example of the challenges we face within networked environments is what happened to the famous German philosopher Jürgen Habermas. “In January, one of the world’s leading intellectuals fell prey to an internet hoax. An anonymous prankster set up a fake Twitter feed purporting to be by Jürgen Habermas, professor emeritus of philosophy at the Johann Wolfgang Goethe University of Frankfurt. “It irritated me because the sender’s identity was a fake,” Habermas told me recently. Like Apple co-founder Steve Jobs, Zimbabwean president Robert Mugabe and former US secretary of state Condoleezza Rice before him, Habermas had been “twitterjacked”.” (Stuart Jeffries, Financial Times, April 30, 2010)

As it turns out, the fake account was removed, but not before the individual responsible was identified and apologized. Subsequently, Habermas was interviewed and made this comment:

“The internet generates a centrifugal force,” Habermas says. “It releases an anarchic wave of highly fragmented circuits of communication that infrequently overlap. Of course, the spontaneous and egalitarian nature of unlimited communication can have subversive effects under authoritarian regimes. But the web itself does not produce any public spheres. Its structure is not suited to focusing the attention of a dispersed public of citizens who form opinions simultaneously on the same topics and contributions which have been scrutinised and filtered by experts.”

Habermas suggests that power resides with the State even when social networks bring people together to protest and demonstrate. The results of these engagements are contingent and don’t necessarily lead to change or to the enlargement of the public sphere.

The question is: how does the public become enlightened? What conditions will allow for and encourage the rich interchanges that will drive new perceptions of power and new ideas about power relations?

The general assumption is that social networks facilitate the growth of constructive public debate. Yet, if that were true, how can one explain the nature of the debates in the US around health care, which were characterized by some of the most vitriolic exchanges in a generation? How do we explain the restrictive and generally anti-immigrant laws introduced by the state of Arizona? The utopian view of social networks tends to gloss over these contradictions. Yes, it is true that Twitter was banned in Iran during the popular uprising last year to prevent protestors from communicating with each other. Yes, social media can be used for good and bad. There is nothing inherent in social networks, nothing latent within their structure, that prevents them from being used for enhanced exchange and debate. For debates to be public, however, there has to be a sense that they are visible to a variety of different constituencies. The challenge is that the networks are not visible to each other; mapping them produces interesting lattice-like structures, but these say very little about the contents of the interactions.

The overall effect could be described as mythic, since we cannot connect to ten thousand people or know what they are saying to each other. At a minimum, the public sphere takes on a visible face through traditional forms of broadcast that can be experienced simultaneously by many different people. Twitter, on the other hand, allows us to see trends, but that may often not be enough to make a judgment about currency or about our capacity to intervene. Is the headline structure of Twitter enough? Should it be?

The computer screen remains the main interface and mediator in the movement of ideas from discourse to action. And, as I have discussed in previous posts, networks are abstracted instances of complex, quantitatively driven relationships. We need more research, perhaps aided by a social network established for the purpose, on whether social media are actually driving increasingly fragmented forms of interaction. A question: how many of your followers have you met? How many people leave comments on your blog, and what is the relationship between hits and comments? Beyond the ten or so web sites that everyone visits, how many of us have settled into a regular routine not unlike the bulletin boards of old?

The recent election campaign won by President Obama, in which social media played a formidable role, suggests that my questions may have no pertinence to his success. Consumer campaigns and boycotts, made all the more practical and possible by social networks, suggest the opposite of what I am saying. The potential intimacy of dialogues among strangers working together to figure out problems and meet challenges may contradict my intuition that these are variations on existing networks, albeit with some dramatic enhancements.

A final thought. We often talk about the speed with which these phenomena develop without referencing their predecessors. For example, if the Web is just an extension of bulletin boards and HyperCard systems, then we need to understand how that continuity has been built and upon what premises. If Twitter is an extension of daily conversation and is helping to build the public sphere, then we need more research on what is being said, and we need to actually examine whether tweets translate into action.


Are social media, social? (Part Three)

Some non-profits are using social media for real results. They are raising the profiles of their charities as well as increasing awareness of their work. They are connecting with a variety of communities inside and outside of their home environments. In the process, Twitter is enabling a variety of exchanges, many of which would not happen without the easy access that Twitter provides. These are examples of growth and change through the movement of ideas and projects. Twitter posts remind me of short telegrams, and as it turns out that may well be the reason the 140-character limit works so well. Social networks facilitate new forms of interaction and often unanticipated contacts. It is in the nature of networks to create nodes, to generate relationships, and to encourage intercommunication. That is, after all, one of the key definitions of a network.

Alexandra Samuel suggests: “But here’s what’s different: you, as an audience member, can decide how social you want your social media to be. If you’re reading a newspaper or watching TV, you can talk back — shake your fist in the air! send a letter to the editor! — or you can talk about it (inviting friends to watch the game with you, chatting about the latest story over your morning coffee). But the opportunities for conversation and engagement don’t vary much from story to story, or content provider to content provider. On the social web, there are still lots of people who are using Twitter to have conversations, who are asking for your comments on that YouTube video, who are enabling — and participating in — wide-ranging conversations via blog and Facebook. You can engage with the people, organizations and brands who want to hear from you… or you can go back to being a passive broadcastee.”

These are crucial points, a synopsis of sorts of the foundational assumptions of the Twitterverse and the blogosphere. At their root is an inference, or even an assertion, about traditional media that needs to be examined. Traditional media are always portrayed as producing passive experiences, or at least experiences not as intensely interactive as those of social media.

Let’s reel back a bit. Take an iconic event like the assassination of John F. Kennedy. That was a broadcast event that everyone alive at the time experienced in a deeply personal fashion. The tears, the pain, people walking the streets of Washington and elsewhere in a daze: all of this was part and parcel of a series of complex reactions, as much social as private. Or 9/11, which was watched in real time within a broadcast context. People were on the phone with each other all over the world. Families watched and cried. I could go on and on. It is not the medium that induces passivity, but what we do with the experiences.

So, Twitter and most social media are simply *extensions* of existing forms of communication. This is not in any way to downplay their importance. It is simply to suggest that each generation seems to take ownership of its media as if history and continuity were not part of the process. Or, to put it another way, the telegram and the telegraph were as important to 19th-century society as the telephone was to the middle of the 20th.

In part one of this essay, I linked Twitter and gossip. Gossip was fundamental to the 17th century and could build or destroy careers. Gossip was a crucial aspect of the Dreyfus affair. Gossip has brought down movie stars and politicians. The reality is that all media are interactive, and the notion of the passive viewer was an invention of marketers to simplify the complexity of communications between images and people, between people and what they watch, and between advertisers and their market.

For some reason, the marketing model of communications has won the day, making it seem as if we need ever more complex forms of interaction to arrive at rich yet simple experiences. All forms of communication are, to varying degrees, about interaction at different levels. Every form of communication begins with conversations and radiates outwards to media and then loops back. There is an exquisite beauty to this endless loop of information, talk, discussion, blogging, twittering and talking some more. The continuity between all of the parts is what makes communications processes so rich and engaging.


Are social media, social?

Warning: This is a long article and not necessarily suitable to a glance. (See below on glances.)

I have been thinking a great deal about social media these days not only because of their importance, but also because of their ubiquity. There are some fundamental contradictions at work here that need more discussion. Let's take Twitter. Some people have thousands of followers. What exactly are they following? And more crucially, what does the word follow mean in this context?

Twitter is an endless flow of news and links between friends and strangers. It allows and sometimes encourages exchanges that have varying degrees of value. Twitter is also a tool for people who don't know each other to learn about shared interests. These are valuable aspects of this tightly wrought medium that tend towards the interactivity of human conversation.

On the other hand, Twitter, like many blogs, is really a broadcast medium. Sure, followers can respond. And sometimes comments on blog entries suggest that a "reading" has taken place. But individual exchanges in both media tend to be short, anecdotal and piecemeal.

The general argument for the value of social media is that at least people can respond to the circulation of conversations, and that larger and larger circles of people can form to generate varied and often complex interactions. But responses as short as those that characterize Twitter are more like fragments: reactions that in their totality may say important things about what we are thinking, but that within the immediate context of their publication are, at best, broken sentences, declarative without the consequences that often arise during interpersonal discussions. So, on Twitter we can make claims or state what we feel with few of the direct results that might occur if we had to face our ‘followers’ in person.

Blogs and web sites live and die by the number of ‘hits’ they can trace and often declare. What exactly is a hit? Hit is actually an interesting word, since its original meaning was to come upon something, to meet with. In the 21st century, hits are about visits, and the more visits you have, the more likely you are to have an important web presence. Dig into Google Analytics and you will notice that it actually counts the amount of time ‘hitters’ spend on sites. The average across many sites is no more than a few seconds. Does this mean that a hit is really a glance? And what are the implications of glancing at this and that over the period of a day or a month? A glance is by definition short (like Twitter) and quickly forgotten. You don’t spend a long time glancing at someone.

Let’s look at the term twitter a bit more closely. It is a noun that means “tremulous excitement.” But its real origins are related to gossiping, and gossiping is very much about voyeurism. There is also a pejorative sense to twitter: chattering on and on about the same thing. So, we are atwitter with excitement about social media because they seem to extend our capacity to gossip about nearly everything, which may explain why Justin Bieber has been at the top of discussions within the twitterverse. I am Canadian and so is he. Enough said.

Back to follow for a moment. To follow also means to pursue. I will, for example, twitter about this blog entry in an effort to increase the readership for this article. In a sense, I want you, the reader, to pursue your interest in social media with enough energy to actually read this piece! To follow also means to align oneself, to be a follower. You may, as a result, wish to pursue me @ronburnett.

But the real intent of the word follow is to create a following. And the real intent of talking about hits is to increase the number of followers. All in all, this is about convincing people that you have something important and valuable to say, which means that social media are also about advertising and marketing. This explains why businesses are justifiably interested in using social media and why governments are entering the blogosphere and the twitterverse in such great numbers.

Here is the irony. After a while, the sheer quantity of tweets means that the circle of glances has to narrow. Trends become more important than actual content. Quantity rules, just as it does at Google, where the greater the number of hits, the more likely you are to have a site that advertisers want to use. Remember, advertisers assume that a glance will have the impact they need to make you notice that their products exist. It is worth noting that the word glance is also related to slipping.

As the circle of glances narrows, the interactions take on a fairly predictable tone with content that is for the most part, newsy and narcissistic. I am not trying to be negative here. Twitter me and find out.


Avatar, the Movie

It is always fascinating to read critical analyses of popular films when the writer actually dislikes popular culture, which raises the question: why write about something you hate? James Bowman writes for the journal The New Atlantis, and his pieces are generally anti-technology and anti-pop culture. His recent article on Avatar follows the usual arguments of critics disconnected from the culture they seem bent on critiquing. Bowman describes Avatar as a flight of fantasy, dangerous because, as with all fantasy films of this genre, it is both escapist and dangerously full of illusions, not only about society but also about the future. Interestingly, he claims that the film doesn’t follow the Western tradition of mimesis; that is, it makes no claim to imitate reality and, because of this, has no merit as art.

Bowman also says that the only difference between Avatar and other films of the same type is the use of 3D, as if the medium of film and its transformation were not part of an important aesthetic shift as well as an important shift in how stories are told. Bowman even criticizes James Cameron’s development of a new language for the indigenous people of Pandora, the Na’vi, whom Bowman describes as monkeys. Here is what he says: "The natives of Pandora are giant blue monkeys with sophisticated fiber optics in their tails and the natural world they inhabit is filled with floating mountains, huge dragon-birds whom the inhabitants ride like horses, hammer-headed hippos the size of houses, and other fantastical creatures too numerous to mention and impossible to exist on Earth." Of course the ‘natives’ are constructions, and of course they don’t exist. As with all artifice, they are the products of Cameron’s rich imagination; but in Bowman’s world, imagination is actually a dirty word.

But, enough about a bad review. To answer a question that must be creeping into your mind: why write about something I dislike? Avatar is an experiment in 3D, that is, an experiment with images that have a rather wispy feel, like the brilliant disappearing Cheshire Cat in Tim Burton’s Alice in Wonderland. 3D creates an intense feeling of pleasure in viewers largely because it is so ephemeral, not because it approximates reality. I have watched viewers try to grasp the images that come close to them. But the closeness is itself a function of the glasses we are wearing, a function of the desire to be in the image and to be a part of the experiences the images are generating.


3D in its modern incarnation is about generative images, that is, about depth, distance and a more profound sense of perspective. 3D continues the long tradition of exploring our rather human capacity and desire to enter into worlds entirely made of images. 3D extends the Renaissance exploration of line, shape and colour. That is why Avatar is so important. Sure, its story has been told many times, but crucially not in this way. The film is an exploration of a new frontier and, aside from 3D, its real innovations lie in the use of motion capture technology to create not only a synthesis of the real and imaginary, but also synthetic worlds. Finally, we can be rid of the pretension that all art must show, in the most pedantic of ways, some relationship to the real! Painters rid themselves of this crisis when they explored entire canvases of one colour (Rothko), while filmmakers and film critics still think that a black screen goes against the essence of the cinema.

Of course, 3D is in its early days as a medium for exploring the power of storytelling. And, Cameron actually got much of his inspiration for Avatar from his underwater explorations of the wreck of the Titanic. Cameron is really interested in creating new languages for conventional ways of seeing and describing the world. He didn’t need to invent a new language for the Na’vi but he did. He didn’t have to shoot all those beautiful and magical scenes of Pandora, except that if you have ever swum off a reef, you would have noticed many of the same colours and shapes and why not recreate them if you can?

Bowman doesn’t talk about what the word avatar means. Yet that is at the heart of the film. Avatars are about substitution, about substituting for what is missing, be it a body or a mind or a story. Avatars don’t replace their progenitors. That is, unless you decide, as Cameron did, that your main character must be transformed from the two-dimensional world of the screen into a Na’vi, through a death and rebirth ritual that happens to be at the heart of what nearly all major religions in the world proselytize about on an hourly basis.

Let me switch terminology for a moment and suggest that Avatar is actually a commentary on the illusions of religion and on the impossible dreams of immortality that have haunted humans since they began to paint on the walls of caves. Avatar is about that inner world, our inner world that we keep alive in order to stay alive. It is the reverse of the Platonic cave where those who are blind to reality need to be saved. Rather, the film explores those who have reconciled themselves to their fate and who have created a world that is a reflection of their weaknesses and strengths. In other words, the Na’vi are us when we dream and lest we forget, we spend a good proportion of our lives dreaming.

21st Century Student

I will call him Anthony. He arrived in Vancouver with a trunk full of DVDs. He uses SMS and a variety of social networking tools to communicate with friends and family. He uses a small video camera to record his everyday life, edits the output on a laptop and then uploads the material onto the Web. He is adept at video games, though they are not an obsession. Cell phones are expensive, but he finds the money. This sounds familiar: an entire generation working creatively with Facebook and Vimeo and YouTube and Flickr. He loves old movies, hence the DVDs. He knows more about films from the 1970s and 1980s than most film historians. He can quote dialogue from many films and reference specific shots with ease. He uses his expertise in editing to comment on the world and would prefer to show you a short video response to events than simply talk about them.

Cultural analysts tend to examine Anthony's activities and use of technology as phenomena, as moving targets that change all the time, just as they saw pop music in the 1960s as a momentary phase, or just as their early comments on personal computers generally failed to anticipate their present ubiquity.

However, what Anthony is doing is building a new language that combines many of the features of conventional languages but is more of a hybrid of many different modes of expression. Just as we don't really talk about language as a phenomenon (because it is inherent to everything that we do), we can't treat this explosion of new languages as if it were simply a phase or a cultural anomaly.

What if this is the new form and shape of writing? What if all of these fragments, verbal and non-verbal, images and sounds, are inherent to an entire generation and are its mode of expression?

Language, verbal and written, is at the core of what humans do every day. But language has always been very supple, capable of incorporating not only new words but also new modalities of expression. Music, for example, became a formalized notational system through the adaptation and incorporation of some of the principles of language. Films use narrative, but then move beyond conventional language structure into a hybrid of voice, speech, sounds and images.

As long as Anthony's incorporation of technology and new forms of expression is viewed as a mere phenomenon, it is unlikely that we will understand the degree to which he is changing the fundamental notions of communications to which we have become accustomed over the last century.

Anthony, however, has many problems with writing. He is uncomfortable with words on a page. He wants to use graphics and other media to make his points. He is more comfortable with the fragment and the poetic than he is with the whole sentence. He is prepared to communicate, but only on his own terms.

It is my own feeling that the ubiquity of computers and digital technologies means that all cultural phenomena are now available for use by Anthony and his generation and they are producing a new framework of communications within which writing is only a piece and not the whole.

Some may view this as a disaster. I see Anthony as a harbinger of the future. He will not take traditional composition classes to learn how to write. Instead, he will communicate with the tools that he finds comfortable to use and he will persist in making himself heard or read. But, reading will not only be text-based. Text on a page is as much design as it is media. The elliptical nature of the verbal will have to be accommodated within the traditions of writing, but writing and even grammar will have to change.

I have been talking about a new world of writing that our culture is experimenting with in which conventional notions of texts, literacy and coherence are being replaced with multiples, many media used as much for experience as expression. Within this world, a camera, or mobile phone becomes a vehicle for writing. It is not enough to say that this means the end of literacy as we know it. It simply means that language is evolving to meet the needs of far more complex expectations around communications. So, the use of a short form like Twitter hints at the importance of the poetic. And the poetic is more connected to Rap music than it is to conventional notions of discursive exchange. In other words, bursts of communications, fragments and sounds combined with images constitute more than just another phase of cultural activity. They are at the heart of something far richer, a phantasmagoria of intersecting modes of communications that in part or in sum lead to connectivity and interaction.

A Torn Page…Ghosts on the Computer Screen…Words…Images…Labyrinths

Exploring the Frontiers of Cyberspace (extracts from a longer piece)

“Poetry is liquid language” (Marcos Novak)

“As a writer of fantasy, Balzac tried to capture the world soul in a single symbol among the infinite number imaginable; but to do this he was forced to load the written word with such intensity that it would have ended by no longer referring to a world outside of its own self…. When he reached this threshold, Balzac stopped and changed his whole program: no longer intensive but extensive writing. Balzac the realist would try through writing to embrace the infinite stretch of space and time, swarming with multitudes, lives, and stories.” (Six Memos for the Next Millennium, Italo Calvino)

Is it possible to imagine a labyrinth without a defined pattern, without a center or exit point? What if we enter that labyrinth and wander through its hallways, endlessly opening doors which lead to other doors, with windows which look out over other windows? What if there is no real core to the labyrinth and it is of unknown size? This may be an apt metaphor for virtual reality, for the vast network of ideas which now float across and between the many layers of cyberspace.

“A year ago, I was halfway convinced that cyberspaces where you can experience the sensation of hefting a brick or squeezing a lemon probably won’t be feasible for another twenty or thirty years. A month ago, I saw and felt something that shook my certainty. When I tried the first prototype of a pneumatic tactile glove in inventor Jim Hennequin’s garage in Cranfield, an hour’s drive southwest of London, I began to suspect that high-resolution tactile feedback might not be so far in the future. The age of the Feelies, as Aldous Huxley predicted, might be upon us before we know what hit us.” (Howard Rheingold, Virtual Reality, New York: Touchstone, 1992, p. 322)

Sometimes the hallways of this labyrinth narrow and we hear the distant chatter of many people and are able to ‘browse’ or ‘gopher’ into their conversations. Other times, we actually encounter fellow wanderers and exchange details about geography, the time, information gained or lost during our travels. The excitement of being in the labyrinth is tempered by the fact that as we learn more and more about its structure and about surviving within its confines, we know that we have little hope of leaving. Yet, it is a nourishing experience at one level because there are so many different elements to it, all with a life of their own, all somehow connected and for the most part available to us. In fact, even though we know that the labyrinth has borders, it seems as if an infinite number of things could go on within its hallways and rooms. It is almost as if there is too much choice, too much information at every twist and turn. Yet, this disoriented, almost chaotic world has a structure. We don’t know the designers. They may have been machines, but we continue to survive in part because we have some confidence in the idea that design means purpose, and purpose must mean that our wanderings will eventually lead to a destination. (This may be no more than a metaphysical claim, but it keeps the engines of Cyberspace running at high speed.)

In order to enter a virtual labyrinth you must be ready to travel by association. In effect, your body remains at your computer. You travel by looking, by reading, by imaging and imagining. The eyes are, so to speak, the royal road into virtuality.

“Cyberspace — The electronic frontier. A completely virtual environment: the sum total of all [BBSes], computer networks, and other [virtual communities]. Unique in that it is constantly being changed, exists only virtually, can be practically infinite in “size”; communication occurs instantaneously world-wide — physical location is completely irrelevant most of the time. Some include video and telephone transmissions as part of cyberspace.” (A. Hawks, Future Culture — December 31, 1992)

In the labyrinth of Cyberspace, design is the logic of the system. Cyberspace reproduces itself at so many different levels at once and in so many different ways, that the effects are like an evolutionary explosion, where all of the trace elements of weakness and strength coexist. The architecture of this space is unlike any that has preceded it and we are consequently grappling with discursive strategies to try and describe the experiences of being inside it. The implication is that there is no vantage point from which you can watch either your progress or the progress of others. There isn’t a platform upon which you can stand to view your experience or the experience of your neighbours. In other words, the entire system doesn’t come into view — how could you create a picture of the Internet? Yet, you could imagine the vast web-like structure, imagine, that is, through any number of different images, a world of microelectronic switches buzzing at high speed with the thoughts and reflections of thousands of people. The more important question is what does this imagining do to our bodies, since to some degree Cyberspace is a fiction where we are narrator and character at one and the same time? What are the implications of never knowing the shape and architecture of this technological sphere which you both use and come to depend on? What changes in the communicative process when you type a feeling onto a computer screen, as opposed to speaking about it? What does that feeling look like in print? Does the computer screen offer a space where the evocative strength of a personal letter can be communicated from one person to another?

Jaron Lanier and The Hazards of Online Collectivism

Jaron Lanier, who is famous for having coined the term virtual reality and the concepts that go with it, wrote an essay in late May that has provoked discussion all over the internet. The essay is entitled "Digital Maoism: The Hazards of the New Online Collectivism," and the complete article can be found at the EDGE website. Here is a quote from the piece:

The problem I am concerned with here is not the Wikipedia in itself. It's been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. At the very least it's a success at revealing what the online people with the most determination and time on their hands are thinking, and that's actually interesting information.

No, the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.

The EDGE also has 28 pages of responses to what Lanier says.

The essence of his argument is that collaborative work on the net has become increasingly hive-like. This leads to a "group mentality" approach to ideas and the notion that the "collective is all-wise." The result is a tyranny of the majority, with a simultaneous loss of value both to intellectual depth and to the way democracies operate. He is particularly critical of Wikipedia, the online encyclopedia, which is being built by individuals from all over the world in much the same manner as open source software. I have commented on Wikipedia before. Some of Lanier's fears are well-founded, but for the most part his comments don't explain or clarify why networked forms of knowledge construction are any more hive-based than most intellectual projects. Generally, irrespective of the type of knowledge or information produced, there are communities of interest that define and reinforce the concepts, categories and arguments that they support. This has been discussed in great depth by people like Bruno Latour, and Elias Canetti's important 1962 book, "Crowds and Power," examined the phenomenon of mass hysteria and the tendency toward a kind of viral effect when large groups of people operate in tandem.

Lanier's points need discussion, not least because networked forms of interaction on the scale that we are seeing at the moment are still very new. That said, there is not much depth to his analysis of conventional media. He is too skeptical of Popular Culture and gives too much weight to the role of sites like Wikipedia. His concern about the aggregative role played by the many sites that are about other sites is overstated; he worries that these meta-sites will play an overly powerful role as arbiters of taste and choice. In this, I think he underestimates the intelligence of Internet users. Nonetheless, it is an important article to read.

The context for learning, education and the arts (5)

(Please refer to the previous entries in this series: One, Two, Three, Four, Five.)

My point here is that although computers are designed by humans, programmed by humans and then used by humans, this tells us only part of the story. The various dimensions of the experience are not reducible to one of the above instances nor to the sum total of what they suggest about computer-human interaction. Instead, most of what makes up the interaction is not predictable, is full of potential errors of translation and action and is not governed by simple rules of behaviour.

Smith puts it well: “…what was required was a sense of identity that would support dynamic, on-the-fly problem-specific or task-specific differentiation — including differentiation according to distinctions that had not even been imagined at a prior, safe, detached, ‘design time.’” (Smith: 41)

Computational structures cannot be designed in anticipation of everything that will be done with them. This crucial point can be used to explain, if not illustrate, the rather supple nature of machine-human relations. As well, it can be used to explain the extraordinary number of variables that simultaneously make it possible to design a program and not know what will be done with it.

Another example of this richness at work comes from the gaming community (which is different from the video game community). There are tens of thousands of people playing a variety of games over the internet. Briefly, the games are designed with very specific parameters in mind. But what gamers are discovering is that people are grouping themselves together in clans in order to win. These clans are finding new ways of controlling the games and rewriting the rules to their own specifications, thereby alienating many of the other players. In one instance, in response to one such sequence of events, a counter-group got together and tried to create some semblance of governance to control the direction in which the game was headed. After some months, the governing council that had been formed grew more and more fascistic and set inordinately strict rules for everyone. The designer of the game quit in despair.

This example illustrates the gap, the necessary gap, between the “representational data structure” (Smith: 43) that initially set up the parameters of the game and the variables that were introduced by the participants. But it also points out the limitations of the design process, limitations that cannot be overcome by increasingly complex levels of design. This is, in other words, a problem of representation. How can code be written at a level that will be able to anticipate use? The answer is that, for the most part, it can only be done with great difficulty. It is our cultural investment in the power of the computer that both enhances and changes the coding and the use. We have thus not become extensions of the machine but have acted in concert with it, much as we might with another human being. This is hybridity, and it suggests that technology and the practical use to which we put technology always exceed the intentional structures that we build into them.

It is within and through this excess that we learn. It is because of this excess that we are able to negotiate a relationship with the technologies that make up our environment. And it is the wonder, the freshness, the unpredictability of the negotiation process that leads us to unanticipated results, such as, for example, Deep Blue actually beating Kasparov!

The Practice of Interdisciplinarity in Design and New Media (Final)

Please refer to the last three entries for the context for this series.

The NewMic collaboration began with two major reference points, Xerox PARC in Palo Alto and MIT’s Media Lab. Again, this was not unusual. Other projects in Montreal, Melbourne, Dublin and Germany referred to and attempted to reflect the successes of MIT and Xerox. In the beginning, the mandate of NewMic was described as follows:

To accomplish its mission, NewMIC is focused on the following objectives:

• Attracting and retaining outstanding faculty and graduate and undergraduate students in new media research and in art and design areas.
• Building excellence in new media innovation.
• Creating more skilled IT staff and industry clusters.
• Developing better industry-university-institute collaboration for the purposes of technology transfer.
• Encouraging the transfer and commercialization of technology through incubation support.
• Attracting more venture capital to the new media industry. (March 2001)

The industrial design component was incorporated into the vision by default under the rubric of New Media. This proved to be an error because so much of New Media is driven by the cross-disciplinary relationship among interface design, product design and inclusive design. Ultimately, the goal was to frame the experience of users of New Media within a product-oriented set of research pursuits. Ironically, many of the lessons that designers have learned over the last two decades were not directly applied to the research in New Media: the importance of detailed ethnographic inquiry; the need to think about the relationship between product and user; the flexibility that is necessary to make interfaces work for many diverse constituents; the fact that design is really about people (see a recent speech by Dr. Stefano Marzano, CEO & Chief Creative Director, Philips Design); and the knowledge that inclusivity cannot be attained without understanding how people live.

The emphasis on innovation, technology transfer and commercialization, although necessary, cannot be accomplished in a context that is entirely oriented towards applied research with short timelines. This is a conundrum because it is completely understandable that industry would want to see some results from their investment, but the essence of collaboration is that it takes time. In fact, one of the crucial lessons of the NewMic experience is that developing designs that are environmentally sensitive and inclusive requires not only that people from different disciplines participate, but that time be given over to the development of shared communities of interest. Interdisciplinarity is as much about a coming together as it is about recognizing differences.

Here are some examples of the discussions that were held on various projects:

Scenario 1:

Setting: World Trade Organization demonstrations in Seattle
Technology: Wireless devices
1. Two organizers need to stay in constant contact. They need to gain access to information quickly and efficiently.
2. Their wireless devices have to have access to a mapping program that allows them to constantly track each other.
3. They run into unexpected problems including some demonstrators destroying public property, additional police blockades and more passive demonstrators who want to march peacefully but find themselves caught up in the action.


The devices would need the following capabilities:

1. Telephonic
2. Exchange of information
3. Mapping
4. Ability to connect to other organizers
5. Ability to send video images quickly to confirm events
6. Ability to allow other organizers to join their private network
7. Ability to gather in snippets of news broadcasts for additional overviews of the information
8. Instant messaging
9. Use of icons to show location and intention

The distance between the devices determines connectivity; peering relationships are established and can change as circumstances permit. An important feature would have to be the ability to identify hostile as well as friendly “connects.”
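The peering logic described above could be sketched as follows. This is a minimal illustration, not an actual NewMic prototype: the `Device` class, the `radio_range` value and the `trusted` set are all assumptions introduced for the example.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Device:
    owner: str
    x: float                      # position in metres (hypothetical coordinates)
    y: float
    radio_range: float = 100.0    # illustrative radio range in metres
    trusted: set = field(default_factory=set)

    def distance_to(self, other: "Device") -> float:
        return math.hypot(self.x - other.x, self.y - other.y)

    def reachable(self, other: "Device") -> bool:
        # Connectivity is determined purely by physical distance,
        # so peering changes as the devices move.
        return self.distance_to(other) <= min(self.radio_range, other.radio_range)

    def classify(self, other: "Device") -> str:
        # Flag hostile as well as friendly "connects".
        if not self.reachable(other):
            return "out of range"
        return "friendly" if other.owner in self.trusted else "hostile"

a = Device("organizer-1", 0, 0, trusted={"organizer-2"})
b = Device("organizer-2", 30, 40)
c = Device("unknown", 50, 50)
print(a.classify(b))   # friendly (distance 50 is within range)
print(a.classify(c))   # hostile (in range but not trusted)
```

In a real ad hoc network the trust list itself would have to be exchanged and revised on the fly, which is where the ability to let other organizers join the private network would come in.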

Living Archive:

The living archive becomes an adaptable software component of the P2P network. As the demonstrations develop, the LA brings all of the data into a series of predetermined categories. Then, using AI, it begins to prioritize the input and reorder it to reflect moment-to-moment changes in events.


Key features:

1. Memory cells
2. Visible icons for the cells
3. Input tracing
4. Output tracing
5. Cells can be rearranged and edited in much the same way as a series of images
6. As different memory cells are attached to each other, the program maps the history
7. Images and sounds form one of the sources for the cells
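The archive's basic cycle of ingesting items into memory cells and reprioritizing them could be sketched like this. The category names and the recency-based ranking are illustrative assumptions standing in for the AI component described above.

```python
from collections import defaultdict

class LivingArchive:
    """Minimal sketch of the living archive: incoming items are sorted into
    predetermined categories (the 'memory cells') and the categories are
    reprioritized as events unfold."""

    CATEGORIES = ("police", "property", "peaceful_march")  # predetermined

    def __init__(self):
        # category -> list of (timestamp, item); each list is one memory cell
        self.cells = defaultdict(list)

    def ingest(self, timestamp: int, category: str, item: str) -> None:
        if category not in self.CATEGORIES:
            category = "uncategorized"
        self.cells[category].append((timestamp, item))

    def priorities(self) -> list:
        # Stand-in for the AI component: rank cells by most recent activity,
        # so the ordering reflects moment-to-moment changes in events.
        return sorted(self.cells,
                      key=lambda c: max(t for t, _ in self.cells[c]),
                      reverse=True)

archive = LivingArchive()
archive.ingest(1, "peaceful_march", "march begins downtown")
archive.ingest(2, "police", "new blockade at 4th Ave")
archive.ingest(3, "property", "windows broken on Pine St")
print(archive.priorities())  # most recently active cell first
```

Because each cell keeps its input history, the tracing and rearranging features listed above would amount to editing and re-linking these timestamped lists.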

The key to a successful communications network will be the ad hoc nature of the usage. There will have to be enough elements to allow for changes on the spot.