Digital Culture Notes: Part Two

E-Books, iPads and Digital Things

 

Much has been made of the iPad’s possible influence on the future of reading and writing. Many of the fears about the disappearance of physical books are justified, just as the worries about the future of newspapers need to be taken very seriously. There is no doubt that we have entered an unstable period of change as various traditional forms of media shift to accommodate the impact of the Internet and digital culture in general.

However, the idea that books will disappear or that newspapers will lose their relevance because they may have to shift to devices like the iPad is naïve at best and alarmist at worst. After all, books are really just pages and pages of discourse, sometimes fictional, often not. All the genres that make up what we call the modern novel are not dependent on the physical boundaries established by traditional book production. In fact, an argument can be made that the process through which books have been brought to the attention of the reading public (ads, publicity campaigns and so on) is more in danger of dying than the books themselves. There is only one way in which books will die, and that is if we cease to speak or if we shift so dramatically to an oral culture that the written word becomes redundant.

An argument could be made that people inundated by many different forms of media expression will relegate books to the attics in their homes and in their minds. And a further argument could be made that the decline of reading has been happening for some time, if we look at the number of books sold over the last decade. There is a real danger that books and the reading public will shrink even further.

Nevertheless, my sense is that reading has morphed onto the Web and other media and that reading is now more about glances and headlines than the in-depth absorption of texts. We now have a multimedia space that links all manner of images with texts and vice-versa. The nature of content is shifting, as are the venues in which that content can be read. The design of graphical spaces is often more important than words. Texts on the iPad can have moving images, sounds and so on embedded in them, in much the same manner as web pages do now. However, this phantasmagoria of elements is still governed by language, discourse and expression.

Matt Richtel has an article in the New York Times that examines the interaction of all of these divergent media on users. “At home, people consume 12 hours of media a day on average, when an hour spent with, say, the Internet and TV simultaneously counts as two hours. That compares with five hours in 1960, say researchers at the University of California, San Diego. Computer users visit an average of 40 Web sites a day, according to research by RescueTime, which offers time-management tools.” Richtel suggests that the intensity of these activities and the need to multitask are rewiring the human brain. I am not able to judge whether that is true, but regardless, it would be foolhardy not to recognize that all of this activity increases the speed of interaction. Clearly, reading a non-fiction book is not about speed, and books in general cannot be read in the same way as we read web pages, especially if we are looking at book content on mobile phones.

The same can be said for newspapers, which over the years have been designed to entice readers into their pages through headlines in order to slow down the tendency to glance or scan. This tells us something about the challenges of print. We tend to assume that the existence of a newspaper means that it is read. But there has always been a problem with attention spans. Newspapers are as much about a quick read as web pages are. Newspapers are generally read in a short time, on buses or trains — talk about multitasking.

As it turns out, this is very much the same for many genres of the novel, from thrillers to the millions of potboilers that people read and that are not generally counted when reference is made to the reading public. In fact, the speed of reading has accelerated over the last hundred years, in large measure because of the increased amount of information that has become available and the need to keep up.

This is where e-books and the iPad come in. E-books are an amazing extension of books in general, another and important vehicle for the spread of ideas. The iPad will make it possible (if authors so desire) to extend their use of words into new realms. Remember that when the cinema was invented in 1895, among the very first comments in the British Parliament was the claim that moving images would destroy theatre, books and music. Instead, the cinema has extended the role of all of these forms, either through adaptation or integration. Writers remain integral to all media.

 

Are Social Media, Social? (Part Nine)

The ties that bind connect people, families and communities, but those ties remain limited and small in number, however richly endowed they may appear to be within the context of discussions about social media. As I mentioned in my previous post, this is a fragile ecology that assumes, among other things, that people will stay on top of their connections to each other and maintain the strength and frequency of their conversations over time.

It also means that the participatory qualities of social media can be sustained amidst the ever-expanding information noise coming at individuals from many different sources. Remember, sharing information or even contributing to the production of information doesn’t mean that users will become more or less social. The assumption that social media users make is that they are being social because they are participating within various networks, but there is no way of knowing, other than through some really hard-edged research, whether that is really the case.

One of the most fascinating aspects of social media is what I would describe as statistical overload or inflation. “There are now more Facebook users in the Arab world than newspaper readers, a survey suggests. The research by Spot On Public Relations, a Dubai-based agency, says there are more than 15 million subscribers to the social network. The total number of newspaper copies in Arabic, English and French is just under 14 million.” (viewed on May 25, 2010). I am not sure how these figures were arrived at since no methodology was listed on their website. The company is essentially marketing itself by making these statistics available. There are hundreds of sites which make similar claims. Some of the more empirical studies that actually explain their methodologies still only sample a small number of users. Large scale studies will take years to complete.

The best way to think of this is to actually count the number of blogs that you visit on a regular basis or to look at the count of your tweets. Inevitably, there will be a narrowing not only of your range of interests but of the actual number of visits or tweets that you make in any given day. The point is that statistics of use tell us very little about reading, depth of concern or even effects.

The counter to this argument goes something like this. What about all those YouTube videos that have gone viral and have been seen by millions of viewers? Or, all the Blogs that so many people have developed? Or, the seemingly endless flow of tweets?

Jakob Nielsen at useit.com, who has been writing about usability for many years, makes the following claim. “In most online communities, 90% of users are lurkers who never contribute, 9% of users contribute a little, and 1% of users account for almost all the action. All large-scale, multi-user communities and online social networks that rely on users to contribute content or build services share one property: most users don't participate very much. Often, they simply lurk in the background. In contrast, a tiny minority of users usually accounts for a disproportionately large amount of the content and other system activity.” (viewed on May 25, 2010) Nielsen’s insights have to be taken seriously.

The question is why we are engaging in this inflated talk about the effects and impact of social media. Part of the answer is the sheer excitement that comes from the mental images of all these people creating, participating, and speaking to each other, even if their number is smaller than we think. I see these mental images as projections, ways of looking at the world that more often than not align with our preconceptions rather than challenge them.

So, here is another worrying trend. When Facebook announces 500 million people using its site(s), this suggests a significant explosion of desire to create and participate in some form of communications exchange. It says nothing about the content (except that Facebook has the algorithms to mine what we write) other than through the categories Facebook has, which do tend to define the nature of what we exchange. For example, many users list hundreds of friends, which becomes a telling sign of popularity and relevance. It is pretty clear that very few members of that group actually constitute the community of that individual. Yet, there is an effect in having that many friends, and that effect is defined by status, activities and pictures as well as likes and dislikes.

None of this is necessarily negative. The problem with the figure 500 million is that it projects a gigantic network from which we as individuals can only draw small pieces of content. And most of this network is of necessity virtual and detached from real encounters. This detachment is what encourages communication, but it can also discourage social connections. This is why privacy is so important. It is also why the anti-Facebook movement is gathering strength. The honest desire to communicate has been supplanted by the systematic use of personal information for profit.

Part Ten. Follow me on Twitter @ronburnett

 


 

Are Social Media, Social? (Part Eight)

The Ties that Bind… The appearance of portable video in the late 1960's and early 1970's led to a variety of claims about the potential for community media. The most important claim was that video in the hands of community members would allow people in various disenfranchised communities to have a voice. This claim was always stated in contrast to mainstream media, which were viewed as one-way and intent on removing the rights of citizens to speak and be heard.

Keep in mind that communities are variously defined by the ties that bind people together. Cities are really agglomerations of villages, impersonal and personal at the same time. Urban environments are as much about the circulation of information as they are about the institutions that individuals share, work in and create. Cities are also very fragile environments, largely dependent upon the good will of citizens at all levels of activity. So, communities change all of the time, as do the means of communication that they use. There is a constant, ever widening and profoundly interactive exchange of information going on in any urban centre. The buzz is at many levels, from the most personal and familial to the public context of debate about local, national and international issues.

In the post-9/11 world, the two-way flow of information and communication has become even more central to urban life. It is not just the appearance and then massive increase in the use of mobile technologies that has altered what communities do and how they see themselves; it is the incessant commentary by many different people on their own lives, on the lives of others and on every aspect of the news that has altered both the mental and physical landscape we inhabit. All of this, however, is very fragile. In a world increasingly defined by the extended virtual spaces that we all use, social media platforms define the ties that bind.

In my last entry, I ended with the statement that only eleven percent of internet users actively engage with Twitter on a daily basis. Take a look at [this visualization](http://informationarchitects.jp/) and you will notice that there are 140 people or organizations that dominate Twitter usage. This doesn't mean that everyone else is not twittering; it just suggests that the community of relationships developed through Twitter is not as broad as one might imagine, nor is it as local as the notion of community would suggest. This idea of an extended space lengthens and widens the reach of a small number of people while everyone else essentially maintains the village approach to their usage. The key difference from earlier historical periods is that we imagine a far greater effect for our own words than is actually possible.

From time to time, such as during the Haiti crisis, the best elements of this new and extended social world come to the fore. However, if you take a hard look at some of the research on news blogs you will discover that the vast majority link to legacy media and get most of their information from traditional sources. Even the categories used by bloggers retain the frameworks and terminology of the mainstream media.

Part of the irony here is that in order for blogs to move beyond these constraints, they would actually have to construct organizations capable of doing research and distinguishing between what is true and what is false. At the same time, the controlled anarchy of the Web allows information to seep through that might otherwise have been hidden or restrained. The total picture, however, is not as diverse as social media advocates would have us believe.

Part Nine 


 

Are Social Media, Social? (Part Seven)

First let me say that I have really appreciated all of the carefully thought out comments sent in by readers. The last two entries, including this one, directly and indirectly reference your input.

Go to FWIX to follow the latest local news in your area. Much like Twitter, FWIX lets you follow the local news based on your interests. This is not dissimilar to the aggregate approach taken by many blogs like the Huffington Post and The Daily Beast. The core difference is that news sites select their bloggers, while FWIX relies on entries produced by locals. A site like NowPublic, which started in Vancouver but was bought out by a Denver-based investment firm, also relies on public participation, although there is a good deal more vetting than on other social news sites.

To what degree is the news different on these aggregate sites and what does this say about the use of media? What is the difference between traditional broadcast news and social news? Or, have we all become journalists, writers and commentators on the communities we live in and on the broader political stories that we share?

Part of what makes social media distinct is the *strength* of the ties between people and the stories and messages they exchange. The suggestion that living in a city makes you an expert on local stories depends on many factors, not the least of which is what community you belong to, what your work is and where you live. There is no guarantee that being a local confers any greater depth upon a writer or observer. In fact, in some instances the opposite claim can be made. I would suggest that social news broadens the base of potential stories but that the vast majority of what is published is essentially hearsay. In general, with some exceptions, social news sites become a reflection of a small number of users and writers who effectively take on the job for the community of readers.

Digg uses submissions from readers to build a picture of the importance of some topics over others. Numbers count. In 2006 it became apparent that a small number of writers were manipulating the ratings, not only to dominate the trends of the time but also to promote their own blogs. An investigation showed that thirty users had taken over.

The internal picture that we have of the Internet makes it appear as if everything we do and say within its confines will have an audience. The network is so large, that news aggregation in particular gives off the impression of connectivity and currency. There is no obvious way of testing these claims other than through a quantitative analysis of visitors and some in-depth studies of usage patterns and learning experiences. Rating a story is not good enough. Feedback is essential to the lifeblood of social news but in reality only a few sites attract the traffic to make them relevant.

This is where Twitter comes in. The brilliance of this short messaging system was all too obvious during the crisis in Iran last year. It has also been very useful in other crisis situations in Africa and Asia. No claims are made to journalistic truth. Twitter entries are newsy without all the baggage of the news attached to them. Recent events in Thailand bore this out, as protesters were able to keep track of their own movements and those of the police throughout Bangkok, and news agencies used the Twitter entries to explain what was happening.

However, let's delve a bit more deeply into this. The following quote may articulate some of the complications here:

While the standard definition of a social network embodies the notion of all the people with whom one shares a social relationship, in reality people interact with very few of those "listed" as part of their network. One important reason behind this fact is that attention is the scarce resource in the age of the web. Users faced with many daily tasks and large number of social links default to interacting with those few that matter and that reciprocate their attention. For example, a recent study of Facebook showed that users only poke and message a small number of people while they have a large number of declared friends. And a casual search through recent calls made through any mobile phone usually reveals that a small percentage of the contacts stored in the phone are frequently contacted by the user.
(Bernardo A. Huberman, Daniel M. Romero and Fang Wu, Social Computing Lab, HP Laboratories, arXiv:0812.1045v1 [cs.CY], 4 Dec 2008)

In the same article, the authors talk about how, after analyzing thousands of Twitter users, they came to the conclusion that even with a large following, the central motivating factor in most tweets is to keep friends and family updated on both personal and public news. Their analysis also showed that the number of friends and family involved in the exchanges was quite small. Once again, the overall size of the network as a whole is making it appear as if more is actually going on than is possible given the daily habits of most users. As it turns out, a tiny number of Twitter personalities and sites gather in most of the usage. As with the news, over time, readers will default to a small number of acceptable sources.

A December 2008 Pew study showed that eleven percent of Americans who are online use Twitter. The mental image we have is of something far larger going on, and guess where that has come from? Broadcast media, in other words television, and the twenty or so most visited news sites on the web, which are also the most traditional.

More on this in my next posting. Follow me on Twitter @ronburnett

Part Eight 


     

    Are social media, social? (Part Four)

    Heidi May has produced some important comments on the previous entries of Are Social Media, Social? May suggested a link to Network, A Networked Book about Network Art, which is a fascinating example of the extensions that are possible when communities of interest establish a context to work together and collaborate. Heidi May also asks about the Diaspora project. Diaspora will attempt to build an open source version of Facebook. I wish them luck. This is an essential move to broaden the scope and expectations that we have about the role and usage of social networks, about privacy and, most importantly, about controlling the very code that governs how we relate within virtual spaces.

    A good example of some of the challenges that we face within networked environments is what happened to the famous German philosopher, Jürgen Habermas. “In January, one of the world’s leading intellectuals fell prey to an internet hoax. An anonymous prankster set up a fake Twitter feed purporting to be by Jürgen Habermas, professor emeritus of philosophy at the Johann Wolfgang Goethe University of Frankfurt. “It irritated me because the sender’s identity was a fake,” Habermas told me recently. Like Apple co-founder Steve Jobs, Zimbabwean president Robert Mugabe and former US secretary of state Condoleezza Rice before him, Habermas had been “twitterjacked”.” (Stuart Jeffries, Financial Times, April 30, 2010)

    As it turns out, the fake account was removed, but not before the individual responsible was found and apologized. Subsequently, Habermas was interviewed and made this comment:

    “The internet generates a centrifugal force,” Habermas says. “It releases an anarchic wave of highly fragmented circuits of communication that infrequently overlap. Of course, the spontaneous and egalitarian nature of unlimited communication can have subversive effects under authoritarian regimes. But the web itself does not produce any public spheres. Its structure is not suited to focusing the attention of a dispersed public of citizens who form opinions simultaneously on the same topics and contributions which have been scrutinised and filtered by experts.”

    Habermas suggests that power resides with the State even when social networks bring people together to protest and demonstrate. The results of these engagements are contingent and don’t necessarily lead to change or to the enlargement of the public sphere.

    The question is: how does the public become enlightened? What conditions will allow for and encourage rich interchanges that will drive new perceptions of power and new ideas about power relations?

    The general assumption is that social networks facilitate the growth of constructive public debate. Yet, if that were true, how can one explain the nature of the debates in the US around health care, which were characterized by some of the most vitriolic exchanges in a generation? How do we explain the restrictive and generally anti-immigrant laws introduced by the state of Arizona? The utopian view of social networks tends to gloss over these contradictions. Yes, it is true that Twitter was banned in Iran during the popular uprising last year to prevent protestors from communicating with each other. Yes, social media can be used for good and bad. There is nothing inherent in social networks, nothing latent within their structure, that prevents them from being used for enhanced exchange and debate. For debates to be public, however, there has to be a sense that the debates are visible to a variety of different constituencies. The challenge is that the networks are not visible to each other; mapping them produces interesting lattice-like structures, but these say very little about the contents of the interactions.

    The overall effect could be described as mythic, since we cannot connect to ten thousand people or know what they are saying to each other. At a minimum, the public sphere takes on a visible face through traditional forms of broadcast that can be experienced simultaneously by many different people. Twitter, on the other hand, allows us to see trends, but that may often not be enough to make a judgment about currency and our capacity to intervene. Is the headline structure of Twitter enough? Should it be?

    The computer screen remains the main interface and mediator between the movement of ideas from discourse to action. And, as I have discussed in previous posts, networks are abstracted instances of complex, quantitatively driven relationships. We need more research (and perhaps establishing a social network to do this would help) on whether social media are actually driving us towards increasingly fragmented forms of interaction. A question: how many of your followers have you met? How many people leave comments on your blog, and what is the relationship between hits and comments? Beyond the ten or so web sites that everyone visits, how many of us have settled into a regular routine not unlike the bulletin boards of old?

    The recent election campaign won by President Obama, in which social media played a formidable role, suggests that my questions may have no pertinence to his success. Consumer campaigns and boycotts, made all the more practical and possible by social networks, suggest the opposite of what I am saying. The potential intimacy of dialogues among strangers working together to figure out problems and meet challenges may contradict my intuition that these are variations on existing networks, albeit with some dramatic enhancements.

    A final thought. We often talk about the speed with which these phenomena develop without referencing their predecessors. For example, if the Web is just an extension of bulletin boards and hypercard systems, then we need to understand how that continuity has been built and upon what premises. If Twitter is an extension of daily conversation and is helping to build the public sphere, then we need more research on what is being said and an actual examination of whether tweets translate into action.

    Part Five 

    Are social media, social? (Part Two)

    Okay. Lots of responses to my previous entry. As I said at the end of that article, I am not trying to be negative. I am actually responding to the profoundly important critique of the digitally induced and digested world of communications that Jaron Lanier distills in his recent book, You Are Not a Gadget.

    Mashable, a great web site, has an article entitled 21 Essential Social Media Resources You May Have Missed. Most of what the article describes is very important. This is truly the utopian side of the highly mediated universe that we now inhabit. But, as Lanier suggests, mediation does come with risks, not the least of which is a loss of identity. Who am I in the Twitterverse, or even within the confines of this blog? And why would you want to know?

    According to Lanier, "A new generation has come of age with a reduced expectation of what a person can be, and of who each person might become." (I can't give you a page number because my Kindle doesn't show page numbers! Location 50-65, whatever that means.) The Mashable article would seem to contradict Lanier, describing as it does many instances of social media use that have genuinely benefitted a pretty large number of people. What Lanier is getting at goes beyond these immediate examples. He talks at length about a lock-in effect that comes from the repeated use of certain modes of thought and action within the virtual confines of a computer screen.

    He is somewhat of a romantic, talking about the need for mystery and asking what cannot be represented by a computer. This is an important issue. The underlying structure of the web, and of the social media that piggyback on that structure, is pretty much the same as it was when Tim Berners-Lee built on hypertext ideas, familiar from systems like Apple's HyperCard, to create something far grander.

    UNIX underlies the operating systems of many computers, and its command-line interface has not evolved that much since the 1980's. Open up the Terminal program on a Mac and take a look at it. Lanier's point is that this says something about how we use computers. Most people cannot change the underlying system that has been put in place. That is why open source programming is so exciting. But even open source is developed by very few people.

    Could we, for example, develop our own Twitter-like client? Could we, should we, become programmers with enough savvy to create a new and less commercially oriented version of Facebook? Even the SDK for the iPhone and the iPad requires a massive time investment if you want to learn how to develop an App. Yes, you can follow a set of instructions, but no, you cannot recreate the SDK to make it your own.

    Now, some would say that the use of this software is more important than its underlying language. However, imagine applying that same principle to speech and to creativity. This is not about tools. This is about the structure, the embedded nature of the mechanisms that allow things to happen. And, as Lanier suggests, most people have been experiencing digital technology without understanding how that structure may influence their usage of the technology.

    Part Three

    Are social media, social?

    Warning: This is a long article and not necessarily suitable for a glance. (See below on glances.)

    I have been thinking a great deal about social media these days not only because of their importance, but also because of their ubiquity. There are some fundamental contradictions at work here that need more discussion. Let's take Twitter. Some people have thousands of followers. What exactly are they following? And more crucially, what does the word follow mean in this context?

    Twitter is an endless flow of news and links between friends and strangers. It allows and sometimes encourages exchanges that have varying degrees of value. Twitter is also a tool for people who don't know each other to learn about shared interests. These are valuable aspects of this tightly wrought medium that tend towards the interactivity of human conversation.

    On the other hand, Twitter, like many blogs, is really a broadcast medium. Sure, followers can respond. And sometimes, comments on blog entries suggest that a "reading" has taken place. But individual exchanges in both media tend to be short, anecdotal and piecemeal.

    The general argument around the value of social media is that at least people can respond to the circulation of conversations and that larger and larger circles of people can form to generate varied and often complex interactions. But responses of the brevity that characterizes Twitter are more like fragments: reactions that in their totality may say important things about what we are thinking, but that within the immediate context of their publication are, at best, broken sentences, declarative but without the consequences that often arise during interpersonal discussions. So, on Twitter we can make claims or state what we feel with few of the direct results that might occur if we had to face our ‘followers’ in person.

    Blogs and web sites live and die because they can trace and often declare the number of ‘hits’ they receive. What exactly is a hit? Hit is actually an interesting word, since its original meaning was to come upon something, to meet with. In the 21st century, hits are about visits, and the more visits you have the more likely you are to have an important web presence. Dig into Google Analytics and you will notice that it actually counts the amount of time ‘hitters’ spend on sites. The average across many sites is no more than a few seconds. Does this mean that a hit is really a glance? And what are the implications of glancing at this and that over the period of a day or a month? A glance is by definition short (like Twitter) and quickly forgotten. You don’t spend a long time glancing at someone.

    Let’s look at the term Twitter a bit more closely. It is a noun that means “tremulous excitement.” But its real origins are related to gossiping. And gossiping is very much about voyeurism. There is also a pejorative sense to twitter: chattering on and on about the same thing. So, we are atwitter with excitement about social media because they seem to extend our capacity to gossip about nearly everything, which may explain why Justin Bieber has been at the top of discussions within the twitterverse. I am Canadian and so is he. Enough said.

    Back to follow for a moment. To follow also means to pursue. I will, for example, twitter about this blog entry in an effort to increase the readership for this article. In a sense, I want you, the reader, to pursue your interest in social media with enough energy to actually read this piece! To follow also means to align oneself, to be a follower. You may, as a result, wish to pursue me @ronburnett.

    But the real intent of the word follow is to create a following. And the real intent of talking about hits is to increase the number of followers. All in all, this is about convincing people that you have something important and valuable to say, which means that social media are also about advertising and marketing. This explains why businesses are justifiably interested in using social media and why governments are entering the blogosphere and the twitterverse in such great numbers.

    Here is the irony. After a while, the sheer quantity of tweets means that the circle of glances has to narrow. Trends become more important than the actual content. Quantity rules, just as it does for Google, where the greater the number of hits, the more likely you will have a site that advertisers want to use. Remember, advertisers assume that a glance will have the impact they need to make you notice that their products exist. It is worth noting that glance also derives from a word meaning to slip.

    As the circle of glances narrows, the interactions take on a fairly predictable tone, with content that is, for the most part, newsy and narcissistic. I am not trying to be negative here. Twitter me and find out.

    Part Two

    Learning in a Participatory Culture: A Conversation About New Media and Education

    by Henry Jenkins, Professor at USC.

    An important and timely discussion that explores the growing interdependence of learners with digital media and the need to examine how these media are working, what their influence is and how to teach in this new environment.

    Jenkins interviews Pilar Lacasa, a Spanish researcher. His first question is: "Children and young people like to spend their free time in front of the screen. Could you give us some good reasons that could persuade educators to introduce new media and screens in schools?" Read more…

    The Literate Future

     

    At the conclusion of a short piece on text, literacy and the Internet, Nicholas Carr suggests the following about the digital age: "Writing will survive, but it will survive in a debased form. It will lose its richness. We will no longer read and write words. We will merely process them, the way our computers do."

    I want to take issue with this pessimistic prediction. At every stage of technological change since the invention of the printing press, similar claims have been made. Most often, these claims originate with those people more likely than others to be both literate and dependent on traditional forms of explanation and exposition. The appearance of the telephone in the 1870's led to predictions of the death of conversation. The growth in the distribution of books and magazines in the 19th century led to predictions that writing, both as process and as creative activity, would be debased. More recently, the growth of digital tools and their pervasive use led to predictions that creative practices like painting would disappear. (The reverse is true. There has been a renaissance of interest in painting in most art schools and a significant rise in attendance at museums showing both contemporary works and paintings from different historical periods.) The invention of the cinema in the 1890's led both politicians and critics to suggest that the theatre was dead.

    In most cases, the advent of new technologies disrupts old ways of doing things. Equally, the disruption builds on the historical advantages conferred upon the medium through its use and modes of distribution. Text is everywhere in the digital age, and while it may be true that attention spans have decreased (although research in this area is very weak), that says nothing about how people use language to communicate whether in written or verbal form.

    The example that is most often cited as evidence that there has been a decline in literacy is text messaging. What a red herring! Text messaging is simply the transposition of the oral into text form. It is a version of speech, not of writing. It indicates neither a loss of ability nor an increase in literacy. Rather, and more importantly, text messaging is another and quite creative use of new technologies to increase the range and often the depth of communications among people.

    The beauty of language is its flexibility and adaptability. The various modes of conversation to which we have become accustomed over centuries have a textured and rich quality that depends on our desire to communicate. That desire crosses nearly every cultural and political boundary on this shrinking earth. Rather than worry about whether text messaging will undermine literacy, we need to examine how to use all of the new modalities of communications now available to us to enhance the relationships we have with each other. That is the real challenge: the quality of exchange, what we say and why, and how all of that translates into modes of expression that can be understood and analyzed.

    Up In The Air with Avatar

    "Being in the air is the last refuge for those that wish to be alone." Jason Reitman) There are profound connections between Avatar and Up in the Air. Both movies come at a time that can best be described as dystopic. From Afghanistan, Iraq and other countries mired in war to the deepest and most serious recession since the 1930's, to the ongoing crisis of climate change, the first decade of the 21st Century has been characterized by waves of loss, violence and instability.

    What then allows any individual to compose their identity and to maintain their sense of self as the air around the planet gets thinner and thinner? How does the imagination work within a dystopia?

    Up in the Air explores the tropes of loneliness and travel -- the in-between of airports and hotels, those places that are not places but nevertheless retain many of the trappings of home without the same responsibilities and challenges. There are consequences to being on the road 300 days of the year, and among them is the construction of an artificial universe to live in, like the metal tubes we describe as airplanes. Another consequence is that frequent travelers have to build imaginary lives that are fundamentally disconnected from intimacy and genuine conversation.

    Ironically, Avatar imagines a world that is, for a time, dragged into the dystopia of 21st-century life and where, at the end of the day, a new vision is constructed. Avatar's use of 3D will be the subject of another article soon, but suffice it to say that the worlds James Cameron constructs through motion capture and animation are among the most beautiful that the cinema has ever seen.

    Hidden behind both films is a plaintive plea for love and genuine relationships. Avatar explores this through tales of transmigrating spirits and animistic notions that transform animals and nature itself into a vast Gaia-like system of communications and interaction. The Na'vi are a synthesis of Cameron's rather superficial understanding of Aboriginal peoples, although their language is a fascinating blend created by Paul Frommer from the University of Southern California.

    The flesh of the avatars in the film is not virtual, but as the main character, Jake Sully, discovers, the Na'vi are the true inheritors of the planet they live on, an exotic version of early Earth called Pandora. In Greek mythology, Pandora (whose name means "all-gifted") was the first woman. The Pandora myth asks why there is evil in the world, which is a central theme of Avatar.

    Up in the Air asks the same question, but from the perspective of a rapacious corporation that sends its employees out to fire people for other companies or, as the main character Ryan Bingham says, to save weak managers from the tasks for which they were hired. The film also asks why there is evil in the world and suggests that any escape, even the one that sees you flying all year, doesn't lead to salvation.

    Both films explore the loss of meaning, morality and principles in worlds both real and unreal. Avatar provides the simplest solution: migrate from a human body and spirit to a Na'vi one to discover not only who you are but how to live in the world. Up in the Air suggests that love will solve the dystopic, only to discover that casual relationships never lead to truth and friendship.

    These are 21st-century morality tales. Avatar is a semi-religious film of conversion, not so much to truth as to the true God, who is now a mother. Up in the Air teaches Ryan that life is never complete when it is entirely an imaginary construction.

    It is, however, the reanimation of the human body in Avatar that is the most interesting reflection of the challenges of overcoming the impact of this first decade of the 21st century. Jake Sully is able to transcend his wheelchair and become another being, now connected to a tribe. He is able to return to a period of life when innocence and naivete enable and empower — when the wonders of living can be experienced without the mediations of history and loss. This, of course, is also the promise of 3D technology: to reanimate images such that they reach into the spectator's body, so we can share those moments as if we have transcended the limitations of our corporeal selves.

    James Cameron's digital utopia, full of exotic colours, people, plants and animals, suggests that escape is possible in much the same way as Ryan Bingham imagines a world without the constraints that are its very essence. 3D technology promises to allow us to transcend our conventional notions of space and time, but it cannot bring the earth back to its pristine form nor reverse engineer evolution or history. At the same time, Avatar represents a shift in the way in which images are created, in the ways in which we watch them and also in the potential to think differently about our imaginations and about our future. (Imagine a 3D film about the destruction of the Amazon!)

     


     

    Huffington on New(s) Journalism

    A superb piece by Arianna Huffington on journalism on the Web, with many references to the rather superficial claims of traditional newspapers that their content is being stolen through sites that aggregate the news. The paradox is that aggregation is exactly what newspapers and journalists have always been practicing, out of necessity. No one, and certainly no organization, can be everywhere at once. Associated Press is an aggregator, and radio journalists have always borrowed from their cousins in other media. Information in the 21st century is not information as it was in the 20th century. Multiple sources may not be great journalism, may not even be accurate journalism, but inevitably, through the cloud and through aggregation, truth and insight become integral to the process. Traditional news sources want to charge for their content. They have to survive. But the very foundations for how to make money on the Web have not been built. New models will appear over time, and during this interim period the model developed by Huffington, aggregating revenue through targeted advertising, will have to suffice. Read her post.

    Can Images Think?

    It is perfectly legitimate to ask the following question: How can an image think?

    And the answer, which should come as no surprise to the reader, is that images cannot think.

    However, the power of images is such that we need to think very carefully about the many different ways in which we relate to them. For example, when we say, “that is not a picture of me,” are we claiming that the picture is not a likeness or that the image cannot contain or express the subjective sense that we have of ourselves? Do we expect the image to contain, hold or embrace who we are?


    The most famous portrait of Winston Churchill.

    Let's explore the following example. A photographer snaps an image of Jane and when Jane sees it, the photographer says, “I took that photo of you!” It appears as if the image can not only stand in for Jane but will also be used by the photographer to illustrate Jane’s appearance to a variety of different spectators, including her family.


    This is an image found on the Internet. What does it mean to say that?

    In a sense, the image separates itself from Jane and becomes an autonomous expression, a container with a label and a particular purpose. For better or for worse, the photo speaks of Jane and often for her.

    The photograph of Jane is scanned into a computer and then placed onto a web site. It is also e-mailed to friends and family. Some of Jane’s relatives print off the image and others place it in a folder of similar photos, a virtual photographic album.

    In all of these instances, Jane travels from one location to another and is viewed and reviewed in a number of different contexts. At no point does anyone say, “this is not a picture of Jane.” So, one can assume that a variety of viewers are accepting the likeness and find that the photo reinforces their subjective experience of Jane as a person, friend and relative.

    The photograph of Jane becomes part of the memory that people have of her and when they look at the photo a variety of feelings are stirred up that have more to do with the viewer than Jane. Nevertheless, Jane appears to be present through the photo and for those who live far away from her, the photograph soon becomes the only way that she can be seen and remembered.

    Picture this scene. The photograph is on a mantel and when Jane’s mother walks by, she stares at it and kisses it. Often, when Jane’s mother is lonely, she speaks to the image and in a variety of ways thinks that the image speaks back to her. Jane’s mother knows that the photograph cannot speak and yet, there is something about Jane’s expression that encourages the mother to transform the image from a static representation to something far more complex.

    It is as if the language of description that usually accompanies a photograph cannot fully account for its mystery. It is as if the photograph exceeds the boundaries of its frame and brings forth a dialogue that encourages a break in the silence that usually surrounds it.

    Where does this power come from? It cannot simply be a product of our investment in the image. To draw that conclusion would be to somehow mute the very personal manner in which the image is internalized and the many ways in which we make it relevant to ourselves.

    Could it be that we see from the position of the image? Do we not have to place ourselves inside the photograph in order to transform it into something that we can believe in? Aren’t we simultaneously witnesses and participants? Don’t we gain pleasure from knowing that Jane is absent and yet so powerfully present? Isn’t this the root of a deeply nostalgic feeling that overwhelms the image and brings forth a set of emotions that cannot be located simply in memory?

    What would happen if I or someone else were to tear up the photograph? The thought is a difficult one. It somehow violates a sacred trust. It also violates Jane. Yet, if the photo were simply a piece of paper with some chemicals fixed upon its surface, the violence would appear to be nothing. How does the image exceed its material base?

    This question cannot be answered without reflecting upon the history of images and the growth and use of images in every facet of human life. Long before we understood why, images formed the basis upon which human beings defined their relationship to experience and to space and time. Long before there was any effort to translate information into written language, humans used images to communicate with each other and with a variety of imaginary creatures, worlds and gods. The need to externalize an internal world, to project the self and one’s thoughts into images was and is as fundamental as the act of breathing. Life would not and could not have continued without some way of creating images to bear witness to the complexities of the human experience. This wondrous ability, the magic of which surrounds us from the moment that we are born, is a universal characteristic of every culture and every social and economic formation. We know that this is the case with language. We need to fully understand and accept the degree to which it is the same with images.

    Images are one of the crucial ways in which the world becomes real and it should come as no surprise to discover that words on a page are also images, although of a sort that is different from photos.

    It is therefore the case that images are one of the most fundamental grounds upon which we build our notions of [embodiment](http://www.thegreenfuse.org/embodiment/). It is for that reason that images are never simply enframed by their content. The excess is a direct result of what we do with images as we incorporate them into our identities and our emotions. Images speak to us because to see is at one and the same time to be within and outside of the body. We use images as a prop to construct and maintain the legitimacy of sight. It is as if sight could not exist without the images that we surround ourselves with, and as if the activities of seeing are co-dependent with the translations and representations that we produce of the world around us.

    We need perhaps to consider changing the ways in which we relate to objects in general. Bruno Latour, the great French writer, has commented on this issue at length and will be the subject of my next blog entry.

    Video Presentation: Is New Media New?

    Is New Media New? The disciplines that constitute Art and Design have developed into rapidly evolving research domains that include sound, image, video, digital media, mixed media (including print media), new forms of visual expression, interactive games, multimedia art, multimodal environments and many other areas. In addition, the more traditional disciplines have been profoundly affected by these new technologies to the extent that areas such as painting, film, photography, drawing, sculpture and printmaking now all intersect with and often depend on digital tools to create works of art. Yet, is all of this really new?

    This video was recorded at REFRESH! THE FIRST INTERNATIONAL CONFERENCE ON THE HISTORIES OF ART, SCIENCE AND TECHNOLOGY, September 28 - October 1, 2005, and was placed on the Web in 2008 as a peer-reviewed scholarly work chosen for inclusion.

    The Practice of Interdisciplinarity in Design and New Media

    Keywords: Inclusive Design, New Media

    This essay examines the history of a multi-disciplinary Centre for Design and New Media developed over a period of three years in Vancouver, Canada. I explore the challenges of developing research models that make it possible for a variety of investigators and practitioners in the areas of Design and New Media to link their work to that of engineers and computer scientists.

    In 2000, the New Media Innovation Centre (NewMic) was started in Vancouver, Canada under the aegis and with the support of five post-secondary academic institutions, industry and the federal and provincial governments. Approximately nineteen million dollars was invested at the outset, mostly from industry and government. I was one of the leaders in the planning and development of NewMic, in large measure because I have a long history of involvement in teaching and researching, as well as producing, new media. (The industry members included Electronic Arts, IBM, Nortel Networks, Sierra Wireless, Telus and Xerox PARC.)

    One of the foundational goals of NewMic was to bring engineers, computer scientists, social scientists, artists, designers and industry together, in order to create an interdisciplinary mix of expertise from a variety of areas. The premise was that this group would engage in innovative research to produce inclusive and new media designs of a variety of products, network tools and multimedia applications. The second premise was that the research would produce outcomes that could be implemented and commercialized in order to produce added value for all of the partners.

    I spent a year at NewMic as a designer/artist in residence in 2002 and was also on its Board of Governors from 2000 until it was closed down late in 2003. There are a number of features of the history of this short-lived institution that are important markers of the challenges and obstacles facing any interdisciplinary dialogue that includes artists and designers working with engineers and computer scientists. Among the challenges are:


    • The tendency among engineers, designers and computer scientists to have an unproblematic relationship to knowledge and knowledge production;
    • Lack of clarity as to the meaning, impact and social role of inclusive and new media design products;  
    • Profound misunderstanding of the relationship between inclusivity, user needs and technological innovation; 
    • Conflicting cultures and discourses;
    • An uninformed and generally superficial understanding of the differences between the cognitive sciences and ethnographic explorations of human-computer interaction; 
    • Focus on a false distinction between pure and applied research.

    Underlying some of these challenges was an apprehension that, without interdisciplinarity, it would be impossible to be innovative. The artists and designers from Emily Carr Institute who participated in NewMic, and whose concerns were centred on community, creativity, outreach, inclusivity and the ethical implications and effects of new technologies, found themselves in a difficult and demanding position.

    The Culture of Collaboration, Design and Interdisciplinarity

    Diana Forsythe, in a superb book entitled Studying Those Who Study Us: An Anthropologist in the World of Artificial Intelligence, says the following:

    1. To knowledge engineers, knowledge is an either/or proposition: it seems either present or absent, right or wrong. Knowledge thus seems to be conceived of as an absolute. If you have it, you’re an expert; if you lack it, you’re a novice.
    2. Knowledge engineers seem to conceive of reasoning as a matter of following formal rules. In contrast, social scientists—especially anthropologists—tend to think of it in terms of meaning and to note that the logic by which people reason may differ according to social and cultural context.
    3. Knowledge engineers tend to assume that knowledge is conscious, that is, that experts can tell you what they know if only they will. They do not have systematic procedures for exploring tacit knowledge, nor do they seem aware of the inevitably partial nature of the retrospective reporting conventionally used for knowledge elicitation. (Forsythe, 52)

    These three points are central to understanding the culture of collaboration that needs to be built when researchers from diverse disciplines in the arts and engineering and computer sciences decide to work cooperatively. One of the challenges in any collaboration is developing a model of how different cultures and discourses can develop a best practices approach to understanding each other. It is not just an issue of people speaking and thinking differently, or having different research paradigms (although those two issues must be dealt with if any collaboration in this area is to be successful), it is also crucial to explore expectations, needs and what each discipline means by outcomes.

    For example, the area of Inclusive Design is about ensuring that environments, products, services and interfaces work for people of all ages and abilities. The differences and similarities between applied and pure research need to be kept in mind on an almost continual basis. (Pure research is long-term and more oriented to speculative thinking as an end in itself.) In some instances, an applied approach may not capture all the nuances of a product’s potential design and use. An applied strategy may not delve deeply enough into the subtle relationship that people have with the environments they inhabit and the objects they utilize.

    The supposed disparity between pure and applied research strategies was one of the areas of greatest conflict at NewMic. Industry members in particular wanted to move from research to end product as quickly as possible. And while this may be a necessity in the private sector, it takes more time for researchers from post-secondary institutions and independent labs both to understand the direction they want to pursue and to produce results. This may well be a weakness of the latter group, and it is the case that a good deal of the research done by universities produces no measurable outcomes, but this does not change the fact that some of the most important research in the 20th century has come from the post-secondary sector.

    The distinctions between applied and pure research are, in general, false, since there are many examples of pure research resulting in practical outcomes and applications. One of the best examples was the discovery in 1946 that "certain nuclei act as tiny magnets. Scientists then could scarcely have imagined the practical applications which would lead to today's multi-billion dollar industry in magnetic resonance medical imaging (MRI), which doctors use to scan the tissues and bones of patients in diagnosing cancerous tumours or hair-line fractures. But the original discovery only provided the opportunity for the applications. To realize these required a great deal of additional sophisticated engineering, applied science and commercial development." (Harvey Brooks, Harvard University, 2004)

    An added complication was NewMic's inclusion of researchers and practitioners with backgrounds in art and design. Artistic research is very much defined by doing, but it is also circumscribed by the process of play and by the creative ability to capture and realize the importance of chance and serendipity. The outcome of research in the arts is often the work of art itself. Design, on the other hand, defines itself through its close relationship with clients and looks to materiality (even in a digital world) for confirmation and validation.

    There are, of course, many examples of successful collaborations, some of which have produced spectacular pay-offs, such as the inclusion of artists-in-residence at Xerox's Palo Alto Research Center. (Harris, 1999) In the Palo Alto case, the synergies between artists, designers and engineers produced some wonderful results, and many other centres have tried to duplicate that experience. In the private sector, the design company IDEO is an excellent example of how to build a culture of connection and interaction between different disciplines. (Kelley, 2001)

    The NewMic collaboration began with two major reference points: Palo Alto and MIT's Media Lab. Again, this was not unusual. Other projects in Montreal, Melbourne, Dublin and Germany referred to and attempted to reflect the successes of MIT and Xerox. In the beginning, the mandate of NewMic was described as follows:

    To accomplish its mission, NewMIC was focused on the following objectives:


    • Attracting and retaining outstanding faculty and graduate and undergraduate students in new media research and in art and design areas.

    • Building excellence in new media innovation.

    • Developing better industry-university-institute collaboration for the purposes of technology transfer.

    • Encouraging the transfer and commercialization of technology through incubation support.

    • Attracting more venture capital to the new media industry. (March 2001)

     

    The design component was incorporated into the vision by default, under the rubric of New Media. This proved to be an error because so much of New Media is driven by interface design, product design and inclusive design, as well as by 'old media' goals. Ultimately, the goal was to frame the experience of users of New Media within a product-oriented set of research pursuits. Ironically, many of the lessons that designers have learned over the last two decades were not directly applied to the research in New Media at NewMic: the importance of detailed ethnographic inquiry, the need to think about the relationship between product and user, the flexibility necessary to make interfaces work for many diverse constituents, and the recognition that design is really about people and that inclusivity cannot be attained without understanding how people live.

    The emphasis on innovation, technology transfer and commercialization, although necessary, cannot be accomplished in a context that is entirely oriented towards applied research with short timelines. This is a conundrum: it is completely understandable that industry would want to see results from its investment, but the essence of collaboration is that it takes time. In fact, one of the crucial lessons of the NewMic experience is that developing designs that are environmentally sensitive and inclusive requires not only that people from different disciplines participate, but that time be given over to the development of shared communities of interest. Interdisciplinarity is as much about coming together as it is about recognizing differences.

    Diana Forsythe in her own words:

    "Anthropologists have been using ethnographic methods since the 1970s to support the design and evaluation of software. While early use of such skills in the design world was viewed as experimental, at least by computer scientists and engineers, ethnography has now become established as a useful skill in technology design. Not only are corporations and research laboratories employing anthropologists to take part in the development process, but growing numbers of non-anthropologists are attempting to borrow ethnographic techniques. The results of this appropriation have brought out into the open a kind of paradox: while ethnography looks and sounds straightforward, this is not really the case. The work of untrained ethnographers tends to overlook things that anthropologists see as important parts of the research process. The consistency of this pattern suggests that some aspects of ethnographic fieldwork are invisible to the untrained eye. In short, ethnography would appear to constitute an example of invisible work."

     

    Second Life (2)

    I posted an earlier piece on Second Life and talked about cyberspace and the metaphoric power of alternate "realities" within the context of communications networks. Here is what Henry Jenkins, Professor of Comparative Media Studies at MIT, said on his blog: "Some have dismissed SL as a costume party -- I see it more as carnival in the medieval sense of the term -- as a time and place within which normal rules of interactions are suspended, roles can be swapped or transformed, hierarchies can be reordered, and we can step out of normal reality into a "magic circle" or "green world" which can be highly generative for the imagination. The difference is that in the old days, carnival was something that existed for a very short period of time and people planned for it all year. Now, in the era of SL, carnival exists all the day and people have to decide how much time they want to spend there."

    An example: I was 'skating' in SL when another skater approached me and asked why I was skating in 'flippers.' I responded, somewhat incredulously, that it didn't matter to me, and he or she replied that I was breaking the rules of SL.

    I would argue that the carnivalesque quality of SL is still surrounded by 'acceptable' notions and norms of reality, or first life. In fact, the sense one gets from SL is often rather banal, as the physics of place, architecture and design are all set up to reflect conventional expectations of what should or must happen when people walk, talk or simply look at objects in the multiverse. A true carnival would actually push the boundaries of acceptable behaviour on a continual basis. From time to time that happens in SL (flying penises, for example), but for the most part the challenge seems to be to have a reasonable experience that fits into preexisting conceptions of reality, its limitations as well as its potential.

    There are clouds that you can enter, your avatar can fly, and there are designs that defy convention, but for the most part SL tries to imitate the 3D world rather than reinterpret its premises. This may be too much to ask. Clay Shirky has written an excellent critique that focuses on many of the grand assumptions about role and use in SL.

    In contrast, Beth Coleman talks about the potential of SL and makes some important points about user-generated content. Yes, it is true that the content of SL has been created by users, but the constraints on choice, style and design are considerable. Perhaps there is a middle ground here between SL's aesthetic and orientation and the overall potential of new environments created by interested people and communities.

    My sense is that more is happening in the Machinima world, where game engines are being used to create some very interesting films. Check out this one. The difference between Machinima and SL is that the former requires some real development of storylines and technology use. Notwithstanding all of this, I am still interested in exploring more of SL. After all, we are in the early phases of multiverse creation.

    There is an interview with the chairman of Linden Labs, owner of Second Life, at the Reuters Second Life Newsroom.

    Jaron Lanier and The Hazards of Online Collectivism

    Jaron Lanier, who is famous for having coined the term virtual reality and the concepts that go with it, wrote an essay in late May that has provoked discussion all over the Internet. The essay, entitled "Digital Maoism: The Hazards of the New Online Collectivism," can be found in full at the EDGE website. Here is a quote from the piece:

    The problem I am concerned with here is not the Wikipedia in itself. It's been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. At the very least it's a success at revealing what the online people with the most determination and time on their hands are thinking, and that's actually interesting information.

    No, the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.

    The EDGE also has 28 pages of responses to what Lanier says.

    The essence of his argument is that collaborative work on the net has become increasingly hive-like. This leads to a "group mentality" approach to ideas and the notion that the "collective is all-wise." The result is a tyranny of the majority, with a simultaneous loss of value both to intellectual depth and to the way democracies operate. He is particularly critical of Wikipedia, the online encyclopedia being built by individuals from all over the world in much the same manner as open source software. I have commented on Wikipedia before. Some of Lanier's fears are well-founded, but for the most part his comments don't explain or clarify why networked forms of knowledge construction are any more hive-based than most intellectual projects. Generally, irrespective of the type of knowledge or information produced, there are communities of interest that define and reinforce the concepts, categories and arguments that they support. This has been discussed in great depth by people like Bruno Latour, and Elias Canetti's important 1962 book, Crowds and Power, examined the phenomenon of mass hysteria and the tendency towards a kind of viral effect when large groups of people operate in tandem.

    Lanier's points need discussion, not least because networked forms of interaction on the scale that we are seeing at the moment are still very new. That said, there is not much to his analysis of conventional media. He is too skeptical of popular culture and gives too much weight to the role of sites like Wikipedia. His concern about the aggregative role played by the many sites that are about other sites is overstated. He worries that these meta-sites will play an overly powerful role as arbiters of taste and choice; I think that in this he underestimates the intelligence of Internet users. Nonetheless, it is an important article to read.

    Geographies of Dissent (2)

    There is another term that I would like to introduce into this discussion: counter-publics. Daniel Brouwer, in a recent issue of Critical Studies in Media Communications, uses the term to describe the impact of two "zines" on public discussion of HIV/AIDS. The term resonates for me because it has the potential to bring micro and macro into a relationship best defined as a continuum, and it suggests that one needs to identify how various publics can contain within themselves a continuing, often conflicted and sometimes very varied set of analyses and discourses about central issues of concern to everyone. It was the availability of copy machines beginning in 1974 that really made 'zines' possible. There had been earlier versions, most of which were copied by hand or by using typewriters, but copy machines made it easy to produce 200 or 300 copies of a zine at very low cost. In the process, a micro-community of readers was established for an infinite number of zines. In fact, the first zine convention in Chicago in the 1970s attracted thousands of participants. The zines that Brouwer discusses, which were small to begin with, grew over time to five and ten thousand subscribers. This is viral publishing at its best, but it also suggests something about how various common sets of interests manifest themselves and how communities form in response.

    "One estimate reckons that these 'Xeroxed, hand-written, desktop-published, sometimes printed, and even electronic' documents (as the 1995 zine convention in Hawaii puts it) have produced some 20,000 titles in the past couple of decades. And this 'cottage' industry is thought to be still growing at twenty percent per year. Consequently, as never before, scattered groups of people unknown to one another, rarely living in contiguous areas, and sometimes never seeing another member, have nonetheless been able to form robust social worlds." (John Seely Brown and Paul Duguid, The Social Life of Documents) Clearly, zines represent counter-publics that are political and are inheritors of 19th century forms of poster communication and of the use of public speakers to bring countervailing ideas to large groups. Another way of thinking about this area is to look at the language used by many zines. Generally, their mode of address is direct. The language tends to be both declarative and personal. The result is that the zines feel like they are part of the community they are talking to and become an open 'place' of exchange with unpredictable results. I will return to this part of the discussion in a moment, but it should be obvious that zines were the precursors to blogs.

    As I said, the overall aggregation of various forms of protest, using a variety of different media in a large number of varied contexts, generates outcomes that are not necessarily the product of any centralized planning. This means that it is also difficult to gauge the results. Did the active use of cell phones during the demonstrations in Seattle against the WTO contribute to greater levels of organization and preparedness on the part of the protestors, and therefore to the message they were communicating? Mobile technologies were also used to "broadcast" back to a central source, which then sent out news releases to counter the mainstream media and their depiction of the protests and protestors. This proved to be minimally effective in the broader social sense, but very effective when it came to maintaining and sustaining the communities that had developed in opposition to the WTO and globalization. Inadvertently, the mainstream media allowed the images of protest to appear in any form because they were hungry for information and needed to make sense of what was going on. As with many other protests in public spaces, it is not always possible for the mainstream media to control what they depict. Ultimately, the most important outcome of the demonstrations was symbolic, which in our society added real value to the message of the protestors.

    To be continued...

     

    Some comments on How Images Think

    Professor Pramod Nayar of the Department of English, University of Hyderabad, comments on "How Images Think." What follows is a small selection from a longer review that appeared in the Journal of the American Society for Information Science and Technology.

    How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that image refers to the complex set of interactions that constitute everyday life in image-worlds (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication - computers, for instance - suggests that images are interfaces, structuring interaction, people, and the environment they share.

    New technologies are not simply extensions of human abilities and needs - they literally enlarge cultural and social preconceptions of the relationship between body and mind.

    The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or live and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world - they are the world, and the distinctions between natural and non-natural have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events - for all perceptive, cognitive, and interpretive purposes, the image is the event for us.

    The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization - viewers generate what they see in the images. This involves the historical moment of viewing - such as viewing images of the WTC bombings - and the act of re-imagining. As Burnett puts it, the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image (p. 26).