Brain Imaging/Neurosciences/Cultural Theory

 

The Elekta Company makes a machine called a magnetoencephalograph, or MEG for short, which the company describes as follows: "…is presently regarded as the most efficient method for tracking brain activity in real-time for many reasons. Compared to EEG, MEG has unique sensitivity capabilities."

Real-time brain mapping allows scientists to "watch" the brain in action under controlled conditions. The Allen Institute for Brain Science (named after Paul Allen, one of the founders of Microsoft) has just completed an atlas of the mouse brain. "The goal of our inaugural project, the Allen Brain Atlas, is to create a detailed cellular-resolution, genome-wide map of gene expression in the mouse brain."

So, why is this important?

1. As more knowledge is gained about the human mind through scanning, the role of culture and images changes. Images are no longer just representations or interpreters of human actions. They have become central to every activity that connects humans to each other and to technology — mediators, progenitors, interfaces — as much reference points for information and knowledge, as visualizations of human creativity.

2. My main concern is the role played by images as the output of scanning procedures and the many different ways in which those images are appropriated within our culture to explain the intensity of our attraction to and dependence upon image-worlds as accounts of consciousness.

3. For better or for worse, depending on the perspectives that you hold and the research bias that you have, images are the raw material of scanning technologies like MRIs and MEGs. In other words, the brain is visualized at a topological level, mapped according to various levels of chemical and electrical excitation, and researched and treated through the knowledge that is gained. This is primarily a biological model, and it leaves many questions unanswered about the mind, thought and the relationship between perception and thinking.

4. The use of images entails far more than the transparent relationship of scanning to results would suggest. The biological metaphors at work make it appear as if the interpretation of scanning is similar to looking at a wound or a suture. The effort is to create as much transparency as possible between the scans and their interpretation. But, as with any of the issues that are normally raised about interpretive processes, it is important to ask questions about the use of images for these purposes from a variety of perspectives, including, most importantly, a cultural one.

5. The use of scanning technologies does not happen in a vacuum. Scientists spend a great deal of time cross-referencing their work and checking the interpretations that they make. (Many issues of image quality arise in the scanning process, including contrast, resolution, noise and distortion; any one of these elements can change the relationship between images and diagnosis, as sketched below.) The central question for me is how to transfer the vast knowledge that has been gained from the study of images in a variety of disciplines, from cultural studies to communications, into disciplines like the computer sciences and engineering, which have been central to the invention and use of scanning technologies. In the same vein, how can the insights of the neurosciences be brought to bear in a substantial fashion on the research being pursued by cultural analysts, philosophers and psychologists?
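
By way of illustration, here is a schematic sketch of my own (not a description of any clinical pipeline) showing how two of those qualities, contrast and noise, are commonly quantified once a scan is stored as a simple array of intensities:

```python
import numpy as np

# A schematic sketch of two standard image-quality measures for a scan
# stored as a 2D array of intensities. Illustrative only: real MRI/MEG
# quality control is far more involved than this.

rng = np.random.default_rng(0)
scan = rng.normal(loc=100.0, scale=5.0, size=(64, 64))  # stand-in for a scan

# Michelson contrast: the spread of intensities relative to their sum.
contrast = (scan.max() - scan.min()) / (scan.max() + scan.min())

# Signal-to-noise ratio: mean signal over the standard deviation of a
# region assumed (hypothetically, here) to contain only noise.
noise_patch = scan[:8, :8]
snr = scan.mean() / noise_patch.std()

print(f"contrast = {contrast:.3f}, SNR = {snr:.1f}")
# A noisier or lower-contrast image changes what an interpreter can see,
# which is the sense in which these elements bear on diagnosis.
```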

The digital revolution is altering the fabric of research and practice in the sciences, arts and engineering and challenging many conventional wisdoms about the seemingly transparent relationships among images and meaning, mind and thought, and culture and identity.

A complex cultural and biological topology is being drawn of consciousness in order to illuminate and illustrate mental processes. I labor under no illusions that this topology will solve centuries of debate and discussion about how and why humans think and act in the world. I do, however, make the point that images are a central feature of the many conundrums researchers have encountered in their examination of the mind and the human body. One example of the centrality of images to the debate about human consciousness has been the appearance of increasingly sophisticated imaging and scanning technologies that try to ‘picture’ the brain’s operations. The results of research in this area have been impressive and the impact on the cultural view of the brain has been enormous. In general this research has led to a more profound understanding of the rich complexity of the brain’s operations. Since I am not a specialist in these disciplines, I do not comment in detail on the medical or scientific claims that have been made about the usefulness of the research. My main concern is the role played by images as the output of scanning procedures and the many different ways in which those images are appropriated within our culture to explain the intensity of our attraction to and dependence upon image-worlds.

As I said above, images are the raw material of scanning technologies like MRIs, and the resulting model is primarily a biological one, leaving many questions unanswered about the mind, thought and the relationship between perception and thinking. In particular, the issues of how images are used to explain biological processes should not be marginalized.

 

Johan van der Keuken (2)

In a series of writings published on the occasion of the 42nd San Francisco International Film Festival, Van der Keuken said the following:

"The idea of the truth 24 times a second is erroneous. The acceleration that takes place in the mechanical process creates a gap between the function of mechanical repetition and its form as a continuous flow that is only perceivable in a purely subjective experience of time. "

Van der Keuken was referring to Jean-Luc Godard's statement: "Photography is the truth, and cinema is the truth 24 times a second," which was made in reference to his film, Le Petit Soldat.

[Image: still from Le Petit Soldat]

Van der Keuken goes on to say, "The important thing is not the reproduction of a three-dimensional reality, but by way of the time elements in a film, the creation of an autonomous space."

An autonomous space — this means that the flow of a film creates its own time and space, because the viewing experience is never simply a function of what is shown or seen.

Vision, the cultural approach to seeing and thinking, privileges objects of sight, as if they will provide some clear answers to the dilemmas of viewing and understanding, as if the questions, indeed the possible contradictions, of autonomy need not be addressed.

For example, hallucinations and dreams are sights not under the control of the conscious mind. It is more difficult to trace their origins because they suggest autonomy without specifiable external or experiential causes.

This could be reason for excitement, visible evidence so to speak, of the mind reconstructing and redeveloping conscious and unconscious relations. Instead, autonomy, which I am not suggesting is the only process at work here, is more often than not recontextualized into an objectivist language of description and analysis. In fact, the sense of estrangement attributed to hallucination or dream cannot be divorced from the hesitations we feel in describing the "inner workings" of vision — the often obvious way in which the reflective autonomy of thought challenges preconceptions of order and disorder.

So, van der Keuken is talking about the unique circumstances through which the cinema makes it possible to experience the world, and those experiences are as much a product of the viewer's own consciousness as they are evidence of the world we inhabit.

____________________

There is a superb interview with Bill Joy, originally published in New Scientist, that is worth a read.

The context for learning, education and the arts (4)

(This entry is in five parts: One, Two, Three, Four, Five)

So why explore the intersections of human thought and computer programming? My tentative answer would be that we have not understood the breadth and depth of the relationships that we develop with machines. Human culture is defined by its on-going struggle with tools and implements, continuously finding ways of improving both the functionality of technology and its potential integration into everyday life. Computer programming may well be one of the most sophisticated artificial languages which our culture has ever constructed, but this does not mean that we have lost control of the process.

The problem is that we don’t recognize the symbiosis, the synergistic entanglement of subjectivity and machine, or if we do, it is through the lens of otherness, as if our culture were neither the progenitor nor really in control of its own inventions. These questions have been explored in great detail by Bruno Latour, and I would reference his articles in Common Knowledge as well as his most recent book entitled Aramis, or The Love of Technology. There are further and even more complex entanglements here related to our views of science and invention, creativity and nature. Suffice it to say that there could be no greater simplification than the one which claims that we have become the machine or that machines are extensions of our bodies and our identities. The struggle to understand identity involves all aspects of experience, and it is precisely the complexity of that struggle, its very unpredictability, which keeps our culture producing ever more complex technologies and which keeps questions about technology so much in the forefront of everyday life.

It is useful to know that within the field of artificial intelligence (AI) there are divisions between researchers who are trying to build large databases of "common sense" in an effort to create programming that will anticipate human action, behaviour and responses to a variety of complex situations, and researchers who are known as computational phenomenologists. "Pivotal to the computational phenomenologists' position has been their understanding of common sense as a negotiated process as opposed to a huge database of facts, rules or schemata." (Warren Sack)

So even within the field of AI itself there is little agreement as to how the mind works, or how body and mind are parts of a more complex, holistic process which may not have a finite systemic character. The desire however to create the technology for artificial intelligence is rooted in generalized views of human intelligence, generalizations which don’t pivot on culturally specific questions of ethnicity, class or gender. The assumption that the creation of technology is not constrained by the boundaries of cultural difference is a major problem since it proposes a neutral register for the user as well. I must stress that these problems are endemic to discussions of the history of technology. Part of the reason for this is that machines are viewed not so much as mediators, but as tools — not as integral parts of human experience, but as artifacts whose status as objects enframes their potential use.

Computers, though, play a role in their use. They are not simply instruments because so much has in fact been done to them in order to provide them with the power to act their role. What we more likely have here are hybrids, a term coined by Bruno Latour to describe the complexity of interaction and use that is generated by machine-human relationships.

Another way of understanding this debate is to dig even more deeply into our assumptions about computer programming. I will briefly deal with this area before moving on to an explanation of why these arguments are crucial for educators as well as artists and for the creators and users of technology.

Generally, we think of computer programs as codes with rules that produce certain results and practices. Thus, the word processing program I am presently using has been built to ensure that I can use it to create sentences and paragraphs, in other words to write. The program has a wide array of functions that can recognize errors of spelling and grammar, create lists and draw objects. But we do have to ask ourselves whether the program was designed to have an impact on my writing style. Programmers would claim that they have simply coded in as many of the characteristics of grammar as they could without overwhelming the functioning of the program itself. They would also claim that the program does not set limits on the infinite number of sentences that can be created by writers.

However, the situation is more complex than this and is also subject to many more constraints than initially seems to be the case. For example, we have to draw distinctions between programs and what Brian Cantwell Smith describes as the "process or computation to which that program gives rise upon being executed and [the] often external domain or subject matter that the computation is about." (Smith, On the Origin of Objects, Cambridge: MIT Press, 1998: 33) The key point here is that program and process are not static but dynamic, if not contingent. Thus we can describe the word processor as part of a continuum leading from computation to language to expression to communication to interpretation. Even this does not address the complexity of relations among all of these processes and the various levels of meaning within each.
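
To make Smith's three-way distinction concrete, here is a minimal sketch of my own (the example is hypothetical, not Smith's): the program is inert text, the process is the computation that text gives rise to when executed, and the subject matter is the external domain the computation is about.

```python
# A minimal illustration of Brian Cantwell Smith's distinction among
# program, process, and subject matter. (My sketch, not Smith's code.)

# 1. The PROGRAM: a static piece of text, inert until executed.
program_text = """
def word_count(document):
    # The computation is 'about' documents, its subject matter,
    # even though it only ever manipulates strings and integers.
    return len(document.split())
"""

# 2. The PROCESS: executing the text gives rise to a live computation.
namespace = {}
exec(program_text, namespace)  # the program becomes a running process

# 3. The SUBJECT MATTER: the external domain the computation is about.
document = "Images are the raw material of scanning technologies."
print(namespace["word_count"](document))  # -> 8
```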

To be continued........

 

The context for learning, education and the arts (3)

This Entry is in Five Parts. (One, Two, Three, Four, Five)

This initial creativity was soon lost in the final version of Understanding Media, published in 1964. In the book, the medium becomes the message through the operations of an instantaneous sensory recognition of meaning. McLuhan explores affect by claiming that cubism, in its elimination of point of view, generated an "instant total awareness [and in so doing] announced that the medium is the message" (Marshall McLuhan, Understanding Media, Cambridge: MIT Press, 1994, p. 13). I am not sure what ‘instant total awareness’ is, but one can surmise that it is somewhere between recognition and self-reflexive thought. In choosing this rather haphazard approach, McLuhan eliminates all of the mediators that make any form of communication work.

Take the World Wide Web as an example. Few users of the web are aware of the various hubs and routers that move data around at high speed, let alone of the complexity of the servers that route that data into their home or business computers. They become aware of the mediators when there is a breakdown, or when the system gums up. The notion that we receive information instantly is tied up with the elimination of mediation. So, the arrival in my home of a television image from another part of the world seems instant, but is largely the result of a process in which radically different versions of time and space have played significant roles (the motion and position of the satellite, transmitting stations, microwave towers and so on). I won’t belabour this point other than to point out that the notion of instant recognition has played a significant role in the ways in which our culture has understood digital communications. This has tended to reduce if not eliminate the many different facets of the creative and technological process.

But let’s return to the more interesting and potentially creative idea that the subject is the message (mentioned in an earlier post). As the sense-ratios alter, the sum-total of effects engenders a subject surrounded by and encapsulated within an electronic world, a subject who effectively becomes that world (and here the resonance with Jean Baudrillard is clear). This is not simply the movement from machine to human; it is the integration of machine and human, where neither becomes the victim of the other. As mediums we move meanings and messages around in a variety of creative ways (hence the link to speech), and as humans interacting with machines we are the medium within which this process and processing circulates. I repeat, this does not mean that we have become the machine, a concept that has inspired a great deal of criticism of technology in general; rather, we end up sharing a common ground with our own creations, a mediated environment that we explore every day and try to make sense of.

Interestingly, Derrick De Kerckhove, the Director of the McLuhan Centre at the University of Toronto, who has been described as the successor to McLuhan, wrote a book entitled The Skin of Culture: Investigating the New Electronic Reality (Kogan Page, London: 1998). He said:

“With television and computers we have moved information processing from within our brains to screens in front of, rather than behind, our eyes. Video technologies relate not only to our brain, but to our whole nervous system and our senses, creating conditions for a new psychology." (De Kerckhove: 5)

To De Kerckhove, human beings have become messages (and this is different from being mediums), with our brains emulating the processing logic and structural constraints of computers. Here we do become the machine. We no longer signify as an act of will. Agency is merely a function of messaging systems. Agency no longer recognizes its role as a medium, and as a result we seek and are gratified by the instantaneous, the immediate, the unmediated. Now, the ramifications of this approach are broad and need extensive thought and clarification.

The important point here is that De Kerckhove has molded the human body into an extension of the computer, because we are already, to some degree, machines. Our nervous systems, which scientists barely understand, and our senses, which for neuroscientists remain one of the wonders of nature, are suddenly characterized through the metaphors of screens, vision, technology and a new psychology. The inevitable result is a set of mechanical metaphors that make it seem as if science, computer science and biotechnology will eventually solve the ambiguous conundrums of perception (e.g., in the virtual world we become what we see), knowledge and learning. To say that we are the machine is a far cry from understanding the hybrid processes that encourage machine-human interactions. De Kerckhove has transformed the terrain here much as McLuhan did, so that humans lose their autonomy and their ability to act upon the world, although his is a far more sophisticated examination than McLuhan's.

As I said, this is not an article about McLuhan and so I will not explore the report that he wrote any further or the vast literature that has grown up around his thinking. As you can no doubt tell, I am concerned with the rather mechanical view that our culture has of the human mind and am fascinated with the ease with which we have taken on McLuhan’s simplified versions of affect and effect. It is not so much the behavioural bias that concerns me (although it is important to be aware of the influence of behaviourism on the cultural analysis of technology) but the equations that are drawn among experience, images and technology.

These equations often reduce the creative engagement of humans with culture and technology, to the point where culture and technology become one, eliminating the possibility of contestation. In large measure, many of the complaints about digital technologies, the fears of being overwhelmed if not replaced are the result of not recognizing the potential to recreate the products of technological innovation. The best example of this is the way video games have evolved from rudimentary forms of storytelling to complex narratives driven by the increasing ease with which the games are mastered by players. The sophistication of the players has transformed the technology. But none of this would have been possible without the ability of the technology to grow and change in response to the rather unpredictable choices made by humans.

If we turn to the computer for a moment, the notion that it has the power to affect human cognition is rooted in debates and theories developed within the fields of cybernetics and artificial intelligence. The "…popular press began to call computers ‘electronic brains’ and their internal parts and functions were given anthropomorphic names (e.g., computer memory)…" (Warren Sack, "Artificial Intelligence and Aesthetics," p. 3)

The notion that a computer has memory has taken root in such a powerful way that it seems impossible to talk about computers without reference to memory. So, an interesting circle has been formed or it might be a tautology. Computer memory becomes a standard which we use to judge memory in general, hence the fears about Deep Blue somehow replacing the human mind, even though its programming was created by humans! The problem is that there is a long tradition of human creativity in the development of technologies and this history is embedded in every aspect of our daily lives. Deep Blue is just one more extension of the process. The fact that we can use the computer to judge our own memories certainly doesn’t eliminate anything. It merely means that we now have a tool that we can use to examine what we actually mean by memory. In fact, recent neuroscientific research into memory suggests that we have profoundly underestimated our own minds let alone the digital ones that we are creating.

The very idea of a computer program is linked to the power to do (Sack: 5). Again, there are certain debates that cannot be developed here, including the significant one between Daniel Dennett and John Searle, a debate explored by Steven Pinker in his new book, How the Mind Works. Pinker is a supporter of cognitive psychology and also suggests that the brain operates like a computer. His argument is more subtle than that, however, because he is quite worried about creating too great an equivalence between the brain and the mechanics of the computer. I bring this up because it is the cultural attraction of the metaphors which interests me. It is important to understand that computer programs are carefully constructed artificial languages that have great difficulty dealing with the unpredictable, with the tentative, the contingent or the irrational. Computer programs are codified according to a strict set of rules, and I think we can make the argument that common sense is not. I will briefly return to this discussion later on.
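
A small, hypothetical sketch of my own (not Sack's or Pinker's example) makes the contrast visible: a rule-bound program answers exactly what its rules anticipate, while common sense, as a negotiated process, has no such lookup table.

```python
# A hypothetical illustration of how strictly rule-bound programs handle
# the contingent: the rules cover anticipated cases and nothing else.

RULES = {
    "hello": "Hi there!",
    "goodbye": "See you later.",
    "how are you": "I am fine, thank you.",
}

def respond(utterance: str) -> str:
    """Answer only what the rules anticipate; everything else falls through."""
    key = utterance.strip().lower().rstrip("?!.")
    return RULES.get(key, "I do not understand.")  # no negotiation, no context

print(respond("Hello"))           # -> Hi there!
print(respond("How are you?"))    # -> I am fine, thank you.
# Common sense negotiates meaning; the rule table cannot:
print(respond("Rough day, eh?"))  # -> I do not understand.
```

The point is not that richer programs cannot be written, but that whatever they do must still be codified in advance, which is precisely where the unpredictable and the contingent escape them.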

To be continued......

Remix 06: Blending, Bending and Befriending Content

Innovative Content Development in New Media has some of the following characteristics (This is by no means a comprehensive list.):
_______________
Imaginative storytelling (Breaking the rules and building new ones)
_______________
Not derivative (but can be a copy—mush — experimental cinema and music as models)
_______________
Aware of aesthetics, form and feel (Use OF Technology — Not Used by Technology)
_______________
Creating new knowledge and information (Play in every sense of the word.)
_______________
Aware of collage, montage and other techniques of bricolage (Stories can make the impossible real — photo-realism is a dead end)
_______________
Talent (Learning and Education and Research)
_______________
Decentralized modes of information gathering, exchange and distribution (Open Source)
_______________
Interactivity (Video games create the illusion of interactivity — interactive game play should be about a complete transformation of the game by the player — interactivity becomes creativity)
_______________
Bring body movement into the video game storytelling equation (Hands are not enough — Wii)
_______________
Link popular culture, games, books, magazines, fans, television and the web into content development (Specialized studios need cultural analysts and ethnographers as much as they need creators)
_______________
Work with audiences not against them (Fan movements, fansites, fan literature)
_______________
Assume that trends will shift as quickly as they are recognized — old style marketing will not work (Time is compressed but that does not mean that clip stories will last — marketing becomes discovering stories as well as creating them)
_______________
Non-linearity, complexity and chaos are at the center of digital content creation
_______________
Simulations are only as effective as the stories that underlie them — Algorithms are culture
_______________
Telepresence and visualization need haptics and vice versa (Dreams are the Royal Road into Storytelling)
_______________
Narrowcast not broadcast (P2P will become C2C)

Geographies of Dissent (Final)

Another vantage point on this process is to think of various communities, which share common goals becoming nodes on a network that over time ends up creating and often sustaining a super-network of people pursuing political change. Their overall impact remains rather difficult to understand and assess, not because these nodes are in any way ineffective, but because they cannot be evaluated in isolation from each other.

This notion of networks may allow us to think about communities in a different way. It is, as we know, possible at one and the same time for the impulses that guide communities to be progressive and very conservative. There is nothing inherently positive with respect to politics within communities that are based on shared points of view. But if, as is often the case, the process is more important than the content, then this raises other issues. The intersection of connectivity and ideas leads to unpredictable outcomes. Take fan clubs, for example. They generally centre on particular stars, films or television shows. They are a form of popular participation in mainstream media and a way of affecting not so much the content of what is produced (although that is happening more and more; Star Trek has continued as a series on the Net) but the relationship of private and public discourse about media products and their impact. Over time, through accretion and sheer persistence, fan clubs have become very influential. They are nodes on a network that connects through shared interests, one of which is to mold the media into a reflection of their concerns.

More often than not this network of connections is presumed to be of greater importance than the content of what is exchanged. This is classically what Baudrillard meant by the world becoming virtual, and what McLuhan meant when he claimed that the medium was the message. Except that they are both wrong.

The process of exchange, that is, the many different ways in which people on shared networks work and play together, cannot be analyzed from a behavioral perspective. Take Flickr, for example. There is nothing very complicated about this software. It was developed by two Vancouverites and then bought for 30 million dollars by Yahoo. The software is simple: it allows users to annotate photographs that they have posted to the web site. The annotations become an index, and that index is searchable by everyone. The reason Yahoo paid so much is that over 80 million photographs had been uploaded and there were hundreds of communities of interest exchanging images with each other. Most of this is completely decentralized. The web site just hosts the process of community building.
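
The mechanism at the heart of that simplicity is an inverted index: each annotation points back to every photograph that carries it, so the community's tagging becomes everyone's search tool. Here is a minimal sketch of the idea (a toy illustration of mine, not Flickr's actual implementation):

```python
from collections import defaultdict

# A toy inverted index: the core mechanism behind tag-based photo search.
index = defaultdict(set)  # tag -> set of photo ids

def annotate(photo_id: str, tags: list[str]) -> None:
    """A user annotates a photo; each tag becomes a searchable entry."""
    for tag in tags:
        index[tag.lower()].add(photo_id)

def search(tag: str) -> set[str]:
    """Anyone can search the index that the community built."""
    return index.get(tag.lower(), set())

# Decentralized community building: many users, one shared index.
annotate("photo_001", ["Vancouver", "sunset"])
annotate("photo_002", ["sunset", "harbour"])
print(search("sunset"))  # -> {'photo_001', 'photo_002'}
```

The design choice worth noticing is that the index is built entirely by its users; the site merely hosts the structure within which the community's annotations accumulate.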

The same elements attracted the News Corporation to MySpace.com and Rupert Murdoch paid over three hundred million dollars for that site or should I say community. Communities become currencies because there are so few ways to organize and understand all of the diversity that is being created within the context of modern-day networks. This is not because the medium is the message; rather, it is because the media are inherently social — social media. And in being social, they reshape modes of human organization and most importantly, the many different ways in which collectivities can form and reform.

(Please note: The last three entries, Geographies of Dissent were presented in a different format at York University, at a conference of the same name.)

 

Geographies of Dissent (2)

There is another term that I would like to introduce into this discussion and that is counter-publics. Daniel Brouwer, in a recent issue of Critical Studies in Media Communications, uses the term to describe the impact of two "zines" on public discussion of HIV-AIDS. The term resonates for me because it has the potential to bring micro and macro into a relationship that could best be defined as a continuum, and suggests that one needs to identify how various publics can contain within themselves a continuing, often conflicted and sometimes very varied set of analyses and discourses about central issues of concern to everyone. It was the availability of copy machines beginning in 1974 that really made zines possible. There had been earlier versions, most of which were copied by hand or by using typewriters, but copy machines made it easy to produce 200 or 300 copies of a zine at very low cost. In the process, a micro-community of readers was established for an infinite number of zines. In fact, the first zine convention in Chicago in the 1970s attracted thousands of participants. The zines that Brouwer discusses, which were small to begin with, grew over time to five and ten thousand subscribers. This is viral publishing at its best, but it also suggests something about how various common sets of interests manifest themselves and how communities form in response.

"One estimate reckons that these 'Xeroxed, hand-written, desktop-published, sometimes printed, and even electronic' documents (as the 1995 zine convention in Hawaii puts it) have produced some 20,000 titles in the past couple of decades. And this 'cottage' industry is thought to be still growing at twenty percent per year. Consequently, as never before, scattered groups of people unknown to one another, rarely living in contiguous areas, and sometimes never seeing another member, have nonetheless been able to form robust social worlds" (John Seely Brown and Paul Duguid, The Social Life of Documents). Clearly, zines represent counter-publics that are political and are inheritors of 19th century forms of poster communications and the use of public speakers to bring countervailing ideas to large groups. Another way of thinking about this area is to look at the language used by many zines. Generally, their mode of address is direct. The language tends to be both declarative and personal. The result is that the zines feel like they are part of the community they are talking to and become an open ‘place’ of exchange with unpredictable results. I will return to this part of the discussion in a moment, but it should be obvious that zines were the precursors to Blogs.

As I said, the overall aggregation of various forms of protest, using a variety of different media in a large number of varied contexts, generates outcomes that are not necessarily the product of any centralized planning. This means that it is also difficult to gauge the results. Did the active use of cell phones during the demonstrations in Seattle against the WTO contribute to greater levels of organization and preparedness on the part of the protestors and therefore to the message they were communicating? Mobile technologies were also used to "broadcast" back to a central source that then sent out news releases to counter the mainstream media and their depiction of the protests and protestors. This proved to be minimally effective in the broader social sense, but very effective when it came to maintaining and sustaining the communities that had developed in opposition to the WTO and globalization. Inadvertently, the mainstream media allowed the images of protest to appear in any form because they were hungry for information and needed to make sense of what was going on. As with many other protests in public spaces, it is not always possible for the mainstream media to control what they depict. Ultimately, the most important outcome of the demonstrations was symbolic, which in our society added real value to the message of the protestors.

To be continued...

 

Some comments on How Images Think

Professor Pramod Nayar of the Department of English, University of Hyderabad, comments on How Images Think. This is a small selection from a longer review that appeared in the Journal of the American Society for Information Science and Technology.

How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that image refers to the complex set of interactions that constitute everyday life in image-worlds (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication - computers, for instance - suggests that images are interfaces, structuring interaction, people, and the environment they share.

New technologies are not simply extensions of human abilities and needs - they literally enlarge cultural and social preconceptions of the relationship between body and mind.

The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or live, and it radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world - they are the world, and the distinctions between natural and non-natural have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events - for all perceptive, cognitive, and interpretive purposes, the image is the event for us.

The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization - viewers generate what they see in the images. This involves the historical moment of viewing - such as viewing images of the WTC bombings - and the act of re-imagining. As Burnett puts it, the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image (p. 26).

1st Colloquium on the Law of Transhuman Persons in Florida

Moot Court Hearing On The Petition Of A Conscious Computer

Ray Kurzweil runs a terrific web site on artificial intelligence and other matters related to technology and society. He recently provided the transcript of the court hearing on whether a conscious computer should be treated as a person.

This issue has been raging for some time. It reached its apogee with the discussion about whether "Deep Blue," the computer that (who?) beat Garry Kasparov, was actually intelligent. IBM has some wonderful research on this available here.

"We have a petition by BINA48, an intelligent computer, to prevent its owner and creator, Exabit Corporation, from either turning off its power, or if it turns off its power, from reconfiguring it; and BINA48 doesn't want that to happen."

Machines attract and repel us. Although human beings are surrounded by many different machines and rely on them every day, our culture views them with a great deal of skepticism. At the same time, the desire to automate the world we live in and efforts to link humans and machines have always been a part of the arts, sciences and mythology and have been foundational to the cultural and economic development of Western societies. Automation brings with it many attendant dangers, including the assumption, if not the reality, that humans no longer control their own destiny. If the interactions were between nature and humans, then this loss of control would be expected. For example, you might anticipate a tornado or a hurricane, but you cannot control them. The fact is that virtual spaces are cultural and technological and are therefore subject to different rules than nature. They are artificial constructs. It seems clear, however, that the conventional meaning of artificial will not suffice to explain autonomous processes that build microscopic and macroscopic worlds using algorithms that often develop far beyond the original conceptions of their progenitors. We may be in need of a radical revision of what we mean by simulation and artificiality because of the ease with which digital machines build complex non-natural environments. (From "How Images Think")

The Challenge of Change in Creating Learning Communities (3)

The notion of learning communities needs to be deepened through an analysis of institutions and how they function. If we are going to create a new model for learning, then it will have to stand the test of organizational restructuring and disciplinary redefinition. The latter will not be accomplished unless we take a long and hard look at the informal learning that is a part of everyone’s daily existence. The disciplines that have been the bedrock of education must incorporate the lessons of the informal into their purview. For example, the study of language and composition should not take place outside of the experience of popular culture. The study of the sciences cannot be divorced from ethical and philosophical issues.

If we are to take the effort seriously, then the creation of new learning communities will bring with it a transformation of what we mean by disciplines. For better or for worse, the very nature of disciplines, their function and their role within and outside of institutions has changed. The context for this change is not just the individual nature or history of one or other discipline. Rather, the social and cultural conditions for the creation and communication of ideas, artifacts, knowledge and information have been completely altered. From my point of view, this transformation has been extremely positive. It has resulted in the formation of new disciplines and new approaches to comprehending the very complex nature of Western and non-Western societies. We are still a long way from developing a holistic understanding of the implications of this transformation.

It is an irony that one of the most important of the physical sciences relating to the brain, neuroscience, has become a combination of anatomy, physiology, chemistry, biology, pharmacology and genetics with a profound concern for culture, ethics and social context. Genetics itself makes use of many different disciplines to achieve its aims. To survive in the 21st century the neurosciences will have to link all of their parts even further and bring genetics, the environment, and the socio-cultural context together in order to develop more complex models of mind. It may well be the case that no amount of research will produce a grand theory. But, as the great neuroscientist V.S. Ramachandran has suggested, the most puzzling aspect of our existence is that we can ask questions about the physical and psychological nature of the brain and the mind. And we do this as if we can somehow step outside of the parameters of our own physiology and see into consciousness. Whatever the merits of this type of research, it cannot avoid the necessity of integration.

Unfortunately, the same cannot be said for many of the disciplines in the social sciences and humanities. Although there has been an explosion of research and writing in the conjoining areas of Cultural Studies, Communications and Information Technologies, the various specializations that underlie these areas remain limited in their approach to the challenges of interdisciplinarity and learning. The reasons for this are complex. Among the most important is the orientation that some of these disciplines follow, which is to develop their own language and culture of research and practical applications. The difficulty is that, as they grow more specialized, they cease to see or even envisage the potential connections that they have to other areas. They also disconnect themselves from the educational context, which is after all a context of communications and exchange.

Most importantly, the research agendas in all disciplines will have to incorporate new approaches to culture and to the fundamental importance of popular and traditional cultures in creating the terrain for learning at all levels. This will be a huge challenge, but it is the most basic one if we are to create the conditions for learning communities and learning societies.

END....

 

 

The Challenge of Change in Creating Learning Communities (2)

There is a simple definition of learning community that says, "This phrase describes a vision and model where a community's stakeholders come together and share resources."
Another definition is, "A 'learning community' is a deliberate restructuring of the curriculum to build a community of learners among students and faculty. Learning communities generally structure their curriculum so that students are actively engaged in a sustained academic relationship with other students and faculty over a longer period of time than is possible in traditional courses."

[Fanya had a good thought here, that I would like to quote from…]
Not as a 'comment' - just as a thought - learning institutions may be run and funded by the government - but their efficiency and status are a pride to the particular community where they function. It's not only an interaction between the 'school' and the community - but a challenge to that community to provide that institution with whatever it needs to succeed and thus provide the community with a source of pride! This may be harder to examine in the larger frameworks - but you can see it here in the kibbutzim and moshavim - where the institutions are smaller and in many cases self-run, if not self-budgeted.

The above two definitions are very broad, but they do point out the extent to which a ‘model’ of communications also surrounds every discussion of education and learning. And this crucial point links to another important issue, to what degree do the many shifting media and communications environments that now dominate the cultural landscape of most countries in the world affect notions of learning? Even in environments where the global media are weak, such as Nepal, radio is being used to teach and communicate. The same situation exists in much of East Africa. The fact that radio can play such an important role in the education of the community suggests how crucial the linkage is between learning, media and tools of communication. This is an area in desperate need of further research and development.

When one asks how a learning community can be built, there is the potential that the question will not deal with the reality that learning is one of the most unpredictable activities that human beings engage in. This issue exceeds the boundaries and mandate of this article. But anyone who has examined the vast plethora of informal learning contexts that people in communities create for themselves knows that the rules for learning cannot be predefined. This is why high schools remain an oppressive experience for most teenagers. They are at an age when they are actively involved in creating and participating in their communities of interest. High school often becomes an impediment to learning and trivializes the vast amount of education that goes on outside of its walls. This process is so unpredictable and the influences are so broad that the question of how learning takes place cannot be reduced to locality or even community, and especially not to the school itself.

So, we have a paradox here that defies simplification. The desire to create a learning community is very much about the need to create an institutional context for learning. We are talking here, in the most fundamental of ways, about the process of building formal strategies for the learning process. The difficulty is that building an institutional context for learning means redefining what we mean by students and it is not enough to just transform student to learner. It also means redefining what we mean by community since it is likely that any school is really made up of communities of learners. Some of these learners may be connected to each other and many may not be connected. The complexity of social interactions within a school far exceeds the complexity of the classroom, which is itself barely manageable as a learning environment.

To be continued……

 

 

Response to The Challenge of Change in Creating Learning Communities (1)

Jan responds to the previous entry:

I think it is important not to limit the idea of learning community to that of 'a community that cares for the institutions - such as schools - through which people learn,' which seems to be what you are saying in this opening piece (or do I read you incorrectly?). Such a notion limits the idea of learning to what a learning individual does. In my perception that is an unnecessarily reduced meaning of learning.

Learning is what we do that allows us to enhance our constructive interaction with change. That's an abbreviated version of a more detailed and comprehensive definition of learning that I once developed and that I find useful in helping me understand the idea of learning community. Just as individuals do, communities, societies, nations, regions, corporations, etc. interact with change. They produce change and they adapt to change; a complex multifaceted game. Both individual people and smaller and larger social entities become better at that game by experimenting with different kinds of behavior and reflecting on such behaviors. The result settles down in the individual mind of people as much as in the collective mind of those social entities. Indeed, stories and symbolism play crucial roles in shaping the mind of the community, but it's a process more complex than what you find by adding up the learning of all the individuals that are part of the community. A learning community simply learns at a higher level of complex organization than the individuals that are part of it.

One can extrapolate from the above relationship between learning individuals and the learning communities of which they are part (often more than one, e.g. a professional community, a religious community, a community of people who engage in a particular sport, a community on myspace.com, etc.). All these (learning) communities together - and together also with the (learning) individuals that constitute them - are the complex building blocks of yet more complex social entities such as entire (learning) societies.

You say that "the claim that the linkages between learning and community mean fundamental change, ignores the fact that links of this sort have been the defining ideology of most learning environments in the 19th and 20th centuries" and I agree with your observation. Of course, we have always been learning, and so have our communities. The fact that we didn't recognize it is perhaps yet another consequence of the too narrow identification of learning with what happens inside schools.

My response:

I of course agree with you..... the challenge seems to be in the definition of community, the boundaries and borders of practice and learning that grow out of the experiences of working with people (and sometimes working against them!).

People cluster together for a variety of reasons and are motivated to continue if they feel that there will be some value to the experience. The problem is that value tends to be seen through a very narrow lens.

 

 

Paradoxes of New Media (3)

(From Part 2)

There is another important question here. What makes a medium-specific discipline a discipline in any case? Is it the practice of the creators? Is it the fact that a heritage of production and circulation has built up enough to warrant analysis? I think not. Disciplines are produced through negotiation among a variety of players crossing the boundaries of industry, academia and the state. The term New Media has been built upon this detritus and is a convenient way in which to develop a nomenclature that designates, in a part-for-whole kind of way, that an entire field has been created. But what is that field? Is it the sum total of the creative work within its rather fluid boundaries? Is it the sum total of the scholarly work that has been published? Is it the existence of a major journal that both celebrates and promotes not only its own existence but also the discipline itself? These issues of boundary making are generally driven by political as well as cultural considerations. They are often governed by curatorial priorities developed through institutions that have very specific stakes in what they are promoting. None of these activities per se may define or even explain the rise, fall and development of various disciplines. But, as a whole, once in place, disciplines close their doors both as a defensive measure and to preserve the history of the struggle to come into being.

(Part 3 begins)

I am not suggesting by any means that things have not changed. I am not saying that digital media are simply extensions of existing forms of expression. I am saying that the struggle to define the field or discipline of media studies has always been an ongoing characteristic of both artistic and scholarly work in media. The permanence of this quasi existential crisis interests me. For the most part, for example, media studies ran into a wall when cultural studies appeared as an extension of English Departments, and when Communication Studies grew into an important discipline in its own right in the late 1950’s. Why? Suddenly, everyone was studying the media, commenting about popular culture, appropriating (mushing and mixing) intellectual traditions in a variety of different and often anarchic ways. But, somehow, the discipline as such grew into further and further levels of crisis. Which intellectual model works best? Does one use structural or post-structural modes of analysis? How can we factor in the linguistic, semiotic and ethnographic elements, and also bring in the contextual, political components? So, this is where I return to vantage point.

Juxtapose the following: The film, The Polar Express by Robert Zemeckis, which bridges the gap between digital worlds and the human body and tries to humanize an entirely artificial world; The American election of 2004 which relied on the Internet both for information and misinformation; the spectacular growth of web sites, like Friendster.com, which extend the way humans interact, communicate and develop relationships; the growth of Blogs, which have pushed publishing from the corporate world to the individual; the growing importance of search engines and popular discussions of how to engage with a sea of information; and finally, the spectacular growth of games, game consoles and on-line gaming.

Together, these and many other elements constitute image-worlds, which like a sheath cover the planet, allowing and encouraging workers in India to become office employees of large companies in the West and Chinese workers to produce goods and manage inventories on an unimaginable scale. These image-worlds operate at micro and macro levels. They are all encompassing, a bath of sounds and pictures immersing users in the manipulation of information both for exchange and as tools of power.

Picture these image-worlds as millions of intersecting concentric circles built in pyramidal style, shaped into forms that turn metal into messages and machines into devices that operate at the nano-level. Then imagine using a cell phone/PDA to call up some information that locates humans on a particular street as was done during the crisis in Louisiana and you have processes that are difficult to understand let alone see without a clear and specific choice of vantage point.

Can I stand, so to speak, above the fray? How do I escape from this process long enough to be able to look back or ahead? Does Google represent the vantage point? Since historical analysis is by its very nature retrospective, and since time is at best an arbitrary metaphor for continua, am I left with a series of fragments, most of which splay off in different directions? It is an irony that the thrust of this conference has been so archeological, trying to pick up the pieces, show what has been missed and what connections have not been made, as if retrospection is suddenly adequate irrespective of politics, conflict and ethics. Most interesting from my point of view is the use of the cognitive and neurosciences, dominated as they are by positivism and empiricism. Even more to the point, and to give you a sense of how important vantage points are, take the best example of all, the computer sciences, which until very recently had transformed subjectivity into that insidious term user and for which the cybernetic dream of linking input and output has determined the shape and form of most computer programs.

The digital age, or perhaps better put, the algorithmic age, makes these issues all the more urgent, because if the fundamental tropes for human subjectivity can so easily be reduced to terms like user, then not to understand the origins of the research in engineering that went into the trope poses many dangers. Tor Norretranders' brilliant book The User Illusion: Cutting Consciousness Down to Size (1998) investigates this problem in great depth, and it is clear to me that richer paradigms of computer/human interaction are needed if we are to move beyond the limitations of mechanical modes of thinking about digital technologies and their impact on human consciousness. Yet "user" is also an outgrowth of devalued models of subjectivity within media studies itself, a confluence of the media's own evaluation of its viewers (i.e., the couch potato metaphor) as well as the challenge of studying viewing itself. This is perhaps the greatest irony of the ebb and flow of analysis in media studies. At times, particularly in the early to mid-seventies with the advent and growth of feminism, subjectivity became a site of contestation, with a variety of methods from psychoanalysis to sociology to linguistics used as avenues into analysis, criticism and interpretation. All of that heterogeneity is now built into the analysis of new media with varying degrees of success and often with no reference to the historical origins of the intellectual models in use. Subjectivity remains a site of contestation as a concept, explanation and framework for understanding what humans do with the technologies and objects they use.

The conflation of user with experience, the reduction of subjectivity to action and reaction, is only possible if theory and analysis put to the side the far more complex side of human thought and that is the imagination. Digital experiences are highly mediated by technology but imagination, fantasy and daydreams increase the levels of complexity and add many more levels of mediation to the rich interrelationships that humans have with their cultures. All of these levels need to be disentangled if a variety of vantage points are to be constructed. Perhaps then, media studies can begin to make some claims about a paradigm shift of enough strength to warrant the use of the term new…..

End.....

Paradoxes of New Media (2)

(The first paragraph connects part 1 and part 2.)

To understand why New Media may have been convenient for both scholars and artists one need only look at the evolution of media studies. Although humans have always used a variety of media forms to express themselves and although these forms have been an integral part of culture, and in some instances the foundation upon which certain economies have been built, the study of media only developed into a discipline in the 20th century.

There are many reasons for this, including, perhaps most importantly, the growth of printing from a text-based activity to the mass reproduction of images (something that has been commented on by many different theorists and practitioners). The convergence of technology and reproduction has been the subject of intense artistic scrutiny for 150 years. Yet, aside from museums like MoMA, the disciplines that we now take for granted, like film, photography, television and so on, came into being in universities only after an intense fight, and the quarrel continues to this day.

The arguments were not only about the value of works in these areas (photography, for example, was not bought by serious art collectors until the latter half of the 20th century, which may or may not be a validation of photography's importance), but about the legitimacy of studying various media forms given their designation as the antithesis of high culture. Film was studied in English Departments. Photography was often a part of Art History Departments. Twenty years after television started to broadcast to mass audiences in the early 1950s, there were only a handful of texts that had been written, and aside from extremely critical assertions about the negative effects of TV on an unsuspecting populace (the Postman-Chomsky phenomenon), most of the discourse was descriptive.

The irony is that even Critical Theory in the 1930s, which was very concerned with media, didn't really break the scholarly iceberg that had been built around various media forms. It took the convergence of structuralism, semiotics and linguistics in the late 1960s, a resurgence of phenomenology, and a reconceptualization of the social and political role of the state to provoke a new era of media study. In Canada, this was felt most fully through the work of McLuhan and Edmund Carpenter, and was brought to a head by the powerful convergence of experimentation in cinema and video combined with the work of artists in Intermedia, performance and music.

Another way of thinking about this is to ask how many people were studying rock and roll in 1971. After all, rock and roll was disseminated through radio, another medium that was not studied seriously until well after its invention (sound-based media have always been the step-children of visual media).

So, the resistance to the appearance of different media forms may explain why media were renamed new media. It may explain why someone like Lev Manovich relies on the trope of the cinema to explain the many complex levels that make up media landscapes and image-worlds. New in this instance is not only an escape from history; it also suggests that history is not important.

There is another important question here. What makes a medium-specific discipline a discipline in any case? Is it the practice of the creators? Is it the fact that a heritage of production and circulation has built up enough to warrant analysis? I think not. Disciplines are produced through negotiation among a variety of players crossing the boundaries of industry, academia and the state. The term New Media has been built upon this detritus, and it is a convenient nomenclature that designates, in a part-for-whole kind of way, that an entire field has been created.

But, what is that field? Is it the sum total of the creative work within its rather fluid boundaries? Is it the sum total of the scholarly work that has been published? Is it the existence of a major journal that both celebrates and promotes not only its own existence but also the discipline itself?

These issues of boundary-making are generally driven by political as well as cultural considerations. They are often governed by curatorial priorities developed through institutions that have very specific stakes in what they are promoting. None of these activities may, per se, define or even explain the rise, fall and development of various disciplines. But, as a whole, once in place, disciplines close their doors both as a defensive measure and to preserve the history of the struggle to come into being.

To be continued......


Paradoxes of New Media (1)

The continuum that links real events with their transformation into images and media forms knows few limits. This is largely because of the power of digital media and digital mediation, something that has been commented upon in many different contexts. It is perhaps not an accident that terrorists, governments and corporations all make use of the same mediated space. We call this the Internet, but that now seems a rather quaint way of describing the multi-leveled network that connects individuals and societies with often unpredictable outcomes. Networks, to varying degrees, have always been a characteristic of most social contexts. But the activity of networking as an everyday experience and pursuit has never been as intense as it is now, nor have the number of mediated experiences been so great. This may well be one of the cornerstones of the new media environment. However, new media as a term, name or metaphor is too vague to be that useful. There are many different ways of characterizing the creative process, many different methods available to talk about the evolution of networks and technologies, the ways in which creative work is distributed, and the extraordinarily intense way in which communities and individuals look for and create connections to each other. The activities encapsulated by the term media are so broad and extend across so many areas that the danger is that no process of categorization may work. Typologies become encyclopedic, so that what we end up with are lists that describe an evolving field but no vantage points from which to question the methodological choices being made. What distinguishes one list from another?

To understand why New Media may have been convenient for both scholars and artists, one need only look at the evolution of media studies. Although humans have always used a variety of media forms to express themselves, and although these forms have been an integral part of culture, and in some instances the foundation upon which certain economies have been built, the study of media only developed into a discipline in the 20th century. There are many reasons for this, including, perhaps most importantly, the growth of printing from a text-based activity to the mass reproduction of images (something that has been commented on by many different theorists and practitioners). The convergence of technology and reproduction has been the subject of intense artistic scrutiny for 150 years. Yet, aside from museums like MoMA, the disciplines that we now take for granted, like film, photography and television, came into being in universities only after an intense fight, and the quarrel continues to this day.

To be continued......


Every historical period sees itself as contemporary

Every historical period sees itself as contemporary. The inventors of the telegraph made many of the same claims as the designers of the Internet. Pioneers in the production and creation of film in the 1890s traveled the world in an effort to generate interest in the new medium and to establish networks of playhouses where their films could be viewed (an activity now being repeated in the Microcinema Movement, which uses DV cameras). Every form of transportation that humans have invented has been used in the transmission of information, from trains to boats to planes. The movement of personal letters across vast distances, especially from the 17th century onwards, is partially the result of the increase in modes of transport, especially ships (and the idea of a post office as a fulcrum for distribution). Typography remains as important to the World Wide Web as it was to early forms of publishing with the Gutenberg press. Modes of illustration, although fundamentally altered by sophisticated software, remain embedded in centuries-old methods of drawing, painting and sketching.

My point is not to belittle or dilute the importance of new technologies at the beginning of the 21st century. Rather, it is to place them into a context that will connect innovation to history and that will show how the very notion that a computer can create links between different bits of information was an "invention" that came about because of a three-hundred-year experiment in Western culture with novels and theater. It is important to remember that during the early phase of discussions about computers among engineers and designers, computers were generally thought of as "arithmetic" machines, or glorified calculators. As time progressed, it was the culture of experimental labs like Bell Labs and Xerox PARC that began to move computers far beyond initial assumptions about both their power and their utility. Brilliant scientists and engineers ran those labs. The actual role and impact of creative artists and designers needs to be examined with great care, but it is clear that the effort to go beyond simple functionality came about because of the tensions and challenges posed by different disciplinary orientations clashing with each other.

Michel Serres and Technology


Little India in Singapore

The brilliant French philosopher Michel Serres proposes in recent publications that one of the best ways of understanding history is to think about human events as a series of interconnected folds, a network of networks in which events that may have taken place thousands of years ago are still connected to the present through human memory and human artifacts.

The folds of which Serres speaks can be visualized as a series of pleated pages in which different points touch, sometimes arbitrarily and other times by design. The metaphor that Serres has developed has another purpose. In order to understand the technologies, social movements and cultural phenomena that humans have created, each point of contact among all these pleats needs to be drawn out in a detailed and narrative manner. Although Serres does not describe this method as stream of consciousness, that is sometimes how it reads, to the point where the simplest of objects becomes the premise for an expansive narrative.

For example (adapting Serres's method), the notion of networks needs to be understood not only as a function of technology and communications systems, but also through the efforts of nearly every culture and every generation to develop a variety of bonds using any number of different means, from language to art to music to political, religious and economic institutions. This suggests that the Internet, for example, is merely a modern extension of already existing forms of communication between people. And while that may seem obvious, many of the claims about the Internet suggest that it is a revolutionary tool with implications for the ways in which people see themselves and their surroundings. More often than not, its revolutionary character is related to obvious characteristics like the speed of communications, which may in fact be no more than a supplement to profoundly traditional modes of information exchange. The intersection of the revolutionary with the traditional is essential to the success of any new and innovative technology, and may be at the heart of how quickly any individual innovation is actually taken up by individuals or by society as a whole.

Some recent comments on Research and Wikipedia

From Chris on Research in the Arts

Here in the UK, arts research culture might be a bit more accepted, but it is still nascent. I agree that the terms 'practice-based' and 'theory-based' set up a problematic dichotomy for research culture. In acknowledging the distinction, one runs the risk of mirroring the historical bias towards empiricism. This bias has supported a hierarchy of epistemologies that, descending from quantitative research to qualitative research and from theory-based to practice-based research, denies the creative arts a platform for expression as knowledge.

The ways in which the creative arts shape our understanding of the world are difficult to measure, but no less significant than other models of knowledge. If most 'pure science' researchers would accept that some form of rudimentary research occurs prior to art making, can we take it even further? Can we suggest that an artwork - in itself - is a form of research?

I believe we can. Especially when it involves the active questioning of existing frameworks for understanding, with the inclusion of an 'experiment' designed to fill in the gaps that are opened up by these questions. This occurs most frequently in the new media arts now, an area informed by cognitive models of the human condition, based on active experimentation with new technologies that pose questions about how we perceive.

The conclusions from these arts experiments may not be concrete, indeed they may be difficult to outline and impossible to apply in any economy. But insofar as they function as part of a process of semiosis - the generation of signs and thus meaning... well... they're rather important, and deserve to be encouraged.

From Mary on the idea of an Art School as Wikipedia

If Wikipedia were an art school, it would look like WalMart. Nah. It would look like an academic department that has been around for too long - a congeries of pseudo-experts. Nothing worse than that. Consolidated mediocrity. When I first saw Wikipedia I thought - WOW - post-structuralism meets pedagogy in the form of an ever-evolving set of artifacts. Nope. Take a deeper look at the rules governing the construction of knowledge in Wikipedia -- no controversy blah blah -- but the most interesting thing to me -- no original knowledge -- wow -- and just go look at how this absolutely implausible limit condition is defined and policed. Fascinating. Then go look at the Rosa Parks entry, and carefully go through the history of the page. Look at the contest over "getting it right" and "getting the controversy out of the story". Art school. Wow. I hope not. Wikipedia is modernism run amok. A Möbius strip of epistemic spam.


From a Recent Event at Emily Carr Institute