Ronny Siebes is a researcher at the Free University of Amsterdam. He and I met recently in The Hague and the ensuing email exchange represents only a small facet of the longer discussion that we had.
I thought about the question you asked, "Can machines dream?", and I have the following answer:
First, I would like to give my definition of what human dreaming is. Most humans know that they sometimes dream and may remember what they have dreamt: the images, sounds or other impressions. Obviously, things like pictures are not really there in the head, because we don't have eyes inside our heads to look at them, and even if we did, it would be too dark to see anything (Dennett). I'm not an expert in neuroscience, but I guess the brain works like this: images (encoded in a parallel bundle of light beams) that our eyes receive trigger a set of neurons responsible for interpreting visual input, and these interpretations are stored in our memory. When we dream, parts of our memory become active and are manipulated by a script generated by fears, anger or other chemical impulses.
For this information to be remembered, the outcomes of the manipulation processes generated by these scripts are stored back into our memory. Our consciousness (whatever that may be) walks through our memory and recognises that there is new information there, namely the material added by the dream process.
Computers are also able to receive, store and manipulate information from the outside world. Take, for example, a computer with a web-cam connected to it that stores the incoming bitstream on a hard disk or some other kind of memory. It is easy to build a program that reads out the bits representing the movie and manipulates them. At present this manipulation would be very crude (for example, changing some colours, or cutting, copying and pasting some shots), but it could also be quite advanced, using algorithms that detect scenarios and are able to replace objects with other objects. These manipulated movies can be stored again and, after a while, be 'played' (my free definition of becoming conscious) in a Macromedia or Windows Media player.
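The crude kind of manipulation Siebes mentions, just changing some colours in stored footage, can be sketched in a few lines. The sketch below is a hypothetical illustration of the analogy, not anything he describes implementing: a "movie" is modelled as a list of frames of RGB pixels, manipulated, and stored back, and all the names (`invert_colours`, `stored_movie`, `dreamt_movie`) are my own.

```python
# A toy model of the webcam scenario: frames "stored in memory" as
# lists of RGB pixel tuples, crudely manipulated, then stored back.
# (Illustrative only; the function and variable names are invented.)

def invert_colours(frame):
    """The 'crude' manipulation: flip every colour channel."""
    return [(255 - r, 255 - g, 255 - b) for (r, g, b) in frame]

# Two tiny two-pixel "frames" standing in for the recorded bitstream.
stored_movie = [
    [(0, 0, 0), (255, 255, 255)],
    [(10, 20, 30), (200, 100, 50)],
]

# Manipulate every frame and store the result back, as in the analogy
# of a dream rewriting memory.
dreamt_movie = [invert_colours(frame) for frame in stored_movie]

print(dreamt_movie[0])  # → [(255, 255, 255), (0, 0, 0)]
```

On this functional reading, "playing" `dreamt_movie` later would be the machine analogue of remembering a dream; whether that amounts to dreaming is exactly what Burnett disputes below.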
Thus, to summarize my point: if we describe human dreaming by its functional properties, we can apply the same description to its artificial counterpart.
Response by Ron Burnett
Imaging of the brain can provide pictures of the connections between its different parts, but imaging cannot provide details of what Gregory Bateson has so aptly described as the set of differences that make relations between the parts of the mind possible. “The interaction between parts of mind is triggered by difference, and difference is a non-substantial phenomenon not located in space or time…” (Bateson, 1972: 92)
Difference is not the product of processes in the brain. Thought cannot be pinned to one specific location; in fact, difference means that the very notion of location is all but impossible except in the most general of senses. Bateson goes on to ask how parts interact to make mental processes possible. This is also a central concern in the work of Gerald Edelman, particularly in the book he co-authored with Giulio Tononi (2000), where they point out how the neurosciences have begun to investigate consciousness seriously as a scientific ‘subject.’ (3) Edelman and Tononi summarize the challenge in this way:
What we are trying to do is not just to understand how the behaviour or cognitive operations of another human being can be explained in terms of the working of his or her brain, however daunting that task may be. We are not just trying to connect a description of something out there with a more scientific description. Instead, we are trying to connect a description of something out there — the brain — with something in here — an experience, our own individual experience that is occurring to us as conscious observers. (11)
The disparities between the brain and conscious observation, between a sense of self and biological operations, cannot be reduced to something objective; rather, the many layers of difference among all of the elements that make up thought can only be judged through the various strategies that we use to understand subjectivity. Edelman and Bateson try to disentangle a series of cultural metaphors that cover up the complexity of consciousness.
One of these metaphors is that the brain is like a computer and that human memory stores information much as a hard disk does. There is simply not enough evidence to suggest that the metaphor works. So, machines cannot dream because, among many other things, we don't have an adequate definition of what the mind does when it dreams. All we have is the language of metaphor and description, a semantically rich space that cannot be reduced to any single or singular process.