Tangible. Touchable. Sensible (1)

There are dangers in surrounding ourselves with screens. Not the danger that we will become screens ourselves, but the more serious danger that we will lose contact with the tangible nature of the environment we live in. Tangible, touchable, palpable — these are not just words, but essential parts of our being, fundamental to the cellular, physical nature of humans.

Smell, passion, sight are not just extensions of who we are, but at the heart of what we mean by human life. I am not a believer in the post-human, since I am pretty sure the term references more of a psychological state than a real one. I am a lover of digital culture and the world it has generated, but not ready to give up on the sounds of birds, the smells of a wet tropical forest or the vistas available to anyone who climbs a hill or a mountain.

The visual power of screens and in particular, the visual power of internet-based communications technologies does not so much convert the world into a vast array of signs and symbols, as it supports and promotes the potential strength of people interacting with each other. Screens now mediate most of these interactions, but that does not mean that we have to abandon the physical world we share.

How does this make you feel? A father is feeding his daughter. He holds the bottle in one hand and an iPhone in the other. The screen of the iPhone is in FaceTime mode. The baby is staring at her mother, who is talking and gesticulating — trying against all odds to reach out from the screen to her child. The interaction is rich in potential and contradiction. On the one hand, mother and child are “together.” On the other hand, they are not. The father has become the medium and is a passive observer.

This scene, which is more like a tableau, took place in a Whole Foods near the food counter on November 29, 2013.

If screens are boundaries, mediators and frames for reality, then it is possible to envision the baby connecting with her mother solely through the screen of the iPhone. As preposterous as this may appear to be, that is essentially what Facebook is, a virtual environment to which we entrust some of our most personal moments. This is largely because connection has superseded the rather banal and sometimes trying moments of physical interaction. That is the ultimate irony of networks. They permit, encourage and support interconnection without the need to test the physical space of touch and smell and more. (Part 2 will appear soon.)






Photographs as Data (1)

Here is a partial list of the ideas and terms that would be worth thinking about with respect to the role of data, information and visualization in our culture.


Information / Screens / 3D Worlds / Virtual Reality / Images / Artifacts


All of the above is an inventory of ideas and terms and is also a potential index. Can all of the above be easily classified?

Let's start with a map that might also be an illustration.

[Screenshot: a diagrammatic map, captured April 19, 2013]

If you are a reader of texts and articles on the digital world, as well as books on business in the Internet era, you will be familiar with this type of diagrammatic guide. It is supposed to reflect the impact of new digital technologies and lead you to a better understanding and visualization of ideas that may otherwise be difficult to explain in words alone. The mapping process is also meant to take raw data, ideas and terminology and allow you to visualize greater and greater levels of complexity — to, in effect, produce a taxonomy. The taxonomy is both a reflection of reality and a representation of active material practices. It is a way for ideas to become concrete, for phenomena to be linked and for networks of ideas to be visualized.

Classification is an important aspect of trying to comprehend digital environments. How can large databases and blocks of information be explored? What are some of the categories that we normally use to classify data and knowledge? How can we link taxonomies to real life experiences?
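One way to make these questions concrete is to treat a taxonomy as a data structure. The sketch below builds a small, entirely hypothetical taxonomy out of the inventory of terms listed earlier; the nesting itself is the classification, and a short walk of the structure tests whether a given term fits anywhere in the scheme:

```javascript
// A hypothetical taxonomy built from the inventory of terms above.
// The categories and their nesting are assumptions for illustration.
var taxonomy = {
  "Information": {
    "Images": ["Photographs", "Screens"],
    "Artifacts": ["3D Worlds", "Virtual Reality"]
  }
};

// Walk the taxonomy and return the path of categories that leads
// to a term, or null if the term cannot be classified.
function classify(node, term, path) {
  path = path || [];
  if (Array.isArray(node)) {
    return node.indexOf(term) !== -1 ? path : null;
  }
  for (var key in node) {
    if (key === term) return path.concat(key);
    var found = classify(node[key], term, path.concat(key));
    if (found) return found;
  }
  return null;
}

console.log(classify(taxonomy, "Virtual Reality")); // [ 'Information', 'Artifacts' ]
console.log(classify(taxonomy, "Birds"));           // null
```

The second lookup is the instructive one: anything outside the scheme simply cannot be classified, which is exactly the limit of taxonomies that the questions above point to.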

For example, the challenge of trying to search through a database of images is an enormous one. Since images can represent, document, and be metaphors of information and ideas, the search parameters would have to be very large to accomplish even minimal search tasks. In addition, can information of this complexity be organized around interfaces that respond to the intuitive needs of users, so as to facilitate oftentimes subtle types of searching? Can the serendipity of exploring images match the pressure to classify them within digital environments?
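The gap between classification and serendipity can be pictured with a toy image catalogue. The records and keyword fields below are hypothetical; real image databases index far richer descriptors, but even this sketch shows how a keyword search works and where it fails:

```javascript
// A hypothetical image catalogue: each record carries keyword metadata.
var catalogue = [
  { file: "forest01.jpg", keywords: ["forest", "rain", "green"] },
  { file: "hill02.jpg",   keywords: ["hill", "vista", "climb"] },
  { file: "forest03.jpg", keywords: ["forest", "birds"] }
];

// Return every image whose keyword list includes the search term.
function searchImages(images, term) {
  return images.filter(function (img) {
    return img.keywords.indexOf(term.toLowerCase()) !== -1;
  });
}

var hits = searchImages(catalogue, "forest").map(function (img) { return img.file; });
console.log(hits); // [ 'forest01.jpg', 'forest03.jpg' ]
```

Note that a search for "woods" or "wet trees" would find nothing, even though the pictures might show exactly that — the classification only knows what its keywords name, which is precisely the limit the paragraph above describes.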

At the core of these issues is the developing nature of computers and mobile devices and their evolution from desktop environments into everyday appliances, with phones, for example, functioning effectively as replacements for cameras. The ubiquitous presence of these devices suggests that they can perform every function that is demanded of them. There is also a cultural sense that smart technologies have an infinite capacity to engage with any number of different problems, which is why apps have become the software language of mobility.

Consequently, software is being pushed into realms of greater and greater complexity. The underlying impact of all of this activity is quite ironic: as machines take on more and more tasks, it becomes less and less clear how intentionality (who did what and why) actually comes into play at both the hardware and software level. For a computer or a phone or a tablet to be powerful enough to do what we expect of them, millions of lines of code have to be written. In effect, this is another level of data construction and visualization (however abstract).

All of this effort will inevitably be opaque to the user. Therefore, the issue of visualization is even more complex than what a list, diagram or taxonomy could ever offer. For example, how do we visualize the programming process? What do you make of this programming script in one of the dominant languages used for the web, JavaScript?

<!--  Begin

// Builds META tags from the form's description and keywords fields
// and writes the result into the form's output field.
// NOTE: the original snippet was truncated; the function bodies below
// are a minimal reconstruction, and the form field names are assumed.
function MakeIt(form){
  var txt='<META NAME="DESCRIPTION" CONTENT="'+form.description.value+'">\r\n';
  if (form.keywords.value)
    txt+='<META NAME="KEYWORDS" CONTENT="'+form.keywords.value+'">\r\n';
  form.output.value=txt;
}

// Appends a keyword (Action) to the keywords field, comma-separated.
function AddText(form, Action){
  var AddTxt="";
  var txt="";
  if (form.keywords.value) txt=", ";
  AddTxt=txt+Action;
  form.keywords.value+=AddTxt;
}

// Clears the form after asking the user to confirm.
function ResetPage(form){
  if(confirm("Do you want to clear all and start a new META-tag Creation?")){
    form.reset();
  }
}

// End -->

This script is designed to generate meta tags for a web page in order to facilitate categorization by search engines. It is one of the most important ways of making sure that pages are recognized and classified, so that users can have access to the information that web programmers and content providers have placed on the web. The JavaScript facilitates searching and is a way of envisioning or anticipating usage. At one level, the opaqueness of this script is a good thing because it allows processes to happen without the need to understand the underlying logic. On the other hand, does this lack of knowledge hinder what users do as they search for information?
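To see why the tag matters, here is a sketch (not any real search engine's method) of how a crawler might pull the description and keywords back out of a page's markup for indexing. The markup fragment and the regular-expression approach are both simplifying assumptions; real crawlers use full HTML parsers:

```javascript
// A fragment of page markup, as a meta-tag generator would produce it.
var head = '<META NAME="DESCRIPTION" CONTENT="Photographs as data and visualization">\r\n' +
           '<META NAME="KEYWORDS" CONTENT="images, data, taxonomy">\r\n';

// Pull the CONTENT attribute of a named META tag with a simple
// regular expression (a toy stand-in for a real HTML parser).
function readMeta(markup, name) {
  var re = new RegExp('<META NAME="' + name + '" CONTENT="([^"]*)"', "i");
  var match = markup.match(re);
  return match ? match[1] : null;
}

var keywords = readMeta(head, "KEYWORDS").split(",").map(function (k) { return k.trim(); });
console.log(keywords); // [ 'images', 'data', 'taxonomy' ]
```

Once extracted, those few words stand in for the whole page — a reminder of how much classification rests on what the author chose to declare.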

As we know, images are full of information. Digital images are carefully designed representations of complex coding. We don't look at photographs in order to read their code. Rather, we want images to visualize what we have seen.
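The point about images as coded representations can be made literal. Underneath every digital photograph is an array of numbers; the tiny hypothetical "image" below is just two pixels of RGB values, and a calculation over those numbers is the kind of operation we never see when we simply look:

```javascript
// A hypothetical two-pixel "image": each pixel is [red, green, blue], 0-255.
var image = [
  [34, 139, 34],   // a forest green
  [135, 206, 235]  // a sky blue
];

// Average brightness of one pixel (a crude luminance measure).
function brightness(pixel) {
  return Math.round((pixel[0] + pixel[1] + pixel[2]) / 3);
}

console.log(image.map(brightness)); // [ 69, 192 ]
```

The numbers are the image, yet nothing in them resembles what the photograph shows — which is exactly the distance between code and visualization at stake here.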

More on this in the next part of this series.