Monday, June 2, 2008

Visual Identities Application 2

While connecting a fairly complex web of observations and semantic models, I found that the Visual Identities reading on Apple's and IBM's logos provides useful insight into how we can weave "hard data" and more qualitative analysis together into a meaningful narrative. I also found fascinating the breadth of data analyzed, ranging from the actual form of a logo, to corporate documents, to the broader context within which it operates. This has prompted me to think even further beyond the image of Che itself and to examine in greater depth the contexts where his image does (or doesn't) appear.

In the author's analysis, I specifically appreciated how Floch narrowed down a broader context of observation and invited the reader to see the process by which he deduced similarities and differences in a few significant categories like structure, color, and form. Though I'm not sure I tracked all the nuances of his treatment of the symbols, my understanding was that he focused more on how they actually functioned than on what they were supposed to mean in an abstract, theoretical sense. I found this mode of analysis refreshingly pragmatic (and quite revealing as well). By seeing inside the author's process, I felt comfortable working loosely with his conclusions about how the two symbols "mean," especially in relation to each other.

This makes me want to understand Che's image outside of a static, isolated context and to begin making observations about how it carries on a dynamic conversation with its environment. Can changes to the image over time be explained by these observations? Some specific pieces of contextual data that I've intuitively assessed to be useful include:
- The frequency of repeated images on a Google image search
- The "caption" text associated with Flickr posts
- The user IDs of deviantART posters together with their image captions

At the moment, I've physically placed a single copy of each of my image samples into unique folders like "tribute," "art," "editorial," "merchandise," etc. While these are useful for finding trends, they are also counterproductive when an image best fits multiple categories. For example, many pieces of art featuring Che's image are also tributes, but not all tributes are artworks, and not all artworks are tributes.
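The folder problem above is really a data-structure problem: a file can live in only one folder, but an image can carry several labels at once. A minimal sketch of the alternative, using a set of tags per image (all filenames and tags here are invented examples, not my actual samples):

```python
# Multi-tag categorization instead of exclusive folders.
# Filenames and tags below are hypothetical examples.
images = {
    "korda_original.jpg": {"editorial"},
    "mural_havana.jpg": {"art", "tribute"},
    "tshirt_print.jpg": {"merchandise"},
    "stencil_remix.jpg": {"art", "tribute", "merchandise"},
}

# An image can now satisfy several categories at once,
# so "art" and "tribute" no longer compete for the same file.
tributes = {name for name, tags in images.items() if "tribute" in tags}
art_and_tribute = {name for name, tags in images.items()
                   if {"art", "tribute"} <= tags}
```

With sets, the question "which artworks are also tributes?" becomes a simple subset test rather than a choice between two folders.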

Ideally, I'd be able to automatically scrape the text associated with the images and store it as metadata, while manually flagging images with as many "tags" as are appropriate. Tags would include generic statistics like where an image was from, when it was accessed, when it was posted, and how frequently duplicate images occur, as well as which categories apply to it (art, tribute, merchandise, all of the above, etc.). This floating cloud of images linked to data would then facilitate clearer numeric observations that would pave the way for inquiries into descriptive narratives. For example, I expect that a statistical analysis of my images would reveal that images bashing Che tend to modify the iconic Korda image less, while images paying tribute to him tend to be more personalized and varied. I'd like to explore this, but first it seems necessary to determine the degree to which the observation is true.
