When you press the shutter of Camera Reversa, you effectively take two images: the one you saw, and one “seen” via content-based image retrieval (CBIR), or reverse image search, which uses algorithms to match the input image to visually or semantically similar images drawn from Google’s image database.

Content-based image retrieval algorithms currently operate at two levels: low-level visual analysis and high-level semantic inference. When an image is used as a search query, an image signature is extracted from it and compared against the signatures of a database of pre-analyzed images. The signature can range from a histogram, which maps primitive visual characteristics such as color, texture, and shape, to concept probability models used to infer complex semantic relationships.
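The low-level end of this pipeline can be sketched in a few lines. The snippet below is an illustrative toy, not Camera Reversa's actual method: it assumes images are already decoded into RGB arrays, uses a normalized per-channel color histogram as the signature, and ranks a small in-memory “database” by histogram intersection (real CBIR systems use far richer signatures and indexes).

```python
import numpy as np

def color_histogram(image, bins=8):
    """Extract a simple image signature: a normalized per-channel color histogram.

    image: H x W x 3 array of uint8 RGB values.
    """
    channels = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)]
    hist = np.concatenate(channels).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means the two histograms are identical."""
    return float(np.minimum(h1, h2).sum())

def retrieve(query_image, database):
    """Rank database images by signature similarity to the query image.

    database: dict mapping image name -> RGB array (pre-analysis would
    normally cache these signatures rather than recompute them).
    """
    query_sig = color_histogram(query_image)
    scores = [(name, histogram_intersection(query_sig, color_histogram(img)))
              for name, img in database.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)
```

A query image dominated by red, run against a tiny database of a red and a blue image, would rank the red one first; nothing in this sketch, of course, captures the semantic level discussed above.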

Depending on the sophistication of the CBIR algorithms being utilized, a “semantic gap” may occur: a mismatch in similarity judgments between user and machine. The semantic gap can be a source of frustration, as the human operator may be unsatisfied or unimpressed with the retrieved result, or it may be the source of a poetic juxtaposition that neither human nor algorithm could have generated alone.

This generative photographic process questions traditional notions of intention, authorship, and subject. What are the implications of engaging in a photographic relationship with the world, not to render your embodied environment, but rather to operationalize the visual and semantic qualities of that environment as search queries to explore a given image database? Aesthetic characteristics such as composition, lighting, focal length, and depth of field all have stakes within the photographic act – not in a singular direction, but within overlapping fields of weighted probabilities, as visual attributes compete, intersect, and intermingle with semantic objects.

Furthermore, is it productive to trace authorship to the images generated from such a process? How would creative agency be distributed between the original image maker, the image uploader, the algorithms implicated in the web crawling, indexing, and CBIR processes, the numerous engineers of those algorithms, and the individual photographing his/her embodied environment to provide the search query?

Lastly, who/what is the subject of this photographic discourse? Is it the content of the retrieved images, as reflections of the social and cultural processes that constitute the underlying image database? Are we the subject as image producers and disseminators – a type of collective autoethnography? Is it the various computer algorithms, as we interrogate their evolving retrieval capacities? Or is it our consciousness, as we employ algorithmic counterpoints with which to probe the structures and limitations of our perceptual vantage points?


About the Artist:

Aaron Kutnick is a recent graduate of the Master of Fine Arts program in Experimental and Documentary Arts at Duke University. His background is in visual research, including documentary filmmaking in South American indigenous communities and first-generation immigrant populations in the United States. He has also worked as a media producer at the Smithsonian National Museum of the American Indian’s Film and Video Center.

His latest work, Docu{rithm}, asks what ethnographic fieldwork looks like within the virtual spaces of the Internet, and what kind of tools are best suited to meaningfully engage this rich, dynamic realm of cultural artifacts. He has designed and constructed a series of interactive cameras that employ computational intervention between the light coming through the lens and the output image displayed on the screen. By adding algorithmic functionality to traditional research tools, the cameras forge a bridge between the embodied world of the photographer and the virtual world of the Internet, inviting users to explore the liminal space in which they find themselves.