Our photos are filled with gobs of metadata, but it’s mostly useless to anyone but pro photographers. It identifies the camera a photo was taken with and all of its settings, but it can’t tell us anything else. Was it raining in the photo? Was it taken on the beach? That’s all mineable data for search engines, yet it’s still up to us to categorize it ourselves.
The Descriptive Camera is a project by Matt Richardson, and it’s essentially a webcam glued to a printer and some additional processing equipment. But when it takes a photo, rather than printing an image, it prints out a text description, like these:
“Looks like a cupboard that is ugly and old. having some plates on it with a lamp attached to it.”
“It’s a dark room with a window. The image is quite pixelated.”
The responses, while odd by any measure, are actually rich with information, far richer than anything automated image processors could extract. So how is that possible? After the camera takes a photo, the JPEG is uploaded via Amazon’s Mechanical Turk API. Mechanical Turk farms out small jobs, called Human Intelligence Tasks, to actual human workers, and in this case the task is something a computer normally can’t do: describe a photo.
So to the end user, after a picture is taken, a yellow LED signals that the photo is “developing.” Following a few minutes of waiting (while workers scurry to pick up the task), a description is printed and the user has their photo.
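The plumbing behind that wait can be sketched roughly as follows. This is a hypothetical illustration, not the project’s actual code: the question markup, image URL, and timing values are assumptions, and the live `create_hit` call (via the standard `boto3` Mechanical Turk client) is left in comments because it requires AWS credentials.

```python
def build_question(image_url):
    """Build an MTurk HTMLQuestion asking a worker to describe a photo.

    The form layout here is a minimal sketch; a real HIT form must also
    submit the hidden assignmentId field MTurk injects into the frame.
    """
    return f"""
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <!DOCTYPE html>
        <html>
          <body>
            <form name="mturk_form" method="post"
                  action="https://www.mturk.com/mturk/externalSubmit">
              <img src="{image_url}" alt="photo to describe"/>
              <p>Please describe this photo in one or two sentences.</p>
              <textarea name="description" rows="4" cols="60"></textarea>
              <input type="submit"/>
            </form>
          </body>
        </html>
      ]]></HTMLContent>
      <FrameHeight>450</FrameHeight>
    </HTMLQuestion>"""


def post_description_hit(image_url, client):
    """Create a HIT asking a human worker to describe the photo."""
    return client.create_hit(
        Title="Describe this photo",
        Description="Write a short text description of the attached photo.",
        Reward="1.25",                    # the per-photo figure cited below
        MaxAssignments=1,                 # one worker's description is enough
        LifetimeInSeconds=3600,           # how long the task stays listed
        AssignmentDurationInSeconds=600,  # time a worker has to finish
        Question=build_question(image_url),
    )


# Usage sketch (needs AWS credentials; sandbox endpoint shown):
# import boto3
# client = boto3.client(
#     "mturk",
#     endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
# )
# post_description_hit("https://example.com/photo.jpg", client)
```

The “developing” wait the user experiences is simply the gap between `create_hit` returning and a worker picking up and submitting the assignment.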
Is the project scalable? It could be. Each annotated photo costs about $1.25 in Mechanical Turk fees, putting the price on par with high-end photo prints. But when it comes to my personal photos, I’d take the soullessly faulty calculations of computer software over the brutal honesty of a total stranger any day. Then again, $1.25 isn’t bad for a piece of art.
[Hat tip: PetaPixel]