Fashion Week Models Take to the Runway Wearing Google Glass’s Ubiquitous Cameras
At Diane von Furstenberg’s New York Fashion Week show yesterday, every model to walk down the runway wore Google Glass, the company’s experimental augmented-reality camera glasses. Images from the models’ points of view have been posted to the Diane von Furstenberg Google+ page (if you couldn’t yet tell, this was a marketing partnership; Google’s Sergey Brin took the stage too, wearing Glass). There are plans to release a short film later this week edited only from images and video captured by the wearable cameras during the course of the show.
The photos released so far are interesting. They could be mistaken for conventional camera-phone shots at first glance, but even in this small sampling, some unique characteristics of photos taken with an always-on wearable camera begin to emerge. The eye-level perspective on the runway is especially pronounced, and the blurring at the edges of the image seems like a rough digital simulacrum of our brains’ limited ability to process all the visual information our eyes bring in at any given time. It’s also possible to see the effects of Glass’s unique form factor in the images’ human subjects; with no camera pointed at them in the traditional way, and no shutter-click moment, many of the people we see aren’t specifically aware of the camera.
While the prospect of Terminator-style augmented-reality vision via a small heads-up display remains Glass’s most futuristic development, its most immediate use case is as an always-on wearable camera connected to the internet. Google says Glass is “designed to help you live in the moment.” But will we all need to employ full-time editors to make sense of our constant visual stream, cutting out the boring and unpleasant parts to help us broadcast a more perfect self? Is the “decisive moment” now essentially an act of editing, not of clicking the shutter?
As the volume of digital photos and videos we’re capable of capturing and sharing at any given moment continues to increase, it will be fascinating to see how technologies like Google Glass will be used both by “regular people” and by those who consider themselves professional image-makers. And how much further can technology go before we reach a ceiling of image-creation potential? I can’t imagine the ceiling ever being higher than the one set by Glass’s always-on, always-connected abilities. But one never knows!