Edible cameras. Focus that changes after you shoot. What's really in store for photo technology?
Back to the Future
In the 1890s, stereoscopes were common in American homes, but their popularity declined rapidly after Kodak introduced the Brownie camera in 1900—possibly because people would rather look at their own two-dimensional images than someone else’s stereoscopic ones. Since then, 3D, whether for movies or still images, has gone through numerous boom-and-bust cycles.
Now another boom seems to be in the making. Panasonic, Samsung, and Sony have all launched 3DTVs (and the dedicated glasses to go with them). The Consumer Electronics Association expects unit sales to hit 2.1 million, or $2.7 billion in revenue, in 2010, even though the sets came on the market only last spring. For 2011, the CEA forecasts sales of 6 million units, for $7 billion.
Of course, those 3DTV makers also make cameras. To be more than a passing fad once again, 3D must become a popular format for consumer-created digital images.
Fujifilm actually got there a year ago with the FinePix Real 3D W1, which has dual lenses and dual sensors. Recently we got to check out the updated W3, which sports a 3.5-inch autostereoscopic 3D screen.
Sony recently started building what it calls Sweep 3D capability into single-lens, single-sensor compacts and offering it as a firmware update on its ILCs. Next up: DSLRs.
With Sweep 3D, you pan horizontally or vertically. “The camera can capture up to 100 shots and stitch them together into a single panorama in just a few seconds,” says Mark Weir, Sony’s digital imaging senior technology manager. To create 3D images, it takes right and left portions from each frame.
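Sony hasn't published its algorithm, but the strip-based idea Weir describes can be illustrated in a toy sketch: as the camera pans, a narrow strip taken from one side of each frame is stitched into one eye's panorama, and a strip from the other side into the other eye's, so each eye ends up seeing the scene from a slightly shifted viewpoint. The function below is a minimal, hypothetical illustration (strip widths, offsets, and the lack of any overlap blending are our assumptions, not Sony's method):

```python
import numpy as np

def sweep_3d(frames, strip_width=8, offset=16):
    """Toy strip-based stereo panorama (illustration only, not Sony's
    actual Sweep 3D algorithm). Each frame contributes one narrow strip
    right of center to the left-eye panorama and one left of center to
    the right-eye panorama; the pan motion supplies the stereo baseline."""
    left_eye, right_eye = [], []
    for frame in frames:                      # frame: H x W x 3 array
        center = frame.shape[1] // 2
        left_eye.append(frame[:, center + offset : center + offset + strip_width])
        right_eye.append(frame[:, center - offset - strip_width : center - offset])
    # Concatenate the strips side by side into two panoramas
    return np.hstack(left_eye), np.hstack(right_eye)
```

A real implementation would also align the strips against camera motion and blend the seams, which is where the in-camera processing power comes in.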
Panasonic is taking a different approach: By the end of 2010, it will offer a dual-lens optical system that puts the stereo lenses on a single Micro Four Thirds mount for use on its Lumix G series ILCs.
All of these cameras store their 3D images as MPO (Multi-Picture Object) files, the industry-standard Multi-Picture Format; any compatible 3DTV can play them. Fuji also makes a desktop viewer that lets you view the images from its camera without dedicated glasses.
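Under the hood, an MPO file is essentially several JPEG images concatenated in one file, with an index recorded in the first image's metadata. A naive splitter that just scans for JPEG start-of-image markers shows the idea (real software should parse the MP Index instead, since this byte pattern can occasionally occur inside compressed image data):

```python
def split_mpo(data: bytes):
    """Naively split an MPO (Multi-Picture Object) file into its
    component JPEG streams by scanning for JPEG start-of-image
    markers (FF D8 FF). Illustration only: proper tools read the
    MP Index stored in the first image's APP2 segment."""
    SOI = b"\xff\xd8\xff"
    starts = []
    i = data.find(SOI)
    while i != -1:
        starts.append(i)
        i = data.find(SOI, i + 1)
    # Each image runs from its SOI marker to the next one (or EOF)
    return [data[s:e] for s, e in zip(starts, starts[1:] + [len(data)])]
```

For a typical two-frame stereo MPO, this yields the left-eye and right-eye JPEGs, each viewable on its own in any image viewer.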
Lenticular printing, which laminates the image under a ridged plastic lens sheet so you can see it in 3D without glasses, is now available online for about $10 a print from Start3d.com.
Recently, we’ve seen engineering advances in sensor design. For instance, OmniVision’s backside-illuminated (BSI) sensor, which started appearing in many compacts last year, moves the wiring that would otherwise block light to the rear of the sensor for better low-light results. More powerful processing and software play a big role, too. You can see the effects in compacts, with their growing lists of catchy features from in-camera HDR to super-fast burst speeds at lower resolution. In DSLRs, witness faster full-res bursts (up to 85 JPEGs or 26 RAW frames at 10 frames per second in the Canon EOS-1D Mark IV).
And some improvements happen through firmware upgrades, without your having to buy new gear. “It’s not the computing power that makes the difference, but the algorithms used in imaging engines,” says Kazuto Yamaki, Sigma’s chief operating officer.
Ever more sophisticated image-editing capabilities will jump from computers into cameras. And physical features that began on compacts and even smartphones are migrating up to ILCs and DSLRs. Some changes are cosmetic (witness the color options for Pentax’s very successful K-x), and some are more functional, such as the touchscreen control on the Panasonic Lumix DMC-G2.
The possibilities are endless. How long will it be until cameras have screens that mimic the bright, extremely high-res Retina touchscreen display on the Apple iPad and iPhone 4? Until interchangeable lenses have as big a range as the 30X ultrazoom on Fuji’s FinePix HS10? Until ILCs shoot low-light images that equal those of today’s DSLRs? Maybe not long at all.