Photographers and computer graphics geeks alike have been looking forward to the release of the Lytro Light Field Camera early this year. I pre-ordered mine last year and am hoping to get it in March if everything goes according to plan. The light field camera (also sometimes referred to as a plenoptic camera) has the potential to open up a new generation of techniques for capturing, viewing and exploring images.
Tracing the direction and intensity of light in an artificial, computer-generated scene has been a well-researched topic over the years, involving techniques such as radiosity and ray tracing. These techniques have driven advances in realistic computer graphics that are now routine in video games and movies. In essence, the light field camera brings these computer graphics techniques into the real world, “reverse engineering” a captured scene by recording both the direction and intensity of the incoming light across the entire sensor array.
From my initial read of Lytro CEO Ren Ng’s Ph.D. thesis, the new technology creates a 4D mapping of the scene as a set of “ray pixels” holding the light intensity and incoming direction vector at each sensor location. By processing the sensor data in software, it is then possible to change how the scene appears in a 2D image. The most frequently shown examples of interacting with a scene involve changing the focus point and creating 3D-style effects. This technology represents a fascinating fusion of photography and computer graphics, and I am sure that I will have much more to write about it in the future.
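To make the refocusing idea concrete, here is a minimal Python sketch of the “shift-and-add” approach described in Ng’s thesis: each sub-aperture image (the 2D view of the scene from one point on the lens aperture) is shifted in proportion to its offset from the aperture centre, and the shifted images are averaged. The array layout, function name, and parameters are my own assumptions for illustration, not Lytro’s actual format or API.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    # light_field: 4D array indexed [u, v, s, t], where (u, v) is the
    # sample position on the lens aperture and (s, t) the sensor pixel.
    # alpha: ratio of the virtual focal depth to the captured one;
    # alpha = 1.0 reproduces the camera's original focus.
    n_u, n_v, n_s, n_t = light_field.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((n_s, n_t))
    for u in range(n_u):
        for v in range(n_v):
            # Translate each sub-aperture image in proportion to its
            # offset from the aperture centre, then average them all.
            dx = (1.0 - 1.0 / alpha) * (u - cu)
            dy = (1.0 - 1.0 / alpha) * (v - cv)
            out += shift(light_field[u, v], (dx, dy), order=1)
    return out / (n_u * n_v)
```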
For now though, I am interested in the consumer perspective on this new technology. Lytro seems to be focussing (hah!) on a fairly broad range of customers at this point, with a simple-to-use device that can be enjoyed by complete novices. The device appears to have relatively low resolution compared to DSLRs and even consumer compact cameras. That makes complete sense for a startup: once they get the first version out the door and the revenues start flowing in, they can work on tighter sensor array densities and higher image resolutions.
From a photographer’s perspective the Lytro camera offers a huge advantage: not having to attend too closely to finding the best focussing distance for the shot. In addition, the camera need not spend time autofocusing before the shot, a major cause of the slowness of consumer cameras with their slow contrast-detect focussing technique. From the examples shown on their website, the device appears able to capture a fairly large, though not infinite, depth of field in a single shot. This has the potential to be a boon for photographers who require a fast-reacting and highly mobile camera; street photographers come to mind immediately. It could also be useful for other types of photography where a large depth of field is desirable, such as landscape and architectural work. I think the usefulness there would be more limited, however, as tripods and smaller apertures with longer exposures can already provide a fairly large depth of field, and it is pretty easy to simulate out-of-focus areas in an in-focus image using Aperture or Photoshop.
After the scene is captured, the interesting question becomes how to let the viewer see and interact with those images in ways that are cool and useful. “Cool” and “useful” aren’t always the same thing when it comes to new technology. The Lytro site has several Flash examples that allow the user to click on a region of an image and have the focus shift to that part of the image. It isn’t completely clear to an outsider what is happening under the hood, but the interaction appears to tell the software to choose a single focal plane from the scene and render the image with that plane in focus. My guess would be that it uses a contrast-detect algorithm to find the plane of maximum sharpness within the 2D region of the touch, and then applies that focal plane to the entire image.
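If my guess is right, the click-to-focus interaction might look something like the sketch below: sweep a range of refocus settings, score the sharpness of the clicked region at each one, and render the whole image with the winner. The variance of the Laplacian is a standard contrast-detect sharpness measure; the function builds on the hypothetical refocus() sketch above and is not necessarily what Lytro’s software actually does.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_on_region(light_field, x0, y0, half=20):
    # Try a range of refocus settings and keep the one that makes the
    # clicked (x0, y0) neighbourhood sharpest.
    best_alpha, best_score = 1.0, -np.inf
    for alpha in np.linspace(0.7, 1.3, 31):
        img = refocus(light_field, alpha)  # sketch from earlier
        patch = img[y0 - half:y0 + half, x0 - half:x0 + half]
        score = laplace(patch).var()  # high variance => sharp edges
        if score > best_score:
            best_alpha, best_score = alpha, score
    return refocus(light_field, best_alpha), best_alpha
```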
This is really cool to watch, but is it actually useful? Lytro is making a big effort to make it easy for people to view and share their images online, pointing out that the Facebook community is projected to publish 100 billion (thousand million) photos online in 2011. Clearly online is “where the eyeballs are,” so it makes sense to go after that market aggressively. But will the average Facebook member, faced with dozens of photos in their stream every day, really want to click and explore individual photos to get the most out of them? Pushing focus presentation choices out to viewers empowers them, but with more and more photos to view and attention spans even shorter online than in the real world, there is a potential conflict. Once the novelty wears off, there needs to be a value proposition that makes the added complexity of the viewing process worthwhile for both the producer and the consumer of the scenes.
In my opinion this is the truly critical area for Lytro to address, and the richest opportunity for creating a paradigm-shifting sharing and viewing experience. My guess is that the best value propositions will arise not from letting viewers interact directly with the field data to shift focus around, but from intermediate software layers that allow creatives and producers to build semi-guided presentations of the scene. For example, an interactive ad creator could use animated focus shifts to subtly guide the viewer through a sequence of impressions within a single image. The need I imagine for guided tours through the 4D light field dataset is somewhat analogous to what we saw in the graphics and visualization community in the 1990s and 2000s: users given free rein to navigate unconstrained through virtual 3D spaces could quickly become lost or fail to observe the critical features. A variety of constrained navigation techniques were introduced to help novice and expert viewers alike move through those 3D worlds, and I expect a similar set of techniques to arise for 4D light field data.
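As a purely illustrative example of such a guided presentation, a scripted “focus pull” could let the producer set a few keyframes and have software interpolate between them, again building on the hypothetical refocus() sketch above. Something like focus_tour(lf, [(0.0, 0.9), (2.0, 1.1)]) would glide the focal plane from a near subject to a far one over two seconds.

```python
import numpy as np

def focus_tour(light_field, keyframes, fps=30):
    # keyframes: producer-chosen (time_in_seconds, alpha) pairs.
    # Alpha values between keyframes are linearly interpolated so the
    # focal plane glides from one subject to the next.
    times = [t for t, _ in keyframes]
    alphas = [a for _, a in keyframes]
    frames = []
    for t in np.arange(times[0], times[-1], 1.0 / fps):
        alpha = np.interp(t, times, alphas)
        frames.append(refocus(light_field, alpha))  # sketch from earlier
    return frames
```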
It remains to be seen whether Lytro’s light field camera will be just a cool gadget or something more enduring in the market. My guess leans towards the latter, but there are substantial software and marketing hurdles to get over before that happens. My hope is that Lytro will open up a developer SDK for their file formats, letting them tap into a rich community of developers across platforms. They have indicated some tentative support for the idea, but nothing is on the table yet. And of course, when these light field principles are applied to video, a whole new set of opportunities opens up, along with some formidable engineering challenges. I’ll write more about that in a future post.
One thing is for sure: this is going to be an interesting few years as the technology enters the market, and I will be following it closely. Please select one of my feeds from the “Stay in Touch” area at the top of this page for more updates and examples once I get my own copy of the camera to experiment with.