Lytro, The Radical Camera Startup, Introduces Phase Two

Lytro unveils amazing new editing and sharing tools, but what’s even more clever is their product development strategy.

It’s been less than a year since Lytro started shipping its remarkable light field camera, an entirely new type of camera based on the Stanford Ph.D. thesis of founder Ren Ng. But today, the company announced a set of new software tools that will allow users to take greater advantage of the “living images” produced by the camera, which has been so popular the young startup has had problems meeting demand.

If you’ve never used a Lytro camera before, here’s the basic gist. The rectangular device contains a more developed version of a camera Ng built as a Ph.D. student, which used 100 small cameras to capture an image that could be focused after the fact. Snap a shot with the gadget, and the company’s desktop software lets you choose which part of the image to focus on. By capturing the entire “light field,” that is, all the light rays traveling through a scene, the camera produces a file that contains thousands of different versions of a single shot.
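The "focus later" trick described above is commonly implemented as a shift-and-add over the many sub-views stored in a light field file. The sketch below is a minimal illustration of that general idea, not Lytro's actual pipeline; the `(U, V, H, W)` array layout, the `refocus` function, and the `alpha` parameter are all assumptions made for the example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus by shift-and-add.

    Assumes light_field has shape (U, V, H, W): a U x V grid of
    sub-aperture views of the same scene. Each view is shifted in
    proportion to its offset from the central view (alpha selects the
    virtual focal plane), then all views are averaged.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))  # vertical shift for this view
            dv = int(round(alpha * (v - cv)))  # horizontal shift
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy 3x3 grid of 8x8 views; different alpha values "focus" the same
# capture at different depths.
lf = np.random.rand(3, 3, 8, 8)
near = refocus(lf, alpha=1.0)
far = refocus(lf, alpha=-1.0)
```

Because every view is already in the file, changing `alpha` after the fact re-renders the photo at a new focal plane, which is how a single capture can yield thousands of versions of one shot.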

One of the new filters for Lytro images, dubbed "Crayon." You can also shift perspective by clicking the image and waving your cursor around in a circle.

Today’s software release introduces a new set of tools for the existing camera, giving users greater scope for editing and sharing. Perspective Shift is a new feature that lets users turn their photos into holographic environments, where clicking the image lets you explore the different points of focus within a single shot. “It brings living pictures to life in an entirely new way,” Ng explains. You can embed an explorable version of your images on Facebook or Twitter, too. Meanwhile, Living Filters let users apply nine interactive effects to their images, like Film Noir and Crayon, which exploit the refocusing capability by layering effects over the focused portion of each image. The update will be free to existing users on December 4, while new Lytro owners will receive the software when they buy their camera.

The new features are great, but more interesting is what the announcement tells us about Lytro itself, which is essentially a software company masquerading as a camera company. Traditionally, a camera company is expected to turn out a new piece of hardware for every holiday season, giving consumers a reason to buy a new model. But Lytro operates on a fairly radical logic: The hardware—the light field camera—remains the same. What changes is the software—the tools users have available for editing, viewing, and sharing their photos.

“When we send the pictures through to the web or to mobile phones, the light field engine software travels with the pictures,” Ng said recently. “We use modern web technologies to build that in there, so when your friends and family see your picture on Facebook, they don’t need to install software to be able to have an interactive experience.” At their office in Mountain View, an in-house team of designers and developers works not on updating the camera but on better ways to unpack the remarkable package of data that lives inside every light field image.

This strategy also plays to Lytro’s strengths while working around its weaknesses as a startup. Bringing new hardware to market—be it cameras or phones—is an expensive proposition. Though Ng’s much-lauded thesis had investors lining up at his door after he graduated, and raising $60 million is nothing to sneeze at, launching a new version of the camera would be a challenge for a company that’s only a year old. As a user experience strategy, focusing on software is a deft move. Giving existing users a new way to use a camera they already own helps reengage early adopters, who are especially prone to drift away from new gadgets once they grow bored. It’s a two-pronged approach: excite existing users with free tools and help them push their images through social channels, and in the process hook new users who may not yet know of Lytro.

A perspective shift image. To see the effect, click on the image and wave your cursor around in a circle.

“The product was a device,” Ng said yesterday. “But just as much as a device, it was a new kind of picture, which threaded into social networks in a way that meant you didn’t have to install new software.” Sure, Ng is the creator of one of photography’s most radical innovations in recent history. But a less glamorous revolution—in software—is what’s driving his company. For more on the new software, plus pricing for Lytro’s cameras, check out their website.

9 Comments

  • hypnotoad72

    After looking at the bug in the green grass blades and "changing focus" with the mouse, the algorithm that augments a very small image's faux depth of field in real time is impressive. But image editing tools can already do these things, and the crosshatching effect surrounding the largest (foreground) bug's eyes when it is out of focus doesn't sell me on the notion that this is using a different method to capture data or detail. To say nothing of the largest blades of grass, with an even more pronounced crosshatching effect. It almost looks like JPG compression, but that can't be the case. It looks like a software-based edge-detection algorithm is working out where to select and is not always accurate.

    Being so limited in its final output (unsuitable for even something as small as a 4x6" print) isn't going to engender much in the way of sales either. And based on the jagged artifacting, it's easy to see why customers are stuck with very small images, unsuitable for prints (apart from what can fit on a business card).

    The perspective shift is interesting, but the visual artifacts from the faux depth of field give away the proverbial man behind the curtain as well. The algorithm merely, if cleverly, creates objects, and seeing this work in a letter-sized image would be impossible. Even in the upper left corner of the new perspective effect (never mind that the perspective is highly limited), look at the blade of grass while moving the mouse pointer. Note the tip and the lack of definition at its edge.

    It's all software trickery. The more you move the mouse, the more your eyes adjust, and you will see the jagged edges and trailing pale streaks for what they really are. It's a clever use of maths, but it's not capturing anything as exotic as the technobabble suggests.

    They're adding features but not image size, which is what is really wanted.

    With luck the next version will do better, but right now I am not convinced of this gadget's legitimacy beyond a parlor trick that will be had for 99 cents in some app store in a few years. A series of automated Photoshop (or similar) effects can already do the same thing by layering object layers, and, done by hand, the result can be far more accurate, even with the much larger images needed in print media. Even as it stands, a half-trained eye can see where the algorithm is making mistakes with the created objects.

  • hypnotoad72

    Lytro is interesting: the company has taken what looks like a cell-phone-quality digital sensor with an infinite focal point (generic low-end hardware) and added an engine applying a custom software-based depth-of-field filter, along with the new 'Crayon' and other filters, all of which I'd rather see as Photoshop plugins for my own equipment, which is hand-picked as a compromise of funds and quality. No sensor the size of a pinky finger can eke out as much real detail as a DSLR's. The camera's first-gen images looked soft and lacking in gamut; if this device is capturing every atom of light detail, then these base images should be far sharper than what people are getting.

    Never mind the ends of the spectrum that the human eye cannot see. The light field being captured must be really, really... big.

    I'd rather keep a DSLR with good lenses for good-quality, sharp images (not 2x telephoto adapters, as I learned the hard way). I'd also keep RAW format for access to the entire gamut and what the sensor actually shot, use an infinite focal point, and do my own Photoshop depth-of-field manipulation on my own terms. I dislike computers, but they can be tools. I'd rather do my own thinking as to how to use them, which is the basis of art: how one designs and uses the tools at hand. The more technology does for us, the less capable we become of thinking or doing for ourselves.

    I'd seen their website and read the materials, though that was admittedly a couple of years ago, and a lot of it came across as technobabble and hokum. To capture as much light as Lytro claims would take up rather a lot of disk space. To do a simple infinite-focus shot, apply software filters dynamically, and call it something grander seems simpler. And, again, without any depth of field applied, the whole image looks soft and too contrasty. If it's capturing light, where's the sharpness and detail, and why is it so limited? Real life isn't this soft-focus. In short, something's amiss with their claims. The fact that an Adobe Flash presentation let one tap on an area and have it focus only adds weight to the suspicion that it's all a series of software-based filters applied behind a low-quality sensor.

    I'll remain skeptical, for now.

  • Michael Gmirkin

    You seem to have not understood what's going on... Why is the image low-resolution? Because of how the internal optics are set up. You don't have a single lens focusing light onto the WHOLE imaging sensor. You have a main lens, and an array of micro-lenses sitting over the sensor. The internal geometry means that each image taken by the section of the sensor under one of the micro-lenses in the array is SLIGHTLY different from each of the other images under each of the other lenses in the array. By then comparing each of the slices against all of the others in real time, a "light field" is created, describing the characteristics and behavior of light in the scene.

    However, it is limited by the size and resolution of the imaging sensor used. If you have a 10MP sensor but it's divided up into 10 'slices,' effectively each of the 10 slices is only "1MP." Certainly you can compare those and get the light field captured from those images. But is one going to get a better-than-1MP image out the other end? Perhaps not. Anyway, it's an acknowledged issue, and future versions will use larger sensors with higher resolution, so the final light field out the other end will capture greater detail and allow export of higher-resolution "flat" images for printing.
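    The resolution trade-off described above is simple arithmetic: the sensor's pixels are shared between spatial samples (where light lands) and angular samples (which direction it came from). A rough sketch of that arithmetic; the 10x10 directions-per-microlens figure is a made-up illustration, not Lytro's actual spec:

```python
# Back-of-the-envelope spatial resolution of a plenoptic camera:
# sensor pixels are split between "where" (spatial) and "which
# direction" (angular) samples, so the exported flat image is much
# smaller than the raw pixel count suggests.
sensor_megapixels = 11        # Lytro's claimed "11 million light rays"
angular_samples = 10 * 10     # hypothetical 10x10 directions per microlens
spatial_megapixels = sensor_megapixels / angular_samples
print(spatial_megapixels)     # 0.11, i.e. roughly VGA-class output
```

    Numbers in that ballpark would square with the sub-1MP exports reviewers reported; a denser sensor raises both sides of the trade-off.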

    To answer your other question on a locked post it won't let me respond to: Why not include a larger lens? It was undoubtedly a specific design decision. This first model was designed to be a consumer-level camera. Not prosumer, not professional grade. Good enough to take reasonable images for sharing online, shaking the bugs out of the system, developing the tools necessary for future releases, and generating social interactions and word of mouth. As far as the design decision goes, I'm quite certain that it was about form factor; that is, making the unit small enough to easily slip into a pocket and take with you wherever you go. For my part, I'm glad of it, as I can carry it around in my relatively size-limited coat pocket and have it available at a moment's notice when the mood strikes me. Had the lens been larger, the unit would have increased in size and been nowhere near as portable. I think it was a good decision at the time. It got the product to market as a proof of concept and into the hands of more-than-willing users... who are now enjoying it, some of them pushing the envelope of what can be done with the technology.

    That said, I'll be glad when they move to make their next generation of products with higher resolution, a larger LCD screen with better off-angle and daylight performance, a different form factor, a larger lens, etc., etc. Until then, I'm more than happy with the product. But it's definitely NOT for everyone. It's not prosumer or professional grade and was never claimed to be such. It's an interesting "bleeding edge" toy with considerable promise for the future.

    And your claim that it's simply a "camera trick" is little more than a bald assertion on your part. Take, for instance, the recent perspective shift feature and the grasshoppers:

    https://pictures.lytro.com/lyt...

    If you took a "flat" image, where the far grasshopper's butt was obscured by the leaf in front of it, you would not, in Photoshop, be able to perspective-shift the image to add image information that was never captured by the sensor at the time the shot was taken. Certainly, you could take several images from DIFFERENT perspectives, stitch them together in some fashion, and create a Flash applet to simulate the Lytro effect. But it's not the same thing. For instance, you could not do that with a moving target, like a bird that is in several kinds of motion in a scene: not merely moving, but flapping its wings. That is, one could not take a single image, then move to a different perspective and take a second image, since the moving target would clearly have continued to move between the first image and the new perspective image. Yet it is clear that people have taken images of "things in motion" that can still be perspective-shifted...

    https://pictures.lytro.com/lyt...

    One could argue, speciously, that a professional photographer could take several images, including a "background shot" with the camera at the same location, and then merge the images. But one must recall that there is no "background shot" option in Lytro cameras. You click the shutter once, and a single file is generated. That file can be refocused (if the scene geometry is right) and perspective-shifted. No additional Photoshop trickery required or used.

    Take for instance this image:

    https://pictures.lytro.com/lyt...

    Showing colored water droplets falling through space. In perspective-shifting it, one clearly gets the sense of 3D space: the background moves less than the foreground, the black-looking mass at the bottom changes perspective, etc. It's quite clear to me that this image does indeed do as the Lytro folks claim and contains light information in 3D space, not merely a "photoshopped" "flat" image.

    Yes, I'll agree there are issues to be worked out, both physically (higher sensor resolution, better ergonomics) and algorithmically (either how the light field is captured or how it's played back), to increase performance and reduce artifacting. But having used the technology, I'm sufficiently convinced it does exactly what it claims to do, current technological limitations on resolution aside...
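    One way to see why a flat photo cannot be perspective-shifted after the fact while a light field can: if the capture is stored as a grid of sub-aperture views, moving the virtual viewpoint is just selecting a different slice, each of which holds parallax information a single exposure never records. A minimal sketch under that assumed (U, V, H, W) layout; the `shift_view` name is illustrative, not Lytro's API:

```python
import numpy as np

def shift_view(light_field, u, v):
    """Pick the sub-aperture view at grid position (u, v).

    Assumes light_field has shape (U, V, H, W). Each (u, v) slice is
    the scene as seen from a slightly different position on the main
    lens, so stepping through slices reproduces the small parallax
    sweep of a Perspective Shift.
    """
    return light_field[u, v]

lf = np.random.rand(3, 3, 8, 8)
left = shift_view(lf, 1, 0)    # viewpoint at the left of the aperture
right = shift_view(lf, 1, 2)   # viewpoint at the right
# Between left and right, foreground objects displace more than
# background objects: the 3D cue visible in the droplet example.
```

    And because all the views are recorded in a single exposure, moving subjects pose no stitching problem, which is the point made above about the bird in motion.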

  • Alexander

    The light field technology is incredibly exciting. The Lytro implementation, though, is a disappointment. I think this would be much more exciting in a more "professional" environment. RAW-format images are great: you don't have to care about white balance and such, so you can focus less on the tech and more on the photo. With this, photographers could give their full attention to framing. Focus, white balance and, to some extent, exposure could all be done in post-processing. This would also reduce the cost and weight of lenses.

    Focus is the job of photographers, not something for amateurs to fool around with. 3D is, and will always remain, a gimmick.

    Nikon or Canon, please make use of this technology ASAP! Imagine a high resolution, professional, light field camera. GOD PLEASE!

  • hypnotoad72

    RAW images are what the camera's sensor takes. When loading one into a tool (e.g. Photoshop), one can manipulate white balance, levels, saturation, and a slew of other factors, even eking out better contrast ratios than what the camera initially took.

    Larger lenses capture more light to be processed by the sensor...  if the Lytro works on capturing light rays, why such a small lens?

    Given this device claims to capture "11 million light rays", whatever that means: a line one inch long and one atom thick would comprise some 127 million atoms, and it's never really said how thick a "light ray" is. I'd expect a light ray to be very thin, and given that we can use big microscopes to see at the atomic level, I'd say the size of a light ray is very small indeed.

    And if it's capturing light rays, why is there no infrared, ultraviolet, or other conversion setting? Or capture along different planes (parallax) between the photographer and subject? This gadget would surely be far more effective in fields other than consumer-grade photography if it can do what is being promised. We don't see light itself; we see what light reflects off of. Since there's only air between us and the object we're photographing, why can't this thing show us the delightful smog particles in front of our noses? Well, if it can't show a building in sharp clarity, don't expect much else, either...

    What the camera does is capture the light that's reflected off an object. You know, what any camera already does. And with detail the competition already renders much better. I just don't see the grand innovation, especially as the existing competition does better and there's nothing this thing can do, based on the woolly claims, that would be a boon to medical or other sciences. Yet. It's been a couple of years since it came out, so who knows...

    Automated filters and processes exist, but it is the human touch and intuition that a computer will never replace. At least in terms of real art.

    As a professional, I prefer technology to a certain point - but I much prefer to think out the depth of field, f-stop, exposure time, etc.  And I do stifle a chortle when, for example, I'm capturing fireworks and someone is using their 2MP, fully-automatic point'n'shoot junk toy and can't be bothered to turn off the flash.

    Good photographers DO think about the 'tech' and use it to make the most compelling image. And good photographers also know the lens is the most important factor. I don't think highly of the Lytro at all; there's little, apart from the song and dance, that makes it innovative or revolutionary. I've only seen the data it captures (images).

    I've not seen a unit up close, but if the lens is made out of plastic, that should be the ultimate red flag. Plastic is cheap and distorts more quickly... oh, and it discolors over time as well, rendering the sensor or film behind it worthless... but a simple tap on the lens can reveal whether it's made of plastic or glass...

    And as this device has been out for a while, isn't it interesting that such a small start-up hasn't been bought out by one of the big photographic giants (Nikon, Canon, Minolta (Sony)) yet? Why not? If it's heralded as a revolutionary concept and I owned a big camera company and saw how grand it was, I would snap it up in a heartbeat and really invest in it. Yet it's been a while and nobody has...

    And professional review sites have also criticized image quality as being soft or poor.  That's a red flag. 

    Even more revealing is that this gadget's software has you put images into "stories," not folders. It's just coy marketing for a toy.

    With everything I've read so far, it still comes across as a gimmicky camera using different terminology. Other cameras of this size produce similarly soft images but don't have the software-based algorithm to do the focusing trick. But it's all been done in Photoshop, and other camera makers have similar plug-ins to detect and fiddle with faces, so why is the Lytro any different? Apart from a claim of "11 million light rays"?

    Lastly, it's been said that "light field technology is better than HD today." HD video is 1920 x 1080 pixels. That's not much in the way of resolution: think 2MP. Film cameras can do 20MP or more... Digital sensors have come a long way since their introduction as well, both in terms of pixel count and the factors that matter more (shadow detail, color gamut, etc., which film often still does better at...). Indeed, one professional review site stated the "finished photo" was under 1MP in size. That's half the resolution of HD, and definitely not sufficient quality for 4x6" prints. I think they need to capture 11 billion light rays, at the very least, to even start to be seriously competitive... especially since the aforementioned Photoshop filters start to struggle at higher resolutions, due to the sheer amount of data to be processed. Been there, done that as well. Go into Photoshop, take an 8.5x11", 300DPI image and zoom in 1000%. It's a lot more difficult to accurately detect edges at that level of detail than in a 640x480 (roughly 2x1.5" at 300DPI) image, which works out to only about 0.3MP.

    Which means, if it is a serious device, it has a LONG way to go, and for the price offered I would stay far away from it.

  • Uptown Haberdasher

    They'd do best to focus on price and hardware, next. They seem to have omitted an external memory card slot so they can charge you >$100 for $30 of storage, and the camera itself is very "hands-off" -- no manual control. I'd certainly pull the trigger if I were rich, but for the moment, I'm waiting, cool as this is.

  • Uptown Haberdasher

    Yes, it's a step in the right direction, but from what I have read in reviews, it's very cumbersome.

    I guess what I am looking for is for them to make a "pro" version with more physical controls plus an SD card slot, and for the price to come down a bit. The current version is more like an iPod (premium price; does a few things well and little else).

  • hypnotoad72

    It's a cool gimmick, but I'd rather see their filter mechanisms sold as Photoshop plugins. I've been a photographer for years (film, and migrated to digital about six years ago), and there is not much that's compelling about this gizmo. Plenty of Photoshop magazines and tutorials can walk you through the same depth-of-field effects, even if they take longer. The same magazines show how to do the same masking and lasso/selection effects this software does. Third-party plugins can also automatically select disparate items and make separate objects out of them.

    Then again, I've seen "professional" publications put out full-sized images taken with what must be low-end 3MP cameras, complete with horrible purple fringing. If nobody strives for quality anymore, and maybe there's no money left to make in the field, then perhaps all of my impassioned nitpicking is for naught.