Incredible new smartphone tech to crush DSLRs?

by Hugh Brownstone

Suspend your disbelief for a moment if you can: 

Light has the right technology, the right partners, the right money (they just raised $25MM) and the crazy-ass ambition to knock the Canon 5DS, with its 50-megapixel resolution, off its pedestal with a freaking smartphone!

They expect to announce product in 2015 and ship in 2016. Three potential flies in the ointment: 1) for the moment, it’s stills only; 2) Apple is clearly at work on the same thing and will either collaborate with them – or compete; and 3) they have to execute.

Canon isn’t the only one in trouble, just the most obvious: if things go the way co-founders Dave Grannan (CEO) and Rajiv Laroia (CTO) expect, the shelf life of something like Canon’s just-announced Video Creators Kit is likely to be even shorter than Canon expects.


Because after speaking with Dave and Rajiv, I’m convinced that they are moving much faster than I predicted in Apple’s iPhone: The Next Video Revolution, and there’s an excellent chance that the next generation of smartphones with Light technology really will out-perform Rebel-class cameras and their competitors for still images immediately, and video soon enough thereafter.

It seems to me that Light is riding a wave that is about to crest, a wave that has been building as far back as Kodak’s EasyShare (or earlier) and as recently as Moondog Labs’ anamorphic lens adapter for the iPhone and Schneider Kreuznach’s iPro auxiliary lens kit for the iPhone.

The Ambition

Rajiv is perfectly clear and very specific: their target is image quality on par in every way with Canon DSLRs: resolution, dynamic range, low-light sensitivity, noise…and shallow depth of field.

In a smartphone.


The What and How

It can’t be done without screwing up the form factor, right? Tiny photosites can’t possibly gather enough light, right? Have to have a gigundo aperture given such a tiny sensor to get shallow depth of field, right? And even if you automagically somehow manage to do all of this, you lose any possible image personality, because flaws in lenses like a ’61 Angenieux or an anamorphic (that can’t help but introduce flare) are what make them truly special, right?

Not necessarily.

Especially when many of the things Dave and Rajiv speak about have been done before — in one form or another.

Oh – and one more thing: you have to be willing to alter the balance between physics and software to achieve the goal.

Light Phone Concept
Photo Credit: Light

Folded Optics

Dave, Rajiv and the Light team first rely on bending light rays 90°. This allows thin cameras (including smartphones) to stay thin by running the optics along the width of the camera rather than its depth. While the approach is drawing fresh attention (see, for example, this recent Apple patent), it is not new: Kodak implemented folded optics technology beginning with its EasyShare V570 in 2006.

And of course periscopes date back to the 15th century.

Multiple Lens or Lens/Sensor Combinations

As I understand it, Light’s first commercial implementation will consist of five instances each of two prime lenses (35mm and 70mm, in full-frame-equivalent terms) plus a single instance of a 150mm prime. Each lens will be coupled to its own sensor to create distinct modules.

Competitor LinX (acquired by Apple) apparently uses multiple lenses as well, but more than that I do not yet know.

The Light approach seems to be unique – and difficult – but elements of it (again) are not new: when Kodak introduced the EasyShare V570, it also employed multiple lenses, one wide-angle and one 3x optical zoom.

Yes, OK: two is not 11. But when you think about it, isn’t genius the art of seeing or combining the same things in new ways? And in this new way, computational power is king.

CTO and co-founder Rajiv Laroia + CEO and co-founder Dave Grannan
Photo Credit: Light

Image Stitching

Light combines images from these lens/sensor combinations (and multiple images from each pair, offset slightly – again, as I understand it) to stitch together very high resolution photographs. A happy by-product of this process is that their technology captures much more light than a single lens/sensor combination with the same photosite specifications.

The computational sophistication and power to do this well strike me as the heart of Light’s potential competitive advantage (well, there is one other – hold that thought), and it isn’t easy. Rajiv confirmed that the processing required is so demanding that the general-purpose chip in the first-generation offering will allow still-image capture only.

Apple was recently granted a patent for sophisticated image stitching at a level of precision I’ve not seen before, using image-stabilization hardware to generate micro-shifts on a single sensor across multiple exposures. But anyone with a smartphone or point-and-shoot should already be familiar with basic stitching – it’s the concept behind the panorama modes on cameras and smartphones today.
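The noise benefit Light describes falls out of basic statistics: averaging N independent captures of the same scene cuts noise by roughly the square root of N. A minimal simulation (my own sketch, not Light’s pipeline; the module count, noise level, and naive averaging merge are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N lens/sensor modules all imaging the same flat
# gray patch. N, the patch value, and the noise level are illustrative.
N = 10
scene = 0.5
sigma = 0.05  # per-capture sensor noise (standard deviation)

# Each module records the scene plus its own independent noise.
captures = scene + rng.normal(0.0, sigma, size=(N, 256, 256))

single = captures[0]            # one module on its own
merged = captures.mean(axis=0)  # naive merge: average aligned captures

# Averaging N independent captures cuts noise by roughly sqrt(N).
print(f"single-capture noise: {single.std():.4f}")
print(f"merged noise:         {merged.std():.4f}")
print(f"sqrt(N):              {np.sqrt(N):.2f}")
```

With ten modules, the merged frame’s noise drops by a factor of about √10 ≈ 3.2, which is why a cluster of small sensors can start to behave like one much larger one. Light’s actual merge is far more sophisticated, since its captures come from physically offset lenses rather than perfectly aligned frames.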

Software Simulation

If you can’t get shallow depth of field by hanging a huge lens on a smartphone, can you fake it?

Or, more precisely, can you simulate it?

Rajiv says you can.

The Focalyz app is an early example of just such a simulation, but from what Rajiv says, I gather the Light engine is dramatically more powerful and sophisticated, closer conceptually, perhaps, to the Lytro.

Shallow depth of field is perhaps the Achilles’ heel of smartphone imaging, so I challenge Rajiv to whatever we might call the bokeh equivalent of the famous Turing test: if we were to show a professional photographer two images, one taken with their technology and one taken with a Canon 5DS and an 85mm f/1.4 wide open, would he or she be able to tell which was which?

Rajiv says, “Well, bokeh is typically circular, a function of the diaphragm. We can certainly simulate that; we can simulate anything we want. But we can do even better, because we don’t have to rely on those physics: we can create Gaussian blur.”

Which will be fantastic, an acquired taste, or anathema to most professionals.

I can’t wait to find out.
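As a rough illustration of what such a simulation involves, here is a toy depth-based blur (my own sketch, not Light’s engine; the synthetic image, the two-level depth map, and the blur radius are all invented for illustration). It blurs a textured background while leaving the in-focus subject untouched:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy image: bright in-focus subject on a textured background.
rng = np.random.default_rng(1)
img = rng.uniform(0.2, 0.4, size=(128, 128))   # background texture
img[48:80, 48:80] = 0.9                        # in-focus subject

# Toy depth map: 0 = focal plane (subject), 1 = far background.
depth = np.ones_like(img)
depth[48:80, 48:80] = 0.0

max_sigma = 6.0  # illustrative maximum blur radius, in pixels

# Two-plane composite: blur the whole frame, then keep the original
# pixels wherever the depth map says "in focus".
blurred = gaussian_filter(img, sigma=max_sigma)
out = np.where(depth > 0.5, blurred, img)
```

A real engine would blend many depth slices rather than two, and could shape the kernel like a lens diaphragm to mimic circular bokeh – or, as Rajiv notes, use a pure Gaussian fall-off instead of imitating any physical aperture.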


Partners (the other competitive advantage)

We already knew that one of Light’s partners is Foxconn, one of Apple’s primary manufacturing partners in China. What we didn’t understand before speaking with Dave and Rajiv is the breadth of what that could really mean:

1) Foxconn can buttress their own smartphone (InFocus) for the Chinese market, with potentially far-reaching impacts (read: share erosion) for everyone including Apple and Samsung;
2) Foxconn manufactures Android phones for other companies too, and can create a feature set for those companies to break out of what Dave calls “the race to the bottom,” the vicious spiral in which commodity phone manufacturers can only compete on price (unless they all adopt it); or
3) Foxconn can license Light technology to any of its manufacturing clients, including Apple (and who knows what kind of exclusive period they might demand in return, or simply absorb all manufacturing capacity).

Light also has an unnamed Asian partner to manufacture plastic lenses.

What we didn’t know before speaking with the guys is that their optics design partner is none other than Moondog Labs. Since we just interviewed Moondog Labs co-founders Scott Cahall and Julie Gerstenberger (who gave away not an inkling of their relationship with Light), we know that in Scott and Julie they have partners who get optics, smartphones and the purposeful introduction of flaws into lens design specifically to create an organic, cinematic feel.

Light’s investors are also their partners. Foxconn itself invested in this round, which was led by Formation8. The most interesting portfolio company of theirs given this space? Oculus, the virtual reality company. Watch what happens here. It’s also worth noting that Light raised $9.7 million in an A round just a year ago, with Bessemer Venture Partners as the lead – a very, very interesting VC group whose other investments include companies like Shopify, Gartner, Pinterest, LinkedIn, and Skype.


A $25 million B round is an outstanding milestone, and Light is already recruiting more engineering talent. But $25 million is also rounding error for Apple.

I wouldn’t underestimate either company.

“When we formed,” Dave says, “Light was mostly a bunch of concepts in Rajiv’s head.” Now they’ll use the funding to move from concept and prototyping to shipping product.


While they used Aptina sensors for proof of concept, Dave says, “Our partners will of course use whatever sensors they want.”

Hmm…Sony RGBW sensor, anyone?

What about Video?

As we wrote above, video requires a LOT more horsepower, and multi-module video will NOT be part of the first product.

“4K, 120fps…these require – and we will be doing – our own chip for those algorithms,” Rajiv says.

We Live in Interesting Times

All of which is to say: exciting times and frightening times – for many parties.

“We’re leveraging disruptive economics of the last five or six years,” Dave says. “It’s amazing how good smartphones have become.”

Even so, Rajiv adds, “You have to remember that this technology is still in its infancy. Imagine what we can do in 20 years, or even five. There are no limits to the technology, but size matters when you talk about going to longer focal lengths.”

“Our aspiration,” Dave tells me as our time together winds down, “is to build products that wouldn’t fit in a smartphone – a tablet has enough space to do more. We want to get to 600mm and a range of tablet products.”

Note to Self: Calm Down

In all of this, it’s good to remember that there’s a great distance between concept and successfully shipping product, between B round and profitability.

Still, we’re excited for Dave, Rajiv and the broader Light constellation; congratulate them on their raise; wish them great success; and will continue to follow them closely.

Note to planet5D Readers: It's OK.

Of course this doesn't spell the end of DSLR or mirrorless hybrids, nor the swan song of the big dogs. Pros need pro features, especially around workflow and protection against the elements, and if you already have great gear, no need to throw it out.

Further, it will take time for these technologies to mature, while we're shooting projects now.

And at the end of the day, we're still talking about a phone.

What efforts like this will do is accelerate the fundamental tilt toward software-inclusive rather than hardware-only solutions (think of the lens-correction software already built into camera bodies), and hasten the demise of point-and-shoots and the lower end of consumer ILCs. Just as with bloated software programs whose features most people never use, casual photographers, dedicated vloggers, and small business owners who want to make their own videos will soon be better served — and served less expensively — by their smartphones with good mics than by even the least expensive ILCs.

Finally, more than ever, great filmmaking will be about story and skill.

Hugh is the author of Apple's iPhone: The Next Video Revolution. Follow him on Twitter (@hughbrownstone) or write to him at [email protected]

(cover photo credit: snap from Light)


  1. Sounds more like a replacement for GoPro than anything else.  Interchangeable lens systems?  I don’t think so.

  2. Given the smartphone’s near-ubiquity, it’s easy to imagine smartphones with advanced optical systems replacing inexpensive cameras — especially among people who don’t take photography seriously.
    And the idea of a pocketable camera that can do just about anything an SLR can is seductive. But SLRs exist for good reasons, and those reasons (so obvious they need not be stated) are why, over the past 50 years, the SLR has obsoleted every other “serious” camera design (except the view camera, which shares many of the SLR’s “good reasons”). Mirror or not, pentaprism or not, the SLR isn’t going to go away.
    The history of invention is littered with products that never found traction with users. The Lytro is a good example of a solution looking for a problem. (It’s roughly the photographic equivalent of Ambisonic sound recording.)
    “Light combines images from these lens/sensor combinations (and multiple images from each pair, offset slightly – again, as I understand it) to stitch together very high resolution photographs. A happy by-product of this process is that their technology captures much more light than a single lens/sensor combination with the same photosite specifications.”
    True, but if the stitched photosites don’t overlap (as they must not, if resolution is to increase), then there is no noise reduction. *
    “There are no limits to this technology.” Really? There are limits to all technologies, set by the laws of physics.
    * I’ll probably wind up being wrong about this. Consider the use of a few small radio telescopes spread over a large area.

  3. There will always be technical advancements.  I give these guys credit for going down the phone path.  The thing is there is a reason why Canon and Zeiss lenses weigh so much.  And for video, anyone who has done it knows that the killer is sound once you get beyond shooting the car, cat, or girlfriend.  For 4K you run right into storage issues because it eats space like crazy.  All of these things get right back to a DSLR body or a camcorder that is far more comfortable and better to use than a phone.

  4. GeorgeSealy Good points, George.  I think the true take-aways are 1) the changing balance between software and hardware, and 2) the shift from single lens/sensor modules to multiple lens/sensor modules.  Frankly, I like the form factor of the ’70s-era Braun Nizo S56 I just picked up on eBay for $150 more than the rest of them – but Super-8 and no sound, well…

    But I AM going to pick up some Super-8 film and see what it looks like for myself!
