
More Resolution without More Pixels? Latest Apple Patent Does Just That

by Hugh Brownstone

You want to know why we keep writing about Apple? Because it's using software to push the boundaries of physical optical performance faster and harder than anyone else. Latest proof point: the newest Apple patent.

OK: the latest Apple patent is irrelevant to video. It says so, right in the patent.

But using an optical image stabilization processor to offset the sensor by a sub-pixel distance while in burst mode may do incredible things for still photographers.

Fascinating, isn't it? No additional pixels on the sensor (and thus no further degradation of what is already a highly limited low-light capability in the iPhone's case), yet software effectively captures more pixels on the output side nonetheless.

This is a level of applied creativity that dedicated camera manufacturers must embrace and emulate – either through their own R&D or through licensing. It's already easier to do slow motion, time lapse and hyperlapse on an iPhone than on any professional camera – and that fact (along with others like it) will only be tolerated for so long.


Apple camera patent would allow high-resolution photos without sacrificing image quality

Apple patent

 

Via 9TO5MAC:

A clever patent granted today could allow future iPhones to have the best of both worlds, allowing higher-resolution photos without squeezing more pixels into the sensor…

The secret is effectively to use burst-mode to shoot a series of photos, using an optical image stabilization system – like that built into the iPhone 6 Plus – to shift each photo slightly. Combine those images, and you have a single, very high-resolution photo with none of the usual quality degradation. Or, in patent language:

"A system and method for creating a super-resolution image using an image capturing device. In one embodiment, an electronic image sensor captures a reference optical sample through an optical path. Thereafter, an optical image stabilization (OIS) processor adjusts the optical path to the electronic image sensor by a known amount. A second optical sample is then captured along the adjusted optical path, such that the second optical sample is offset from the first optical sample by no more than a sub-pixel offset. The OIS processor may reiterate this process to capture a plurality of optical samples at a plurality of offsets. The optical samples may be combined to create a super-resolution image."
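How might those offset samples combine into one photo? Purely as illustration – this is a minimal numpy sketch of the idea in the patent language above, not Apple's implementation – assume four grayscale burst frames captured at known half-pixel offsets:

```python
import numpy as np

def combine_shifted_frames(frames, offsets):
    """Bin burst frames, each captured at a known sub-pixel offset,
    onto an output grid with twice the linear resolution.

    frames  -- list of HxW grayscale arrays
    offsets -- list of (dy, dx) shifts in input-pixel units
    """
    h, w = frames[0].shape
    acc = np.zeros((2 * h, 2 * w))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # An input pixel at (r, c) sampled the scene at (r + dy, c + dx),
        # which lands at twice that coordinate on the 2x output grid.
        rows = np.clip(np.round(2 * (np.arange(h) + dy)).astype(int), 0, 2 * h - 1)
        cols = np.clip(np.round(2 * (np.arange(w) + dx)).astype(int), 0, 2 * w - 1)
        acc[np.ix_(rows, cols)] += frame
        hits[np.ix_(rows, cols)] += 1
    hits[hits == 0] = 1  # leave unsampled sites at zero
    return acc / hits

# Four half-pixel offsets fully populate the doubled grid:
# combine_shifted_frames(frames, [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)])
```

Each site on the doubled output grid receives its own physical sample, which is why the result is genuinely higher resolution rather than mere upscaling.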

Read the full article at 9TO5MAC: "Apple camera patent would allow high-resolution photos without sacrificing image quality"

 

Note: it is our policy to give credit, as well as deserved traffic, to our news sources, so we don't repost the entire article. Sorry – I know you want the juicy bits, but I feel it is only fair that their site gets the traffic. Besides, you just might make a new friend and find an advertiser that has something you've never seen before.

(cover photo credit: snap from 9TO5MAC)



Comments

  1. Alan1250

    My stabilization system has been my tripod, and I’ve been “offsetting” it with my toe in between captures to get super resolution frames for several years.  Didn’t think kicking a tripod would be patentable, but guess I was wrong.

  2. William Sommerwerck

    Alan1250  I assume you’re joking, as you would have to do some moderately complex calculations to figure how much to shift the camera.
    Even if it did work (which I doubt), you’d have the problems of image movement or changes in illumination between shots. “Tickling” the sensor doesn’t eliminate these, either, but the shots are taken in quick succession, thus minimizing them.

  3. Alan1250

    No actually I was quite serious.  All that is required to do super resolution is to be sure that images are not pixel coincident.  With all the pixels in an image, advanced image processing can compute the amounts of shifting, scaling and rotation to precisely align the images to subpixel values.  After that it has many input pixels in the neighborhood of each output pixel position.  Those are used mathematically to produce the output pixel value(s).  It's all just a matter of math and software.  I'm currently using PhotoAcute to put the images together and double the resolution.  My 18 Mpix camera turns into a 60-70 Mpix one.  The scene just has to hold still while I click (and tap) my way through 5-10 HDR triplets.
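PhotoAcute's internals aren't public, but the alignment step Alan describes – recovering each frame's displacement from the image content itself – can be sketched with scikit-image's phase correlation. The workflow below is my illustration (translation only; Alan also mentions scaling and rotation, which need a fuller registration model):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_offsets(frames, upsample=20):
    """Estimate each frame's (dy, dx) displacement from the first
    frame to 1/upsample-pixel precision via phase correlation."""
    ref = frames[0]
    offsets = [(0.0, 0.0)]
    for frame in frames[1:]:
        # Returns the shift (in pixels) that registers `frame` to `ref`.
        shift, _, _ = phase_cross_correlation(ref, frame,
                                              upsample_factor=upsample)
        offsets.append(tuple(shift))
    return offsets
```

Once the offsets are known to sub-pixel precision, the frames can be accumulated onto a finer output grid, much as in the sketch under the patent quote above.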

  4. William Sommerwerck

    Alan1250  I’m not gainsaying the possibility. I just don’t see how you can do it “by hand” (or “by foot”, as you jokingly state). How much are you physically displacing the camera between shots?

  5. Alan1250

    William Sommerwerck Could be quite a few pixels.  It doesn’t matter.  The software looks at the whole image and aligns it.  Sony and now Apple just simplify the alignment process by somewhat controlling the offset rather than computing it.

  6. William Sommerwerck

    Alan1250 William Sommerwerck  I understand how images can be stitched. But the idea that a random “jiggle” of the camera will position the pixels in an “in-between” position that actually improves resolution is not plausible.

  7. Alan1250

    William Sommerwerck Suggest you Google "Super Resolution" and keep an open mind so you can separate what is "not plausible" from what you don't understand yet.

  8. William Sommerwerck

    Alan1250 William Sommerwerck  "All that is required to do super resolution is to be sure that images are not pixel coincident." Please explain how randomly kicking the tripod produces this.
    You appear to have latched onto a misinterpretation of the techniques used to produce "super resolution". There is nothing in the Wikipedia article that describes how wiggling a camera can add real detail to an image.
    What is probably happening in your case is that adding up multiple exposures reduces noise, thus better revealing the detail already in the image. The Wikipedia article shows an excellent example.

  9. Alan1250

    William Sommerwerck  OK William.  I'm going to give this another go.  Trust me, I know what Super Resolution is.  (I have some of my own image processing things on file with the Patent Office.  May or may not have been military related.  Can't say.)

    This is the way I explain super resolution when I'm giving photography presentations:  Stand in front of your microwave while something is cooking (with the light on).  Try to look through that grid in the door to see what's cooking.  Holding your head still and trying to see inside is like taking a single digital picture.  It's limited by the grid in the door.  But now move your head around, and you're getting multiple looks through the door and your brain puts the much more detailed image together.

    Tapping the tripod in between exposures just displaces the camera so it’s not perfectly the same image as before.  Hope this helps.  Alan

  10. planetMitch

    Alan1250 William Sommerwerck Alan – thanks for that explanation. William, hopefully that will help. I trust Alan knows what he’s talking about and that it works. We have all learned something new and cool today!

  11. William Sommerwerck

    Alan1250 William Sommerwerck
    What we have is failure to communicate. I've seen this problem in other groups. Someone describes "A", and says it's true (which it is). Then he describes something related to A – call it "B" – and says it's also true. But it isn't. Captain Crank (yo!) comes along and points out that B isn't true. The original poster repeats over and over that A is true. When Crank reminds him that A isn't what he's talking about, he is again told that A is true.

    The claim Alan makes – that one can gain genuinely improved resolution merely by kicking the tripod between shots, then averaging the images – looks absurd on the surface. But closer inspection reveals that it not only looks absurd, it is absurd.

    Those who haven't read the Superresolution article should do so (including Alan): en.wikipedia.org/wiki/Superresolution

    The article has two photos showing a basic method for resolution enhancement using multiple shots. When identical shots are summed, noise (which is random) tends to cancel. (The same technique is used to remove tape hiss without degrading the recording's high frequencies.) This more clearly reveals image detail, but it doesn't actually add detail to the image. It makes the detail already there more visible.

    If one kicks the tripod, then has the software align the images (as Alan suggests), you have overlapping, near-identical images. The result is noise reduction – not an increase in "real" resolution. To gain a true resolution increase, you do not want overlapping images. (Think about it before reaching for the keyboard.) You want to "fill in the blanks".

    While you have Wikipedia open, look at the Bayer filter article: en.wikipedia.org/wiki/Bayer_filter

    Almost all sensors use the Bayer filter pattern (Sigma's Foveon sensor being the principal exception). The Bayer layout assumes that, as the eye is more sensitive to green light than red or blue, it sees more detail in green wavelengths, so the sensor should have more green pixels than pixels of other colors.

    Note that each green pixel is surrounded by four "unused" pixel locations. Similarly, the red and blue pixels are bordered by nine vacant pixels. "Filling" them would produce a true resolution enhancement. This is easily done by shifting the sensor one pixel left or right (both shifts aren't needed – think about it) and one pixel up or down. To fill the red and blue pixels, two additional diagonal shifts are required. Three shots would completely populate the array for green; completely populating the array would require a total of five shots.

    Alan's technique is wrong for at least two reasons:
    1. To work, the pixels must be displaced a specific amount. How does a random kick guarantee this?
    2. The resulting images must have their pixels interleaved – not averaged.

    Over thirty years ago, during a boring meeting, I came up with a brilliant method of using two ADCs to vastly increase resolution (e.g., two 16-bit ADCs could be wired to produce 32-bit resolution). My intuition told me this could not possibly be true. A few minutes of careful re-thinking revealed the errors.

    Wanting something to be true does not make it true. Everyone makes mistakes, and a good scientist or engineer is just as critical of his own work as he is of others'.
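For what it's worth, here is a minimal numpy sketch of the interleave-don't-average step William describes, for the green plane only. The RGGB layout, the exactly-one-pixel shift and the function name are my assumptions for illustration, not any vendor's pipeline:

```python
import numpy as np

def interleave_green(shot1, shot2):
    """Interleave -- not average -- two raw mosaics to fully populate
    the green plane via a controlled one-pixel sensor shift.

    Assumes an RGGB layout (green where row+col is odd) and that shot2
    was taken with the sensor displaced exactly one pixel to the right,
    so its green samples land on the vacant checkerboard sites.
    """
    h, w = shot1.shape
    rows, cols = np.indices((h, w))
    green_sites = (rows + cols) % 2 == 1     # native green sites, shot 1
    green = np.zeros((h, w))
    green[green_sites] = shot1[green_sites]
    # Map shot2 back into scene coordinates (undo the 1-px shift); its
    # green samples now sit on the opposite checkerboard parity.
    shot2_scene = np.roll(shot2, 1, axis=1)
    green[~green_sites] = shot2_scene[~green_sites]
    return green
```

Interleaving doubles the number of distinct green samples; averaging the same two shots would merely reduce noise at the original sites, which is precisely William's point.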

  12. Alan1250

    William Sommerwerck I think the key piece that you are missing is your vision of the pixel grid in question.  You are thinking of the grid of the sensor's resolution.  Now imagine a much finer, more precise grid overlaid on that.  The images can be micro-aligned to the precision of the finer grid, regardless of how they were displaced.  Then for every finer grid position, one can mathematically select an output value using the several input samples in the neighborhood.

    Beyond this explanation, I'm going to stop trying to explain super resolution and its various implementations.  It seems you are either denying it exists or don't understand it.  There are hundreds of papers out there and many proven implementations.  The recent ones by camera vendors have just simplified the sampling process, so they can do all the math in real-time.

    Just because you don't understand it doesn't mean it's not true.

  13. William Sommerwerck

    Alan1250 William Sommerwerck  Alan, all I can tell you is that you have badly misinterpreted the diffraction-reduction approach to improving resolution. You cannot expect that two photos, with randomly displaced pixels, will permit this.
    As for my denying super resolution's existence – point to where in my posts I said any such thing.
    Also note that I explained one method of producing it by exploiting the Bayer layout. It does exist, and I do understand it, better than you do. It doesn't work that way.
    Just because you think you understand something doesn't mean it's true.

  14. William Sommerwerck

    Alan1250 Everyone is ignorant about something.
    Fifty years ago, while still in high school, I came up with a brilliant idea for transmitting huge amounts of data in a limited space, using time-division multiplexing. I sent a rambling description to an Army installation near DC.
    Several weeks later, I received a courteous letter explaining why such a system cannot work. Sampling is a form of AM modulation, generating sidebands. If you want to transmit information at a 50kHz rate, you have to have at least 50kHz of bandwidth.
    I learned a lesson – one that has stuck with me, but that many other people have yet to learn.
