You want to know why we keep writing about Apple? Because it is pushing past the limits of physical optics through software faster and harder than anyone else. The latest proof point: Apple’s newest patent.
OK: the latest Apple patent is irrelevant to video. It says so, right in the patent.
But using an optical image stabilization processor to offset the sensor by a sub-pixel distance while in burst mode may do incredible things for still photographers.
Fascinating, isn’t it? No additional pixels on the sensor (and thus no further degradation of the iPhone’s already limited low-light capability), yet software effectively captures more pixels on the output side nonetheless.
This is a level of applied creativity that dedicated camera manufacturers must embrace and emulate, either through their own R&D or through licensing. It’s already easier to shoot slow motion, time lapse, and hyperlapse on an iPhone than on any professional camera, and that fact (along with others like it) will only be tolerated for so long.
Apple camera patent would allow high-resolution photos without sacrificing image quality
A clever patent granted today could allow future iPhones to have the best of both worlds, allowing higher-resolution photos without squeezing more pixels into the sensor…
The secret is to use burst mode to shoot a series of photos, using an optical image stabilization system – like the one built into the iPhone 6 Plus – to shift each photo slightly. Combine those images, and you have a single, very high-resolution photo with none of the usual quality degradation. Or, in patent language:
“A system and method for creating a super-resolution image using an image capturing device. In one embodiment, an electronic image sensor captures a reference optical sample through an optical path. Thereafter, an optical image stabilization (OIS) processor adjusts the optical path to the electronic image sensor by a known amount. A second optical sample is then captured along the adjusted optical path, such that the second optical sample is offset from the first optical sample by no more than a sub-pixel offset. The OIS processor may reiterate this process to capture a plurality of optical samples at a plurality of offsets. The optical samples may be combined to create a super-resolution image.”
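The combining step the patent describes can be sketched in a few lines of NumPy. This is a hypothetical illustration, not Apple’s implementation: it simulates four low-resolution captures, each offset by half a sensor pixel, then interleaves them onto a grid with twice the resolution.

```python
import numpy as np

def capture(scene, dy, dx, factor=2):
    """Simulate one low-res capture of a high-res 'scene',
    offset by (dy, dx) fine-grid pixels (a sub-pixel shift
    at the sensor's native resolution)."""
    shifted = np.roll(scene, (-dy, -dx), axis=(0, 1))
    # Each sensor pixel averages a factor-by-factor block of the scene.
    h, w = shifted.shape
    return shifted[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def combine(samples, offsets, factor=2):
    """Interleave sub-pixel-shifted low-res samples onto a finer
    grid -- the simplest form of super-resolution reconstruction."""
    lh, lw = samples[0].shape
    out = np.zeros((lh * factor, lw * factor))
    for img, (dy, dx) in zip(samples, offsets):
        out[dy::factor, dx::factor] = img
    return out

# Four captures, each shifted by half a sensor pixel.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
scene = np.random.rand(8, 8)
samples = [capture(scene, dy, dx) for dy, dx in offsets]
sr = combine(samples, offsets)
print(sr.shape)  # (8, 8): double the 4x4 sensor resolution in each dimension
```

Real pixel-shift systems (as in some Olympus and Pentax bodies) add alignment and demosaicing steps, but the core idea is the same: known sub-pixel offsets let software recover detail the sensor alone cannot.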
|Note: it is our policy to give credit, as well as deserved traffic, to our news sources – so we don't repost the entire article. Sorry, I know you want the juicy bits, but it is only fair that their site gets the traffic – and besides, you just might make a new friend and find an advertiser with something you've never seen before|
(cover photo credit: snap from 9TO5MAC)