Would Olympus / Panasonic use a 16MP sensor derived from the A7RIV?

zanydroid

Mu-43 Regular
Joined
Feb 13, 2019
Messages
126
Location
San Francisco, CA
The pixel pitch of larger sensor formats has finally shrunk to M43 levels (see: the 90D and A7RIV sensors), which might mean we'll soon get some technology scraps from the big boys.

The A7RIV sensor is the more interesting example, since the 90D's pixels are only roughly equivalent in quality to those of the 20MP M43 sensor.

With the A7RIV, you can just take an M43-sized crop out of the middle and get better image quality than the 16MP M43 sensor. What are the chances that Panasonic/Olympus would use an M43-sized version of this sensor? Would you be interested in a camera that had it?

Looking at DXOMark for the A7RIII vs the 20MP M43 sensor (the IV is supposed to maintain parity at the pixel level):
DR: 13.5 stops vs 12.8
Base ISO: 100 vs 200

Advantages:
- Easier 4K processing with less subsampling. The pixels are less noisy, which would compensate for higher noise from less subsampling.
- ISO 100 is more convenient in some shooting situations (e.g. less need for ND filters, capturing a wider tonal range in some scenes)

Possible issues are:
- Reduction in megapixel count. Hard to sell (though Nikon pulled this off with no issues with the D500)
- High ISO performance would be a regression for Olympus cameras; the FF sensor corresponds to roughly ISO 825 with an M43 crop (rough math in the sketch below), which matches Panasonic but is lower than Olympus's 1300
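Rough sketch of where that 825 figure comes from, treating DXOMark's low-light score as scaling with sensor area and backing the full-frame number out of the 825 (so the FF score below is an assumption, not a measurement):

Code:
import math

crop_factor = 2.0                 # M43 vs full frame, approximately
area_ratio = crop_factor ** 2     # ~4x less light-gathering area in the crop

ff_low_light_iso = 3300           # implied full-frame score, backed out from the 825 figure (assumption)
m43_crop_iso = ff_low_light_iso / area_ratio
print(round(m43_crop_iso))                        # ~825

print(round(math.log2(1300 / m43_crop_iso), 2))   # ~0.66 EV behind Olympus's 1300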
 

CO_yeti

Mu-43 Regular
Joined
Mar 25, 2019
Messages
49
Location
Denver, CO
Do you think it's a fact that Sony is holding back tech? The B2B space is very different than many realize, and Sony's R&D arm makes money by advancing tech whether or not it ends up in Sony's own cameras. I think the more likely explanation is that the current 20MP M43 sensors are very good, and it doesn't make much financial sense to push for incremental improvements. Instead, the next benchmark will be a very good prosumer-level 8K sensor. That is where the revenue lies and where the investments have been made. It seems like we are on the cusp of these sensors making it to market.

Also, I find I produce much better results when I let my computer/RAW converter handle noise reduction rather than worrying about the sensor's high ISO performance. I think there are easier gains on the software side, and those benefit from the extra data of higher-MP images.
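As a toy illustration of that last point (purely synthetic numbers, not any particular camera): downscaling a higher-MP image averages pixels together, and software gets to make that resolution-for-noise trade after the fact.

Code:
import numpy as np

rng = np.random.default_rng(0)

# synthetic "high-MP" frame: a flat gray patch plus Gaussian noise standing in for sensor noise
signal, noise_sigma = 100.0, 10.0
hi_res = signal + rng.normal(0.0, noise_sigma, size=(2000, 3000))

# 2x2 averaging (think ~24MP worth of pixels down to ~6MP)
binned = hi_res.reshape(1000, 2, 1500, 2).mean(axis=(1, 3))

print(round(float(hi_res.std()), 2))   # ~10 per-pixel noise at native resolution
print(round(float(binned.std()), 2))   # ~5, i.e. roughly a stop quieter after downscaling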
 

Reflector

Mu-43 Hall of Famer
Joined
Aug 31, 2013
Messages
2,153
Looking at DXOMark for the A7RIII vs the 20MP M43 sensor (the IV is supposed to maintain parity at the pixel level):
DR: 13.5 stops vs 12.8
Base ISO: 100 vs 200
That measurement assumes the entire image under such and such conditions (downscaled to some specific resolution and treated like a print), rather than at the pixel level.

You would actually end up with worse performance. Here's the A7R3 in APS-C mode from Bill Claff's data up against the E-M1II; knock off around another 1/2 to 1EV to get down to a Micro Four Thirds rectangle from that sensor. Meaning ISO 100 barely has a little more dynamic range (nothing that a couple of bracketed or stacked exposures wouldn't make up for), while everything above ISO 200 suffers. You can pull up the data for the A7R4 and it actually looks a little worse, but only a little.
[Attached chart: photons to photos aps-c vs micro four thirds.jpg]
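If anyone wants to redo that comparison themselves, my understanding of the normalization (treat the exact formula as my assumption) is that the per-pixel "Screen" figure gains half a log2 of the megapixel ratio when downscaled to DXOMark's 8MP "Print" size:

Code:
import math

def print_dr(screen_dr_ev, megapixels, ref_mp=8.0):
    # per-pixel ("Screen") DR -> output-referred ("Print") DR, 8MP normalization
    return screen_dr_ev + 0.5 * math.log2(megapixels / ref_mp)

# a hypothetical 12.3 EV per pixel across 42MP comes out around 13.5 EV once normalized to an 8MP print
print(round(print_dr(12.3, 42.4), 2))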
 

zanydroid

Mu-43 Regular
Joined
Feb 13, 2019
Messages
126
Location
San Francisco, CA
That measurement assumes the entire image under such and such conditions (downscaled to some specific resolution and treated like a print), rather than at the pixel level.
Thanks for the correction, that was super useful to know. That explains why the APS-C cameras, which should be at the same tech node as the FF ones, score ~1EV worse in DXOMark. Also, with the correct interpretation of those units, matching the previous generation's performance numbers while shrinking the pixel pitch is a little less impressive.

Getting a base ISO of 100 would help with some applications, but that 0.5-1EV loss at the higher ISOs hurts.

Do you happen to know why there are two classes of 20MP M43 sensors? I.e., the E-M1II is known to have the best low-light performance, while a bunch of cameras test noticeably worse under both his methodology and DXOMark's. Interestingly, according to Bill Claff's data (I believe he runs his software on RAW files of standard test charts from review websites?), the E-M1X is actually worse than the E-M1II.
 

Reflector

Mu-43 Hall of Famer
Joined
Aug 31, 2013
Messages
2,153
Thanks for the correction, that was super useful to know. That explains why the APS-C cameras, which should be at the same tech node as the FF ones, score ~1EV worse in DXOMark. Also, with the correct interpretation of those units, matching the previous generation's performance numbers while shrinking the pixel pitch is a little less impressive.

Getting a base ISO of 100 would help with some applications, but that 0.5-1EV loss at the higher ISOs hurts.
But it really wouldn't help if you scaled the A7R3/4 sensor design down to this size: the slower readout is a bad tradeoff, and I actually couldn't tell you what its highlight recovery characteristics are like. Really, we already have the "state of the art" sensors at the current size, short of jumping to that weird, exotic, slightly oversized sensor used in the GH5s (supposedly some kind of ~48MP sensor that's been quad-pixel-binned) or the very, very latest releases of the past year aimed at other industrial applications (with the stacked DRAM and so on).

If you own an E-M5 and an E-M1II, you'll find the tolerance to overexposure is different. I found that the 16MP sensor in the E-M5 (whether this is from the lack of PDAF sites or otherwise) is more "tolerant" of extreme overexposure when recovering detail. I've pulled off 4-5EV overexposed shots on the E-M5 where everything but the specular highlights could be recovered, whereas on the E-M1II I'd say about 2EV is the upper limit (which makes sense, since its "ISO 64" low setting sits about 1-2/3EV below the base of 200). My understanding is that the E-M1II uses those PDAF sites for some kind of weird noise reduction, since it can't use them as imaging data.
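Quick sanity check on that 1-2/3EV figure:

Code:
import math
print(round(math.log2(200 / 64), 2))   # ~1.64 EV between the "ISO 64" low setting and base ISO 200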

We're kind of already ahead of the APS-C curve relative to Sony's APS-C sensors. The X-T3 is only at parity, so if you cut a Micro Four Thirds rectangle out of it, you'd actually be worse off. Not only that, that sensor still only reads out at 1/30s. That 32.5MP EOS 90D/M6II sensor? It's already half an EV worse than Micro Four Thirds at full size, so you're looking at roughly a -1.5EV disadvantage once you crop. At some point, you're better served keeping the current performance characteristics and adding a global shutter or a stacked readout that can happen at 1/320s or faster, taking two shots with one slightly over- or underexposed and using that to recover the highlight/shadow information you couldn't otherwise capture.

That, or trying to bump the QE of the sensors. The E-M1II, from my understanding, converts somewhere around 3 out of 4 photons into recorded data, which is one of the highest QEs for a sensor as far as I know; if you go over most of the APS-C and 135-format sensors, they're somewhere around 50-60%, not 70-80%. If you want anything remotely close in sensor QE, it's the latest 1" sensors used in the RX100 family. We're probably better off if Sony scaled the stacked 1" design up to this size, either keeping the pixel pitch for a bump in megapixels (with a slight noise disadvantage per pixel along with a dynamic range penalty of about 1EV) or keeping the current pixel count (same or maybe a little better noise performance, by maybe 1/2EV? Maybe the dynamic range penalty would still exist but be less bad), while retaining the stacked DRAM element.
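To put a number on what that QE gap would be worth (the percentages are just my rough estimates from above):

Code:
import math

qe_m43_estimate = 0.75    # rough E-M1II figure from above
qe_apsc_estimate = 0.55   # rough APS-C / 135-format figure from above

# extra QE is just extra captured signal, worth log2(QE_a / QE_b) EV, all else being equal
print(round(math.log2(qe_m43_estimate / qe_apsc_estimate), 2))   # ~0.45 EV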

Do you happen to know why there are two classes of 20MP M43 sensors? I.e., the E-M1II is known to have the best low-light performance, while a bunch of cameras test noticeably worse under both his methodology and DXOMark's. Interestingly, according to Bill Claff's data (I believe he runs his software on RAW files of standard test charts from review websites?), the E-M1X is actually worse than the E-M1II.
There are two specific 20MP sensors: the PEN-F one, which is the slower 20MP sensor with, I think, a 1/30s readout, and the E-M1II/E-M1X one, which has a 1/60-1/80s readout. The E-M1X, from my understanding, may have some differences on the back end in regards to readout, and/or Olympus may have set the gray point (metering) slightly differently in it. If you look at Bill Claff's data, the E-M1X stays in analog gain all the way up to ISO 25600, while the E-M1II hits a brick wall at 12800 (I believe it's actually 6400 if you look at the "native" range on the camera), where it switches to (software) multiplication of the exposure.
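A toy model of what that wall means in practice, if I have the mechanism right (the ISO breakpoints and noise figures below are placeholders, not measurements): past the last analog gain step, raising ISO is just a multiply, so it scales the noise right along with the signal.

Code:
import numpy as np

rng = np.random.default_rng(1)

def capture(scene_e, iso, base_iso=200, max_analog_iso=6400, read_noise_e=1.5):
    # toy model: analog gain up to max_analog_iso, pure digital multiplication above it
    analog_gain = min(iso, max_analog_iso) / base_iso
    digital_gain = iso / min(iso, max_analog_iso)
    # read noise is modelled downstream of the analog stage, so the digital multiply buys nothing
    adc = scene_e * analog_gain + rng.normal(0.0, read_noise_e, scene_e.shape)
    return adc * digital_gain

dim_patch = np.full(100_000, 2.0)   # a flat, dim patch: ~2 electrons per pixel
for iso in (6400, 12800, 25600):
    out = capture(dim_patch, iso)
    print(iso, round(float(out.mean() / out.std()), 2))   # SNR flatlines once the gain goes digital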

Check DPR's comparator tool; the E-M1X seems to show less noise by approximately half a stop, so it does look like there's possibly a gray point thing going on. You'll also notice it holds up better at ISO 12800-25600, just by a little. Granted, it's just plain noisy at that point, but the E-M1X has a chance of recovering a shot where light is a critical factor.
 

mawz

Mu-43 Regular
Joined
Mar 2, 2013
Messages
110
I suspect any actual differences in the E-M1X are in the processing chain.

Most folks think that a RAW file shows exactly what's read off the sensor, but that hasn't been the case in years. The processing chain has a massive effect on the RAW performance of the camera, and the E-M1X's extra horsepower can be used to do further processing that the E-M1.2 and E-M5III don't have the spare CPU cycles for. That's probably good for the noise performance.
 

zanydroid

Mu-43 Regular
Joined
Feb 13, 2019
Messages
126
Location
San Francisco, CA
We're kind of already ahead of the APS-C curve relative to Sony's APS-C sensors. The X-T3 is only at parity, so if you cut a Micro Four Thirds rectangle out of it, you'd actually be worse off. Not only that, that sensor still only reads out at 1/30s. That 32.5MP EOS 90D/M6II sensor? It's already half an EV worse than Micro Four Thirds at full size, so you're looking at roughly a -1.5EV disadvantage once you crop. At some point, you're better served keeping the current performance characteristics and adding a global shutter or a stacked readout that can happen at 1/320s or faster, taking two shots with one slightly over- or underexposed and using that to recover the highlight/shadow information you couldn't otherwise capture.
I've definitely wanted faster readout and auto-stacking for my travel video. You can't really control lighting in those sorts of situations (other than restricting your subject and framing to avoid problematic scenes). Even going to the biggest sensor available to get the DR isn't enough (plus, if I'm not mistaken, I'd also have to sacrifice DOF with that approach), and adding a video light would be nuts since I'm traveling for fun, not filming a tourism board ad. Maybe ZCam will put something like this out before Panasonic does.

1/160 (as on the A9) or 1/320 is already getting close to the speed of a mechanical shutter, so I'm not sure whether there would be any incentive to go faster than that. For stacking and tiling applications, it helps to have multiple back-to-back readouts, preferably with as little extra latency as possible. I'm not sure how quickly that stacked sensor can do back-to-back full readouts; they only push it to 20fps RAW, with additional EVF-quality readouts for the 60fps blackout-free display. Though maybe an extra exposure, even at just 3.7MP and reduced bit depth, would provide an improvement in image quality.
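Rough timing to make the back-to-back point concrete (the readout figures are the rough ones quoted in this thread, not official specs):

Code:
readout_s = 1 / 160          # approx. full-sensor electronic-shutter scan on an A9-class sensor
frame_interval_s = 1 / 20    # 20fps RAW burst

print(round(readout_s / frame_interval_s, 3))   # ~0.125: the sensor is only scanning ~12% of the time
print(round(1 / readout_s))                     # ~160 back-to-back scans/s if nothing else limited it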

Also, the current buffering technology that they use for the stack results in a noticeable decrease in imaging performance, so you're worse off in some situations.
 

Reflector

Mu-43 Hall of Famer
Joined
Aug 31, 2013
Messages
2,153
I've definitely wanted faster readout and auto-stacking for my travel video. You can't really control lighting in those sorts of situations (other than restricting your subject and framing to avoid problematic scenes). Even going to the biggest sensor available to get the DR isn't enough (plus, if I'm not mistaken, I'd also have to sacrifice DOF with that approach), and adding a video light would be nuts since I'm traveling for fun, not filming a tourism board ad. Maybe ZCam will put something like this out before Panasonic does.

1/160 (as on the A9) or 1/320 is already getting close to the speed of a mechanical shutter, so I'm not sure whether there would be any incentive to go faster than that. For stacking and tiling applications, it helps to have multiple back-to-back readouts, preferably with as little extra latency as possible. I'm not sure how quickly that stacked sensor can do back-to-back full readouts; they only push it to 20fps RAW, with additional EVF-quality readouts for the 60fps blackout-free display. Though maybe an extra exposure, even at just 3.7MP and reduced bit depth, would provide an improvement in image quality.

Also, the current buffering technology that they use for the stack results in a noticeable decrease in imaging performance, so you're worse off in some situations.
If you're able to stack at faster-than-1/320 rates, then you can functionally soft-emulate ND-filter-like effects, effectively go far, far below ISO 200, or effectively have an auto-HDR / "infinite DR" sensor. The metering merely has to determine how long an exposure it takes for whatever highlight in the image to blow out, then auto-bracket a range of readouts and "reassemble" the entire scene afterwards.

For a mental image: imagine capturing a scene at everything from 1/32000 to 1/320. You would basically be able to preserve the strongest highlights while bringing out the things in the shadows of daylight lighting. Of course, if you can read it out super, super fast, your effective shutter time is still something around, say, 1/120s. Motion would still freeze, but you would be able to compress far, far in excess of 20 stops into one scene without much artifacting, because the capture happens so quickly that there would be functionally no "motion" for all but genuinely fast-moving things.

If you want to do that in a smart way, you can effectively use something like a matrix of spots for metering the scene and then selectively combine it as a grid of image tiles to avoid motion artifacts, rather than just brute-force blending based on emulating an averaged exposure across the extremes of the scene. Some smartphones already take advantage of this for their HDR modes by using insanely high readout speeds and precapturing (like Pro Capture) a stream of images they can use to neutralize motion. You're just committing to a moment when you hit the shutter button in the end.
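A minimal sketch of that auto-bracket-and-reassemble idea, assuming perfectly aligned frames normalized to 0..1 and skipping all the hard parts (motion handling, tone mapping):

Code:
import numpy as np

def bracket_times(shortest_s, longest_s, step_ev=1.0):
    # exposure times from the highlight-safe shortest up toward the shadow-friendly longest
    times, t = [], shortest_s
    while t <= longest_s * 1.0001:
        times.append(t)
        t *= 2 ** step_ev
    return times

def merge_hdr(frames, times, clip=0.95):
    # per pixel, keep the longest exposure that isn't blown, rescaled to a common exposure
    ref = max(times)
    pairs = sorted(zip(times, frames))                  # shortest exposure first
    out = pairs[0][1] * (ref / pairs[0][0])             # fall back to the shortest frame everywhere
    for t, frame in pairs[1:]:
        usable = frame < clip                           # not blown in this longer exposure
        out = np.where(usable, frame * (ref / t), out)  # longer usable exposures win (better SNR)
    return out

times = bracket_times(1 / 32000, 1 / 320)               # the 1/32000..1/320 mental image above
print(len(times), round(float(np.log2(times[-1] / times[0])), 1))   # 7 frames spanning ~6 EV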

The A9's sensor is relatively old; if you look at the later generations of the RX100, the penalty doesn't seem to be there, or it's otherwise negated by whatever relative image quality gains were achieved. I would rather see something that can read out at stupidly fast rates, like a quarter-sized A9 sensor. If it could read out at 1/320 to 1/640, you would have effectively replaced the mechanical shutter entirely, and the 1/640 case starts to open up the computational imaging benefits mentioned above.

I already use the E-M1II's 60fps shutter for bracket-stacking scenes. I've uploaded some examples before, but by taking a 5-shot, 1EV-step bracket and selectively merging everything but the blown highlights, I end up with an image carrying what is effectively 7.75 exposures worth of light (compared to a single 0EV metered shot). It utterly blows away noise in the shadows, and it also recovers details in the highlights that you wouldn't see otherwise.
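For anyone wondering where 7.75 comes from (assuming the stack is five frames 1EV apart, centered on the metered exposure):

Code:
print(sum(2 ** ev for ev in (-2, -1, 0, 1, 2)))   # 7.75x the light of a single 0EV frame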

See this post for an example of the extreme amount of dynamic range that can be gained from a bracketed stack:
https://www.mu-43.com/threads/how-else-would-you-shoot-this-frame-in-these-conditions.105769/page-3#post-1323707
 

zanydroid

Mu-43 Regular
Joined
Feb 13, 2019
Messages
126
Location
San Francisco, CA
If you're able to stack at faster-than-1/320 rates, then you can functionally soft-emulate ND-filter-like effects, effectively go far, far below ISO 200, or effectively have an auto-HDR / "infinite DR" sensor. The metering merely has to determine how long an exposure it takes for whatever highlight in the image to blow out, then auto-bracket a range of readouts and "reassemble" the entire scene afterwards.

For a mental image: imagine capturing a scene at everything from 1/32000 to 1/320. You would basically be able to preserve the strongest highlights while bringing out the things in the shadows of daylight lighting. Of course, if you can read it out super, super fast, your effective shutter time is still something around, say, 1/120s. Motion would still freeze, but you would be able to compress far, far in excess of 20 stops into one scene without much artifacting, because the capture happens so quickly that there would be functionally no "motion" for all but genuinely fast-moving things.

If you want to do that in a smart way, you can effectively use something like a matrix of spots for metering the scene and then selectively combine it as a grid of image tiles to avoid motion artifacts, rather than just brute-force blending based on emulating an averaged exposure across the extremes of the scene. Some smartphones already take advantage of this for their HDR modes by using insanely high readout speeds and precapturing (like Pro Capture) a stream of images they can use to neutralize motion. You're just committing to a moment when you hit the shutter button in the end.
Yeah, this stuff is definitely what I've been dreaming of, though it'll be years before it shows up in dedicated cameras. Phones can already do this at 16MP / 4K with processors cheap enough to put in consumer electronics (apart from the fixed engineering costs, which may not be viable for low-volume cameras), and some people claim that the DR plus noise reduction from overlapping stacks is enough to make the pictures comparable to a 1" sensor camera.

It might just need the right marketing / user education if the fast readout carries an IQ penalty for traditional photography. It would be great if a stacked sensor could trade IQ against readout speed. E.g., assuming it uses analog storage cells, allow those cells to be subdivided 2 or 4 ways for different operating modes: all cells linked together for maximum performance on a single frame, or the smaller cells used individually to allow 2- or 4-frame bursts at max readout speed.

I already use the E-M1II's 60fps shutter for bracket-stacking scenes. I've uploaded some examples before, but by taking a 5-shot, 1EV-step bracket and selectively merging everything but the blown highlights, I end up with an image carrying what is effectively 7.75 exposures worth of light (compared to a single 0EV metered shot). It utterly blows away noise in the shadows, and it also recovers details in the highlights that you wouldn't see otherwise.
Cool, good to know that the 60fps is compatible with bracketing modes. Which modes does it work with?
 

Reflector

Mu-43 Hall of Famer
Joined
Aug 31, 2013
Messages
2,153
Yeah, this stuff is definitely what I've been dreaming of, though it'll be years before it shows up in dedicated cameras. Phones can already do this at 16MP / 4K with processors cheap enough to put in consumer electronics (apart from the fixed engineering costs, which may not be viable for low-volume cameras), and some people claim that the DR plus noise reduction from overlapping stacks is enough to make the pictures comparable to a 1" sensor camera.

It might just need the right marketing / user education if the fast readout carries an IQ penalty for traditional photography. It would be great if a stacked sensor could trade IQ against readout speed. E.g., assuming it uses analog storage cells, allow those cells to be subdivided 2 or 4 ways for different operating modes: all cells linked together for maximum performance on a single frame, or the smaller cells used individually to allow 2- or 4-frame bursts at max readout speed.
1"? I believe one of the Pixels is purported to be closer to an APS-C exposure in some cases but what significant advantage it does have is the ability to generate a HDR by effectively recombining the entire image as grid like sections. Right now with the E-M1II, I manually use Photoshop's selection tools to pick highlights and to mask them out of the overexposed shots or to manually override them with the most underexposed shot (that has been exposure pulled to equalize everything out with the "proper" exposure) before I merge the entire stack together. The method the Pixel uses is relatively more efficient than what I'm doing but nothing stops the camera's metering from knowing when a part of an image is completely blown out and unrecoverable.

I wish this were explored more seriously in photography, since computational imaging done with the relatively larger sensors in our cameras could make them hit as hard as actual 6x4.5cm medium format sensors in terms of the amount of light gathered. For reference, the E-M1II and G9 could be said to gather more photon data per second than most cameras on the market, the A9 being one of the few others in that league given its 20fps capability and sensing area. Even a 1" camera could be a true monster of an image-capture device with something like the latest RX100s, where they're almost firing off the electronic shutter in a video-like fashion (much like how our eyes work).
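Back-of-envelope version of that "photon data per second" comparison (sensor areas are approximate, the frame rates are the electronic-shutter burst rates discussed in this thread, and QE differences are ignored):

Code:
# crude "light gathered per second" metric: sensor area (mm^2) x max full-res e-shutter fps
cameras = {
    "E-M1II / G9 (M43, 60fps)": (225, 60),
    "A9 (full frame, 20fps)":   (864, 20),
    "typical FF at 10fps":      (864, 10),
    "typical APS-C at 11fps":   (368, 11),
}
for name, (area_mm2, fps) in cameras.items():
    print(name, area_mm2 * fps)   # 13500, 17280, 8640, 4048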


Cool, good to know that the 60fps is compatible with bracketing modes. Which modes does it work with?
The E-M1II will do focus bracketing and stacking as fast as the lens can be driven (limited by autofocus, and by the aperture on lenses that open back up between captures). Typical exposure bracketing works with the 60fps shutter as well, but the in-camera HDR JPEG function does not; that one fires the mechanical shutter for no real reason.
 