I'd ask Bill to explain.

Can anyone shed some light on what a 12-bit ADC (analog-to-digital converter) versus a 14-bit ADC means, and what the impact is on the GX80/85's IQ at ISO 100 versus ISO 200?
- Both a stop and a bit are a factor of two. A two-bit ADC reduction for a one-stop decrease in ISO implies degradation that follows a square law. That's inconsistent with Bill's data showing improvement at ISO 100. Also, most sources of ADC degradation.
- Unless I've missed something, the only bodies Panasonic makes with 14-bit raw are the GH5 and GH5s. While the ADC and raw bit depths need not match, it would be a curious choice to spend GX80 BOM on a 14-bit ADC and then truncate to 12-bit raw rather than implementing a 12-bit ADC and pocketing the savings.
- As Scott mentioned, it's unclear which Panasonic bodies produce 10-bit raw when using the electronic shutter and which (if any) produce 12-bit. If the GX80 is 10-bit, then tests based on the electronic shutter would presumably be even less likely to show sensitivity to a 12- versus 14-bit ADC depth.
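The factor-of-two arithmetic behind the square-law point in the first bullet can be made concrete. A minimal sketch, purely illustrative and not specific to Panasonic's hardware:

```python
# Both a stop and a bit are a factor of two: one stop halves or doubles the
# signal, and one bit halves or doubles the quantization step.
def quantization_step(bits: int) -> float:
    """Smallest representable step for values normalized to [0, 1]."""
    return 1.0 / (2 ** bits)

# If ADC depth dropped by two bits for every one-stop ISO decrease, the
# quantization step would grow 4x per 2x change in signal, i.e. as the
# square of the per-stop factor.
one_stop_signal_factor = 2.0
step_growth = quantization_step(12) / quantization_step(14)
assert step_growth == one_stop_signal_factor ** 2   # square law
```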
More generally, descriptions of raw bit depth tend to focus on the value of 2^(bit depth) and on how many more millions of discrete values n additional bits make available. In practice, I don't find this approach produces a useful conceptual model.
- The R, G, and B values read from the sensor are coordinates in the camera's colour space. Using a higher-resolution ADC which yields more bits (and flowing those bits into a raw file) resolves the colours more precisely but is quite unlikely to change the colour space's gamut. In general, new colours become available when the gamut is enlarged, which is usually associated with technology increments in sensors.
- If, for the sake of simplicity, we assume RGB tuples have values in the range [0, 1], then a 12-bit ADC provides values which can change in steps of 0.000244. With 14 bits, the precision is 0.000061. If, also for simplicity, we assume the image is shown on a display with the same colour space as the camera, an 8-bit display renders it with a precision of 0.003906 and a 10-bit one with 0.000977. In this (over)simplified description, the smallest difference human eyes tend to pick up is around 0.008, which is roughly 7 bits. The purpose of all the additional bits in displays and images is to allow sufficient latitude during capture, processing, and rendering to keep the eye as the limiting factor.
- In practice, the sensor, image, display, and eye all have different colour gamuts and gammas. The gamuts and precisions for red, green, and blue also differ. Additionally, the actual range provided by a sensor is more like [0.06, 0.95] than [0, 1]. Within the present discussion, the net result of all this complexity is that 1-2 additional bits are useful for converting ADC output to perceptual colours. Another 0-3 bits can be helpful for lifting dark areas to show shadow detail. Other contrast increases in processing can require perhaps 0-1 bits. An 8-10 bit ADC is therefore usually good enough, though 12 bit is nice to have. 14 bit is more about accommodating fairly unusual cases, such as recovery from major errors in exposure or extreme changes in image processing.
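The arithmetic in the last two bullets can be checked with a short sketch. The step sizes follow directly from 1/2^bits; the bit-budget ranges are the rough estimates given above, not measured values:

```python
# Quantization step for values normalized to [0, 1].
def step(bits: int) -> float:
    return 1.0 / (2 ** bits)

print(f"{step(12):.6f}")  # 0.000244  (12-bit ADC)
print(f"{step(14):.6f}")  # 0.000061  (14-bit ADC)
print(f"{step(8):.6f}")   # 0.003906  (8-bit display)
print(f"{step(10):.6f}")  # 0.000977  (10-bit display)
print(f"{step(7):.6f}")   # 0.007812  (close to the ~0.008 the eye resolves)

# Rough bit budget, using the ranges estimated above (low, high).
perceptual_floor = 7                       # ~7 bits reach the eye's threshold
extras = {
    "gamut/gamma conversion": (1, 2),
    "shadow lifting":         (0, 3),
    "contrast processing":    (0, 1),
}
low = perceptual_floor + sum(lo for lo, _ in extras.values())
high = perceptual_floor + sum(hi for _, hi in extras.values())
print(f"useful ADC depth: roughly {low}-{high} bits")  # roughly 8-13 bits
```

The 8-13 bit tally lines up with the conclusion that 8-10 bits is usually enough, 12 is comfortable, and 14 mainly covers unusual recovery cases.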
Could you share your measurements? As Joris has noted, this claim is inconsistent with Bill Claff's and DxOMark's results for Panasonic bodies. I'm also curious as to how you reached -3 dB rather than -6 dB.

The downside is that at ISO 100 a very small drop in DR will result.
It does, but it's not supported by Bill's data for the GH5, G85, or GX80. It is partially supported for the GH5s and G9.

Hmm, does this suggest that gradients would be rougher at ISO 100?