Gambian sidling bush
- Feb 25, 2017
Noise sources vary. Some implementations do averaging fairly explicitly (e.g. Helicon's Method A is a weighted average), and filter and merge operators are common. Picolay, for example, is currently on a 5x5 multiresolution Gaussian (as of 2020-04-20). There are plenty of papers in the image fusion literature on Laplacian pyramids and depth mapping. The other likely substantial consideration is that frame alignment transforms rarely have integer translations, very likely have scales somewhat below unity, and may include rotations, so the pixels being analyzed and merged are resampled.

I was laboring under the delusion that the noise was "generated" only by digging up whatever tiny amount was latent in the sources...
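To make the resampling point concrete, here's a minimal sketch (not taken from any particular stacker) of how interpolating at a fractional offset reshapes per-frame noise: bilinear resampling is a weighted average of neighbors, which lowers the variance at each output sample but correlates adjacent samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_shift(signal, dx):
    """Shift a 1-D signal by a fractional offset dx in (0, 1)
    using linear interpolation between neighboring samples."""
    return (1.0 - dx) * signal[:-1] + dx * signal[1:]

# Pure unit-variance noise standing in for a frame's sensor noise.
noise = rng.normal(0.0, 1.0, size=1_000_000)
shifted = bilinear_shift(noise, 0.5)

# At a half-pixel shift the weights are (0.5, 0.5), so the output
# variance drops to ~0.5 of the input (0.25 + 0.25), but adjacent
# output samples are now correlated rather than independent.
print(noise.var())    # ~1.0
print(shifted.var())  # ~0.5
```

The same arithmetic applies per-axis in 2-D, and any scale slightly below unity or small rotation forces nearly every output pixel onto a fractional source coordinate.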
A stacker's merge operator(s) need not have a noise gain greater than unity, but they operate on one or more estimates of a pixel's value. These contain at least transformed noise from the image sensor, possibly artifacts from image processing earlier in the chain, transformation parameter error, resampling error, accumulated numerical error, and probably other things I'm not thinking of at the moment. Photographers often aren't diligent about checking their inputs, so if there are upstream problems my experience is they'll likely be misattributed to the stacker. In particular, when shredding 4k video files, it's my sense that compression artifacts in the input frames are the dominant issue. But, at minimum, there are going to be numerical differences between stackers working with 16-bit integers and those using single-precision floating point. In most cases I wouldn't expect precision to be important, but other factors might be.
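As a hedged illustration of why a merge operator's noise gain can sit below unity: a plain unweighted average of N independent estimates of the same pixel shrinks the noise standard deviation by 1/sqrt(N). Real merge operators weight by local contrast and so won't hit this bound, but it shows the direction.

```python
import numpy as np

rng = np.random.default_rng(1)

# N frames, each giving a noisy estimate of the same true pixel
# value (100.0 here, with per-frame noise sigma = 2.0). These
# numbers are illustrative, not from any real sensor.
N = 16
estimates = rng.normal(100.0, 2.0, size=(N, 100_000))

# Unweighted average across frames: noise gain 1/sqrt(N) < 1.
merged = estimates.mean(axis=0)

print(estimates.std())  # ~2.0 per-frame noise
print(merged.std())     # ~2.0 / sqrt(16) = 0.5
```

A contrast-weighted merge, or one that picks a single "best" source per pixel, gives up some or all of this averaging benefit, which is why a gain near or above unity is entirely plausible for a stacker.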
Rik describes Zerene's pyramids as typically increasing noise and its depth mapping as having a noise gain between -3 and 0 dB. However, this doesn't automatically transfer to other stackers, or necessarily across Zerene releases. Descriptions of PMax from 10-13 years ago indicate it didn't attempt any noise rejection; you might be thinking of that.
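For reference, here's what a "noise gain between -3 and 0 dB" means in linear terms. The -3..0 dB figures are from the discussion above; the conversion itself is just the standard amplitude-dB convention.

```python
import math

def noise_gain_db(sigma_out, sigma_in):
    """Noise gain in dB, using the amplitude convention
    gain_dB = 20 * log10(sigma_out / sigma_in)."""
    return 20.0 * math.log10(sigma_out / sigma_in)

# -3 dB means output noise amplitude ~0.708x the input, i.e. noise
# power roughly halved; 0 dB means the noise passes through unchanged.
print(10.0 ** (-3.0 / 20.0))       # ~0.708

# For comparison, averaging 4 independent frames halves the noise
# standard deviation, which is about -6 dB.
print(noise_gain_db(0.5, 1.0))     # ~-6.02
```

So a depth-mapping stack sitting between -3 and 0 dB is rejecting at most about half the noise power, well short of what pure frame averaging of the whole stack could do.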