Question regarding crop factor and pixels?

Discussion in 'Open Discussion' started by dulaney22, Sep 6, 2010.

  1. dulaney22

    dulaney22 Mu-43 Regular

    Aug 18, 2010
    After reading and reading and reading about crop factors, focal length, field of view and depth of field, I still have one question. While I understand that the u4/3 sensor only collects half of the light directed through the lens and therefore has a half crop, doesn't that also mean that you have 12.3 million pixels in that cropped image? If so, how is this a disadvantage? It would seem to me that you basically have 33% more pixels than the full-frame M9 for the same field of view.

    I may have this completely wrong, which is why I ask.
  2. kevinparis

    kevinparis Cantankerous Scotsman

    Feb 12, 2010
    Gent, Belgium
    you are so wrong on so many levels, and it's too late at night for me to explain.


    1) you don't have more pixels than an M9

    2) a u4/3 sensor gets the same amount of light (per unit area) as an M9

  3. dulaney22

    dulaney22 Mu-43 Regular

    Aug 18, 2010
    Kevin, I'm quite certain I'm understanding it wrong. Let me clarify. I don't mean to say it's better than a Leica, just using its 18 million pixel sensor as an example. What I'm asking is, if you put a 50mm Summicron on the u4/3, you would get 12.3 million pixels in that FOV. If you used the same lens on an M9, you would have a much wider FOV, but for the cropped image captured by the u4/3, you would have more pixel density. I realize that if I used the corresponding 100mm on the Leica, it would put more pixels across that same FOV.

    I understand that both would get the same amount of light, just that the u4/3 sensor is only half the size of the full frame and thus the image projected is cropped. That may be wrong too, but it's my understanding.
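    A quick sketch of the pixels-on-target arithmetic in this post. It assumes the standard 2x crop factor for Four Thirds (so the sensor covers roughly a quarter of the full-frame area) and uses the nominal 12.3MP and 18MP figures from the thread:

```python
# Sketch: pixels covering the same field of view when the same 50mm lens
# is used on a Micro Four Thirds body vs. a full-frame M9, and the M9
# frame is then cropped to match.  Assumes the standard 2x Four Thirds
# crop factor (the sensor has ~1/4 the area of full frame).
CROP_FACTOR = 2.0

m43_mp = 12.3   # Micro Four Thirds sensor, megapixels
m9_mp = 18.0    # Leica M9 full-frame sensor, megapixels

# Cropping the M9 frame to the Four Thirds field of view keeps only
# 1 / crop_factor^2 of its pixels.
m9_cropped_mp = m9_mp / CROP_FACTOR**2

print(f"u4/3 pixels in the 50mm (100mm-equivalent) FOV: {m43_mp} MP")
print(f"M9 pixels left after cropping to the same FOV:  {m9_cropped_mp} MP")
```

On these assumptions the cropped M9 frame keeps 4.5MP, so the u4/3 frame does hold more pixels over that narrower field of view.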
  4. kevinparis

    kevinparis Cantankerous Scotsman

    Feb 12, 2010
    Gent, Belgium
    reading this again in the morning I think I see what you are getting at... but I think you are placing too much value on pixel density being a good thing.

    for any format, the higher the pixel density, the smaller each light-gathering element in the sensor becomes. The smaller that element is, the less sensitive to light it is, because it can gather fewer photons. The lower the sensitivity, the more the signal has to be amplified electronically in order to get a reading - and the more amplification required to boost the signal at high ISO, the noisier the image gets.

    so there is a trade-off between having more, less-sensitive elements or fewer elements with higher sensitivity.

    Look at a camera like the Nikon D700 - only 12 megapixels on a full frame - capable of taking very sharp images... but also, because the pixels are relatively large, it has superb high ISO performance.

    hope this helps a bit
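    The trade-off above can be put in rough numbers. This sketch uses approximate sensor dimensions (36x24mm for full frame, 17.3x13mm for Four Thirds - round figures, not exact specs) to compare the light-gathering area per photosite at the same 12MP count:

```python
# Sketch of the density/sensitivity trade-off: for a fixed sensor size,
# more pixels means a smaller light-gathering area per photosite.
# Sensor dimensions are illustrative round figures, not exact specs.

def photosite_area_um2(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate area of one photosite in square microns."""
    sensor_area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

# Full frame (~36x24mm) at 12 MP, e.g. a D700-class sensor
ff_12mp = photosite_area_um2(36, 24, 12)
# Four Thirds (~17.3x13mm) at 12 MP
ft_12mp = photosite_area_um2(17.3, 13, 12)

print(f"Full frame, 12 MP:  {ff_12mp:.1f} um^2 per photosite")
print(f"Four Thirds, 12 MP: {ft_12mp:.1f} um^2 per photosite")
# The full-frame photosite is roughly 4x larger, so it catches roughly
# 4x the photons at the same exposure and needs less amplification.
```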

  5. kiynook

    kiynook Mu-43 Regular

    Aug 16, 2010

    Is this why Canon decided to change from 14MP in the G10 to 10MP in the G11... to get better performance?

  6. dulaney22

    dulaney22 Mu-43 Regular

    Aug 18, 2010
    Yes, that helps and makes sense. Thank you. I figured there was a trade-off, but didn't know it was light-related.
  7. dulaney22

    dulaney22 Mu-43 Regular

    Aug 18, 2010
    Kevin, I appreciate you describing the photon capturing process as it relates to pixel density. For the rest of us dummies, here's a good article I found on the subject.

    Clarkvision: Does Pixel Size Matter

    It seems the more I read and study, the less I thought I knew . . . LOL!!
  8. panystac

    panystac Mu-43 Regular

    Sep 14, 2010
    Tokyo, Japan
    It has to do with noise and dynamic range.
    Let's suppose there's a Full Frame sensor and a 4/3 sensor, both with 12M pixels.

    The FF sensor has 4 times the area of the 4/3 sensor. If the same lens throws the same brightness of light onto each sensor, then - since both have 12M pixels - each pixel on the FF has 4 times the area, and so receives 4 times as many photons as a pixel on the 4/3.

    When pixels are not receiving any light, they will still spontaneously release a few electrons, which we will see as "noise" on our resulting images.

    This may not matter too much in bright light, but when it's almost dark, the number of photons hitting each pixel becomes important.

    Let's say that for a 4/3 sensor at a particular low light level, 5 photons hit a particular pixel, and the pixel releases 5 electrons, plus 1 spontaneous electron. The "signal to noise ratio" is 5:1.
    For the same low light level on a FF sensor, 20 photons hit a pixel, and it releases 20 electrons, plus 1 spontaneous electron.
    The "signal to noise ratio" is 20:1

    So even though the megapixel count is the same, the FF sensor has much better low-light performance than the 4/3.
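    The photon-counting arithmetic above can be sketched directly, using the same illustrative numbers (one spontaneous "dark" electron per pixel standing in for the noise floor):

```python
# Sketch of the signal-to-noise arithmetic: same 12 MP count, but each
# full-frame pixel has 4x the area, so it catches 4x the photons at the
# same low light level.  One spontaneous electron models the noise.

def snr(signal_electrons, noise_electrons=1):
    """Signal-to-noise ratio as signal electrons per noise electron."""
    return signal_electrons / noise_electrons

m43_signal = 5               # photons caught by a 4/3 pixel in low light
ff_signal = m43_signal * 4   # same light, 4x the pixel area

print(f"4/3 SNR: {snr(m43_signal)}:1")   # 5.0:1
print(f"FF  SNR: {snr(ff_signal)}:1")    # 20.0:1
```

With the same exposure, quadrupling the per-pixel signal against a fixed noise floor quadruples the SNR - which is the whole low-light argument in this post.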