
Photo Physics 101 v.0.02

Discussion in 'Open Discussion' started by pheaukus, Jul 23, 2012.

  1. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    Photo Physics in a Nutshell v.0.02

    The function of a camera is not to capture light. Cameras are image information receivers. If you do not see the difference please read this post carefully. Thank you.

    Why I post this thread
    On this forum there are many questions asked and answers given regarding the purchase and practical consequences of different kinds of photo equipment. The given answers are at times misleading. This may even be the case for answers coming from seasoned photographers. I believe the reason for this is that some of the physics involved in photography may be counterintuitive to what one experiences in photographic praxis. I hope this post can provide newcomers with a clear explanation of photo physics, and that it can help more experienced photographers to purchase and use their equipment based on better comprehension.


    A kind request
    This post took a long time to write. The information contained in it took me quite some time to collect and figure out. Please be gentle in reading and reacting. Please read it all and take some time to comprehend what was meant by the author. Please provide errata and point out typos so the text can be improved. Suggestions from native English speakers are very welcome. In the near future I will add more illustrations and some equations so this post becomes an easier read.
    Thank you in advance!



    Introduction: Information Painters

    What differentiates photography from painting?
    A painter perceives reality, makes up their mind about it, and rebuilds it in a manual process from scratch on a white piece of canvas. Photographers on the other hand capture an image of reality in a way which only adds filtering. While a painter conducts a process that involves analysis and synthesis, a photographer lets the camera take over image synthesis so they can focus completely on analysis. This does not mean a photographer is any more or less of an artist than a painter - they just have found a way to focus on entirely different aspects of artistry. Photography is all about selection and filtering. This is made possible by modern technology, which is why I personally think photography is cool: Photography is the first truly modern art form.

    At this point one can raise the question: what is it that photographers filter?
    What is their medium? The obvious answer, that photographers work with light, is only half the truth. Light is just the input of the photographic process. Its output is information. In a film camera, information from the incoming light signal is directly inscribed into the film grains. In a digital camera, light is converted to electric energy which then is quantified in bits of information. There is a direct relation between the number of incoming photons, the electric current they produce and the amount of information we can quantify from that current. That relation is linear, but it has limits. There may be too few incoming photons to detect, or there may be so many incoming photons that the electric current is outside of our detectable range. The range between these limits of detection is called dynamic range.
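
    To make the linear relation and its limits concrete, here is a minimal Python sketch with made-up numbers; the read noise, full-well capacity and one-photon-one-electron conversion are purely illustrative, not values from any real sensor:

    ```python
    import numpy as np

    # Minimal sketch of a linear sensor response with a floor and a ceiling.
    # The numbers (read noise, full-well capacity) are made up for illustration.
    read_noise_e = 3          # electrons: signals below this drown in noise
    full_well_e = 30_000      # electrons: signals above this clip to white

    photons = np.array([1, 10, 100, 1_000, 10_000, 100_000])
    electrons = np.clip(photons, 0, full_well_e)   # linear until the pixel saturates

    for p, e in zip(photons, electrons):
        status = "lost in noise" if e < read_noise_e else ("clipped" if p > full_well_e else "usable")
        print(f"{p:>7} photons -> {e:>6} e-  ({status})")

    # Dynamic range between the two limits, expressed in stops (powers of two)
    print("dynamic range ~", round(np.log2(full_well_e / read_noise_e), 1), "stops")
    ```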

    So what do photographers do to information in order to create art?
    Photographers work with the structure of information. Some aspects of the information structure have to do with how the human eye and brain see the world. Other aspects have to do with the photo subject, perspective and framing. And finally, there are a few objective technical aspects, the imaging parameters. This post is dedicated to clarifying the effect of each of those parameters on the photograph. It is my hope that this post will help forum members to improve their skills as information painters.

    The following parts of this post introduce imaging parameters starting from the imaging surface, that is the sensor or film used to convert the incoming light signal to information. The description then moves on towards the photographed subject. With each step, the information in the photograph will experience a modulation of a different kind. With each part, a new parameter is introduced and explained, building on the explanations provided before. The progression resembles building the information-receiving camera from the inside out, adding part by part.

    The imaging parameters are:
    1. Shutter Speed (S)
    2. Film Speed (ISO)
    3. Signal to Noise Ratio (SNR)
    4. Imaging Surface Area (Format)
    5. Depth of Field (DOF)
    6. Aperture (A)



    1. Shutter speed (S)

    Shutter speed is the amount of time the imaging surface - sensor or film - is exposed to an incoming light signal. With a longer shutter speed we may lose some of that signal's information to motion blur. This loss is not per se bad. Based on artistic considerations, a photographer may want to introduce blur to indicate that there is movement happening. Motion blur also allows the photographer to emphasize specific parts of the photograph.
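
    As a rough illustration of how shutter speed turns subject movement into blur, here is a back-of-the-envelope Python sketch; all the values (subject speed, distance, focal length, pixel pitch) are hypothetical, and the magnification is approximated as focal length divided by subject distance:

    ```python
    # Back-of-the-envelope sketch: how much motion blur a given shutter speed causes.
    # All numbers are hypothetical; magnification is approximated as f/d for d >> f.
    subject_speed_mm_s = 1_000.0   # subject moving 1 m/s sideways
    distance_mm = 5_000.0          # 5 m away
    focal_length_mm = 25.0
    pixel_pitch_mm = 0.00375       # ~3.75 micron pixels

    for shutter_s in (1/1000, 1/250, 1/60, 1/15):
        magnification = focal_length_mm / distance_mm
        blur_on_sensor_mm = subject_speed_mm_s * magnification * shutter_s
        blur_px = blur_on_sensor_mm / pixel_pitch_mm
        print(f"1/{round(1/shutter_s):>4} s  ->  blur ~ {blur_px:5.1f} pixels")
    ```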


    2. Film Speed (ISO)

    This parameter is named after the ISO (International Organization for Standardization) standards ISO 5800:1987 and ISO 12232:1998, which define film speed for film and digital photography respectively. It is also referred to as film speed. For film photography, ISO means that a film exposed to light of a specified intensity will change brightness at a rate defined by the standard. When a 50% grey card is photographed, the film speed standard ensures that, with a properly chosen shutter speed, the film will have a 50% grey tone. By analogy, a digital imaging system has to be tuned such that, after being exposed to the light for the correct amount of time, it provides a quantified value that stands for 50% brightness.
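
    A small Python sketch of the reciprocity implied by these standards: to render the same middle-grey tone, the required exposure scales inversely with ISO. The base shutter time is an arbitrary example, not a metered value:

    ```python
    # Sketch of the reciprocity implied by the film-speed standards: to render the
    # same middle-grey tone, the required exposure (light per unit area) scales
    # inversely with ISO. Shutter times below are illustrative, not metered values.
    base_iso, base_shutter_s = 100, 1/50   # assume 1/50 s gives a correct exposure at ISO 100

    for iso in (100, 200, 400, 800, 1600):
        shutter_s = base_shutter_s * base_iso / iso   # same scene, same aperture
        print(f"ISO {iso:>4}: shutter ~ 1/{round(1/shutter_s)} s for the same middle grey")
    ```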


    3. Signal to Noise Ratio (SNR)

    This is a parameter mostly spoken about in digital photography, even though it applies to film photography in equal measure. SNR is a measure of the strength of a detected light signal relative to the noise that occurs in detecting it. A better SNR means that the camera can provide more of the information contained in an incoming light signal.

    With SNR there seems to be some confusion about the way it relates to the number of pixels on a camera sensor. Therefore SNR will be explained here in more depth, especially for digital imaging. If two pixels with the same SNR receive the same light signal, each will provide a quantifiable amount of information. For each pixel, that amount of information is equal to the amount of incoming light signal multiplied by the SNR. The total information received by our two-pixel system is the amount of information per pixel times two, because we have two pixels. We can combine the information received by the two pixels in a procedure called pixel binning. With pixel binning we add the two signals received from the two pixels. The signal in the two pixels is synchronous, while the noise in each pixel is random. The two signals add up, while the noise contributions partially cancel one another. After binning the two pixels we have only one signal left, but it has a much better SNR. As a result, the amount of captured information is the same.
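
    For readers who like to check such claims numerically, here is a tiny Monte-Carlo sketch in Python of the binning argument; the signal and noise levels are arbitrary:

    ```python
    import numpy as np

    # Tiny Monte-Carlo check of the binning argument above: two pixels see the same
    # signal plus independent noise; summing them adds the signals coherently while
    # the independent noise only grows like sqrt(2), so SNR improves by ~sqrt(2) (~41%).
    rng = np.random.default_rng(0)
    signal = 100.0
    noise_sigma = 10.0
    n_trials = 200_000

    pixel_a = signal + rng.normal(0, noise_sigma, n_trials)
    pixel_b = signal + rng.normal(0, noise_sigma, n_trials)
    binned = pixel_a + pixel_b

    snr_single = signal / pixel_a.std()
    snr_binned = (2 * signal) / binned.std()
    print(f"single pixel SNR ~ {snr_single:.1f}")
    print(f"binned pair SNR  ~ {snr_binned:.1f}  (ratio ~ {snr_binned / snr_single:.2f})")
    ```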


    4. Imaging Surface Area (Format)

    So far, all introduced parameters dealt with either time or information. At this point, spatial extension is added: imaging parameters that are based on length, distance, and area. The imaging surface area, commonly known as format, is a measure of the size of a camera's sensor or film slides.

    Format is a touchy topic on the m43 forum. To some people, mentioning format suggests an attack on the m43 format because there are larger cameras in existence. To other people, mentioning format means supporting m43 because some other cameras are smaller than m43. It is not my intention to attack or defend anybody's choices. I am plainly interested in providing factual information so people can base their decisions on facts.

    In relation to the parameters mentioned before, there is general agreement that shutter speed is not affected by format. I have, however, noticed some confusion regarding the implications of the interrelations of ISO, SNR and format for the amount and quality of information captured. Therefore I will explain these interrelations in detail.

    The basic point of confusion is about the amount of light signal that is captured by a camera of a certain format at the same shutter speed, the same light intensity and the same film speed. The intuitive understanding of some photographers seems to be that a larger camera does not capture more information. This is however wrong. So far the simplest illustration I have found of how format actually works is to view a film slide at the molecular level. A film slide is a carrier material, e.g. celluloid or some kind of plastic, covered with a chemical film that is sensitive to light exposure. The chemical film contains molecules that change color when they are hit by photons. For a film to be ISO 100, a specified percentage of its molecules will have to be hit by photons within the time given by the shutter speed in order for it to turn 50% gray. The density of these molecules on an ISO 100 film does not change with film size or format. That is so because film size is not part of the standard ISO definition of film speed. Each square mm of the film will have the same number of molecules, each of which has to receive a photon to react. That means that a larger area of film, a larger camera format, will have to receive more incoming light signal to achieve the same tone values. A larger format therefore will provide more light information to the photographer to work with.
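
    Here is a short Python sketch of that scaling, using approximate sensor dimensions and an arbitrary photon density, to show that the total light collected at the same exposure grows with surface area:

    ```python
    # Sketch of the scaling described above: at the same exposure (same illuminance on
    # the imaging surface for the same time), the total number of photons collected
    # scales with the surface area. Sensor dimensions are approximate.
    formats_mm = {
        "1/2.3-inch compact": (6.17, 4.55),
        "Micro Four Thirds":  (17.3, 13.0),
        "Full frame":         (36.0, 24.0),
    }

    photons_per_mm2 = 1.0e6   # arbitrary illustrative photon density for one exposure

    mft_area = 17.3 * 13.0
    for name, (w, h) in formats_mm.items():
        area = w * h
        total = area * photons_per_mm2
        print(f"{name:<20} area {area:7.1f} mm^2  total photons ~ {total:.2e}  "
              f"({area / mft_area:.2f}x MFT)")
    ```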

    So a larger format sensor will always receive more incoming light signal, all other parameters being the same. Does that mean it always provides more imaging information? In the next part, we will see that this is not the case.


    5. Depth of Field (DOF)

    Up to now all introduced imaging parameters describe phenomena occurring directly on the imaging surface, i.e. on the sensor or film slide. With this part the explanation of imaging parameters moves out of the plane into three-dimensional space.

    It is commonly accepted that a larger camera will provide an image with less DOF, all other parameters being the same. DOF blurs imaged subjects in front of and behind the focus area. This DOF blur constitutes a loss of imaging information. The following drawings illustrate how DOF works.

    Please note that the lines of light indicate only the outer boundaries of the light beams involved in taking the photo. In reality, all light emitted from one point on Mr Smiley's face that travels towards the lens surface will be redirected by the lens towards one point on the imaging surface.

    [Illustration: mr_smiley]

    On top we have Mr. Smiley. Mr Smiley emits light equally in all directions. Even each infinitesimally small point on the surface of Mr Smiley's face emits light in all directions.

    Then we see how Mr Smiley is photographed with a small camera. We see that Mr Smiley's face covers the entire imaging surface. This is so because of both the focal length of our lens and the distance between the camera and Mr Smiley. A lens of a specific focal length has a specific field of view (FOV). With a different focal length, we would see more or less of Mr Smiley appear on the imaging surface.

    On the bottom we see how Mr Smiley is photographed with a large camera. We have the same FOV. This is possible because a lens of larger focal length is attached to the larger camera.

    So how does DOF occur?
    For the larger system, the angle under which light from Mr Smiley's face approaches the lens is different. This is increasingly so towards the outer edges of the lens. Look at the difference between angles alpha and beta in the illustrations. While both cameras can capture a flat object with equal sharpness, the effect of moving a subject in front of or behind the focal plane will be much stronger with the larger camera. DOF will be more shallow the closer we move the camera to Mr Smiley's face. The further we move the camera away from Mr Smiley, the more DOF we have and the more of him is in focus. That is why in macro photography we have a very shallow DOF, and why in astrophotography we have a very deep DOF.
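
    For those who want numbers, here is a rough Python sketch using the common thin-lens depth-of-field approximation; the circle-of-confusion values are conventional rules of thumb and the scenario (Mr Smiley at 2 m) is invented:

    ```python
    # Rough depth-of-field comparison using the common thin-lens approximation
    #   DOF ~ 2 * N * c * u^2 / f^2   (valid well inside the hyperfocal distance),
    # where N is the f-number, c the circle of confusion, u the subject distance,
    # and f the focal length. The circle-of-confusion values are conventional, not exact.
    def dof_mm(f_number, coc_mm, subject_mm, focal_mm):
        return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

    subject_mm = 2_000   # Mr Smiley stands 2 m away
    cases = [
        ("Micro Four Thirds, 25mm f/4", 4, 0.015, 25),
        ("Full frame, 50mm f/4 (same FOV)", 4, 0.030, 50),
        ("Full frame, 50mm f/8 (stopped down)", 8, 0.030, 50),
    ]
    for label, n, coc, f in cases:
        print(f"{label:<38} DOF ~ {dof_mm(n, coc, subject_mm, f)/10:.0f} cm")
    ```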

    So smaller cameras will always be sharper in depth, all other parameters being the same. Why, then, is shallow DOF so valued by photographers? This is because DOF is one of the photographer's strongest means to structure image information. Shallow DOF allows a photographer to isolate elements of the image for artistic reasons. DOF does not occur in equal measure in all types of photography. For astrophotography DOF does not matter at all, because all light coming from a galaxy will hit the lens at an almost perpendicular angle. For astrophotography, a larger lens or mirror will always result in better images. For practical reasons, the light coming from a telescope is concentrated onto a small sensor surface though. Otherwise the amount of incoming light hitting the sensor surface per square mm would not be strong enough to be detected; the signal would be outside of the dynamic range of the sensor. This kind of concentration, however, only rarely occurs in camera systems. For macro photography DOF matters a lot. A compact camera will be able to capture sharp images of an insect, while a large format camera at the same magnification will have a focus area so shallow that only a slice of the insect's eye may be in focus.


    6. Aperture (A)

    This is the last imaging parameter to explain. Aperture is a measure of the proportion of the light signal entering the lens surface which is allowed to reach the imaging surface. Aperture is controlled with a diaphragm that stops down the entrance pupil of the lens. This has the effect of changing angles alpha and beta in the illustrations.

    Aperture has two effects. For one, it allows the photographer to compensate for changing light conditions at the same shutter speed. The second effect of aperture is that it is directly related to DOF. We can stop down the larger camera until angle beta equals angle alpha, and the larger camera then captures an image with the same DOF. This does not mean, however, that the same image is now captured. For an equal amount of light to reach the imaging surface, the exposure time will have to be lengthened. This possibly introduces motion blur.
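
    A small Python sketch of the entrance-pupil arithmetic behind this; the lens/format pairings are illustrative:

    ```python
    import math

    # Sketch of the entrance-pupil arithmetic behind the aperture parameter:
    # pupil diameter = focal length / f-number, and the light passed scales with
    # the pupil area. The lens/format pairings below are illustrative.
    lenses = [
        ("25mm f/4 (Micro Four Thirds)", 25, 4),
        ("50mm f/4 (full frame, same FOV)", 50, 4),
        ("50mm f/8 (full frame, stopped down)", 50, 8),
    ]

    base_area = math.pi * (25 / 4 / 2) ** 2   # reference: the 25mm f/4 pupil
    for label, focal_mm, f_number in lenses:
        diameter_mm = focal_mm / f_number
        area = math.pi * (diameter_mm / 2) ** 2
        print(f"{label:<38} pupil {diameter_mm:5.2f} mm  relative light {area / base_area:.2f}x")
    ```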


    Conclusion: What about exposure?

    Exposure is a secondary imaging parameter derived from aperture and shutter speed. It is not a basic parameter of photo physics, even though it is very useful in the daily praxis of photography. While the effects of DOF and motion can be judged directly in the viewfinder, exposure helps photographers to keep the more abstract parameters ISO, shutter speed and aperture in balance. Because exposure on its own does not indicate the amount of motion blur or DOF, it is not an indicator of the amount of information contained in a photograph, or of the number of photons detected.
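
    As a concrete illustration of how exposure is derived from aperture and shutter speed alone, here is a short Python sketch computing exposure value, EV = log2(N^2 / t); the settings are arbitrary examples that land on nearly the same EV while implying very different DOF and motion blur:

    ```python
    import math

    # Exposure value derived purely from aperture and shutter speed: EV = log2(N^2 / t).
    # The settings below are arbitrary examples that land on (nearly) the same EV,
    # yet imply very different depth of field and motion blur.
    settings = [(2.8, 1/100), (4.0, 1/50), (5.6, 1/25), (8.0, 1/13)]
    for n, t in settings:
        ev = math.log2(n ** 2 / t)
        print(f"f/{n:<4} at 1/{round(1 / t):>3} s  ->  EV {ev:.1f}")
    ```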

    Of the mentioned parameters, format is a subject that causes many arguments on the forums. From the explanations provided above, it can be concluded that a scaled-up camera will always lead to more captured image information in the following situations: still scenes (landscapes), very distant objects (astrophotography) and flat objects (copying). A scaled-down camera will always capture more image information in photography of very near subjects that have relatively large depth (macro photography). An oddity occurs with photography of very dark objects (astrophotography), where the light collected by a large lens or mirror has to be concentrated within a small surface area in order to be detectable. Considering that astronomical observatories are generally not pocketable, I believe it is safe to count them among the large cameras though.

    Please note that image information is an objective technical term that has close to nothing to do with artistic image quality. The mentioned imaging parameters are only objective technical influences on photographic art. Any photographer will deem other parameters that relate to practicality and artistic expression as equally or more important, depending on their subject matter and personal preference. Good comprehension of the effects of the mentioned imaging parameters on the structure of imaged information however will empower photographers to realize their artistic intentions to the fullest extent.
     
  2. DeeJayK

    DeeJayK Mu-43 Hall of Famer

    Feb 8, 2011
    Pacific Northwest, USA
    Keith
    You've obviously put a lot of time and thought into this post, which I respect and appreciate. I read through it and everything you stated appears correct, at least to my semi-newbie knowledge level.

    However, I'm not really clear on who your intended audience is for this post. Sure, a base of knowledge on the science behind the art of photography can be helpful, but I'm not sure that I got anything out of the post that I can apply to actually making images.

    Your largest point seems to be the effect that sensor size has on depth of field, but I'm not sure this Wikipedia entry doesn't do a better job of explaining that relationship. While this issue of DOF on m4/3 (particularly as compared to that achievable by cameras with a larger sensor) is a common topic of discussion on this forum, I don't feel that there are many around here who are disputing the scientific facts that you are presenting.

    I feel like the subject(s) that you are covering are really too broad to be explained in a single forum post. Perhaps breaking this out into multiple, more tightly focused threads or posts would make it more approachable. For example, why not break the shutter speed, film speed and aperture discussion out into its own post that examines the relationship between those three variables in exposing an image. Perhaps a link to something like Camera Sim, which allows the user to instantly see the relationship between these attributes and their impact on the image captured, would be helpful.

    Just my 2 cents, feel free to take or leave my suggestions.
     
  3. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    Thanks for reading and for your suggestions!

    The reason I put this information here is mainly to help people base their purchase decisions on the right information. Most of the info mentioned here is indeed not important for taking pictures most of the time.

    I know there are already a lot of tutorials and introductions to photography out there. I have noticed that most of them are aimed strictly at photographic praxis and therefore do not provide some of the facts mentioned here. I believe that this causes misunderstandings more commonly than one might think. Therefore I do not want to repeat those tutorials. I'd rather 'shed light' on some things most tutorials tend to 'underexpose'.

    / Edit - add on: My main point is to communicate the concept of information as objective measure for imaging and to use it as basis for explaining photo tech. To a photographer, this may be a fresh way of looking at things because it is somewhat an outsider's perspective. To a newcomer, it may provide a compact set of knowledge that augments the standard tutorials and sites like the ones you mention. I should indeed place some links in the first post so these can be found more easily.

    One rather common misunderstanding is that exposure is equated with the amount of light signal, independently of format. People may also overlook that larger lenses will capture more light from the same subject and can deliver it to a larger sensor without optical magnification. If people are very convinced about their understanding of such matters, they tend to have developed a slight misunderstanding of other factors as well, e.g. that aperture in some way evens out the deviations caused by the other misunderstandings. In such cases, a simple link to this thread may help communicate the missing bits and pieces all at once.
     
  4. mister_roboto

    mister_roboto Mu-43 Top Veteran

    637
    Jun 14, 2011
    Seattle, WA, USA
    Dennis
    Anything with stickman face drawings will always make it better.
     
  5. David A

    David A Mu-43 All-Pro

    Sep 30, 2011
    Brisbane, Australia
    I skimmed the original post. Most of it is probably correct but I had a hard time reading and making sense of it. Statements like the following drive me crazy:

    "That is so because film size is no part of the standard ISO definition of film speed. Each square mm of the film will have the same amount of molecules which each have to receive a photon to react. That means that a larger area of film, a larger camera format, will receive more incoming light signal and therefore provide more light information to the photographer to work with."

    The reason a larger film format receives more light is not because there are more molecules receiving light, and a larger sensor doesn't receive more light because of its size or the number of pixel sites.

    You and I were arguing this point in another thread which has since died. I'll admit it—you were right in claiming that the larger format receives more light but for the life of me I couldn't grasp the reason from your explanation, and the reason for it is still obscured by this explanation.

    I managed to work out for myself what the reason was a couple of days after our last exchange. It's simple. As format size increases, the focal length of the lens required to produce the same field of view increases. That doesn't affect exposure (on that we did agree) so as far as exposure goes, a 25mm lens on a M43 camera at F/4 and 1/1000 sec will produce exactly the same results as a 50mm lens on a FF camera at F/4 and 1/1000 sec. I was wrong in assuming that both lenses were passing the same amount of light to the sensor because aperture and shutter speed were the same.

    The reason the FF sensor receives 4 times the amount of light of the M43 sensor is that the diameter of the aperture pupil of the 50mm lens at F/4 is 12.5mm and that of the 25mm lens at F/4 is 6.25mm. An aperture diameter that is twice the diameter of another has 4 times the area of the smaller opening and passes 4 times more light. So the larger FF sensor receives 4 times the light of the M43 sensor in this example, and is also 4 times the area of the M43 sensor.

    That I can understand, but any mention of aperture diameter and its effect on the amount of light passed to the sensor, and the change in size of aperture diameter when using the same exposure settings with lenses of equivalent field of view at different format sizes was totally missing from the explanation in the previous thread. Your quite correct assertions made absolutely no sense to me because the reasons given also made no sense.

    The simple explanation, the correct explanation, is that more light falls on the larger format film/sensor receiving the same exposure as the film/sensor in the smaller format simply because the size of the hole through which the light passes to get there, the diameter of the aperture opening, increases with the increase in focal length required to produce the same field of view on the larger format.

    In other words, the amount of light falling on a larger piece of film, all other things being equal, isn't greater because there are more molecules in the film layer of the larger piece of film. It's greater because the diameter of the aperture of the lens being used to expose the film is larger than the diameter of the aperture of the lens being used to expose the smaller piece of film if both lenses are producing the same field of view.

    Your statement suggests that the amount of light falling on the film depends on the number of molecules in the film layer, and as that increases with increases in format size, the amount of light falling on the film increases. The amount of light for a correct exposure certainly increases for that reason, and the exposure required for both the large and small formats is exactly the same. But the thing which ensures that the larger format film/sensor receives more light isn't a characteristic of the film or sensor; it's the fact that the diameter of the aperture opening required to give the same exposure will be greater in the longer focal length lens being used on the larger format camera, and more light passes through an opening as the diameter of the opening increases.

    Your claim is certainly correct. I don't think your explanation is.
     
  6. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    I am sorry for that and will try to improve the text accordingly.

    / Edit: The pixels or molecules are signal receivers. No matter how much light hits the imaging surface, or how large the imaging surface is, the amount of signal received depends on the amount of signal receivers. The amount of information that can be derived from received signal depends on SNR.

    Imagine a camera setup where, at the push of a button, and with everything else staying the same, we can exchange a smaller sensor for a larger sensor. The larger sensor with the same density of signal receivers will receive more light signal than a smaller sensor simply because of its larger size. For the sake of simplicity we can remove the rest of the camera from this scenario. This type of reduction is an annoying habit of physicists. But you are right, I did not formulate the above paragraph as clearly as it could be. I will have to emphasize the causal relations between the facts mentioned...
    It is better to state that, in order to be properly exposed according to ISO standard, the film will have to receive a specific amount of light signal. The amount of light needed grows with the format.

    At this point, illustrations should help a lot as well.

    At the time I had not figured out most of these things, which certainly caused many of my attempts at explanation to be confusing if not wrong. I noticed some things would not work if the same amount of light were captured in a larger camera, and tried to point them out without fully understanding all their relations myself.

    I noticed that, that's why I tried to make everything clear for myself first and then write it down in a more organized way. I am happy to see that I succeeded at least to some degree, and have learned some important things along the way. This would not have happened without the other discussion you mentioned.

    / Edit: even though I was not aware of the actual terminology, I did ask you to simply look at camera lenses for systems of different sizes. Just in case that point did not come across, I also mentioned that the relationship between exposure, ISO and format would not make sense if the amount of incoming light were equal for larger sensors. More incoming light signal can be received only if the number of signal receivers is increased as well.

    / Edit: The larger film slide is hit by more light because it has a larger surface area. It receives more light signal because it contains more signal receivers.

    I tried to introduce the parameters one by one, building up on one another. In the build-up of my explanation, at that point, the lens, the subject, and aperture opening are not yet introduced. The interrelation of ISO, format and SNR can be defined without going into the third spatial dimension, but maybe it needs more care than I have put into it so far.

    / Edit: please note that at that point I am writing about smaller and larger sensors, not about smaller and larger cameras.

    It is true that the approach I chose here is counter-intuitive to the making of photographs. Still I would say all these factors are interrelated and an explanation can start at any point. My idea was to turn things a bit inside out to highlight factors one might miss otherwise. When an explanation starts at the sensor or film slide, attention can be directed towards the fact that photographers are recording information. Then one can trace back where that information comes from and how it was modulated in each step. I think most photographers are very used to thinking in terms of light and not so much in terms of information, and may find this to be an interesting albeit unusual approach. This is why I also treat DOF and motion blur as types of information loss rather than light phenomena.

    Maybe we can both agree that my explanation at this point is at least not as clear as should be. I hope that after some improvement you may also find it to be correct.

    / Edit: I am wondering whether in skimming you may have missed the entire information part, which is the main point of my original post. The function of a camera is not to capture light. Cameras are light information receivers. That is an important difference.

    I really appreciate your posts, thank you very much!
     
  7. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    I wonder whether by skimming you may have missed the main point of my post. I added some more explanation to my reaction to your post, above this post.
     
  8. David A

    David A Mu-43 All-Pro

    Sep 30, 2011
    Brisbane, Australia
    I'm not clear what the distinction you're making is. At the moment I'm a bit with Marshall McLuhan that "the medium is the message". I think all cameras capturing the same field of view, regardless of film frame/sensor size and number of "information receivers" in the film frame/sensor, receive/capture exactly the same signal, but there are differences in how much information different cameras can extract from that signal.

    My views—I could be wrong but let's see if someone can explain clearly where and how I am wrong if that's the case:

    1- "The larger film slide is hit by more light because it has a larger surface area. If it did not have more signal receivers as well, it would not receive more light signal though."

    The larger film slide/sensor needs more light because it has a larger surface area. It doesn't get more light because it has a larger surface area. It gets more light because capturing the same field of view on a larger film slide/sensor requires a longer focal length lens. Since the physical diameter of the aperture is mathematically related to the focal length of the lens, the physical diameter of the aperture increases proportionately with focal length, and the required focal length increases proportionately with film slide/sensor size. So the actual amount of light passed to the larger film slide/sensor increases because the larger physical diameter of the lens aperture passes more light during the exposure.


    2- " The pixels or molecules are signal receivers. No matter how much light hits the imaging surface, or how large the imaging surface is, the amount of signal received depends on the amount of signal receivers it contains. The amount of information that can be derived from received signal depends on SNR."

    I think this is only partially correct. What the number of receivers determines is the resolution of the image and I'm inclined to think that that is a result of the size of the "information packages" which are captured rather than on the amount of signal captured. I think the actual signal captured, the amount of light of different wavelengths and intensities, is the same for a 12 megapixel sensor of a given size as it is for a 24 megapixel sensor. The number of information receivers doesn't determine how much signal is captured, but how well that signal is sampled and that is one of the things which determines how much information you can extract from the signal. The other is SNR.


    In summary:

    I agree that a camera is an information capturing device.

    I agree that larger film frame/sensor sizes require a greater amount of light but not that the size of the film frame/sensor determines how much light is captured.

    I don't agree that the number of information receivers on the film frame/sensor determines how much signal is captured.

    So in my view:

    The size of the film frame/sensor in conjunction with the ISO setting jointly determine how much signal/light needs to be captured to get an appropriate exposure.

    The physical diameter of the lens opening, determined by the combination of lens focal length and F stop chosen, together with shutter speed, determine how much light/signal is actually captured.

    The relationship between focal length and film frame/sensor size ensures that the amount of light/signal captured by a given set of exposure parameters (aperture, shutter speed and ISO setting) is appropriately scaled for the requirements of the film frame/sensor size.

    The number of information receivers in the film frame/on the sensor is one of 2 factors determining how much information can be extracted from the signal, SNR being the other factor.

    That's my understanding at the moment.
     
  9. mattia

    mattia Mu-43 Hall of Famer

    May 3, 2012
    The Netherlands
    Interesting piece, although it strikes me as somewhat confused in terms of structure.

    You begin with the least important part of the image chain, namely the camera. Except you start with shutter speed, then move to technical, sensor-specific parameters (ISO, SNR) before shifting to capture-area-related factors (sensor size and, to a lesser degree, depth of field), before moving to one of the more difficult but most crucial factors, aperture.

    For me, it makes far more sense to work in from the most important thing in photography: the subject, and the light emanating from or reflected off of said subject. Which means explaining aperture, most importantly the concept of stops, and then adding shutter speed and ISO into the mix; understanding that shifting any of these parameters by one stop (for shutter and ISO, this means halving or doubling the value) will halve or double the amount of light you let in, and that the interplay between these three factors lies at the heart of exposure, which is key. Understanding how to maintain proper exposure while achieving the visual/artistic impact you want is a precondition for optimal information collection.
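
    A quick Python sketch of the one-stop arithmetic described above; relative image brightness is modelled simply as ISO x t / N^2, and the settings are illustrative:

    ```python
    # Quick check of the one-stop arithmetic described above: relative image
    # brightness goes as ISO * t / N^2, so halving the shutter time while doubling
    # the ISO (or opening the aperture one stop) leaves the result unchanged.
    # The settings are illustrative.
    settings = [
        (2.8, 1/500, 200),
        (4.0, 1/250, 200),
        (5.6, 1/125, 200),
        (5.6, 1/250, 400),
        (5.6, 1/500, 800),
    ]
    for n, t, iso in settings:
        relative = iso * t / n**2
        print(f"f/{n:<4} 1/{round(1/t):>4} s  ISO {iso:>4}  ->  relative brightness {relative:.3f}")
    ```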

    In short, interesting though your approach is, I don't think it does anything but confuse the issue and make it less easy for photographers to understand what they're doing when exposing an image.
     
  10. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    Edit: have a look at angles alpha and beta in the illustration to my explanation of DOF. The face emits a light signal which has equal density in all directions. The larger camera will be hit by a wider angle of light as emitted by the face. That is why it will receive more light signal.

    Imagine the smaller and the larger sensor are taken out of their cameras and placed in front of a source of uniform, parallel light. The larger sensor will be hit by more light (a larger number of photons) because it covers a larger area that intersects the stream of light.

    Let me try to formulate it in the most clear and correct way:
    A 12 megapixel sensor will receive 12 million signals with 12 million signal receivers. A 24 megapixel sensor will receive 24 million signals with 24 million signal receivers. How much information is contained in those signals depends on SNR and the number of pixels.

    You are right, I did not include sampling and colour depths yet. This should be a good thing to add because it will also clarify the advantages of ETTR.

    That's true, the number of sensors just determines how many individual signals are captured. All individually received signals however can be understood to constitute one combined signal. With this combination I do not mean pixel binning but that one can see the signal at a larger scale. This combined signal has same SNR as each pixel individually and same information content as all pixels combined.

    The funny thing about RAW files is that they are just recordings of the combined signal, so their bit count is not equal to the amount of received information. Because of that, it is possible to apply noise reduction algorithms that work for specific types of signal loss that occurred at different stages of the imaging process - sharpening, denoise, and so forth. With these software filters, it is possible to further reduce information loss from the raw signal and get files that contain more pure imaging information.

    I think you are absolutely correct in all instances. I also think that does not mean I am wrong :wink: Classic misunderstanding.
     
  11. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    I would not say it is an absolute truth that the camera or the subject are the most important thing in photography.
    Photographs are very appreciated as well. :wink:

    In my text I attempt to explain where the photograph comes from. I hope this alternative way of looking at photo imaging may augment, not replace the classical way of describing photography. I believe that knowledge of information theory becomes ever more important to photographers.

    Looking at photography this way provides an objective measurement for gauging and comparing camera equipment. Unlike image quality, the amount of obtainable image information is a quantifiable, objective and comparable measurement. That means that next to personal preferences, one can determine the absolute advantages of one system over another and choose based on grounded knowledge. It also may provide deeper understanding of many post-processing options.
     
  12. flash

    flash Mu-43 Hall of Famer

    Apr 29, 2010
    1 hour from Sydney Australia.
    Gordon
    Question:

    Why are the Mr Smilies different sizes? Assuming that it's the same subject with two different formats, if you want to have Mr Smilie's face the same size in both frames you'll have exactly the same angle of view from both systems. That means a longer lens on the larger format. Light will hit the sensors from the same angles in both cases, regardless of the sensor size (assuming that the aspect ratio is the same, and with 35mm and 4/3 it isn't). Or you could move one of the cameras to make the subject size in frame the same, but your drawing doesn't show that. Or you could have everything the same (camera, subject distance, exposure settings) and Mr Smilie would take up less of the frame on the larger sensor, but then the angle of view and field of view would be different.

    Personally, I prefer the analogy that photographers are sculpting with light, rather than the obvious but incorrect comparison with painters (drawing with light). A painter starts with nothing and creates everything in the frame. Sculptors start with a solid block and then remove the bits that aren't part of the artist's vision. Photographers are like that.

    Gordon
     
  13. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    As I write beneath my illustrations, I assume same distance, different camera size, and different focal length so that the FOV is the same. As long as both cameras have same FOV and are at the same distance from the face, the angles will be different as described. This is independent of the distance of the cameras to the face. Even when we photograph a galaxy angle beta would still be larger than angle alpha. Hence large telescopes.

    If you do not believe me I suggest you take two cameras of different format in your hands and see what happens. I have done so :redface:


    Maybe sculpting and painting are equally different from photography. Painting as you describe it is generative by addition, sculpting is generative by subtraction. The sculptor's block of stone is a homogeneous material; all information has to be added by the sculptor's hands. There are even additive sculpting techniques (clay) and subtractive painting techniques :wink:
     
  14. flash

    flash Mu-43 Hall of Famer

    Apr 29, 2010
    1 hour from Sydney Australia.
    Gordon
    "While a painter conducts a process that involves analysis and synthesis, a photographer lets the camera take over image synthesis so they can focus completely on analysis".

    I think I disagree with this point. That is, if you take synthesis to mean the making of something new by combining other elements. For example, by either adding or removing light from part of a scene I can combine objects in a way that they appear to be a single object, or make an object disappear altogether. I can make a white background look like a blue one by adding a blue light. I can use a long shutter speed to blur water or create star trails (neither is visible with the naked eye). Using a different lens will change the relationship between objects in a scene.

    Gordon
     
  15. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    True, a photographer can do much more than "just" take photos. I should reformulate that part. Post processing can also be a generative process.

    / Edit: In order to create a photographic work of art though, it is sufficient to "just" take photos. A sculptor who presents an unmodified rock would not be a sculptor but a concept artist.

    I think the shutter speed you mention constitutes a filter which results in loss of imaging information (the stars are blurred) for the sake of meaning and aesthetics.
     
  16. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    I just had a look at your site and have to agree, you are a sculptor of images :smile: I definitely have to reformulate that part.
     
  17. David A

    David A Mu-43 All-Pro

    Sep 30, 2011
    Brisbane, Australia
    If the face takes up the same proportion of both sensors, then angle of view for both sensors is the same. Both cameras are being hit by the same angle of light. The larger sensor doesn't receive more light because it is being hit by a wider angle of light. It is receiving light from the same collection angle but that light is passing through an aperture of greater physical diameter, even though it is an aperture of the same F number, because the focal length of the lens on the larger sensor is longer.

    And no, the face does not emit a light signal which has equal density in all directions. For a start it reflects rather than emits light, and secondly, it would take a sphere being struck by light of equal intensity from every direction to reflect an equal density of light in all directions. Faces are not spherical and are rarely lit by an equal intensity of light from every direction.



    True, but then we are no longer talking about photography because photography requires a camera. How a sensor behaves outside of a camera is not necessarily a guide to how it behaves inside a camera.

    In the camera the larger sensor does not get struck by more light because it has a greater area. It gets struck by more light because of the physically greater aperture opening that the longer lens delivering light to the sensor has.

    A 25mm lens on an M43 camera and a 50mm lens on a FF camera both receive light from the same angle. At F/4 the diameter of the aperture on the 25mm lens is 6.25mm; that of the 50mm lens on the FF camera is 12.5mm. That 12.5mm diameter aperture opening passes 4 times the light to the FF sensor that the 6.25mm diameter passes to the M43 sensor.



    No. I have an E-P3 with 12 megapixel sensor and an E-M5 with a 16 megapixel sensor. Both require the same exposure because the sensors have the same physical size. Each sensor receives the same number of photons during the exposure, the same number of signals, and each of the photons received by each sensor contains exactly the same amount of information. Both sensors receive identical signals. They simply gather that data in a different way, one in 12 million bundles, the other in 16 million. Both gather all of the signal.

    I'm not certain that it does clarify the advantages of ETTR, but clearly you need to include sampling because every pixel, or every molecule of photosensitive compound in a film negative, is a sampling device aggregating data from a number of signals. Each pixel does not receive one signal, but it does pass one set of values derived from all of the signals it receives.

    No, the number of sensors doesn't determine how many individual signals are captured. It determines how many samples are taken from the number of signals received. All signals received are captured, regardless of the number of pixels doing that capturing.

    No. The data coming from each pixel is not a recording of the combined signal; it is a single set of values derived by aggregation.

    Nice to know you think I'm right but I'm afraid that I think that if one of us is right the other is wrong. Our respective statements are not identical in meaning, not different ways of saying the same thing. We are each saying quite different things.
     
  18. flash

    flash Mu-43 Hall of Famer

    Apr 29, 2010
    1 hour from Sydney Australia.
    Gordon
    That was a poor example. How about this: you photograph a ball bearing rolling slowly across a table. I can use a shutter speed of 1/500, but how would that look any different from a photo of a stationary ball bearing? My second photo is at 1/2 a second. In this image the ball bearing is blurred. I have CREATED a sense of movement. I could refine the image further by adding a second sync flash. But in my opinion I have created information by changing the shutter speed. I've created a sense of movement. More importantly, I have told a different story from the first shot.

    Gordon
     
  19. flash

    flash Mu-43 Hall of Famer

    Apr 29, 2010
    1 hour from Sydney Australia.
    Gordon
    I kind of disagree with both of you. The most important part of photography is intent. If you don't know what you want a photograph to say there's no lens, sensor or aperture in the world that's going to make that image work the way you want it to, because you don't know what you want. All technical decisions should be made based on two things. Telling the story and working within the limitations of the equipment you have to tell it. Once you know what you want to say, learning the technical crap is easy.

    Gordon
     
  20. pheaukus

    pheaukus Mu-43 Regular

    178
    Jun 22, 2012
    If that were so, the surface of the face would be a mirror. Light that hits the face is bounced back mostly as diffuse reflection which travels in all directions.

    The shape of the face and the lighting do not matter. As long as the face is not a black hole or a mirror, it will reflect incoming light diffusely.

    As long as the light conditions within the cameras are the same with respect to the sensor surfaces, the behaviour of the sensors does not change when they are placed in the cameras. Analogy: a 500 horsepower engine will be a 500 horsepower engine that delivers a specific amount of force to the gears, independently of the gears, axles and tires of a car.

    So if the smaller sensor is placed in the larger camera, it will also be hit by more light?

    True. The total signal is directly related to the number of pixels. To calculate the total amount of information in bits, multiply the number of pixels by the colour depth and by the SNR. The 16 million pixel sensor will probably have a worse SNR, causing the amount of information to be the same. If the 16MP sensor has a better SNR per pixel, it can collect more imaging information and is an objectively better choice of gear.

    That is what I mean by quantification.

    I do not think that sampling will clarify ETTR on its own, but once it is introduced, ETTR can be explained more easily.

    Signal. It is correct to call a photon a signal, and it is also correct to call a pixel a signal receiver, and to call a sensor a signal receiver. "A picture or image consists of a brightness or color signal, a function of a two-dimensional location." (from Wikipedia:Signal)

    And one can aggregate {a set [of all sets (of pixel values derived from electrical charges { generated by the cumulative effect of incoming photons })]} and call this a single set of values derived by aggregation too.

    At this moment, we are :frown: