Photo Physics in a Nutshell v.0.02

The function of a camera is not to capture light. Cameras are image information receivers. If you do not see the difference, please read this post carefully. Thank you.

Why I post this thread

On this forum, many questions are asked and answers given regarding the purchase and practical consequences of different kinds of photo equipment. The answers given are at times misleading. This may even be the case for answers coming from seasoned photographers. I believe the reason is that some of the physics involved in photography can be counterintuitive to what one experiences in photographic practice. I hope this post can provide newcomers with a clear explanation of photo physics, and that it can help more experienced photographers purchase and use their equipment with better comprehension.

A kind request

This post took a long time to write. The information in it took me quite some time to collect and figure out. Please be gentle in reading and reacting. Please read it all and take some time to understand what the author meant. Please provide errata and point out typos so the text can be improved. Suggestions from native English speakers are very welcome. In the near future I will add more illustrations and some equations so this post becomes an easier read. Thank you in advance!

Introduction: Information Painters

What differentiates photography from painting? A painter perceives reality, makes up their mind about it, and rebuilds it in a manual process from scratch on a white canvas. Photographers, on the other hand, capture an image of reality in a way that only adds filtering. While a painter conducts a process that involves both analysis and synthesis, a photographer lets the camera take over image synthesis so they can focus completely on analysis.
This does not mean a photographer is any more or less of an artist than a painter - they have simply found a way to focus on entirely different aspects of artistry. Photography is all about selection and filtering. It is made possible by modern technology, which is why I personally think photography is cool: photography is the first truly modern art form.

At this point one can ask: what is it that photographers filter? What is their medium? The obvious answer - photographers work with light - is only half the truth. Light is just the input of the photographic process. Its output is information. In a film camera, information from the incoming light signal is inscribed directly into the film grains. In a digital camera, light is converted to electric charge, which is then quantified in bits of information. There is a direct relation between the number of incoming photons, the electric charge they produce and the amount of information we can quantify from that charge. That relation is linear, but it has limits. There may be too few incoming photons to detect, or there may be so many that the resulting signal lies outside our detectable range. The range between these limits of detection is called dynamic range.

So what do photographers do to information in order to create art? Photographers work with the structure of information. Some aspects of that structure have to do with how the human eye and brain see the world. Other aspects have to do with the photographic subject, perspective and framing. And finally, there are a few objective technical aspects: the imaging parameters. This post is dedicated to clarifying the effect of each of those parameters on the photograph. It is my hope that it will help forum members improve their skills as information painters.
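The linear relation and its limits can be put into numbers. The sketch below uses a hypothetical pixel with a made-up full-well capacity and read noise (real sensors vary widely); it only illustrates how the two detection limits define dynamic range:

```python
# Sketch: a sensor pixel responds linearly to photons between two limits.
# The numbers below (full-well capacity, read noise) are hypothetical,
# chosen only to illustrate the idea, not taken from any real sensor.
import math

full_well = 40_000   # max photoelectrons a pixel can hold before it clips
read_noise = 4.0     # electrons of noise; smaller signals drown in noise

# Dynamic range: ratio of the largest to the smallest detectable signal,
# usually quoted in stops (factors of two).
dynamic_range_stops = math.log2(full_well / read_noise)
print(f"{dynamic_range_stops:.1f} stops")  # ~13.3 stops
```

Signals below the read noise or above the full well are the two failure modes described above: too few photons to detect, or too many to quantify.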
The following parts of this post introduce the imaging parameters starting from the imaging surface, that is, the sensor or film used to convert the incoming light signal to information. The description then moves outwards towards the photographed subject. With each step, the information in the photograph undergoes a different kind of modulation. With each part, a new parameter is introduced and explained, building on the explanations provided before. The progression resembles building the information-receiving camera from the inside out, adding part by part. The imaging parameters are:

1. Shutter Speed (S)
2. Film Speed (ISO)
3. Signal to Noise Ratio (SNR)
4. Imaging Surface Area (Format)
5. Depth of Field (DOF)
6. Aperture (A)

1. Shutter Speed (S)

Shutter speed is the amount of time the imaging surface - sensor or film - is exposed to the incoming light signal. With longer shutter speeds we may lose some of the signal's information to motion blur. This loss is not bad per se. Based on artistic considerations, a photographer may want to introduce blur to indicate that movement is happening. Motion blur also allows the photographer to emphasize specific parts of the photograph.

2. Film Speed (ISO)

This parameter is named after the ISO (International Organization for Standardization) standards ISO 5800:1987 and ISO 12232:1998, which define film speed for film and digital photography respectively. For film photography, the speed rating means that a film exposed to light of a specified intensity will change brightness at a rate defined by the standard. When a middle-grey card (conventionally 18% reflectance) is photographed, the film speed standard ensures that, with a properly chosen shutter speed, the film will render a middle-grey tone. By analogy, a digital imaging system has to be tuned such that, after being exposed to the light for the correct amount of time, it provides a quantified value that stands for middle-grey brightness.

3.
Signal to Noise Ratio (SNR)

This parameter is mostly spoken about in digital photography, even though it applies to film photography in equal measure. SNR is a measure of the proportion of noise that occurs in the detection of an incoming light signal. A better SNR means that the camera can recover more of the information contained in the incoming light signal.

With SNR there seems to be some confusion about how it relates to the number of pixels on a camera sensor. Therefore SNR will be explained here in more depth, especially for digital imaging. If two pixels with the same SNR receive the same light signal, each provides a quantifiable amount of information, and the total information received by our two-pixel system is twice the per-pixel amount, because we have two pixels. We can combine the information received by the two pixels in a procedure called pixel binning. With pixel binning we add the two signals received from the two pixels. The signal in the two pixels is correlated, while the noise of each pixel is random. The two signals therefore add up directly, while the two noise contributions partially cancel each other, adding only in quadrature. After binning we are left with a single signal, but one with a better SNR - improved by a factor of the square root of two for two pixels. As a result, the amount of captured information stays the same; it is merely redistributed.

4. Imaging Surface Area (Format)

So far, all introduced parameters dealt with either time or information. At this point, spatial extension is added: imaging parameters that are based on lengths, distances and areas. The imaging surface area, commonly known as format, is a measure of the size of a camera's sensor or film frames. Format is a touchy topic on the m43 forum. To some people, mentioning format suggests an attack on the m43 format because there are larger cameras in existence.
To other people, mentioning format means supporting m43 because some other cameras are smaller than m43. It is not my intention to attack or defend anybody's choices. I am simply interested in providing factual information so people can base their decisions on facts.

In relation to the parameters mentioned before, there is general agreement that shutter speed is not affected by format. I have however noticed some confusion about the implications of the interrelations of ISO, SNR and format for the amount and quality of information captured. Therefore I will explain these interrelations in detail. The basic point of confusion concerns the amount of light signal captured by a camera of a certain format at the same shutter speed, the same light intensity and the same film speed. The intuitive understanding of some photographers seems to be that a larger camera does not capture more information. This is, however, wrong.

The simplest illustration I have found of how format actually works is to view a film frame at the molecular level. A film frame is a carrier material, e.g. celluloid or some other kind of plastic, covered with a chemical emulsion that is sensitive to light. The emulsion contains molecules that react when they are hit by photons. For a film to be ISO 100, a specified fraction of its molecules has to be hit by photons within the time given by the shutter speed in order for the film to render middle grey. The density of these molecules on an ISO 100 film does not change with film size or format, because film size is no part of the standard ISO definition of film speed. Each square millimetre of the film has the same number of molecules, each of which has to receive a photon to react. That means that a larger area of film - a larger camera format - has to receive more incoming light signal to achieve the same tonal values. A larger format therefore provides more light information for the photographer to work with.
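This scaling can be sketched numerically. The per-square-millimetre photon figure below is made up purely for illustration; the sensor dimensions are the nominal published ones, and the square-root SNR relation assumes shot-noise-limited detection:

```python
# Sketch: at the same ISO, shutter speed and scene brightness, the photon
# count per mm^2 of imaging surface is the same for every format, so the
# total captured light scales with sensor area. With shot-noise-limited
# detection, SNR grows as the square root of the photon count.
import math

photons_per_mm2 = 1_000_000   # hypothetical illustrative figure

area_m43 = 17.3 * 13.0        # Micro Four Thirds sensor, ~225 mm^2
area_ff  = 36.0 * 24.0        # 35mm "full frame" sensor, 864 mm^2

n_m43 = photons_per_mm2 * area_m43
n_ff  = photons_per_mm2 * area_ff

print(f"light ratio: {n_ff / n_m43:.2f}x")             # ~3.84x, roughly 2 stops
print(f"SNR ratio:   {math.sqrt(n_ff / n_m43):.2f}x")  # ~1.96x
```

The ratio of roughly four - about two stops - is the "more light information" the larger format provides, under the stated equal-parameters assumption.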
So a larger format sensor will always receive more incoming light signal, all other parameters being equal. Does that mean it always provides more imaging information? In the next part, we will see that this is not the case.

5. Depth of Field (DOF)

Up to now, all introduced imaging parameters describe phenomena occurring directly on the imaging surface, i.e. on the sensor or film frame. With this part, the explanation of imaging parameters moves out of the plane into three-dimensional space. It is commonly accepted that a larger camera will provide an image with less DOF, all other parameters being equal. DOF blurs imaged subjects in front of and behind the focus area. This DOF blur constitutes a loss of imaging information.

The following drawings illustrate how DOF works. Please note that the lines of light indicate only the outer boundaries of the light beams involved in taking the photo. In reality, all light emitted from one point on Mr Smiley's face that travels towards the lens surface is redirected by the lens towards one point on the imaging surface.

On top we have Mr Smiley. Mr Smiley emits light equally in all directions. Even each infinitesimally small point on the surface of Mr Smiley's face emits light in all directions. Then we see how Mr Smiley is photographed with a small camera. We see that Mr Smiley's face covers the entire imaging surface. This is due to both the focal length of our lens and the distance between the camera and Mr Smiley. A lens of a specific focal length has a specific field of view (FOV). With a different focal length, we would see more or less of Mr Smiley appear on the imaging surface. On the bottom we see how Mr Smiley is photographed with a large camera. We have the same FOV. This is possible because a lens of longer focal length is attached to the larger camera.

So how does DOF occur? For the larger system, the angle at which light from Mr Smiley's face approaches the lens is different.
This is increasingly so towards the outer edges of the lens. Look at the difference between the angles alpha and beta in the illustrations. While both cameras can capture a flat object with equal sharpness, the effect of moving a subject in front of or behind the focal plane will be much stronger with the larger camera. DOF also becomes shallower the closer we move the camera to Mr Smiley's face. If we move the camera further away from Mr Smiley, we get more DOF and more of him is in focus. That is why in macro photography we have a very shallow DOF, and why in astrophotography we have a very deep DOF. So smaller cameras will always be sharper in depth, all other parameters being equal.

Why, then, is shallow DOF so valued by photographers? Because DOF is one of the photographer's strongest means to structure image information. Shallow DOF allows a photographer to isolate elements of the image for artistic reasons.

DOF does not occur in equal measure in all types of photography. For astrophotography, DOF does not matter at all because all light coming from a galaxy hits the lens at an almost perpendicular angle. For astrophotography, a larger lens or mirror will therefore always result in better images. For practical reasons, though, the light coming from a telescope is concentrated onto a small sensor surface. Otherwise the amount of incoming light hitting the sensor per square millimetre would not be strong enough to be detected; the signal would fall outside the dynamic range of the sensor. Such concentration, however, only rarely occurs in camera systems. For macro photography, DOF matters a lot. A compact camera will be able to capture sharp images of an insect, while a large format camera at the same magnification will have a focus area so shallow that only a slice of the insect's eye may be in focus.

6. Aperture (A)

This is the last imaging parameter to explain.
Aperture is a measure of the proportion of the light signal entering the lens surface that is allowed to reach the imaging surface. Aperture is implemented with a diaphragm that stops down the entrance pupil of the lens. This has the effect of changing the angles alpha and beta in the illustrations. Aperture has two effects. First, it allows the photographer to compensate for changing light conditions at the same shutter speed. Second, aperture is directly related to DOF. We can stop down the larger camera until angle beta equals angle alpha, and the larger camera then captures an image with the same DOF. This does not mean, however, that the same image is now captured. For an equal amount of light to reach each part of the imaging surface, the shutter speed has to be lengthened, which may introduce motion blur.

Conclusion: What about exposure?

Exposure is a secondary imaging parameter derived from aperture and shutter speed. It is not a basic parameter of photo physics, even though it is very useful in the daily practice of photography. While the effects of DOF and motion can be judged directly in the viewfinder, exposure helps photographers keep the more abstract parameters ISO, shutter speed and aperture in balance. Because exposure on its own does not indicate the amount of motion blur or DOF, it is no indicator of the amount of information contained in a photograph, or of the number of photons detected.

Of the mentioned parameters, format is the one that causes many arguments on the forums. From the explanations provided above, it can be concluded that a scaled-up camera will always capture more image information in the following situations: still scenes (landscapes), very distant subjects (astrophotography) and flat subjects (copying). A scaled-down camera will always capture more image information in photography of very near subjects that have relatively large depth (macro photography).
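The stopping-down argument can be put into numbers. The sketch below uses the thin-lens approximation; the 50mm f/5.6 full-frame setup is a hypothetical example, and the crop factor of 2 is the nominal m43-to-full-frame value:

```python
# Sketch: matching field of view and DOF across formats by scaling focal
# length and f-number with the crop factor (thin-lens approximation).
# All concrete numbers are hypothetical examples.
crop = 2.0                         # nominal m43 crop factor vs full frame

ff_focal, ff_fnum = 50.0, 5.6      # full-frame focal length (mm), f-number

# Same field of view and same DOF on m43: divide both by the crop factor.
m43_focal = ff_focal / crop        # 25.0 mm
m43_fnum  = ff_fnum / crop         # f/2.8

# Both lenses now have the same entrance-pupil diameter - angle beta has
# been stopped down until it equals angle alpha:
print(ff_focal / ff_fnum, m43_focal / m43_fnum)   # ~8.93 mm for both

# At the same ISO, the full-frame sensor at f/5.6 receives 1/4 of the
# exposure per mm^2 that the m43 sensor gets at f/2.8, so it needs a 4x
# longer shutter speed - the motion-blur trade-off described above.
print((ff_fnum / m43_fnum) ** 2)   # 4.0
```

This is why the stopped-down larger camera does not capture the same image: equal DOF costs it either shutter speed (motion blur) or ISO.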
An oddity occurs in photography of very faint subjects (astrophotography), where the light collected by a large lens or mirror has to be concentrated onto a small surface area in order to be detectable. Considering that astronomical observatories are generally not pocketable, I believe it is safe to count them among the large cameras nonetheless.

Please note that image information is an objective technical term that has close to nothing to do with artistic image quality. The mentioned imaging parameters are only the objective technical influences on photographic art. Any photographer will deem other parameters relating to practicality and artistic expression equally or more important, depending on their subject matter and personal preference. A good comprehension of the effects of the mentioned imaging parameters on the structure of imaged information will, however, empower photographers to realize their artistic intentions to the fullest extent.