Views of Nature Photography
Digital Sensors and Lens Performance
Digital SLRs come in two flavors of sensor: APS size and full frame. A full-frame sensor behaves just like a 35mm film camera; in other words, the image from a normal lens is projected onto the sensor so that what the lens “sees” is what is recorded. APS-size sensors have a crop factor and thus show an apparent magnification ranging from 1.3x to nearly 1.7x. We discussed this in a previous Digital Corner, so if you want a refresher, wander over to our website and review “Sensor Size and Magnification.”
This is a real advantage for wildlife in that we get the extra reach without losing light, as we would with a teleconverter. We pay a penalty at the wide-angle end, however.
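As an illustrative sketch of the “extra reach” idea, the effective (35mm equivalent) focal length is just the lens focal length multiplied by the crop factor. The focal lengths below are made-up examples; the crop factors are the range mentioned above.

```python
# Hypothetical example: 35 mm equivalent focal length on a cropped sensor.
def effective_focal_length(focal_mm, crop_factor):
    """Return the 35 mm equivalent focal length for a cropped sensor."""
    return focal_mm * crop_factor

# A 400 mm telephoto on a 1.6x body frames like a roughly 640 mm lens...
print(effective_focal_length(400, 1.6))  # roughly 640 mm
# ...but a 24 mm wide angle frames like a roughly 38 mm lens:
# that is the wide-angle penalty.
print(effective_focal_length(24, 1.6))
```

The same function shows both sides of the tradeoff: a win at the telephoto end, a loss at the wide end.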
There is another, more subtle advantage to this smaller size. The lens still projects the image onto the sensor as if it were a 35mm film plane. That means the parts of the image that would normally fall around the edge of the 24mm by 36mm rectangle are never recorded by the sensor's pixels. That, in turn, means that the problems we used to see with lenses (especially inexpensive lenses) are gone. The soft focus, distortion, and vignetting fall outside the recording surface.
Practically, this means you can open up your lens to a larger aperture without seeing the edge problems that full frame or film would show. Your lens’ “sweet spot” just got better: where you used to shoot at, say, f8 to reduce distortion, you may now be able to open up to f5.6 or even f4.
Check out your camera system and see how much improvement you can get!
How do digital sensors work?
Most of us have made the shift from film to digital over the past years. When we were shooting film, a number of us also experienced the sights and smells of the darkroom, and so we had a pretty good idea of how film worked. Light interacted with photosensitive chemicals in the film emulsion, and during the developing process other chemicals stabilized the transformed images on a negative or transparency. Digital technology is not all that much different. As I have stated many times before in this column, as well as in my classes, the only difference between film and digital image making is the medium on which the image is captured. Let’s talk digital!
In the digital sensor world, photosensitive has a different meaning than in the film world. Digital sensors are electronic parts (integrated circuits, or ICs) with a physical structure that allows incoming light to generate electric signals. By the way, ICs are typically referred to as chips in the industry, so I’ll be mixing terms. These signals are conducted away from the sensor site (the picture element, or pixel) by very tiny wires that are part of the IC. The wires take the signal to amplifiers (on the same chip). The amplifiers boost the very tiny signal to a level that can be manipulated and digitized by yet more circuits. The output of the digitizing circuit is a light level, period. This is because the individual sensor sites on a digital sensor are monochromatic; they only see light in terms of intensity, not color. So why aren’t all digital images black and white?
The clever design engineers who develop sensor technology also have a pretty good understanding of the human eye and how we perceive color. This is actually a carryover from the color film technology where the designers used color sensitive layers in the emulsion.
The basic sensor is a grid pattern of structures that convert light energy (photons) into electrical energy (electrons) in a way that is not all that different from solar cells. Color is added to the image by filtering the light before it strikes the sensor. Our old friends red, green and blue (RGB) are at work here. The light is filtered to allow those colors to strike specific sensor sites, and when the final signal is digitized into a light intensity level, the tiny little computer chip in the camera can correlate that intensity to a color, and thus add color to the data file for that sensor site. If you were to magnify the filter structure on a camera, you would find about 25% of the sites detect red, 25% blue and 50% green. This ratio was established to allow the sensor to more closely match the response of the human eye and thus make further “post processing” easier.
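The 25/25/50 split comes from the standard Bayer filter mosaic: every 2x2 cell of the grid carries one red, one blue and two green filters. A tiny sketch (the 4x4 grid size is just for illustration) confirms the ratio:

```python
# Sketch of a Bayer filter mosaic: tile the sensor grid with an
# RGGB 2x2 cell and count the colors.
from collections import Counter

def bayer_pattern(rows, cols):
    """Build a rows x cols grid of filter colors from the RGGB tile."""
    tile = [["R", "G"],
            ["G", "B"]]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

grid = bayer_pattern(4, 4)
counts = Counter(color for row in grid for color in row)
total = sum(counts.values())
for color in "RGB":
    print(color, counts[color] / total)  # R 0.25, G 0.5, B 0.25
```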
After all of the sites have been scanned for light intensity and the camera settings have been added, the data is ready to be stored as a RAW image. As a side thought, a 10-megapixel sensor has about 10 million sites; imagine how fast that little computer is working if you can shoot about 8 images a second.
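That side thought is easy to put a number on. Using the 10 megapixel and 8 frames-per-second figures from the text:

```python
# Back-of-the-envelope: pixel readouts per second at 10 MP and 8 fps.
sites = 10_000_000
frames_per_second = 8
print(sites * frames_per_second)  # 80000000 pixel readouts every second
```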
Several camera and lens manufacturers offer features on cameras or lenses that compensate for camera shake or movement. The methods do vary from one manufacturer to another.
The most common, and probably the most successful method, is the use of sensors within the lens. These are known as Image Stabilization (Canon), Vibration Reduction (Nikon), and Optical Stabilizer (Sigma). Within the lens is a set of sensors that detect small movement and correct for it by moving a small optical element in the opposite direction of the shake or movement.
Other companies (such as Konica Minolta) build a similar function into the camera body, shifting the image sensor itself to counteract the movement.
Video cameras use a digital method in which the image is read from different pixels on the sensor to compensate for vibration or camera movement. That works well in the video arena but causes significant image blurring in still work.
The movement compensation feature was originally designed to allow slower shutter speeds while hand holding the camera and still produce sharp images. Typical claims are an apparent increase of two to three stops.
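To see what “two to three stops” means in practice, here is a rough sketch. It assumes the common 1/focal-length handholding rule of thumb (an assumption, not a claim from the text); each stop of stabilization doubles the usable shutter time.

```python
# Rough sketch: slowest handheld shutter speed, with and without
# stabilization, per the 1/focal-length rule of thumb.
def slowest_handheld_shutter(focal_mm, stabilization_stops=0):
    """Slowest usable shutter speed in seconds (1/focal rule of thumb)."""
    return (1 / focal_mm) * (2 ** stabilization_stops)

# Unstabilized 400 mm lens: about 1/400 s.
print(slowest_handheld_shutter(400))
# With three stops of stabilization: about 1/50 s.
print(slowest_handheld_shutter(400, 3))
```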
Use of this technology is not without drawbacks. There are added weight and cost factors for the lens-based approach. The in-camera version also adds cost to the body, but it works with many more lenses.
When using the stabilization capability on a tripod, there is a potential problem. If the camera and lens are very stable, the stabilization circuits in the lens may become slightly unstable, hunting for movement that isn’t there and blurring the image a small amount. Some lenses have tripod sensors and correct for this. Others have a recommendation in the manual suggesting that the feature be turned off when the lens is on a tripod. As with all photographic “rules,” there is a lot of controversy about this. The stabilization feature can compensate for movement and for vibration induced by tripping the shutter at slow speeds. Even when mounted on a tripod, the ability to reduce apparent shutter vibration can be a valuable tool.
The best approach is to do some research before buying or do some testing if you already own one of these lenses or cameras.
Is the feature worth the money and extra weight? In our opinion, YES. We have a 100-400 IS zoom from Canon and love it. Everyone we have talked with has a similar feeling about that particular lens. We’d be happy to publish accounts (positive and negative) concerning members’ experiences with this or any other stabilized lens.
Megapixels and image quality
The larger the number of megapixels, the better the image, right? After all, when we learned the basics of photography we learned a few axioms. Lens quality was number one, and then film grain, which we could relate to ISO film speed. Well, welcome to the world of high tech. The number of pixels in a camera's sensor is not a good indication of the ultimate image quality, nor is the lens quality. The design engineers have added something a whole lot more difficult to measure with a simple number. What is it? Software!
Pixels are small light sensitive elements that convert light (photons, actually) into electricity (electrons). A series of filters on top of the sensor determines the basic color information, and then some electronic circuitry near the light sensitive area of each pixel amplifies the signal and sends it on to the microcomputer chip. There are some nasty characteristics of the electronic devices that convert light to electricity. First, the smaller the pixel, the less efficient it is, meaning it takes more photons to generate a given amount of electrons. This means smaller pixels don't work as well in low light as larger ones do. Also, the amount of surface area that gathers light is reduced because space is needed for the electronic circuits that amplify the signal. Worse yet, small pixels tend to generate proportionally more noise than larger ones.
This is where we look to the software. Each camera manufacturer has developed its own image processing software that turns the electrical signals into an image. This software does an amazing amount of work in a very short time. Among other things, it has noise reduction algorithms, routines that integrate the signals according to color, and the capability to smooth out the edges of pixels. Only a few companies make sensors and signal processing chips, but each major camera company has its own proprietary software. Not only that, but even within one company, software can vary between camera models. The most important capability of the software is the noise reduction, as it is the most difficult thing to do well.
How does this impact the camera buyer? Well, when you are looking at point and shoot digital cameras, don't just go for the 22 MP. A 16 MP may give you a better image. Research the web and the magazine rack to get reports on image quality before buying. Also, the sensor pixels in digital SLRs tend to be bigger, so the impact of noise is reduced and the image quality improved by most camera companies' software.
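The pixel-size point can be made concrete with a hedged back-of-the-envelope calculation of pixel pitch (the width of one pixel). The sensor widths below are typical round numbers, not exact specifications, and a 3:2 aspect ratio is assumed:

```python
# Sketch: approximate pixel pitch from sensor width and megapixel count,
# assuming a 3:2 sensor. Sensor widths are illustrative round numbers.
def pixel_pitch_um(sensor_width_mm, megapixels, aspect=3 / 2):
    """Approximate pixel pitch in micrometers."""
    horizontal_pixels = (megapixels * 1e6 * aspect) ** 0.5
    return sensor_width_mm * 1000 / horizontal_pixels

# Tiny point-and-shoot sensor (~6 mm wide) at 22 MP: about 1 micrometer.
print(round(pixel_pitch_um(6.0, 22), 2))
# APS-C DSLR sensor (~22.5 mm wide) at 16 MP: about 4.6 micrometers.
print(round(pixel_pitch_um(22.5, 16), 2))
```

Even with fewer megapixels, the DSLR pixel is several times wider, which is why it gathers light more efficiently and generates proportionally less noise.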
A number of members asked about sensor cleaning, so the Digital Corner did some research.
We got 1.1 million hits on a Google™ search of “sensor cleaning digital cameras”. We also queried Nikon’s™ and Canon’s™ websites. There are two schools of thought. Canon™ and Nikon™ say use clean, dry air from a squeeze bulb. Don’t use compressed air or anything that touches the sensor. (Actually you can’t really touch the sensor; the surface that is exposed is the optical low pass filter that’s over the sensor itself.)
The other school of thought was summarized in about 35 pages of text and images on the website http://www.cleaningdigitalcameras.com/ . This site is number one on the Google™ search. It is a very good reference on all of the methods, with pros and cons spelled out clearly. Our conclusion is that if you can’t clean all of those annoying blotches and dust spots using an air bulb, you can be very brave (or maybe cavalier) and use one of the methods mentioned on the website, OR you can take your camera to a professional and let someone else assume the liability.
Once you have it clean, there are a few good rules of thumb for keeping it clean. Don’t change lenses in a dusty environment, minimize the amount of time the camera is exposed to the open air without a lens installed, etc. Sorry, there are no magic formulas, but like everything else in photography, there are always tradeoffs!
This month we'd like to address two of the things a lot of photographers fairly new to digital photography find perplexing. First, why do digital cameras give an apparent magnification and what are the tradeoffs? The apparent magnification of a digital camera starts with the relative size difference between 35 mm film and the electronic sensor used for digital image capture: "35 mm" film has an effective image size of 24 mm high and 36 mm wide for a "horizontal" image. When we photograph something through a lens, we record a certain size image of the subject on the film.
Digital cameras have sensors that vary in size, and except for a few very high end cameras, the sensor is smaller than the 35 mm image size. In the case of the Canon EOS 7D™, the sensor is about 15 mm high and 22.5 mm wide. If you do a quick calculation you'll see that the dimensions of the 35 mm image are 1.6 times bigger than the digital sensor.
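That quick calculation can be sketched in a couple of lines, using the 7D dimensions from the text:

```python
# Sketch: crop factor relative to a 24 x 36 mm film frame.
def crop_factor(sensor_dim_mm, film_dim_mm=36.0):
    """Crop factor: film dimension divided by the matching sensor dimension."""
    return film_dim_mm / sensor_dim_mm

# Canon EOS 7D sensor: 22.5 mm wide, 15 mm high.
print(crop_factor(22.5))        # 1.6 on the long side
print(crop_factor(15.0, 24.0))  # 1.6 on the short side as well
```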
If we assume the same conditions when we photograph the same subject, that is, the same camera-to-subject distance and the same lens focal length, the image on the digital sensor will be the same size as the image on the film plane. That's just standard photographic optics. But remember, the sensor is smaller than the 35 mm film frame.
Now the real impact of digital! The software in the camera enlarges the image to give an equivalent 35 mm image size.
In doing so, this software magnifies the image on the sensor by the amount needed to make it match the 35 mm image; in our example of the Canon EOS 7D™, this is 1.6x.
OK, we now have a 1.6x magnification; what did it cost? If we had used a 1.6x teleconverter, we'd have lost some of the image because the angle of view decreases as the effective focal length of the lens increases. The same thing happens with digital, but in this case the information the lens gathered was focused beyond the edges of the sensor, so it was lost; the effect is the same as a reduced angle of view. The second thing we lose with a teleconverter is light: the effective f stop of the lens is increased by about one stop (f4 to f5.6, for example). With a digital camera this is not the case; the camera will still show the same f stop. BUT, the resolution of the sensor (number of pixels per unit of area) is fixed, so a slight increase in what is equivalent to grain will be seen. Digital camera noise reduction software does a very good job of smoothing out this grain effect, so the apparent magnification gained is pretty close to free.
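A hedged sketch of the light-loss comparison: a teleconverter of magnification m costs roughly 2·log2(m) stops (about one stop for the common 1.4x converter), while the crop-sensor "magnification" costs none. This formula is a standard optics approximation, not something stated in the text.

```python
# Sketch: stops of light lost by a teleconverter of a given magnification.
import math

def stops_lost(magnification):
    """Approximate light loss in stops: 2 * log2(magnification)."""
    return 2 * math.log2(magnification)

print(round(stops_lost(1.4), 2))  # about 1 stop, as for a 1.4x converter
print(round(stops_lost(1.6), 2))  # about 1.4 stops for a 1.6x converter
print(stops_lost(1.0))            # the crop-sensor route: 0 stops lost
```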
Now let's think about a few other things that may have slipped by in our discussion. First is the aspect ratio. That's a fancy mathematical term for the relative size of the horizontal and vertical dimensions. 24 x 36 and 15 x 22.5 have the same ratio, 2 to 3. This has a real impact on printed image size and readily explains the popularity of printing images at 8 x 12 instead of the venerable 8 x 10: 8 x 12 does not require cropping either dimension. When digital scanning and printing became popular, the long held 8 x 10 dimension was challenged and quickly abandoned.
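A quick check of the aspect-ratio claim: the film frame and the cropped sensor share the same 2:3 shape, an 8 x 12 print matches it exactly, and an 8 x 10 does not.

```python
# Sketch: compare aspect ratios of frame sizes and print sizes.
def aspect(height, width):
    """Aspect ratio as width divided by height."""
    return width / height

print(aspect(24, 36) == aspect(15, 22.5))  # True: both are 2:3 (1.5)
print(aspect(8, 12))                       # 1.5 -- no cropping needed
print(aspect(8, 10))                       # 1.25 -- one dimension must be cropped
```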
The second thing to think about is some of the new lenses being marketed. If you look at the magazine ads for some new products, such as Canon EF-S™ lenses, you'll see a note indicating these lenses are only for digital cameras like the Canon EOS 7D™. This is because they focus the image not onto a full 24 by 36 mm area but onto the size of the image sensor. Remember we said earlier that the equivalent of a reduced angle of view was due to the information falling off the edge of the sensor? This doesn't happen with these new lenses. The effect of using one of these lenses with a film camera body, assuming the computer in the camera would allow the photo to be taken, would be a smaller image on the film plane.