
Evaluating Digital Cameras


Dietmar Wueller · Image Engineering · Augustinusstraße 9d · 50226 Frechen · Germany
Electronic Imaging Conference 2006

 

ABSTRACT
The quality of digital cameras has developed remarkably during the last 10 years, and so have the methods used to evaluate it. When the first consumer digital cameras were released in 1996, the first ISO standards on test procedures were already on their way. At that time, quality was mainly evaluated through a visual analysis of images taken of test charts as well as natural scenes. The ISO standards led the way to a number of more objective and reproducible methods for measuring characteristics such as dynamic range, speed, resolution and noise. This paper presents an overview of the camera characteristics, the existing evaluation methods and their development during the last years. It summarizes the basic requirements for reliable test methods and answers the question of whether it is possible to test cameras without taking pictures of natural scenes under specific lighting conditions. In addition to the evaluation methods, this paper mentions the problems of digital cameras in the past concerning power consumption, shutter lag, etc. It also states existing deficits which need to be solved in the future, such as optimized exposure and gamma control, increasing sensitivity without increasing noise, and the further reduction of shutter lag.
Keywords: digital photography, image quality, noise, dynamic range, resolution, ISO speed, shutter lag, power consumption, SFR

 

INTRODUCTION
When the first digital consumer cameras, such as the Casio QV 10, were launched in 1995, it was clear that the image quality was not yet acceptable but that technological development would be fast. Soon afterwards, cameras such as the Kodak DC series and the first Canon PowerShot, with a larger pixel count and better image quality, were launched. At that time it was already clear that digital photography would be the future and that taking pictures on film would be replaced by electronic sensors. However, many people said that it would take a long time, if ever, for the quality to become as good as that of film. That was the problem at photokina 1996: nobody could predict when digital cameras would be good enough to replace analogue ones. But with the launch of the Olympus Camedia cameras it was clear to the experts that this would only take a few years. At that time, magazines such as the German “Color Foto” started thinking about an objective test procedure for digital cameras, and I was asked to develop a test stand for them.

 

WHY DO WE NEED TESTS FOR DIGITAL CAMERAS?
The reactivation of the photographic market caused by digital photography raised a lot of interest in the industry, as well as among the publishers of photographic magazines, and of course, among consumers. Everybody wants to participate in that market and the creativity in selling products by creating new terms and phrases for the technical specifications seems to be unlimited. Manufacturers need test procedures to ensure, and to increase, the quality of their products. In order to make sure that the creativity of the marketing people relates to the real world and to help customers to find the right cameras for their specific applications, the magazines also need standardized procedures for testing digital cameras.

 

TESTS BASED ON VISUAL ANALYSIS
Most of the magazines started by taking pictures of various scenes and viewing the images on a monitor or judging printed paper outputs. Since Photoshop did not support color management before version 5.0, and most printers did not support it either, the quality of the images the testers looked at was often limited by the restrictions and the calibration quality of the output devices. Because the tester has to make sure that the output or viewing device and its calibration are not the bottleneck for determining the image quality of a camera, the tests were not easy to handle.
At the same time, the lighting conditions and ambient temperature of the scenes used to evaluate the cameras often varied, so the results were in many cases neither reproducible nor comparable between the different cameras. What the magazines urgently needed was a standardized test scene with a constant illumination, photographed under constant conditions. This constancy is required whenever numerous cameras are to be compared over a certain period of time.
Another aspect is that of creating a scene consisting of objects which allow the unambiguous evaluation of a certain image quality aspect even if a tester is in a bad mood. These objects are required especially for aspects such as resolution measurements, dynamic range, white balancing and color, and they are not easy to find. For a real measurement using software to analyze an image instead of the human eye, we have the same requirements: constant and appropriate illumination (e.g. standardized daylight D55 or tungsten light of 3050 K as specified in ISO 7589 (1)), constant temperature (23 ± 2 °C (2)), relative humidity between 30 and 70% (2), suitable objects to measure the values, and the right software for the analysis.
The approach of using the visual analysis of a single test scene or multiple “real images” of various scenes always conflicts with the requirement of most magazines to have a single number for the classification of a specific image quality aspect. Sometimes the magazines even want a single number to classify the complete camera. If we look at colors, for example, it might be possible, although not easy, to classify the reproduction of a single color in an image. But how can one come up with an objective number representing the complete color reproduction quality of the camera without any measurements? For resolution this might seem easier, because we can define a so-called limiting resolution. Taking a picture of the ISO 12233 resolution chart, we can look at it and find a frequency limit up to which the lines can still be seen as separate lines. But Figure 1 shows that this task is often not as easy as we expect it to be.

Figure 1. The visual analysis of limiting resolution is often not as easy as we expect it to be.

 

Another problem with the visual approach is that we usually do not know the characteristics of the camera, because that is exactly what we want to test. The testers may know the limitations of the output device, but they need to be very experienced to judge, at the borderline, whether they still see the camera characteristics or whether the results are limited by the device.

 

MEASURING THE CHARACTERISTICS
In order to anticipate a major aspect: measuring specific image quality aspects helps a lot to characterize a camera but these measurements always have to be related to the real world images and there are numerous aspects which we will never be able to measure. This is why it is impossible to completely test a camera without taking pictures of natural scenes.
We at Image Engineering based our measurement procedures on the ISO standards (1, 2, 3, 4, 5, 6), which were already on their way at the time we built the first test stand in 1997, and which served as a good starting point. In order to get a sufficient characterization of a digital camera, a couple of characteristic values seem to be mandatory, and others may be recommended or optional. The two characteristics which seem to be most important are the OECF (2) (opto-electronic conversion function) and the resolution (5), but a few others belong to this group as well.

The following aspects are mandatory:
  • Resolution
  • Sharpness
  • OECF
  • White balancing
  • Dynamic range (related scene contrast)
  • Used digital values
  • Noise, signal to noise ratio

 

Recommended values are:

  • Distortion
  • Shading / Vignetting
  • Chromatic aberration
  • Color reproduction
  • Unsharp masking
  • Shutter lag
  • Power consumption
  • Aliasing artifacts
  • Detailed noise analysis
  • Compression rates
  • Exposure and exposure time accuracy and constancy
  • ISO speed
Optional values may be:
  • Color resolution
  • Battery life
  • Detailed macro mode testing (shortest shooting distance, max. scale, distortion)
  • Flash capabilities (uniformity, guiding number etc.)
  • Auto focus accuracy and constancy
  • Startup time
  • Image frequency
  • Video capabilities (pixel count, resolution, frame rate, low light behavior)
  • View angle, zoom range (at infinity and shorter distances)
  • Hot pixels
  • Display (refresh rates, geometric accuracy, color accuracy, gamut, contrast, brightness, visibility in sunlight)
  • Metadata (Exif, IPTC)
  • Watermarking
  • Spectral sensitivities
  • Bit depth of raw data
  • MMS capabilities for mobile phone cameras (resolution, frame rate, compression etc.)
  • Optical stabilization

 

Since this paper cannot cover all of the procedures in detail, I would like to refer to our white paper, which can be downloaded from http://digitalkamera.image-engineering.de/index.php/Downloads .

 

TEST CONDITIONS
The test conditions should be set up in such a way that they represent the typical conditions present in the real-world use of the camera. If the camera is tested in a mode unusual for the specific application, or under unusual conditions, the result may not represent the real image quality achievable for this application.

The setup should fulfill the following requirements:

  • Check the factory or standard settings of the camera for plausibility (sharpening, contrast, speed, saturation, white balancing) with specific care for reproducibility.
  • Uniform illumination of the test chart: the use of an Ulbricht integrating sphere (approx. 98% uniformity) is recommended for all measurements which require uniformity. If a light box is used, the non-uniformity has to be compensated by a calibration image.
  • For photographic applications, color temperature and spectral distribution have to fit the requirements of ISO 7589 (1).
  • For photographic applications, the exposure value (4) for the OECF and color measurements should be EV7 or higher.
  • Test charts for resolution, distortion, and chromatic aberration measurements should be at least 40 x 60 cm. For these charts, the integrating sphere may be replaced by a typical reprographic daylight illumination.
  • Temperature should be 23°C +/- 2°C and relative humidity should be 50% +/- 20%.
  • For visual inspection and comparison, a calibrated and profiled monitor is required.
  • If results are required for different lighting conditions, these conditions should be typical and the lighting conditions have to be specified.

Figure 2. A sample setup for a uniformly illuminated transparent test chart.

 

 

THE IMPORTANCE OF OECF MEASUREMENTS
The OECF describes how the camera transfers the illumination on the sensor into digital values in the image. This information or at least the images of the test chart are necessary to answer the following questions:

  • What is the maximum contrast in a scene that can be captured by the camera in all its tonal details (dynamic range)?
  • Is the white balancing o.k.?
  • Does the camera use all possible digital values in the image?
  • Is there a gamma or tonal correction applied to the captured linear image?
  • What is the signal to noise value for different grey levels?
  • What is the ISO speed of the camera?

A picture of a single chart answers all these questions.

The camera OECF (opto electronic conversion function), as specified in ISO 14524 (2), is measured using a test chart with patches of different grey levels aligned in a circle around the center.
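As a simple illustration of how such a chart can be analyzed by software rather than by eye, the sketch below checks the white balance by comparing the channel means of the neutral gray patches. The function name and all patch values are illustrative assumptions, not data from the paper.

```python
# Sketch: judging white balance from the neutral gray patches of an OECF
# chart. Assumes the mean RGB value of each patch has already been
# extracted from the image; the patch values below are illustrative.

def white_balance_error(patch_rgb):
    """Worst-case deviation of the R/G and B/G ratios from 1.0.

    For a neutral patch, R, G and B should be nearly equal, so both
    ratios should be close to 1.0; large deviations indicate a color cast.
    """
    worst = 0.0
    for r, g, b in patch_rgb:
        if g == 0:
            continue  # skip patches clipped to black
        for ratio in (r / g, b / g):
            worst = max(worst, abs(ratio - 1.0))
    return worst

# Illustrative mid-tone patch means (R, G, B) of a well-balanced camera:
patches = [(120.2, 121.0, 119.5), (64.1, 65.0, 66.2), (180.0, 181.3, 179.1)]
print(f"max channel deviation: {white_balance_error(patches):.3f}")
```

A real implementation would average each patch region of the chart image first; the principle stays the same.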

Figure 3. The OECF chart of ISO 14524 combined with the noise patches of ISO 15739.
This is a special version with 20 grey levels.

 

The OECF comparison of the digital SLR cameras (Figure 4) shows that the cameras use different level corrections to transform the scene luminances into digital values. Since the images are exposed in a way that the highest luminance patch reaches the saturation level, the differences between the curves lie in the starting point in the darkest patches and in the shape of the curve. An early starting point together with a highlight compression (Fujifilm S3 Pro) indicates a high dynamic range of a camera. This was a result we expected when Fujifilm came up with the SR sensor type of the Super CCD. The other digital SLRs are about 1 f-stop behind. An internal test showed that the dynamic range of the S3 Pro also exceeds that of the processing chain of negative film printed on paper by more than 2 f-stops. So the dynamic range of a professional digital camera is higher than that of film/paper.

 

Figure 4. The OECF comparison of the digital SLR cameras indicates that the Fujifilm S3 Pro has a higher dynamic range because the curve starts at lower log luminance values. To keep the highlights we find highlight compression expressed by a shoulder in the curve. The Canon EOS 5D tries to do the same in the highlights but fails to start earlier in the low lights.
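The dynamic range read off such OECF curves can also be estimated programmatically. The following sketch assumes that for each gray patch the scene luminance and the measured signal-to-noise ratio are known; the SNR ≥ 1 usability criterion and all numbers are illustrative assumptions, not values from the paper.

```python
import math

# Sketch: estimating dynamic range from an OECF measurement, assuming for
# each gray patch we know its luminance and the signal-to-noise ratio
# measured in the image. All numbers below are illustrative.

def dynamic_range_stops(patches, min_snr=1.0):
    """patches: list of (luminance_cd_m2, snr).

    Dynamic range in f-stops is log2 of the ratio between the brightest
    and the darkest patch whose SNR still reaches min_snr.
    """
    usable = [lum for lum, snr in patches if snr >= min_snr]
    return math.log2(max(usable) / min(usable))

# Illustrative 10-patch series: luminance halves each step, SNR falls with it.
patches = [(1000 / 2**i, 120 / 2**(i * 0.7)) for i in range(10)]
print(f"dynamic range: {dynamic_range_stops(patches):.1f} stops")
```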

 

PIXEL COUNT
Unfortunately, people think that resolution is the same as pixel count, and, in fact, the pixel count as the sample rate is a limiting factor for resolution. With increasing pixel count, the other parts of the imaging system become more and more the bottleneck for resolution, i.e. the ability of a camera to capture fine detail, especially because the manufacturers try to keep the sensor size as small as possible.

Figure 5. The rise of pixel count for consumer cameras over the last 12 years

 

Figure 6. Pixel count for digital SLR cameras

 

 

LIMITING RESOLUTION
When we started in 1997, the 640 x 480 pixel cameras used a 1/3” sensor. The related pixel size calculated from these dimensions is 7 μm. Current entry-level consumer cameras use sensors of about the same size, but instead of 300,000 pixels the sensors have 6,000,000 pixels. This of course decreases the pixel pitch to 2.1 μm, which leads to a lower sensitivity and a higher noise level and requires lenses with a much higher resolution (11).

 

Figure 7. Limiting resolution for center and corner of cameras sorted by pixel count

 

Figure 8. Average limiting resolution for center and corner in combination with the Nyquist limit


Figures 7 and 8 show the limiting resolution for cameras at different positions in the image. The limiting resolution in this case is the frequency where the MTF of the camera reaches a 10% contrast value. Figure 7 shows the deviation of the limiting resolution for a variety of current 4, 6, and 8 megapixel cameras. Some of the 4 megapixel cameras are better than other 6 megapixel devices, and it is even worse if the 6 and 8 megapixel cameras are compared. Looking at numerous cameras we found that the higher the pixel count, the greater the average distance to the Nyquist limit.
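A minimal sketch of how the 10% contrast criterion can be evaluated, assuming MTF values have already been measured at a set of frequencies; the sample data is illustrative, not from the paper.

```python
# Sketch: the limiting resolution is the frequency where the MTF falls to
# 10% contrast, found here by linear interpolation between measured points.
# Frequencies are in line pairs per picture height (LP/PH); the sample MTF
# values are illustrative.

def limiting_resolution(freqs, mtf, threshold=0.10):
    """Return the first frequency at which mtf crosses below threshold."""
    for (f0, m0), (f1, m1) in zip(zip(freqs, mtf), zip(freqs[1:], mtf[1:])):
        if m0 >= threshold > m1:
            # linear interpolation between the two bracketing samples
            return f0 + (m0 - threshold) * (f1 - f0) / (m0 - m1)
    return None  # MTF never dropped below the threshold in range

freqs = [100, 300, 500, 700, 900, 1100]        # LP/PH
mtf   = [0.95, 0.80, 0.55, 0.30, 0.12, 0.06]   # measured contrast
print(f"limiting resolution: {limiting_resolution(freqs, mtf):.0f} LP/PH")
```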

 

Figure 9. The MTF was measured using 9 modulated Siemens stars

 

Figure 10. The MTF for the Olympus C-480 shows a centering problem: stars 1, 2, and 8 on the right side show a lower contrast than the others.

 

When the pixel pitch (the distance between two pixel centers) decreases, the required level of accuracy for mounting the lenses on top of the sensor increases. Therefore, we now detect more centering and alignment problems than we found a couple of years ago. The loss in sharpness – given as the contrast at low frequencies – and in resolution from the center to the corners is also increasing, and a significant quality difference can be found depending on the quality of the lens.

Figure 11. The comparison of the MTFs for a camera with a good and a bad lens.

 

 

CHROMATIC ABERRATION
Small pixels also increase the visibility of color fringes in the image corners caused by the chromatic aberration of the lenses. The chromatic aberration can be measured by locating the position of an edge in the image corner for all three color channels separately and calculating the difference from the green reference channel.
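A sketch of this edge-based measurement using synthetic one-dimensional edge profiles; the sub-pixel edge position is estimated here as the centroid of the gradient, which is one plausible choice among several, and all data is illustrative.

```python
# Sketch: measuring lateral chromatic aberration from an edge in the image
# corner. For each color channel the sub-pixel edge position is taken as
# the centroid of the absolute derivative along a row; the aberration is
# the offset of R and B relative to the G reference channel.

def edge_position(profile):
    """Sub-pixel edge location: centroid of the absolute gradient."""
    grads = [abs(b - a) for a, b in zip(profile, profile[1:])]
    total = sum(grads)
    return sum((i + 0.5) * g for i, g in enumerate(grads)) / total

def chromatic_aberration(r_profile, g_profile, b_profile):
    g_pos = edge_position(g_profile)
    return (edge_position(r_profile) - g_pos,  # red shift in pixels
            edge_position(b_profile) - g_pos)  # blue shift in pixels

# Synthetic dark-to-light edge, red shifted by one pixel, blue by two:
g = [10, 10, 10, 10, 200, 200, 200, 200, 200]
r = [10, 10, 10, 10, 10, 200, 200, 200, 200]
b = [10, 10, 10, 10, 10, 10, 200, 200, 200]
dr, db = chromatic_aberration(r, g, b)
print(f"red shift: {dr:+.1f} px, blue shift: {db:+.1f} px")
```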

Figure 12. The visibility of chromatic aberration of the lens is stronger for smaller pixels.

 

NOISE REDUCTION
Some manufacturers try to solve the noise problem by using a noise suppression filter instead of a larger sensor with larger pixels. These filters are usually a trade-off between noise reduction and resolution. The picture taken with the Fujifilm E900 (Figure 13) shows the contrast and frequency-dependent noise reduction. The high contrast structures in the facades of the building and the signboard at the train station show all the expected details. In the trees however the low contrast structures are gone.
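The trade-off can be made visible with the basic signal-to-noise measurement used in ISO 15739-style noise analysis. The sketch below assumes pixel values sampled from a uniform gray patch; the sample values, including the hypothetical "filtered" data, are illustrative.

```python
import statistics

# Sketch: signal-to-noise ratio of a uniform gray patch, the quantity
# behind ISO 15739-style noise measurements. Aggressive noise filtering
# raises this number, but can destroy low-contrast detail at the same time.

def patch_snr(pixels):
    """SNR of a uniform patch: mean signal over standard deviation."""
    mean = statistics.fmean(pixels)
    noise = statistics.pstdev(pixels)
    return mean / noise

# Illustrative patch values before and after a (hypothetical) noise filter:
raw      = [118, 124, 121, 115, 126, 119, 122, 117, 123, 120]
filtered = [120, 121, 120, 119, 121, 120, 120, 120, 121, 120]
print(f"SNR raw: {patch_snr(raw):.1f}, filtered: {patch_snr(filtered):.1f}")
```

The SNR improves dramatically after filtering, which is exactly why a high SNR number alone, without a resolution measurement, says little about image quality.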

Figure 13. An image taken with the Fujifilm FinePix E900 at ISO 400.

 

DIFFRACTION
The small sensor size also causes diffraction problems at small apertures. To calculate from which f-stop these problems begin, we have to look at the Rayleigh resolution criterion (circa 1879). If we combine this with the discrete sampling of a sensor, it appears that the pixel pitch has to be at least the distance required to separate the two Airy discs, namely the radius of the Airy disc. This distance is calculated using r = 1.22 · λ · k, where λ is the wavelength of the light and k is the f-number.

Figure 14. Diffraction theory.

 

Figure 15. Diffraction circle for different f-stops

 

Figure 16. Calculation from which f-stop the diffraction problems begin

 

Figure 17. Images taken with a 2.1 micron pixel pitch at f-stop 4, 5.6, and 8


These calculations and the images in Figure 17 demonstrate that diffraction is a problem with current cameras and their small pixel pitches. The problems begin at about f-stop 4 and get worse at smaller apertures.
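These numbers can be reproduced with a few lines of code. The sketch below applies the Rayleigh criterion discussed above (Airy disc radius r = 1.22 · λ · k); the choice of green light at 550 nm as the wavelength is an assumption, not a value from the paper.

```python
# Sketch: estimating the f-stop at which diffraction begins to limit a
# sensor. Diffraction becomes visible once the Airy disc radius
# r = 1.22 * wavelength * k (k = f-number) exceeds the pixel pitch.
# The 550 nm (green light) wavelength is an assumed, typical value.

def diffraction_limit_fstop(pixel_pitch_um, wavelength_um=0.550):
    """Largest f-number before the Airy disc radius exceeds the pitch."""
    return pixel_pitch_um / (1.22 * wavelength_um)

for pitch in (2.1, 5.0, 7.0):
    k = diffraction_limit_fstop(pitch)
    print(f"{pitch} um pitch -> diffraction from about f/{k:.1f}")
```

For the 2.1 μm pitch mentioned above this gives roughly f/3.1, consistent with the observation that problems begin at about f-stop 4.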

 

THE IDEAL DIGITAL SLR
The functionality and image quality of existing digital SLR cameras equal, without a doubt, the quality of analogue cameras. But there is still some room for improvement.

Live image
Some applications require a live image. For example, when a camera is mounted on a microscope or the “Wolfenbuttel Book Reflector”, it may not be possible to look through the viewfinder. Or, cameras capturing images of a scientific experiment may be located in another room for safety reasons. A live image on a pivoted LCD panel, as well as on the video output of the camera, would help tremendously. From there it is just a small step to enable the camera to provide video, which would also benefit cameras in scientific applications.
Other problems occur if the optical viewfinder is replaced by an electronic one. It is common knowledge that electronic viewfinders suffer from low resolution and bad visibility under bright lighting conditions. There is also the issue of low refresh rates if the camera is set to a burst mode.

Autofocus accuracy
A study we made in 2004 showed that in many cases the autofocus accuracy of digital cameras is not sufficient. The reason is the high level of precision required in the adjustment procedures for the separate (“extra”) autofocus sensor.

Manual focus
Existing digital SLRs all suffer from poor manual focusing, because the focusing screens used in older SLR cameras have been replaced by screens without microprisms or split-image indicators.

Image Quality
The dynamic range, noise and speed of existing sensors are already very good. The best sensor in terms of dynamic range is of course the Fujifilm Super CCD SR. But there is still a lot to do in the areas of colour and luminance. Most cameras still expose the images according to 18% grey, which is a carry-over from the analogue world. For a digital image, the exposure should be adjusted to the highlights, which should remain unclipped, with some restrictions if the contrast in the scene exceeds the range the camera is able to capture. Image processing should be adjusted to the scene contrast to match the luminance appearance for the human eye (e.g. the foreground in a sunset scene). The colour, as well, should follow a colour appearance model to create a “pleasing image”. Each camera should have a mode that allows the matching of colours to the original scene as closely as possible. In many situations, however, it might be better to have a specific output-referred rendering in the camera as well. Resolution no longer matters as much as it did in the past, because all cameras with 8 and more megapixels have a sufficient resolution for 95% of applications. At the moment I am more concerned about the shrinking pixel size, because this leads to higher noise, lower sensitivity, lower dynamic range, and, last but not least, diffraction problems at smaller apertures. Therefore, the pixel pitch should be higher than 5 microns in order to avoid these problems.

Metadata
It would also be very nice to add at least some of the metadata to the images right after capturing them. This is of course already the case with the technical metadata specified in the Exif standard (8). However, an interface enabling the photographer to add some descriptive metadata as well, i.e. author, date, place, and part of the caption, would be especially useful for journalists.

Raw
From the user’s point of view, a standardised raw image format like TIFF/EP or Adobe DNG would be very useful for integrating the image processing into the various workflows required for different applications. Unfortunately, this is more a political issue; there are no technical restrictions preventing the implementation of a standard format.

 

A proposed solution for the live picture and focus problems
First of all, to enable fast autofocusing on the sensor, as well as to achieve sufficient image frequencies for video capture, the sensor used in such a camera should have a good windowing capability. This means that it should not be necessary to read out a complete image, or even a quarter of one, to acquire the data for a good autofocus. It should be possible to strip the read-out down to “a few” selected pixels for exposure measurement and autofocus, and down to the appropriate video resolution. This windowing would allow fast signal processing and video modes up to the required 30 frames per second.
Using such a sensor would replace the camera’s complete autofocus system, which in turn would lead to higher focus accuracy due to the fact that the same sensor can be used for both focus-measurement and imaging. To increase the focus speed even more, it might be helpful to locate the objects by using an additional active autofocus system for prefocussing.
Since electronic viewfinders are not sufficient for every situation, an optical viewfinder is still necessary. A mirror with a reflection of 30%-50% has been proposed in order to provide a live preview on the LCD or video output as well as the use of an optical viewfinder. Since a glass plate mirror introduces spherical as well as chromatic errors, it is better to mount it on a pivot, as is common in a conventional SLR or digital SLR camera, and to swing it up or down when capturing the final image. The errors may be small enough to still allow a sufficient preview and video quality, but in high resolution images blur and colour fringes may appear. To minimize the errors, it should be investigated whether a certain shape of the glass on the back of the mirror, or a coating, will lead to a reduction. Turning the mirror out of the optical path during exposure will also increase the illuminance on the sensor by about one f-stop.

Sony, with its DSC-R1, showed that a fast autofocus system based on the imaging sensor of the camera is possible, and the principle of CMOS technology allows a sufficient windowing, although it cannot be found in current sensors yet. Hopefully we will find these sensors in the future.

Figure 18. Principle of a conventional SLR or digital SLR system

 

Figure 19. Principle of the proposed new system

 

CONCLUSION
Most of the problems we find in today’s digital consumer cameras are related to their small sensor and pixel sizes. Some of these problems are resolution limitations, limited dynamic range, noise, chromatic aberration, diffraction limits, etc. To keep the cameras small, it seems to be necessary to solve these problems using intelligent image processing algorithms. But as we have seen, these algorithms also have their limitations. Therefore, my expectation for the future is that the pixel count should stay at the level of current cameras for the smaller models, while the sensors should become larger for advanced and bigger cameras. Furthermore, I would like to see new ideas like the ones mentioned above for digital SLRs, instead of keeping the old SLR principle.

 

ACKNOWLEDGMENTS
Thanks to Don Williams, Kevin Matherson, Jack Holm, Peter Burns, and Sabine Süsstrunk for insightful talks on the theoretical and practical aspects of the above test procedures and standards.

 

REFERENCES

  1. ISO 7589, Photography – Illuminants for sensitometry – Specifications for daylight and incandescent tungsten
  2. ISO 14524, Photography — Electronic Still Picture Cameras — Methods for measuring opto-electronic conversion functions (OECFs)
  3. ISO 12231, Photography — Electronic still-picture imaging — Terminology
  4. ISO 12232, Photography — Digital still cameras — Determination of exposure index, ISO speed ratings, standard output sensitivity and recommended exposure index
  5. ISO 12233, Photography — Electronic still-picture cameras — Resolution measurements
  6. ISO 15739, Photography — Electronic still-picture imaging — Noise measurements
  7. Jack Holm, Adjusting for the Scene Adopted White, IS&T’s 1999 PICS Conference
  8. JEITA, EXIF 2.2 standard, http://www.jeita.or.jp/english/standard/html/1_4.htm
  9. Don Williams, Debunking of Specsmanship, www.i3a.org
  10. Peter D. Burns, Slanted-Edge MTF for Digital Camera and Scanner Analysis, PICS conference 2000
  11. Don Williams and Peter Burns, Diagnostics for Digital Capture using MTF, PICS conference 2001
  12. Albert J.P. Theuwissen, Small is beautiful! Yes, But Also for Pixels of Digital Still Cameras?, PICS conference 2002

Proposal for a Standard Procedure to Test Mobile Phone Cameras


Dietmar Wüller · Image Engineering · Augustinusstraße 9d · 50226 Frechen · Germany
Electronic Imaging Conference 2006

 

ABSTRACT
Manufacturers of mobile phones are seeking a default procedure to test the quality of mobile phone cameras. This paper presents such a procedure, based as far as possible on ISO standards and supplemented with additional useful information based on easy-to-handle methods. In addition to this paper, which summarizes the measured values with a brief description of the methods used to determine them, a white paper for the complete procedure will be available.
Keywords: digital photography, image quality, noise, dynamic range, resolution, ISO speed, shutter lag, power consumption, SFR

 

MEASURING THE CHARACTERISTICS
In order to anticipate a major aspect: measuring specific image quality aspects helps a lot in characterizing a camera, but these measurements always have to be related to real world images, and there are numerous aspects which we will never be able to measure. So it is impossible to completely test a camera without taking pictures of natural scenes. We at Image Engineering based our measurement procedures on the ISO standards (1, 2, 3, 4, 5, 6), which were already on their way at the time we built the first test stand in 1997, and which served as a good starting point. In order to get a sufficient characterization of a digital camera, a couple of characteristic values seem to be mandatory, and others may be recommended or optional. The two characteristics which seem to be most important are the OECF (2) (opto-electronic conversion function) and the resolution (5), but a few others belong to this group as well.

The following aspects are mandatory:
  • OECF
  • White balancing
  • Dynamic range (related scene contrast)
  • Used digital values
  • Noise, signal to noise ratio
  • Resolution (limiting resolution center, corner)
  • Sharpness

 

Recommended values are:

  • Distortion
  • Shading / Vignetting
  • Chromatic aberration
  • Color reproduction
  • Unsharp masking
  • Shutter lag / Startup time
  • Aliasing artifacts
  • Detailed noise analysis
  • Compression rates
  • Exposure and exposure time accuracy and constancy
  • ISO speed
Optional values may be:
  • View angle, zoom range (at infinity and shorter distances)
  • Hot pixels
  • Detailed macro mode testing (shortest shooting distance, max. scale, distortion)
  • Flash capabilities (uniformity, guiding number …)
  • Image frequency
  • Video capabilities (pixel count, resolution, frame rate, low light behavior)
  • MMS capabilities for mobile phone cameras (Resolution, frame rate, compression etc.)
  • Display (refresh rates, geometric accuracy, color accuracy, gamut, contrast, brightness, visibility in sunlight)

 

The following values may be tested if available and applicable.

  • Optical stabilization
  • Auto focus accuracy and constancy
  • Metadata (Exif, IPTC)
  • Watermarking
  • Spectral sensitivities
  • Bit depth of raw data
  • Power consumption
  • Battery life
  • Detailed noise analysis
  • Color resolution

Since this paper cannot cover all of the procedures in detail, I would like to refer to our white paper, which can be downloaded from http://digitalkamera.image-engineering.de/index.php/Downloads .

 

CAMERA SETTINGS
The measured values are influenced by the settings of the camera. To ensure the correct interpretation of the data, it is necessary to mention the proposed settings. Two different ways are used among testers to ensure correct settings:

  • One uses the predefined factory settings because most users do not change these settings and they should be carefully selected by the manufacturer.
  • The other way is to select settings which provide optimum image quality.

For this proposal the first way is selected, because the typical camera phone user is inexperienced in photography. Both ways can, but do not necessarily, lead to the same settings. For example, the manufacturer may choose a high compression JPEG file format as the standard format to ensure high speed image storage and minimum file sizes. The compression leads to an image quality which may not be the optimum image quality of the camera and therefore leads to different results. For the proposed test procedure we chose the settings to maximize the image quality provided by the camera.


If it is possible to adjust the parameters we propose the following settings:

  • Sensitivity: ISO 100 as the lowest typical sensitivity to minimize noise, and ISO 400, if provided by the camera, to get an idea of how the camera behaves at high sensitivity levels.
  • File format: For the OECF measurement, a file format which stores uncompressed data is preferred, to avoid compression artifacts and the possible impact of the compression algorithm on the camera’s noise. For all other aspects, JPEG in the highest quality shall be used.
  • Resolution: The maximum sample rate is used, which shows as many details as possible.
  • White balancing: Auto white balancing, to avoid color casts and to see if the auto balancing works well.
  • Sharpening: The default value selected by the manufacturer, because of the trade-off between noise and contrast/resolution.
  • Contrast / Gamma: The standard contrast setting shall be selected. Only in a few cases may the contrast be varied to achieve the maximum dynamic range.
  • Color: The color setting which is optimized to achieve the best color reproduction of the original. If not available, use the default.
  • Flash: Generally the flash is switched off. It may be switched on only for those situations where it is explicitly needed.
  • Macro: The macro setting is needed only for the determination of the maximum scaling.
  • Focus: Usually the autofocus is activated to get sharp images.


The image transfer from the camera phone to the PC will depend on the existing interfaces. The tester shall check for the interfaces in the following order and use the first one which is available for the test candidate.

  1. Memory-Card
  2. USB
  3. Bluetooth
  4. IrDA
  5. MMS

 

TEST CONDITIONS
The test conditions should be set up in a way that they represent the typical conditions present in the real world use of the camera phone. If the camera is tested in a mode unusual for the specific application or under unusual conditions, the result may not represent the real image quality achievable for this application.

The setup should fulfill the following requirements:

  • Check the factory or standard settings of the camera for plausibility (sharpening, contrast, speed, saturation, white balancing) with specific care for reproducibility.
  • Uniform illumination of the test chart: the use of an Ulbricht integrating sphere (approx. 98% uniformity) is recommended for all measurements which require uniformity; if a light box is used, the non-uniformity has to be compensated with a calibration image.
  • For photographic applications, color temperature and spectral distribution have to fit the requirements of ISO 7589 (1).
  • For photographic applications, the exposure value for the OECF and color measurements should be EV 7 or higher.
  • Test charts for resolution, distortion, and chromatic aberration measurements should be at least 40 x 60 cm. For these charts the integrating sphere may be replaced by a typical reprographic daylight illumination.
  • Temperature should be 23°C +/- 2°C and relative humidity should be 50% +/- 20%.
  • For visual inspection and comparison, a calibrated and profiled monitor is required.
  • If results are required for different lighting conditions, these conditions should be typical and the lighting conditions have to be specified.

Figure 1. A sample set up for a uniform illuminated transparent test chart.

 

 

OECF MEASUREMENTS
The OECF describes how the camera transfers the illumination on the sensor into digital values in the image. This information or at least the images of the test chart are necessary to answer the following questions:

  • What is the maximum contrast in a scene that can be captured by the camera in all its tonal details (dynamic range)?
  • Is the white balancing o.k.?
  • Does the camera use all possible digital values in the image?
  • Is there a gamma or tonal correction applied to the captured linear image?
  • What is the signal to noise value for different grey levels?
  • What is the ISO speed of the camera?

A picture of a single chart answers all these questions.

The camera OECF (opto-electronic conversion function), as specified in ISO 14524 (2), is measured using a test chart with patches of different grey levels aligned in a circle around the center.

Figure 2. The OECF chart of ISO 14524 combined with the noise patches of ISO 15739.
This is a special version with 20 grey levels.

 

Figure 3. A sample OECF curve for a camera with a high dynamic range.

 

 

DYNAMIC RANGE
The dynamic range describes the contrast in a scene that the digital camera is able to reproduce. It is determined from the OECF. To measure it, the lightest point is chosen as the illumination level at which the camera reaches its maximum output value, and the darkest point as the illumination level at which the signal-to-noise ratio drops to 3. The dynamic range is the contrast between the lightest and the darkest point, given as a contrast ratio, in f-stops, or in densities. ISO 15739 defines the dynamic range on the basis of a signal-to-noise level of 1; in our experience this definition causes problems with cameras which have a flat curve in the dark areas.
In order to measure the dynamic range of a digital camera, a test chart is required whose contrast exceeds the dynamic range of the camera. Since current cameras have a high dynamic range, the 12 grey patches defined in ISO 14524 (2) are not enough to get the required information, especially in the dark areas. Therefore, we have designed a chart with a contrast of 10,000:1 distributed over 20 grey patches. Since the ISO standard requires a spectrally neutral chart, the preferred material is silver halide line film with a screening to achieve the required grey densities. Because the screening may lead to increased noise levels or artifacts in the images of high-resolution cameras, the chart should fill an appropriate image height and the camera may, in contrast to the procedure defined in the standard, be slightly defocused.
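The procedure above can be sketched in a few lines. The helper below is a hypothetical illustration, not the software used for the tests; it assumes per-patch luminances, mean digital values, and total-noise standard deviations have already been measured from the OECF chart.

```python
import math

def dynamic_range(luminances, signals, noise):
    """Estimate dynamic range from OECF patch data.

    luminances: patch luminances on the chart (e.g. cd/m^2)
    signals:    mean digital value of each patch in the image
    noise:      total-noise standard deviation of each patch
    Returns (contrast ratio, f-stops, densities).
    """
    max_out = max(signals)
    # lightest point: lowest luminance at which the output already clips
    light = min(l for l, s in zip(luminances, signals) if s >= max_out)
    # darkest point: lowest luminance whose signal-to-noise ratio is still >= 3
    dark = min(l for l, s, n in zip(luminances, signals, noise)
               if n > 0 and s / n >= 3)
    contrast = light / dark
    return contrast, math.log2(contrast), math.log10(contrast)
```

A contrast of 1000:1, for example, corresponds to roughly 10 f-stops or 3 densities.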

 

WHITE BALANCING
If the automatic white balancing works well, the curves for the 3 channels should lie on top of each other. If the average difference is greater than 5 digital values, the images show a visible color cast. For OECFs determined under tungsten light, a slight color cast should remain in the image to keep the atmosphere (7). The curve should also start at the digital value of 0 and go up to 255 to utilize the complete contrast available in an 8-bit image.
In order to make sure that white balancing works under different lighting conditions, the grey patches of a Gretag Macbeth Color Checker SG image taken with the camera shall also be used to determine the quality of the white balancing. For this check, the saturation shall be calculated from the a and b coordinates of the Lab values for these patches using the formula

C = √(a² + b²)

For a perfect white balance, C should be 0.
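Averaged over the grey patches of the chart, this chroma check can be sketched as follows (the function name is illustrative):

```python
import math

def average_chroma(gray_patches_ab):
    """Mean chroma C = sqrt(a^2 + b^2) over the grey patches of a test chart.

    gray_patches_ab: list of (a, b) CIELAB coordinates measured in the image.
    0 means perfectly neutral; larger values indicate a color cast.
    """
    return sum(math.hypot(a, b) for a, b in gray_patches_ab) / len(gray_patches_ab)
```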

 

NOISE AND ISO SPEED
From the exposure data and the luminance level at which the camera reaches clipping, the ISO saturation speed value can be calculated as given in ISO 12232 (4). For the noise-related speed values, the signal-to-noise level has to be calculated for each patch first. The software we use for this was originally a Photoshop plug-in and is now a stand-alone program combined with an Excel spreadsheet.
In order to minimize the expense of testing cameras, we combined the two test charts for OECF and noise measurements (6) into a single high-contrast chart, keeping the densities of the noise patches as given in annex A of ISO 15739. The ISO total noise values are determined by our software program, and a signal-to-noise ratio is given as an average of the three centered patches in the chart (Figure 2).
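ISO 12232 defines the saturation-based speed as S_sat = 78 / H_sat, with H_sat the focal-plane exposure (in lux-seconds) that just saturates the sensor. A sketch of that calculation; the approximation of H from scene luminance, exposure time, and f-number uses the standard's lens factor q of about 0.65:

```python
def focal_plane_exposure(luminance, exposure_time, f_number, q=0.65):
    """Approximate focal-plane exposure H = q * L * t / A^2 in lx*s.
    q ~ 0.65 bundles lens transmission, vignetting, and cos^4 fall-off."""
    return q * luminance * exposure_time / f_number ** 2

def iso_saturation_speed(h_sat):
    """Saturation-based ISO speed per ISO 12232: S_sat = 78 / H_sat."""
    return 78.0 / h_sat
```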

 

RESOLUTION
Unfortunately, people often take resolution to be the same as pixel count and, in fact, the pixel count as the sample rate is a limiting factor for resolution. With increasing pixel count, however, the other parts of the imaging system increasingly become the bottleneck for resolution, i.e. the ability of a camera to capture fine detail, especially because manufacturers try to keep the sensor size as small as possible.
Since the beginning of our camera testing, we have had problems measuring the resolution of digital cameras. We started with visual analysis and ran into the problems described in Figure 4. Then we tried the slanted edge analysis with the SFR algorithm as defined in ISO 12233 (5, 10, 11), and found that it does not represent the resolution of a camera if sharpening or other image processing algorithms are applied to the edges in the image.

Figure 4. The visual analysis of limiting resolution is often not as easy as we expect it to be.

 


Figure 5.
This SFR shows the correct analysis of the slanted edge but it does not represent the MTF of the system.

 

One possibility for determining visual resolution with a more reliable analysis than a human observer is software developed by Hideaki Yoshida of Olympus. The software uses the ISO 12233 resolution chart (Figure 4) and can be downloaded from the CIPA website (http://www.cipa.jp/english/hyoujunka/kikaku/cipa_e_kikaku_list.html#). This program only determines the limiting resolution of the hyperbolic structures in the chart. In order to determine the limiting resolution at different positions in the image, or to determine the sharpness, equal to the contrast at low frequencies, a different method has to be used.
Therefore, ISO technical committee 42 working group 18 is discussing the implementation of another method using a test chart with 9 modulated Siemens stars distributed over the image. A detailed description of this method and a free software program can be found in the white paper downloadable from the Image Engineering website (14).

Figure 6. The location of the 9 modulated Siemens stars.

 


Figure 7.
Each star has a modulation which is sine shaped in reflection and surrounding grey patches to linearize the data.

 


Figure 8.
The MTF for a typical camera with the limiting resolution at 10% contrast

 

The limiting resolution shall be determined as the frequency at which the MTF (modulation transfer function), measured with the modulated Siemens star method, falls to a contrast of 10%. The value shall be reported for the center star and as an average value for the 4 stars in the corners. If only one resolution value is needed, it shall be calculated as the average of the limiting resolution in the center and the average value for the corners.
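In code, reading the 10% crossing off a sampled MTF comes down to a linear interpolation between the two bracketing measurement points (a sketch; the function name is illustrative):

```python
def limiting_resolution(freqs, mtf, threshold=0.10):
    """Frequency (e.g. in LP/PH) at which the MTF first falls to the
    threshold contrast, linearly interpolated between samples.
    freqs: ascending spatial frequencies; mtf: matching modulation values."""
    for i in range(1, len(freqs)):
        if mtf[i] <= threshold <= mtf[i - 1]:
            # interpolate between the bracketing samples
            frac = (mtf[i - 1] - threshold) / (mtf[i - 1] - mtf[i])
            return freqs[i - 1] + frac * (freqs[i] - freqs[i - 1])
    return None  # the MTF never drops to the threshold in the measured range
```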

 

SHARPNESS
The sharpness of an image is represented by the contrast at low frequencies. Therefore, the contrast shall be determined from the MTF as a single value, averaged from the contrast at the frequencies of 200 LP/PH and 400 LP/PH (line pairs per picture height). In order to indicate the loss of sharpness from the center to the corners, the value shall be reported separately for the center star and as an averaged value for the corners. For a zoom lens, the values averaged over the tele, wide-angle, and standard zoom positions shall be reported.
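A matching sketch for the sharpness value, interpolating the sampled MTF at the two fixed frequencies and averaging:

```python
def sharpness(freqs, mtf, f1=200, f2=400):
    """Sharpness as the mean MTF contrast at 200 and 400 LP/PH."""
    def mtf_at(f):
        # linear interpolation of the sampled MTF at frequency f
        for i in range(1, len(freqs)):
            if freqs[i - 1] <= f <= freqs[i]:
                frac = (f - freqs[i - 1]) / (freqs[i] - freqs[i - 1])
                return mtf[i - 1] + frac * (mtf[i] - mtf[i - 1])
        raise ValueError("frequency outside measured range")
    return (mtf_at(f1) + mtf_at(f2)) / 2
```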

 

DISTORTION
The distortion shall be measured as SMIA TV distortion in the wide-angle position, i.e. the bending of a horizontal line at the top and bottom of the image in relation to the image height, as specified in the SMIA specification (13), §5.20:

SMIA TV Distortion = 100( A-B )/B ; A = ( A1+A2 )/2
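The formula translates directly into code; A1 and A2 are the picture heights measured at the left and right edge of the bent line, B the picture height at the image center:

```python
def smia_tv_distortion(a1, a2, b):
    """SMIA TV distortion in percent: 100 * (A - B) / B with A = (A1 + A2) / 2."""
    a = (a1 + a2) / 2.0
    return 100.0 * (a - b) / b
```

With this sign convention, a positive value corresponds to pincushion distortion and a negative value to barrel distortion.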

 


Figure 9.
The SMIA TV distortion.

 

SHADING / VIGNETTING
An image of a uniformly illuminated diffuser plate shall be taken in the wide-angle position, with open aperture, with the camera close to the plate, and the focus set to infinity. If manual focusing is not available, the diffuser plate shall be photographed with autofocus. The uniformity of the plate has to be greater than 95% and its size shall be at least twice the diameter of the lens.
The exposure shall be adjusted to a maximum digital value between 150 and 200. None of the digital values shall be close to 255 in an 8-bit image. The shading shall be calculated as

Shading [%] = 100 (Dmax - Dmin) / Dmax

Dmax = maximum digital value in the test image
Dmin = minimum digital value in the test image

In order to report the results in f-stops, it is necessary to calculate the f-stops by using the OECF

Shading [f-stops] = (LL Dmax - LL Dmin) / log10(2)

LL Dmax = log luminance value corresponding to Dmax, determined from the measured OECF
LL Dmin = log luminance value corresponding to Dmin, determined from the measured OECF

If an output oriented value is required, it shall be calculated based on the gamma of sRGB.
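Both variants of the calculation can be sketched as follows (illustrative helpers); the log-luminance values are assumed to have been read off the measured OECF:

```python
import math

def shading_percent(d_max, d_min):
    """Relative fall-off: difference between the brightest and darkest digital
    value in the flat-field image, in percent of the maximum."""
    return 100.0 * (d_max - d_min) / d_max

def shading_fstops(ll_dmax, ll_dmin):
    """Shading in f-stops from the log10 luminances that the OECF assigns to
    the brightest and darkest digital value in the flat-field image."""
    return (ll_dmax - ll_dmin) / math.log10(2)
```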

 

CHROMATIC ABERRATION
The chromatic aberration shall be measured by locating the center point of a cross (e.g. from the distortion image) in the image corner and reporting the chromatic aberration as the maximum distance in pixels between the blue and green channels and between the red and green channels.

 

COLOR REPRODUCTION
Achieving a nice-looking image is one thing; the exact reproduction of the original scene is another. The nice-looking image can only be judged by a statistical analysis of the opinions of a large number of people reviewing images. The color reproduction quality, however, can be measured. To do this, an image of the Gretag Macbeth Color Checker SG is taken (for mobile phone cameras at the current stage, it may be necessary to use the standard Color Checker instead of the SG because of the low number of pixels). The colors of the chart are known or measured as XYZ and Lab values with a spectrophotometer. If the manufacturer does not state otherwise, we assume that the images taken with the camera are provided in the sRGB color space as defined in IEC 61966-2-1, because this is the color space specified in Exif 2.1 as the output space for digital cameras using the Exif file format. The captured images are transferred to Lab, which represents the color perception of the human visual system, and the color distance ΔE, as well as the components ΔL, ΔC, and ΔH, can be measured. The average ΔE value shall be reported as the color reproduction quality. The lower the Δ values, the better the color reproduction of the original scene.
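A sketch of the evaluation, using the CIE 1976 color difference (the plain Euclidean distance in Lab):

```python
import math

def delta_e(lab_ref, lab_img):
    """CIE 1976 color difference dE*ab between a reference patch and the
    corresponding patch in the captured image, both as (L, a, b) triples."""
    return math.sqrt(sum((r - i) ** 2 for r, i in zip(lab_ref, lab_img)))

def mean_delta_e(refs, imgs):
    """Average dE over all chart patches, reported as color reproduction quality."""
    return sum(delta_e(r, i) for r, i in zip(refs, imgs)) / len(refs)
```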


Figure 10.
To visualize the color differences, a file can be used which has the values of the original chart in the lower right corners and the values of the test image in the upper left. The Photoshop sample file can be downloaded from (14).

 

SHUTTER LAG
Everybody who has ever taken a picture with a consumer digital camera knows the effect: nothing happens after pressing the button, and then, after a few seconds, an image is exposed and another photo of somebody’s legs has been taken. Part of the time in between is needed to focus and part is needed to adjust the exposure. The overall time from pressing the button until the image is exposed is called shutter delay or shutter lag. In the past, shutter lag could be measured using the timescale of the power consumption. Nowadays, the peaks in power consumption when pressing the exposure button are often not visible, so we had to find another way to determine shutter lag. We came up with a method using a panel with 100 LEDs which are “running” at a selectable frequency, meaning that each LED is illuminated for a defined period of time until the next one lights up. The LEDs start running when a micro switch fixed on the exposure button is pressed. The first LED lit in the image taken by the camera defines the shutter lag.
In order to include the focus time, the camera shall focus on an object close to infinity and then be turned to face the LED panel at a distance of 1.5 m. This ensures a defined focus situation and reproducibility. In order to exclude the autofocus, it shall either be switched off or the camera shall focus on the LED panel first before the image is taken.
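Converting the panel reading into a time is then trivial; this sketch assumes the LEDs are indexed from 0 and each stays lit for one period of the selected frequency:

```python
def shutter_lag(first_lit_led, led_frequency_hz):
    """Shutter lag in seconds from the LED-panel method: the 0-based index of
    the first LED visible in the image times the LED period (1 / frequency)."""
    return first_lit_led / led_frequency_hz
```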


Figure 11.
An LED panel is used to measure shutter lag.

 

STARTUP TIME
A snapshot situation occurs and needs to be captured. If the camera needs a long time to get ready, the moment will be gone. Therefore, the time needed to get ready to shoot after activating the camera in the phone is an important value, which can be determined using a simple stopwatch.

 

COMPRESSION RATES
Since all camera phones use JPEG compression to reduce the file sizes of the images, and the compression is a lossy one, it is interesting to know the compression rates for an image with fine details (e.g. the resolution chart). These compression rates shall be given in percent of the size of the uncompressed image with a color depth of 8 bit per channel.
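The calculation is straightforward; for an 8-bit RGB image the uncompressed size is one byte per channel and pixel:

```python
def compression_rate_percent(file_size_bytes, width, height, channels=3):
    """Compressed file size in percent of the uncompressed size at 8 bit per channel."""
    uncompressed = width * height * channels  # one byte per channel and pixel
    return 100.0 * file_size_bytes / uncompressed
```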

 

EXPOSURE AND EXPOSURE TIME ACCURACY AND CONSTANCY
A low-contrast scene (between 60:1 and 100:1) shall be set up with an overall reflectance of 18%. A patch with 18% reflectance shall reach a digital value between 100 and 130 (for gamma 2.2 the target value is 117). No highlights shall be clipped.

 

VIEW ANGLE, ZOOM RANGE (AT INFINITY AND SHORTER DISTANCES)
The view angle shall be measured using a test chart with a specific width (min. 1 m) and by taking an image of this chart from a defined distance. The view angle can be calculated from the width of the test chart in relation to the complete image width.


 

wimage = wchart (image width [pixel] / chart width [pixel])
2α = 2 arctan(wimage / (2 D)), with D = distance between camera and chart
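Sketched in code, with the chart width scaled up to the full image width before the angle is taken (illustrative helper):

```python
import math

def view_angle_degrees(chart_width, chart_px, image_px, distance):
    """Full horizontal view angle 2*alpha from a chart of known width
    photographed at a known distance.
    chart_width and distance in the same unit (e.g. meters);
    chart_px / image_px: widths of the chart and the image in pixels."""
    w_image = chart_width * image_px / chart_px  # scene width covered by the image
    return 2 * math.degrees(math.atan((w_image / 2) / distance))
```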

 

HOT PIXELS
Pixels with unusual behavior are pixels or pixel clusters which show a digital value significantly above the typical noise level (usually > 20) in an image taken in complete darkness at a specified exposure time (if adjustable). Such pixels require further investigation.
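The threshold test can be sketched as a simple scan over the dark-frame image (hypothetical helper, taking a 2-D list of 8-bit values):

```python
def find_hot_pixels(dark_frame, threshold=20):
    """Flag pixels in a dark-frame exposure whose value exceeds the threshold,
    i.e. lies significantly above the typical noise level."""
    return [(y, x)
            for y, row in enumerate(dark_frame)
            for x, value in enumerate(row)
            if value > threshold]
```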

 

FLASH CAPABILITIES (UNIFORMITY, GUIDING NUMBER LIGHT SOURCE, ETC.)

Light source
It is necessary to know whether the camera phone uses a flash or an LED as the additional light source for dark scenes.

EV/guiding number
At a distance of 1m to the phone, the light is measured using an exposure meter. In the case of an LED or another continuous light source, the exposure value is measured. In the case of a flash as the light source, the guiding number is measured and transferred into an exposure value using 1/60 of a second as sync time for the calculation.

Uniformity
How uniformly the flash or LED illuminates the exposed scene is another criterion for the quality of images taken with the additional light source. An image of a neutral grey wall shall be taken from a distance of 2 m. The measurement uses the same analysis as described under shading/vignetting.

 

IMAGE FREQUENCY

[Frames per second]
Image frequency is determined by how fast the camera is ready to shoot the next image after taking one. We call this the continuous shooting frequency. It is measured, again using the LED panel, in JPEG highest-quality mode. When the camera phone is ready to take a photo, the release buttons of the camera and of the chronometer (or LED panel) are pressed at the same time; then another picture is taken as fast as possible. The result can be read from the second picture.

 

VIDEO CAPABILITIES (PIXEL COUNT, RESOLUTION, FRAME RATE, LOW LIGHT BEHAVIOR)

Pixel count
The number of pixels [horizontal x vertical] in the video stream

Frames per second good lighting conditions
The number of frames per second captured in video mode. After a conversion into typical video codecs such as AVI, numerous video editing programs are able to analyze the frequency of the captured stream. Good lighting conditions in this case mean an illumination with more than 1000 lx in the scene.

Frames per second bad lighting conditions
The number of frames per second captured in video mode. After a conversion into typical video codecs such as AVI, numerous video editing programs are able to analyze the frequency of the captured stream. Bad lighting conditions in this case mean an illumination with 50 lx in the scene.

 

MMS CAPABILITIES FOR MOBILE PHONE CAMERAS (RESOLUTION, FRAME RATE, COMPRESSION ETC.)
Resampling for MMS (yes/no; pixel count; file size/compression)
Due to limited bandwidth, some of the mobile phone cameras resample the images to lower pixel counts and higher compression for MMS. The pixel count as well as the file size therefore indicates the loss of quality in this case.

 

DISPLAY (REFRESH RATES, GEOMETRIC ACCURACY, COLOR ACCURACY)

Refresh rate
The refresh rate of the display can be tested using a frequency generator together with an LED. The frequency is varied within the typically expected refresh-rate range. If the LED appears constantly illuminated on the display of the camera phone, the LED and the display refresh have the same frequency, which can then be read from the frequency generator.

Display accuracy

The display accuracy states whether the display and the captured image are identical. It is possible that the display shows only a part of the captured image, in the center or in another region.

Color accuracy
The only way to measure the color accuracy of the display is to use a spectrophotometer and measure single colors on the display. This is a time-consuming method, and it includes the characteristics of the sensor, the image processing, and the display. In combination with the impact of the surrounding illumination, measuring this value does not seem to be worth the effort.

 

VISUAL DETERMINATION OF A LETTER AND A TEST SCENE
An A4 or letter-sized page of a newspaper shall be captured with autofocus and automatic exposure to check whether the text is readable. In addition, a test scene shall be photographed which contains highlights, shadows, spot colors, skin tones, and fine details in low- and high-contrast areas.

 

ACKNOWLEDGMENTS
Thanks to Don Williams, Kevin Matherson, Jack Holm, Peter Burns, and Sabine Süsstrunk for insightful talks on the theoretical and practical aspects of the above test procedures and standards. Thanks to my colleagues Uwe Artmann, Christian Loebich, and Rebecca Stolze for their excellent job in transferring our needs into software and spreadsheets.

 

REFERENCES

  1. ISO 7589, Photography – Illuminants for sensitometry – Specifications for daylight and incandescent tungsten
  2. ISO 14524, Photography — Electronic Still Picture Cameras — Methods for measuring opto-electronic conversion functions (OECFs)
  3. ISO 12231, Photography — Electronic still-picture imaging — Terminology
  4. ISO 12232, Photography — Digital still cameras — Determination of exposure index, ISO speed ratings, standard output sensitivity and recommended exposure index
  5. ISO 12233, Photography — Electronic still-picture cameras — Resolution measurements
  6. ISO 15739, Photography — Electronic still-picture imaging — Noise measurements
  7. Jack Holm, Adjusting for the Scene Adopted White, IS&T’s 1999 PICS Conference
  8. JEITA, EXIF 2.2 standard, http://www.jeita.or.jp/english/standard/html/1_4.htm
  9. Don Williams, Debunking of Specsmanship, www.i3a.org
  10. Peter D. Burns, Slanted-Edge MTF for Digital Camera and Scanner Analysis, PICS conference 2000
  11. Don Williams and Peter Burns, Diagnostics for Digital Capture using MTF, PICS conference 2001
  12. Albert J.P. Theuwissen, Small is beautiful! Yes, But Also for Pixels of Digital Still Cameras ?, PICS conference 2002
  13. http://www.smia-forum.org/specifications/characterization.html
  14. http://digitalkamera.image-engineering.de/index.php/Downloads

Measuring Scanner Dynamic Range

PDF PDF (482 KB)

Dietmar Wüller · Image Engineering Dietmar Wüller · Cologne · Germany
IS&T's 2002 PICS Conference

 

Different Methods
The use of scanners to provide digital image files is growing rapidly. Currently there is no standardized method to determine the dynamic range of scanners, so the data reported in technical specifications can be determined using different methods. An ISO 21550 standard for measuring the ability of scanners to reproduce tones, especially in the dark areas of the original, is currently under development (in an early working-draft stage). At present, most manufacturers report the dynamic range calculated from the bit depth of the implemented A/D conversion using the formula:

D = log10(2^B) = B log10(2)    (1)

D = reported dynamic range given in Densities
B = Bit depth of A/D conversion


This dynamic range is usually higher than the actual capabilities of the scanner, because nowadays the A/D converters are in most cases no longer the bottleneck in the signal processing chain of a scanner.
Other manufacturers look at a scanned gray scale and report the density of the darkest patch that still differs from the next patch with a lower density. If the gray scale does not cover a density range large enough to determine the capabilities of the scanner, some add a film with a uniform density to reach higher density values.

 

Possible Bottle Necks for the Dynamic Range
Until recently, the A/D conversion in combination with the scanning speed was the bottleneck. Since these components have become cheaper, faster, and better, this has changed. Today the light source together with the sensitivity, the quality of the analog components, and flare are the bottlenecks, especially of film scanners. The dynamic range of scanners for reflective media is usually higher than the density range of the scanned media; only a few low-cost scanners or scanners in multipurpose devices are not capable of reproducing the roughly 2.x densities of the reflective material.

 

Available Gray Scales and Material Problems
When we started in 1998, measuring the dynamic range seemed easy, but a number of problems have appeared since then. One difficulty, for example, is producing a test chart with homogeneous patches covering the needed density range on a material similar to the film usually scanned on the specific scanner. A second one is the fact that a number of film scanners show a significant difference in dynamic range depending on the scanned material.

 

Commercially Available Gray Scales
The following gray scales are commercially available, and their usability for measuring the dynamic range was checked.

  1. Agfa reflection silver gray scale (type G6T5E), 23 x 215 mm, consisting of 20 patches with densities up to 2.0.
  2. Agfa transmission silver gray scale, 26 x 162 mm, consisting of 30 patches with densities up to 4.3.
  3. X-Rite silver gray scale (P/N 381-25) for the calibration of densitometers, 21 x 125 mm, consisting of 21 patches with densities up to 3.9.
  4. IT8 7.1 and 7.2 test charts on RA-4- and E6-based material from different manufacturers for scanner calibration.

 

Q-factor
When we tested the Nikon film scanners, we found that with our Agfa gray scale these scanners were not able to differentiate patches with densities higher than 2.3, a result that we could not believe. From Nikon we got the information that the geometry of the illumination may be the reason for this behavior. So we tried to find a material with a Q-factor (the ratio between the density measured with parallel illumination and the density measured with a diffuse light source) of about 1. Not taking care of the spectral uniformity of the transmission, we tried typical color reversal material in combination with a filter of a uniform density of 1.5. We found that the dynamic range for E6 material is significantly higher than that for silver-based black-and-white film. This means that a scanner of this type will cause problems scanning typical black-and-white film but will lead to good results with material developed in a C41 or E6 process.

 

Density Range of the Gray Scale on Film
The maximum density of the X-Rite gray scale is in some cases not high enough. The Agfa film scale has a maximum density which is high enough for most scanners. The IT8 targets do not have a maximum density suitable for measuring the dynamic range; manufacturers using these charts combine them with a film of uniform density. The required range is from the film base density of about 0.1 up to densities higher than 4.0.

 

Requirements for Noise Measurements
To measure the noise of a scanner, the frequency of the grainy structure in the test chart has to be higher than the geometric sample rate of the scanner, usually given as resolution in dots per inch or per mm. According to the digital camera noise measurement standard (ISO 15739 (1)), it should be at least 10 times higher. This causes problems with current test chart materials, especially if the scanner under test is a film scanner of the new generation. A possible workaround may be a (molecule-based) diffusor filter in the optical path between chart and scanner.

 

Summary of the Test Chart Material Characteristics
The test charts for measuring the dynamic range have to fulfill the following requirements:

  1. Maximum density higher than 4.0.
  2. Density steps between the patches not larger than 0.2.
  3. Known Q-factor, which shall be reported together with the results.
  4. A spectrally uniform transmission from 380 to 780 nm.
  5. Fine grain structure for measuring noise.
  6. Low-cost production and a chart layout that allows automatic or at least semi-automatic analysis.

 

How to Measure Dynamic Range and Related Values
For the determination of the dynamic range and the related values, the test chart has to be scanned. If the Q-factor and/or the absolute Dmax value shall be measured as well, scans of the additional charts have to be made. Ten scans per chart will minimize errors caused by temporal noise or mechanical tolerances. If the scanner software allows storing the linear raw data, this data shall be stored and used for the determination of the different values; the analysis software therefore has to be able to handle a color depth of 16 bit per channel. To avoid mistakes caused by interpolation in the scanner software, the scanning resolution shall be:

R = Rmax / i    (2)

R = scanning resolution
Rmax = maximum scanning resolution of the scanner
i = integer value

Figure 1. A possible test chart scanned with a scanner which has a good dynamic range (for better visualization, the gamma of the scan was enhanced using a value of 2.0).

 

For each patch of each scan, no fewer than 64 x 64 pixels shall be used to determine the average digital value of the patch and the standard deviation. The standard deviation is needed for measuring noise, as explained later.

 

Dynamic Range
The dynamic range is determined from the function given by the density values of the patches in the test chart and the resulting, averaged digital values from the ten scans (2).

Figure 2. A typical OECF function of an 8 Bit output scanner

 

If the scanner shows clipped values in the lightest patches of the chart, the first patch below the maximum digital value shall be chosen as the minimum density Dmin. There are three different ways to determine the maximum density Dmax, which have to be discussed among the experts:

  1. The darkest patch with a difference in the averaged digital value of at least 1 compared to the next lighter patch.
  2. The darkest patch which shows a visual difference in comparison to the next lighter one by using a gamma correction.
  3. The darkest patch showing a signal to noise ratio larger than a given minimum value e.g. 1 calculated as shown in the signal to noise section. (3)


The third way seems to be the most objective; only the minimum value remains to be discussed.
The dynamic range DR of the scanner is then given as:

DR = Dmax - Dmin    (3)

The dynamic range has to be calculated separately for each channel R, G, and B. If only one value is reported, the different values should be weighted using the formula:

Formula (4): a weighted combination of the per-channel values DR for R, G, and B (the weighting is not reproduced here).
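The selection of Dmin and Dmax described above can be sketched as follows. The function is illustrative, and the signal-to-noise ratio is approximated here simply as mean divided by standard deviation; the standard's formula uses the incremental gain instead, as described in the signal-to-noise section.

```python
def scanner_dynamic_range(densities, means, stds, snr_min=1.0):
    """Dynamic range DR = Dmax - Dmin from a scanned grey scale.

    densities: patch densities in ascending order
    means/stds: averaged digital values and standard deviations per patch
                (e.g. from ten scans, 64 x 64 pixels each)
    Dmin: first patch below the clipped maximum value;
    Dmax: darkest patch whose S/N (approximated as mean/std) is >= snr_min.
    """
    clip = max(means)
    d_min = next(d for d, m in zip(densities, means) if m < clip)
    d_max = max(d for d, m, s in zip(densities, means, stds)
                if s > 0 and m / s >= snr_min)
    return d_max - d_min
```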

 

Q-factor
If the influence of the illumination shall be determined as well, the procedure has to be repeated with a test chart on a material with a high Q-factor. The default chart shall be made of a material with a Q-factor close to 1.

Figure 3. The dynamic range of Nikon scanners is limited to 2.3 densities if a material with a high Q-factor, like silver-based black-and-white film, is scanned (for better visualization, the gamma of the final scan was enhanced using a value of 2.0).

 

Non-linear 8-bit data
If the scanner does not provide linear raw data, the automatic adjustment can be used in combination with a gamma modification to lighten the dark areas. Using a gamma value of 2.0 in the software leads to good results.

 

Absolute Dmax
In some cases, the maximum density of the dynamic range differs from the absolute maximum density the scanner can reproduce. If the scanner software detects an underexposed slide or an overexposed negative, it may be able to adjust the exposure time or the amplification level; this, however, does not lead to the reproduction of higher contrasts within a single image.

Figure 4. A scan without (upper) and with (lower) a gamma enhancement on raw data using the scanning software.

 

Therefore, combining a test chart of low maximum density with a uniform-density film may lead to wrong results for the dynamic range, but it leads to the related value of what we call the absolute Dmax.

 

Signal to Noise
If the signal-to-noise ratio is used to define Dmax for the dynamic range, it has to be calculated for every patch using the standard deviation and the average value of each patch. The formula used for the calculation will be defined in the standard and is similar to the formula used in ISO 12232:

S/Nx(D) = g(D) / s    (5)

D = input Density of the patch.
g(D) = average of the incremental gain to the next lower and next higher density (the rate of change in the output level divided by the rate of change in the input density).
s = standard deviation of the monochrome output level values or weighted color output level values (for color cameras), taken from a 64 by 64 pixel area.

An evaluation had to be made as to whether the formula leads to the same results for different ways of scanning the chart. We therefore tried it on 8-bit output data with and without gamma correction and found that, within the measuring tolerances, the values for S/Nx are the same.
Looking at the same density patch for each scanner - one that lies below Dmax for all the scanners - the signal-to-noise ratio for this patch is a useful value for comparing the signal quality of the scanners.
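The calculation described above can be sketched as follows, under the assumption that S/Nx is the average incremental gain divided by the standard deviation (the exact formula is defined in the draft standard); the function name and data layout are our own:

```python
# Sketch of the per-patch S/Nx calculation described above, assuming
# S/Nx = g(D) / s, with g(D) the average incremental gain to the
# neighbouring patches and s the standard deviation of a 64x64 area.
import numpy as np

def snr_x(levels, densities, patches, i):
    """levels: mean output level per patch; densities: input density per
    patch (both ordered); patches: list of 64x64 pixel arrays;
    i: index of the patch of interest (must have neighbours on both sides)."""
    # incremental gain towards the next lower and next higher density
    g_lo = (levels[i] - levels[i - 1]) / (densities[i] - densities[i - 1])
    g_hi = (levels[i + 1] - levels[i]) / (densities[i + 1] - densities[i])
    g = (abs(g_lo) + abs(g_hi)) / 2.0
    s = np.std(patches[i])  # standard deviation over the 64x64 area
    return g / s
```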

 

Conclusion
To avoid confusion caused by the different published values for dynamic range, an ISO-standard-based procedure for measuring dynamic range and related values is needed. Developing this procedure turned out to be more complicated than we thought at the beginning of our work. In particular, finding a spectrally uniform material with a low Q-factor is a problem that seems solvable but is not yet solved. A number of different aspects mentioned in this paper have to be taken into account.

 

References

  1. ISO 15739, Photography - Electronic still-picture imaging - Noise measurements
  2. ISO 14524, Photography - Electronic still-picture cameras - Methods for measuring opto-electronic conversion functions (OECFs)
  3. ISO 12232, Photography - Electronic still-picture cameras - Determination of ISO speed, chapter 6.2.1
  4. ISO 12641, Graphic technology - Prepress digital data exchange - Colour targets for input scanner calibration
  5. Robert Gann, Desktop Scanners - Image Quality Evaluation, Prentice-Hall, 1998

 

Biography
Dietmar Wueller studied photographic sciences from 1987 to 1992 at the University of Applied Sciences Cologne (Germany). His studies were followed by scientific work in the area of light and colour measurement at the Institute for Light and Building Technique in Cologne and work in a prepress company. In 1995 he opened a training and testing centre for digital image processing. Since 1997 Dietmar Wueller has been testing digital cameras and scanners for German magazines and manufacturers. He became the German representative for ISO TC42 WG18 in summer 2000.

Three Years of Practical Experience in Using ISO Standards for Testing Digital Cameras

PDF PDF (650 KB)

Christian Loebich and Dietmar Wueller
Image Engineering Dietmar Wueller · Cologne · Germany
IS&T's 2001 PICS Conference Proceedings

 

Different Methods
In 1997 we started to build our test booth for testing digital still cameras. We decided to start this kind of testing because just looking at pictures on the screen or analysing printed pictures yielded different results depending on the test person and the surrounding conditions. We designed the testing equipment based on ISO standards, except for some parameters which are discussed on the following pages.
To date we have measured more than 150 different cameras and have also tested up to 40 cameras of the same type, so there is a good basis for drawing some conclusions about this way of camera testing.

The evaluated parameters in the test cycle are:

  • ISO 14524 OECF measurement
  • ISO 12232 Determination of ISO Speed
  • ISO 15739 Noise measurements
  • ISO 12233 visual Resolution / SFR measurements
  • Color Reproduction
  • Shading

 

Target Illumination
The homogeneous illumination of the target area is achieved with an Ulbricht integrating sphere with a diameter of 1 meter. Two 250 W halogen bulbs with appropriate filtering for daylight and tungsten conditions are used for illumination. The light source is arranged at a 90-degree angle to the viewing direction of the camera towards the target. The outside of the Ulbricht sphere is coated with a coal/Plexisol mixture; inside we used a barium-sulfate/Plexisol mixture. The maximum daylight luminance is about 500 cd/m²; the maximum tungsten luminance is more than 1500 cd/m². Both values are measured with the 1:1000 OECF chart in place. Without any chart, the maximum tungsten luminance is about 1650 cd/m². The spectral distribution of the light source meets the requirements of ISO 7589.
The reflective targets, like the Kodak SFR targets and the resolution targets, are illuminated by halogen lamps with daylight filtering.

 

OECF
The OECF is measured using the camera method with a chart of 1:1000 contrast. Noise and speed are measured using the same chart (the ISO standard specifies 1:80 reflective).
The OECF is measured by taking ten exposures of the target and evaluating them with a Photoshop plug-in to obtain the mean value and standard deviation of at least 64 x 64 pixels for each of the 12 patches. These values are evaluated for each of the three channels red, green, and blue. Y, R-Y, and B-Y are calculated for ISO speed determination. The test target is exposed so that clipping is reached in the lightest patch, which is needed for measuring saturation speed.
The dynamic range is given by the maximum log luminance minus the log luminance at which S/Nx falls below a certain value. Right now we are using S/Nx = 2 to determine the dynamic range of a camera. So far only a few cameras have been able to reproduce the whole dynamic range of 1:1000, corresponding to 10 f-stops. Most consumer cameras are below that range, at about 4-7 f-stops.
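The dynamic-range determination just described can be sketched as follows; the patch data in the test are hypothetical, and S/Nx is assumed to have been measured per patch beforehand:

```python
# Sketch: dynamic range in f-stops from OECF patch data. The usable range
# is the span of patch log-luminances whose S/Nx stays at or above the
# threshold (S/Nx = 2 in our tests). Data layout is our own illustration.
import math

def dynamic_range_fstops(log_luminances, snr_values, threshold=2.0):
    """log_luminances: log10 luminance per OECF patch;
    snr_values: measured S/Nx for each patch."""
    usable = [lg for lg, snr in zip(log_luminances, snr_values)
              if snr >= threshold]
    if not usable:
        return 0.0
    dr_log = max(usable) - min(usable)  # dynamic range in log10 units
    return dr_log / math.log10(2.0)     # convert log10 units to f-stops
```

A full 1:1000 chart with all patches above the threshold yields about 10 f-stops, matching the text above.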

figure 1

Figure 1.
OECF daylight of Leica S1 with high dynamic range

 

The OECF curve is plotted as digital values against log luminance. The shape of the curve is exactly linear if the gamma is 2.2, but most cameras are calibrated to a gamma below 2.2, so the curve sags slightly.

It is easy to see whether the camera has a well-balanced tone reproduction or shows clipped highlights or noisy shadows. The quality of the white-balancing algorithm can also be determined: if the three tone curves of red, green, and blue lie exactly on top of one another, the white balance is working perfectly. The white-balance algorithms seem to be very different, and some manufacturers seem to know better than others how to get a neutral picture.

figure 2

Figure 2.
Fuji Finepix 4900 daylight

 

figure 3

Figure 3.
Fuji Finepix 4900 tungsten

Some cameras have a very sensitive white balance: very slight differences in the angle pointing at the OECF target result in a different white balance. The target is neutral grey, so the camera should be able to produce neutral pictures under ISO-standard illumination.
Our Photoshop plug-in writes the mean values of R-G and B-G into a text file.
A remarkable value that can be determined from the OECF is the range of used digital values. Some cameras use barely 245 of the 256 available digital values, throwing away about 5 percent of the possible contrast range simply because of an imperfect tonal-range correction.
The ISO speed is computed from the shutter speed and the aperture reported by the camera.
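The check on the range of used digital values can be sketched like this; the levels in the example are invented:

```python
# Sketch: how much of the 8-bit range does the camera actually use?
# Computed from the darkest and lightest output levels on the OECF curve.

def used_value_fraction(min_level, max_level, bit_depth=8):
    """Fraction of the available digital range covered by the tone curve."""
    return (max_level - min_level + 1) / 2 ** bit_depth

# A camera spanning only 245 of the 256 levels loses roughly 5 percent:
print(used_value_fraction(8, 252))
```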

 

Noise
Three noise patches in the middle of the ISO noise chart are evaluated using a Photoshop plug-in. Because the chart is made on a film recorder, it is important to choose the appropriate shooting distance. Another possibility is to defocus the lens a little.
We decided to frame our target between two glass plates to keep dust and mechanical stress as low as possible. The three noise patches are exactly in the middle of the target, and the optical axis of the camera lens points exactly at this place on the target. The target acts like a mirror, and the reflection of the lens or the whole camera can be seen in the image. This could be avoided by placing the noise patches outside the center of the target.

figure 4

Figure 4.
Reflection of the camera in the target (shown with the old target for better visibility of the reflection)

The results of these noise measurements are only comparable with values obtained with the same illumination of the three patches.

 

Resolution Measurements
For resolution measurements we started with the standard transmission ISO SFR target.
The visual evaluation of the parabolic resolution structures and the parallel-line structures on the target gave good results, but we always had to use one picture as a kind of reference for comparison.
We tried to produce a computational evaluation of the parallel-line structures in the image by measuring the contrast difference between the black and the white lines. The first minimum of this contrast should mark the maximum resolution.
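The contrast measurement we attempted can be sketched as follows, using the Michelson contrast of a profile sampled across the line structures; the function is our own illustration, not the exact evaluation we used:

```python
# Sketch: Michelson contrast of a parallel-line patch, computed from a
# 1-D profile of pixel values sampled across the lines. The frequency at
# which this contrast first collapses marks the resolution limit.
import numpy as np

def line_contrast(profile):
    """profile: 1-D array of pixel values across the line pattern."""
    i_max = float(np.max(profile))  # white lines
    i_min = float(np.min(profile))  # black lines
    if i_max + i_min == 0:
        return 0.0
    return (i_max - i_min) / (i_max + i_min)
```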

figure 5

Figure 5.
absolute digital noise values (the starting point of the noise reduction is visible)

 

figure 6

Figure 6.
image of reflective target with Olympus E-100RS

 

figure 7

Figure 7.
SFR of Canon Powershot Pro 90 IS


Because the resolution of this structure changes from line to line, there is no way to achieve a result where you can be sure that there are not already some misleading aliasing effects.

figure 8

Figure 8.
Linear structures for visual resolution evaluation


Some of the cameras with a fixed lens had to be placed very close to the resolution test chart to use the maximum picture height. The focusing mode had to be changed to macro range because there was no way to measure the resolution in the standard focusing range.
We decided to print the resolution target 0.8 m high and to take the pictures in reflective daylight mode. This allowed a focusing distance of 1.5 m to 2.5 m, which is more realistic.
The SFR-based evaluation of the transmission target works with scanners or with camera backs that allow the unsharp masking to be switched off. Otherwise we got results with a contrast higher than 1.0, and depending on the camera manufacturer the results had a certain offset. So you could not compare the results of two cameras from different manufacturers which theoretically and visually have the same resolution. The image processing inside the camera changes the picture in such a way that the SFR algorithm gives the correct edge analysis, but this is not related to the details of a scene reproduced by the camera.

figure 9

Figure 9.
Circular structures for visual resolution evaluation

We also tried the low-contrast Kodak reflection target, with similar results. Because of these problems we are evaluating other targets for visual analysis. We produced some test charts with structures containing only parallel lines. The resolution changes in steps of 0.5 LW/PH. But these structures need space, so they are not always in the center position where the lens resolution is best.
Furthermore, we are trying black circular structures with a small white segment in them. Then there is no decision about whether the parallel lines are still separated, but only a "yes" or "no" as to whether the small segment is still visible.

 

Shading
A milk-glass plate is used to measure the shading from the optical axis to the corner of the field of view. With standard consumer zoom cameras, three pictures are taken at wide, standard, and tele zoom settings. These pictures are taken in automatic exposure mode, so the grey value should be somewhere in the range of 100-150. The pictures are evaluated by measuring 2400 areas, each 8 by 8 pixels. The results are normalized to the range of 0 to 1 and presented graphically. Furthermore, the minimum value is divided by the center value; this yields the percentage difference from center to edge (shading S).

Imin / Imax * 100 = S [%]  (1)
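Formula (1) and the block-wise evaluation can be sketched like this; the example uses a synthetic image, whereas in practice the areas come from a real shot of the milk-glass plate:

```python
# Sketch: shading evaluation. Average 8x8-pixel areas across the frame,
# then report S = Imin / Imax * 100 (formula 1). The image here is
# synthetic; a real test uses 2400 areas from a shot of the milky target.
import numpy as np

def shading_percent(image, block=8):
    """Mean of each block x block area, then min/max ratio in percent."""
    h, w = image.shape
    means = [image[y:y + block, x:x + block].mean()
             for y in range(0, h - block + 1, block)
             for x in range(0, w - block + 1, block)]
    return min(means) / max(means) * 100.0
```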

A problem with cameras that do not allow manual focusing is that you get different results depending on the focusing distance the autofocus determined. We always put the front lens of the camera directly in contact with the milky glass target, so most of the cameras do not know what to do and focus to infinity.

figure 10

Figure 10.
Shading of Fuji Finepix 4900 (normalized values)

 

Color Reproduction
The color reproduction quality is determined by taking pictures of the right part of an enlarged transmissive IT8 chart. The spectral transmission of the patches has been measured and the Lab values computed. As most of today's cameras deliver image data in the sRGB color space, the data are transformed to the Lab color space and a color-distance calculation of ΔE is done. To avoid problems with varying exposure values, only the two-dimensional ΔE on the a and b axes of the Lab color space is evaluated. The results show whether a misreproduction of colors is caused by a saturation boost or by a problem with the spectral sensitivity or color calculation in the camera.
We are planning to change to the GretagMacbeth ColorChecker DC, because this target uses more natural colors instead of the film dyes of the IT8 chart.
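The two-dimensional colour-distance calculation can be sketched as follows; the Lab triples in the example are invented measurements:

```python
# Sketch: Delta-E restricted to the a and b axes of Lab, ignoring L to
# avoid exposure-dependent differences, as described above.
import math

def delta_ab(lab_ref, lab_cam):
    """Chromatic distance between reference and camera Lab values (L ignored)."""
    _, a1, b1 = lab_ref
    _, a2, b2 = lab_cam
    return math.hypot(a1 - a2, b1 - b2)

print(delta_ab((52.0, 10.0, -4.0), (49.0, 13.0, 0.0)))  # prints 5.0; L is ignored
```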

 

Conclusions
The measured parameters OECF, dynamic range, used digital values, and white balance give a good idea of the characteristics of a tested camera. Knowing these parameters, and with some experience, it is possible to translate the results into a predicted behavior in real-life photography.
The SFR evaluation often gives different results between the computed value and the visible result. In our opinion this is currently the most difficult value in testing digital cameras. We decided to make a visual evaluation of the pictures, but we always compute the SFR as well to gain a better understanding of it.
The lower S/Nx value used to determine the dynamic range of a digital camera should be set to a fixed value.
The color reproduction quality gives information about the saturation enhancement of digital cameras. The target will be changed to the ColorChecker DC, and the results should be displayed as luminance, chroma, and saturation values.

 

Future
Overall picture quality has increased a lot within the last three years and is satisfactory for most users. Besides image quality, ease of use, camera speed, and behavior in low-light conditions are becoming more and more important. Right now we are working on measuring parameters like lens distortion, camera power consumption, and autofocus speed at different image contrasts.

 

References

  1. ISO 12233 Photography - Electronic still-picture cameras - Resolution measurements
  2. ISO 14524, Photography - Electronic still-picture cameras - Methods for measuring opto-electronic conversion functions (OECFs)
  3. Burns, P. and Williams, D., "Using Slanted Edge Analysis for Color Registration Measurement", IS&T/PICS Final Program and Proceedings, to be published April 1999.
  4. ISO 12232, Photography - Electronic still-picture cameras - Determination of ISO speed
  5. ISO 15739, Photography - Electronic still picture imaging - Noise measurements
  6. ANSI PH 3.57-1978 (R1987), Guide to optical transfer function measurement and reporting.
  7. Baker, L, "Optical transfer function: measurement", SPIE Milestone Series, Vol. MS 60: 1992.
  8. Okano, Y., "Influence of Image Enhancement Processing on SFR of Digital Cameras", IS&T/PICS Final Program and Proceedings, May 1998, pp. 74-78.
  9. Reichenbach, S. E. et al., "Characterizing digital image acquisition devices", Optical Engineering, Vol. 30, No. 2, Feb. 1991, pp. 170-176.
  10. Williams, D., "Benchmarking of the ISO 12233 Slanted Edge Spatial Frequency Response Plug-in", IS&T/PICS Final Program and Proceedings, May 1998, pp. 133-136.

 

Biography

Christian Loebich studied electronic engineering from 1986 to 1993 at the TU Darmstadt (Germany), and photographic sciences from 1994 to 1999 at the Fachhochschule in Cologne (Germany). Since 1998 he has participated in building the digital camera test stand at Image Engineering Dietmar Wueller in Cologne.

Practical Scanner Tests Based on OECF and SFR Measurements

PDF PDF (567 KB)

Christian Loebich · Dietmar Wüller
Image Engineering Dietmar Wüller · Cologne · Germany
IS&T's 2001 PICS Conference Proceedings

 

The technical specification of scanners has been used as a marketing instrument ever since commercially available scanners were introduced. The scan resolution specified is, in some cases, an interpolated sampling rate, and the color depth is 'improved' by using 'bit or bit depth enhancement technologies.' However, these numbers do not tell the customer anything about the quality of images that can be achieved by a particular scanner and are, more often than not, misleading. We were asked by German photographic and computer magazines to develop a method to evaluate the overall quality of scanners. We based our tests on ISO standards and procedures under development for digital still cameras and modified these to fit the specific characteristics of scanners. In this paper, we outline our methodology and discuss our results.

 

Characteristic Data of a Scanner
Following is a short description of the four main scanner parameters that influence image quality.

  1. Resolution. The ability to capture fine detail found in the original film or print is one of the most important characteristics of a scanner. This ability to resolve detail is determined by a number of factors, namely the performance of the scanner lens, the number of addressable photoelements in the image sensor(s), and the electrical circuits in the scanner. Different measurement methods will provide different metrics to quantify the ability of the scanner to capture fine details.
  2. Dynamic range. Another aspect of quality is the ability to show details in the dark areas of the original film or print. The dynamic range of a scanner is the difference between the lightest and the darkest area of an original that still shows significantly resolved detail.
  3. Noise. The level of noise in homogeneous colored areas also contributes to the image quality that can be obtained from a scanner.
  4. Color reproduction quality. A fourth quality aspect is the accuracy of color reproduction in comparison to the original. This criterion is not discussed in this paper. Besides these four main quality characteristics of a scanner, there are further significant aspects that influence the quality of digital image data, for example sharpness, selective color corrections, and automatic and manual color-cast removal. These operations are mainly performed or influenced by the scanner software.

 

Development of Test Charts
When we started our work in 1998, standardisation for measuring the characteristic data of scanners was just at its starting point. A lot of work on the characterisation of digital still cameras had already been done, and we had to find out whether some of the methods could be adapted for the analysis of scanners.
The first step in measuring any of the above mentioned characteristics was to develop a suitable test chart.
A test chart should be made of a material similar to the material of the originals, so that the measurements are not influenced by artifacts like glare due to the surface structure of the material. A second aspect is that the chart has to be better than every usual film or print material in the specific field of test, so that the measurements are meaningful.
For the SFR measurement described in Refs. [1] and [2], a test chart is needed that consists of elements with sufficiently fine detail, such as edges, lines, square waves, or sine-wave patterns, as well as a greyscale for the OECF determination.3 The latter is also needed for the determination of dynamic range and noise. These two requirements usually exclude using the same material for both: very high-resolution photographic materials usually have a high gamma that makes it difficult to reproduce greyscale patches with different densities.
During the development of our method we were able to produce reflective test charts on a graphic-arts paper named Agfa DDP that are suitable for scan resolutions up to 3000 ppi. This material can provide the finely detailed structures with high contrast, but it is useless for the greyscale, so we had to combine it with a typical greyscale made of a different material. The development of the working draft for the ISO standard2 led us to a commercially available reflective chart which is suitable up to about the same resolution of 3000 ppi. This chart consists of sharp low-contrast edges and the needed greyscale. For film scanners we currently combine a chart with the detailed structures made of holographic film - suitable for resolutions up to 10,000 ppi - with a commercially available greyscale from X-Rite or Agfa, which have patches with densities up to 3.9 and 4.3, respectively. We hope to find something similar to the reflective targets in the near future that can be mass-produced.
The problem we have with the current ISO chart is that it is useless for the determination of dynamic range and noise. For these measurements the maximum density should be equal to or higher than 2.1. In addition, due to the chart's structured matt surface, the scan shows artifacts with the illumination of some scanners, as shown in Figure 1.

figure 1
Figure 1. ISO Chart with artifacts due to the matt surface (brightened for better visualisation).

 

Determining Resolution
For the magazines, a nice thing to publish is a resolution given as a single number. We investigated two different approaches. First, we used the SFR measurement as given in Ref. 2, and second, we used a USAF test chart for visual resolution analysis, contact-copied onto the test material from a chromium original made by Heidenhain in Germany.5 The edge for SFR measurements, in combination with the grey patches for the OECF determination, and the USAF chart are scanned at the highest physical resolution of the scanner under test.

figure 2
Figure 2. The test charts created by Image Engineering.

For the USAF chart, the structure with the highest frequency that shows well-separated lines is determined by looking at the image in image-processing software like Adobe Photoshop® at a suitable magnification of at least 100%. The frequency of this structure is the value for the visual resolution.
For the SFR measurement, the OECF curve3 is determined from the greyscale and entered into the analysing software. The latter can be downloaded from the ISO TC42 WG18 website http://www.pima.net/standards/iso/tc42/wg18/wg18_POW.htm.
After marking the ROI (region of interest), the edge is analysed. The result is a contrast curve as a function of the original frequency. The published resolution number is the frequency at which the SFR is 30%.
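Reading the published number off the measured curve can be sketched like this; the curve values are invented, and we assume simple linear interpolation between the two samples bracketing the 30% level:

```python
# Sketch: find the frequency at which the measured SFR curve crosses a
# given contrast level (30% for the published resolution number), by
# linear interpolation between the bracketing samples.

def frequency_at_sfr(freqs, sfr, level=0.30):
    """freqs: ascending frequencies; sfr: measured contrast per frequency."""
    for i in range(1, len(freqs)):
        if sfr[i - 1] >= level > sfr[i]:
            # linear interpolation between samples i-1 and i
            t = (sfr[i - 1] - level) / (sfr[i - 1] - sfr[i])
            return freqs[i - 1] + t * (freqs[i] - freqs[i - 1])
    return None  # curve never drops below the level
```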

figure 3
Figure 3. SFR results of three different film scanners

 

 

figure 4
Figure 4. The related images for the visual analysis

 

Our experience shows that the value from the visual analysis is about the same as the value from the SFR analysis. The two main reasons why the SFR method works with scanners but causes problems with digital consumer cameras are that with scanners, unsharp masking can usually be switched off, and there is no demosaicing.

 

Color Misregistration
A useful by-product of the SFR algorithm is the color misregistration of the scanner.6 When calculating the line-spread function, the position of the maximum can be determined with a high accuracy of about 1/10 of a pixel. The Kodak SFR software stores this result together with the SFR analysis.

 

Checking the Resolution at Different Places of the Scanning Field
To check whether the resolution is even over the scanning field, the SFR or visual tests can be performed at different places. If the resolution differs from place to place, the geometric positioning of the even resolutions gives a hint about the kind of problem that has to be solved, for example an inaccurately adjusted mirror or a mechanical problem.

 

A Different Approach
A slightly different approach to measuring resolution was developed by Agfa with their 'Field Quality Check Guide'. This guide was created for product-support staff to check scanners for possible problems and to compare them with the related products of Agfa's competitors. The Agfa target consists of patches with horizontal and vertical lines at different frequencies. A scan at the maximum physical resolution is followed by determining the contrast of the highest visible frequency using the histogram of Adobe Photoshop®. The method seems suitable, and minimum values for the contrast help verify whether a scanner is within a given quality range.

figure 5

Figure 5.
Agfa’s ‘Field Quality Check Guide’ Target

 

Determination of the Dynamic Range
To prevent a number of meaningful terms for the physical characterisation of scanners from being misused for marketing purposes, standards for a number of so-far-unstandardised quality values should be created in the near future. Looking at the technical specifications of scanners does not tell the user much about the quality of the images achieved by a scanner. What, for example, does the color depth of a scanner tell the user? Is it the maximum contrast the scanner can capture in the original? Is it the bit depth of the A/D converter, or the number of possible colors in the image? And is the latter produced by some bit-enhancement algorithm or another kind of calculation?
One of the values that should be standardised is the term 'dynamic range'. A possible definition is: the scanner dynamic range is the difference between the minimum unclipped density of an original and the maximum density of an original that can be reproduced with a signal-to-(total)-noise ratio - including temporal and fixed-pattern noise - of at least 1.
We tried to measure the dynamic range that fits this definition by scanning a greyscale with a maximum density higher than the expected maximum density of the scanner, and adjusting the gamma curve of the scanner software so that the fields with the higher densities are best differentiated. This is usually achieved by adjusting the gamma of the digital output to about 1.5. The grey patches are automatically analysed like the OECF patches using a self-written Adobe Photoshop® plug-in. This plug-in writes the mean values and the standard deviation for the red, green, and blue channels, as well as for the Y, R-Y, and B-Y values of every patch, into a text file. For further analysis, this text file can be imported into a program like Microsoft Excel® or Matlab®. The dynamic range can be determined from the OECF, including the gamma adjustment. The signal-to-noise ratio is calculated as described in Section 6.2 of Ref. 7.
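The determination under the proposed definition can be sketched as follows; patch densities, S/N values, and clipping flags are hypothetical inputs of the kind that would come from the plug-in's text file:

```python
# Sketch: scanner dynamic range per the definition above - the density
# span from the minimum unclipped density to the last patch whose
# signal-to-(total)-noise ratio is still at least 1. Data are invented.

def scanner_dynamic_range(densities, snr, unclipped, threshold=1.0):
    """densities: ascending patch densities; snr: S/N per patch;
    unclipped: True where the patch is not clipped."""
    usable = [d for d, s, ok in zip(densities, snr, unclipped)
              if s >= threshold and ok]
    return max(usable) - min(usable) if usable else 0.0
```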
Transparent greyscales are commercially available with maximum densities up to 4.3.
For 35mm film scanners, it is still a problem to produce spectrally neutral greyscales with suitable maximum densities. The maximum densities we reached with typical black-and-white film material, which can be exposed in an image recorder, are around 3.0.
Our results for the dynamic range are consistent with the visual analysis of the scanned greyscales and scanned test images. Compared with the values given in the specifications of midrange scanners, our results match the specifications in most cases with an accuracy of +/- 0.1 densities. Looking at the specifications of scanners for the consumer market, one can conclude that most manufacturers do not provide accurate data.
Since the measurement of the dynamic range is simple, and the resulting value gives the user an impression of one of the most important image-quality aspects of a scanner, it should be mandatory for manufacturers to report it.

 

Noise
We still have some difficulties measuring noise. To measure noise at different density levels, the patches of the greyscale need to be very homogeneous. They should be scanned at the maximum physical resolution, or at least at a level where no interpolation influences the result. With the greyscales we use for dynamic-range measurements, we have trouble with the grain structure, especially in the lighter patches. We first wondered where the high standard deviations, meaning high absolute noise levels, for these lighter patches came from. On closer inspection we found that the reason was the grain structure of the photographic material. Especially at high resolutions (> 1500 ppi), the structure is often clearly visible in the scans. At present we have no idea how to produce a greyscale on photographic material that will enable us to measure noise. This grain structure seems to have no influence on the results of the dynamic-range measurements with our method, because it appears to become less important at higher densities.

figure 6

Figure 6.
Typical OECFs for measuring dynamic ranges of scanners.

In the near future, a scanner standard similar to the one for digital cameras8 should be developed keeping the above mentioned problem in mind.

figure 7
Figure 7. The measured absolute noise levels with a typical testchart using a Nikon film scanner.

 

Conclusions
The resolution can be measured using a test chart for visual analysis, but due to possible aliasing artifacts, which decrease the accuracy, the SFR method is more precise. In contrast to our results with digital cameras, the SFR method works well for all scanners on which the unsharp masking can be switched off. The reported SFR should be an average of at least four measurements, because the SFR can vary. In most cases this variation is caused by inaccurate positioning of the original in the focal plane of the scanner, which in some cases is difficult to control.
The OECF measurements with a high gamma value give significant information about the scanner's ability to reproduce high-contrast originals. Noise measurements are necessary for the exact determination of the dynamic range, but the grain structure of the typical greyscales causes problems.

 

References

  1. ISO 12233 Photography - Electronic still-picture cameras - Resolution measurements
  2. ISO 16067-1, Photography - Electronic scanners for photographic images - Spatial resolution measurements: Part 1 Scanners for reflective media
  3. ISO 14524, Photography - electronic still picture cameras - Methods for measuring opto-electronic conversion functions (OECFs)
  4. ISO 12641 Graphic technology - Prepress digital data exchange - Color targets for input scanner calibration.
  5. Miersch, Karsten, Entwicklung eines Testverfahrens zur Bestimmung der Wandlerkennlinie und des Auflösungsvermögens von elektronischen Halbtonvorlagenscannern unter Berücksichtigung von ISO 12232, 12233 und 14524, diploma thesis at Fachhochschule Köln, Fachbereich Fotoingenieurwesen (in German)
  6. Burns, P. and Williams, D., "Using Slanted Edge Analysis for Color Registration Measurement", IS&T/PICS Final Program and Proceedings, to be published April 1999.
  7. ISO 12232, Photography - Electronic still-picture cameras - Determination of ISO speed
  8. ISO 15739, Photography - Electronic still picture imaging - Noise measurements
  9. ANSI PH 3.57-1978 (R1987), Guide to optical transfer function measurement and reporting.
  10. Baker, L, "Optical transfer function: measurement", SPIE Milestone Series, Vol. MS 60: 1992.
  11. Okano, Y., "Influence of Image Enhancement Processing on SFR of Digital Cameras", IS&T/PICS Final Program and Proceedings, May 1998, pp. 74-78.
  12. Reichenbach, S. E. et al., "Characterizing digital image acquisition devices", Optical Engineering, Vol. 30, No. 2, Feb. 1991, pp. 170-176.
  13. Williams, D., "Benchmarking of the ISO 12233 Slanted Edge Spatial Frequency Response Plug-in", IS&T/PICS Final Program and Proceedings, May 1998, pp. 133-136.

 

Biography
Dietmar Wueller studied photographic sciences from 1987 to 1992 at the Fachhochschule in Cologne (Germany). His studies were followed by scientific work in the area of light and colour measurement at the Institute for Light and Building Technique in Cologne and work in a prepress company. In 1995 he opened a training and testing center for digital image processing.

Colour Characterisation of Digital Cameras by analysing the Output Data for Measuring the Spectral Response

PDF (712 KB)

Michaela Ritter · Dietmar Wueller
Image Engineering Dietmar Wueller · Cologne · Germany
IS&T’s 1999 PICS Conference

 

Introduction
The background for this work was the wish of a German photographic magazine to have a method for measuring the colour reproduction quality of digital still picture cameras, as well as their ability to be integrated into a colour management workflow. With scanners, characterisation is less difficult because they always use the same light source, and the colours in the reproduced photographic materials have very similar characteristics. In digital photography, the lighting conditions change with each scene, and the colours in a scene can be completely different from the colours inherent in photographic material. Therefore, the best way to characterise a digital still picture camera is to measure its spectral response. It should then be possible to calculate the RGB values from the spectral illumination of the sensor alone. That is exactly what the IEC TC100/61966-9 “Colour Measurement and Management in Multimedia Systems and Equipment Part 9: Digital Cameras” working draft proposes. The working draft of this standard was published after we had already started our work to find out whether this is a feasible way to characterise the colour reproduction of a digital camera.

 

Test Method
The test method needs to accommodate consumer cameras with automatic white balance and exposure control, as well as SLR cameras and digital camera backs which allow these values to be set manually. Therefore, a method was chosen that derives the spectral response for the whole visual spectrum from one single image. This excludes the problem of different white balance and exposure settings during the measurement.
It is possible to create a picture of the whole visual spectrum by illuminating a diffraction grating, a prism or a continuous interference filter. In this test, an interference filter produced by Carl Zeiss was used.

figure 1
Figure 1. schematic arrangement for measurement

 

figure 2
Figure 2. test chart with continuous Interference Filter

 

To get a continuous spectrum, a halogen lamp and an Ulbricht integrating sphere were used to illuminate the interference filter. Most of the available digital cameras are optimized for daylight illumination. Therefore a daylight conversion filter was used to achieve the relative spectral illuminance shown in figure 3.

figure 3
Figure 3. relative spectral illuminance of the interference filter

 

The spectral transmittance of the interference filter was measured taking the viewing angle into account. Because most consumer cameras use an automatic exposure control, a front-illuminated grey chart was placed around the filter. This chart contains a greyscale for the determination of a relative opto-electronic conversion function, which is needed to exclude non-linear effects of the image processing in the camera.

figure 4
Figure 4. relative opto electronic conversion function
Camera: Canon PowerShot A5
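The role of the relative OECF is to undo the camera's tone curve: once it is known from the greyscale, camera digital values can be mapped back to relative scene intensity. A minimal sketch of such a linearization, with a hypothetical gamma-like tone curve standing in for real patch readings:

```python
import numpy as np

def linearize(dv, gray_dv, gray_reflectance):
    """Map camera digital values back to relative intensity by inverting
    the relative OECF measured from the greyscale patches."""
    order = np.argsort(gray_dv)
    return np.interp(dv,
                     np.asarray(gray_dv, dtype=float)[order],
                     np.asarray(gray_reflectance, dtype=float)[order])

# Hypothetical example: a camera with a pure gamma 1/2.2 tone curve.
reflectance = np.linspace(0.05, 0.9, 10)          # known patch reflectances
digital_values = 255 * reflectance ** (1 / 2.2)   # "measured" patch values
y = linearize(186.1, digital_values, reflectance)
```

The interpolation is only as good as the number of greyscale steps, which is why the chart carries a full greyscale rather than a single reference patch.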

 

The pictures were analysed using an Adobe Photoshop plug-in, and the resulting data was transferred into curves of the spectral response.

figure 5
Figure 5. image of interference filter with marked analysed areas

The measurements were repeated using different cameras and different infrared blocking filters to see their influence on the results. The blue channel of some of the cameras had a remarkably high transmittance in the red and near infrared. By using an infrared blocking filter, the response of the red channel decreased as expected, but the response of the blue channel in the red did not. Surprisingly, it even increased in some cases. The explanation for this behaviour is the processing of the colour values in the camera, which tries to overcome the spectral transmittance problems of the sensor.

figure 6
Figure 6. Comparison of the RGB-values with and without IRa-blocking filter from Leica
Camera: Canon PowerShot A5

 

figure 7
Figure 7. Comparison of RGB-values using different file formats
Camera: Canon PowerShot A5

This is one problem that was found by analysing the measurements, but there may be others that remained unknown. To further analyse the data, a scene with known colorants (an IT-8 target) was photographed, and the actual RGB values were compared to the calculated RGB values. The expected camera RGB data of an IT-8 test chart was calculated using the results of the spectral response measurements, represented by the formula:

R = ∫ S(λ) · β(λ) · s_R(λ) dλ   (analogously for G and B)   (1)

where S(λ) is the spectral power of the illumination, β(λ) the spectral reflectance of the patch and s_R(λ) the measured spectral response of the red channel.

The data was then compared with the data from a real shot of the IT-8 with that camera. If the spectral response measurements were correct, the comparison of the RGB values in the picture and the calculated values would fall on a straight line in a graphical representation. Even if the white balances of the shots were different, the result would still be a line, only with a different gamma. However, the results fell anywhere but on a line. Only for the Jenoptik ProgRes camera was the result close to what was expected.
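The prediction behind this comparison is a straightforward numerical integration over wavelength. The sketch below uses entirely hypothetical spectra; in the real measurement these would be the measured illuminant, the IT-8 patch reflectances and the measured channel responses:

```python
import numpy as np

wl = np.arange(380.0, 740.0, 10.0)  # wavelength sampling in nm (hypothetical)

def expected_channel(illuminant, reflectance, response, d_lambda=10.0):
    """Predicted linear camera value for one channel: the integral over
    wavelength of illuminant x patch reflectance x spectral response."""
    return float(np.sum(illuminant * reflectance * response) * d_lambda)

# Hypothetical spectra: flat illuminant, a reddish patch, a red-channel response.
S = np.ones_like(wl)
beta_red_patch = np.where(wl > 580, 0.8, 0.1)
s_red = np.exp(-0.5 * ((wl - 610) / 40.0) ** 2)
R = expected_channel(S, beta_red_patch, s_red)
```

A camera whose processing were purely linear would reproduce these predicted values up to a gamma; the deviations found in the paper are exactly the sign of colour-dependent processing.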

figure 8
Figure 8. Evaluation of the spectral response showing the RGB data in relation to the calculated data
Camera: Canon PowerShot A5

 

figure 9
Figure 9. Evaluation of the spectral response showing the RGB data in relation to the calculated data
Camera: Jenoptik ProgRes 3012

 

By marking the different RGB values with the colour values of the original, it was found that the difference between the expected and the real RGB camera data depends on the colour of the original. This confirms the assumption that the colour processing in the camera is colour and/or image dependent.

figure 10
Figure 10. Evaluation of the spectral response showing the R data in relation to the calculated data sorted by the column of the IT-8 chart
Camera: Jenoptik ProgRes 3012

 

figure 11
Figure 11. Area of the IT8 chart used for evaluation of the spectral response

 

Conclusion
The result of our work shows that the spectral response of a digital camera cannot be measured by simply using the output data of the camera. Therefore, an exact colour characterisation of a digital still picture camera can only be made by using raw, unrendered sensor data. Although we used a slightly different method to illuminate the sensor, our results should be taken into consideration for an evaluation of the IEC characterisation method.

 

References

  1. IEC TC 100, Committee Draft 100/89/CD Colour Measurement and Management in Multimedia Systems and Equipment Part 9: Digital Cameras, dated 1998-10-09.
  2. Ritter Michaela, Method for measuring the relative spectral response of electronic still picture cameras, Fachhochschule Cologne, 1998.
  3. ISO TC 42 WG 18/ TC 130 WG 03, Working Draft 2 of ISO 17321 Colour target and procedures for the colour characterisation of digital still cameras (DSCs), 14 September 1998.

Improving texture loss measurement: spatial frequency response based on a colored target

PDF (11.7 MB)

Uwe Artmann and Dietmar Wüller
Image Engineering · Augustinusstraße 9d · 50226 Frechen · Germany
Electronic Imaging Conference 2012

ABSTRACT

The pixel race in the digital camera industry and for mobile phone imaging modules has made noise reduction a significant part of the signal processing. Depending on the algorithms used and the underlying amount of noise that has to be removed, noise reduction leads to a loss of low contrast fine details, also known as texture loss. The description of these effects has become an important part of objective image quality evaluation in recent years, as the established methods for noise and resolution measurement fail to capture them. Different methods have been developed and presented, but could not fully satisfy the requested stability and correlation with subjective tests. In this paper, we present our experience with the current approaches for texture loss measurement. We have found a critical issue within these methods: the targets used are neutral in color. We could show that the test-lab results do not match the real-life experience with the cameras under test. We present an approach using a colored target and our experience with this method.

Keywords: image quality evaluation, texture, noise reduction, spatial frequency response, kurtosis, SFR, Dead Leaves, MTF

 

1. INTRODUCTION
To get a full impression of the image quality of a digital camera, the test needs to include a measurement of the so-called texture loss next to methods for noise and resolution measurement. At present, there are no standardized methods to measure this influence, even though it is an important topic and several groups are working on it. Important requirements for test methods that can be used for image quality analysis are:

  • It should not need a full reference, so it can be obtained by taking an image of a known test target. By knowing the target and analyzing the image of the target, one can obtain information about the system.
  • It shall not include any kind of subjective evaluation. Human observers are not objective enough to get comparable results over a long period of time and/or a large number of cameras. Standardized testing with human observers can minimize this problem, but is extremely time consuming and therefore expensive.
  • Even though it does not include subjective testing, the objective results shall correlate well with the image quality experienced by human observers. That way the method can substitute subjective testing and makes tests much faster.
  • Ideally, the output is simple to understand and is just a single numerical value. This makes comparison between different cameras much easier, and inexperienced people can understand it ("The lower this number, the better the result.")


For more than three years, Image Engineering has been using a method that is based on the measured kurtosis in the image a camera took of a Gaussian white noise patch.(1, 2, 7) The idea is that the statistical value "kurtosis" is 0 for the normal distribution of digital values found in white Gaussian noise. All kinds of linear filtering (like the lens MTF or a blur filter) change the image content, but do not change the kurtosis. Only non-linear filtering as found in noise reduction changes the distribution and therefore results in a higher value of the kurtosis. We had good experience with this method, but it failed in one important requirement: it is not easy to understand and it always needs some interpretation. The kurtosis value always needs to be seen in the context of noise and resolution and is more a good indicator than a stable measurement.

TE42

Figure 1. The used chart for camera image quality evaluation. It combines several structures
for image quality evaluation including the influence of noise reduction.


Different methods have been discussed among the experts, and one proposal (6) was based on a target called "Dead Leaves", as it is somewhat similar to a huge pile of dead tree leaves found in fall (see Fig. 3). The first presented implementations could not fulfill the expectations, but with some modifications presented by McElvain et al., (5) it seemed promising that a good method had been found.
At the end of 2010, Image Engineering needed to develop a test target that covers the most important aspects of image quality. We decided to include Gaussian white noise in the target (Fig. 1) to be able to measure the kurtosis, and to include the dead leaves structure (Fig. 3) with the reference patch as proposed by McElvain et al. (5) Next to other structures to obtain the OECF and to measure noise, color reproduction and other aspects, we included some natural objects to have a reference (Fig. 2). As the results from this target are used for magazines, the authors need some structures they can use to explain the results to their readers.
To evaluate the new chart and the new methods, several different cameras have been tested. This included cameras with known low performance in texture loss and some D-SLR cameras. The images and the results have been reviewed by trained personnel of Image Engineering and editors of the German photography magazine Colorfoto. With this feedback, we have found issues with the methods and also solutions to overcome these issues.

 

2. CHART
The dead leaves structure used was created following a model in which opaque circles are stacked and the radii of these circles follow a 1/r³ probability distribution. The radii are limited to a minimum and a maximum, so that no circle is smaller than the print resolution or large enough to fully cover the image. The gray value of each circle is chosen with a uniform distribution in a range between 25% and 75% of the maximum density produced by the print process.
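This construction can be sketched directly. The following is an illustrative implementation; parameters such as chart size and circle count are arbitrary choices, not those of the real chart:

```python
import numpy as np

def dead_leaves(size=256, n_circles=2000, r_min=2.0, r_max=60.0, seed=0):
    """Render a gray dead-leaves image: opaque circles stacked on top of
    each other, radii drawn from a 1/r^3 power law, gray values uniform
    between 25% and 75% of the maximum."""
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    # inverse-transform sampling of p(r) ~ r^-3 on [r_min, r_max]
    u = rng.random(n_circles)
    radii = (r_min ** -2 - u * (r_min ** -2 - r_max ** -2)) ** -0.5
    for r in radii:                     # later circles occlude earlier ones
        cx, cy = rng.random(2) * size
        gray = 0.25 + 0.5 * rng.random()
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = gray
    return img

chart = dead_leaves()
```

The power-law radii are what give the structure content at all spatial frequencies, which is why its power spectrum can serve as a known reference.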

TE42 natural TE42 ladies

Figure 2. Natural objects in the charts to compare subjective and objective evaluation.
top left: gravel // bottom left: lawn // right: portraits (Part of Fig. 1)

Dead Leaves gray

Figure 3. The dead leaves structure in the first version of the chart
(circles with a probability function of radius and gray value). (Part of Fig. 1)


After the first round of taking images of the chart, we found differences in the evaluation between the engineers at Image Engineering and the staff at Colorfoto: both parties came to a different ranking of the images when checking for texture loss. The editors found their experience with the cameras under test reflected in the images, so their experience from using the cameras in daily life was also visible in the images we had taken of the multi-purpose chart. By just visually evaluating the images, Image Engineering's staff came to another ranking and saw different behavior of the cameras. After a short discussion, we found the main reason for this observation: the less technical group at Colorfoto (with the good correlation to their subjective experience) had checked the behavior based on the natural objects on the chart (Fig. 2), while the engineers had checked based on the dead leaves structure (Fig. 3). So we have seen:

  • The behavior of the cameras under test was different when reproducing lawn, gravel and portraits compared to the dead leaves structure.
  • The behavior of the cameras under test when reproducing the natural objects in the chart matched the experience the editors had made with the cameras in real-life tests.

Combining these findings, we come to the conclusion:

  • The dead leaves structure (gray version) is not adequate as a test target for texture loss measurements, as it does not reflect the behavior seen with natural objects.

So we knew that we had to change the chart and not (only) the algorithm used to analyze the image. The main difference between the natural objects and the dead leaves structure is that the dead leaves structure is gray only. Nearly all cameras for photographic purposes on the market work with a Bayer pattern and need to interpolate the missing color information per pixel. This demosaicing also has an influence on the SFR (spatial frequency response) of the camera. Noise reduction algorithms can treat intensity and color information differently, so a pure gray chart does not reflect the behavior of the camera on colored targets.

Dead Leaves color

Figure 4. The colored dead leaves target


Taking this into account, it is clear that the test chart has to be colored. As we had also seen that the measurement results based on the gray dead leaves target reflected what we saw in the image of this structure, we decided that the algorithm itself seems to work, but the target needs modifications. So we produced a colored dead leaves target (Fig. 4). We extended the model of the chart from a uniform distribution of gray values to a uniform distribution of R, G and B, so the red, green and blue color channels vary independently. This results in a colorful target, while the statistics in the intensity channel (a weighted sum of R, G, B) are kept as in the gray chart. Also, the mean value of R, G and B results in a gray patch which can be used for the needed reference measurement. So the basic algorithm (see Sec. 3) can be applied to the image data for both types of chart (gray and colored dead leaves).
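The only change relative to the gray model is the color sampling. A self-contained sketch (chart size and circle count again arbitrary, not those of the real target):

```python
import numpy as np

def colored_dead_leaves(size=128, n_circles=1500, r_min=2.0, r_max=40.0, seed=0):
    """Colored dead-leaves sketch: as the gray version, but each circle's
    R, G and B values are drawn independently from a uniform distribution,
    so the per-channel statistics match the gray chart while the target
    becomes colorful."""
    rng = np.random.default_rng(seed)
    img = np.full((size, size, 3), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    u = rng.random(n_circles)
    radii = (r_min ** -2 - u * (r_min ** -2 - r_max ** -2)) ** -0.5  # p(r) ~ 1/r^3
    for r in radii:
        cx, cy = rng.random(2) * size
        rgb = 0.25 + 0.5 * rng.random(3)   # independent uniform R, G, B
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = rgb
    return img

chart = colored_dead_leaves()
```

Because each channel is drawn around the same mid-gray mean, averaging many circles approaches a neutral patch, which is what makes the gray reference patch consistent with the colored structure.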

Sample of different camera behavior

Figure 5. Sample of different camera behavior.
top: Fujifilm S2800HD // bottom: Nikon D7000
left: Gray Dead Leaves // right: Colored Dead Leaves

 

3. OBTAINING THE SFR
The basic concept for obtaining the Spatial Frequency Response (SFR) is to measure the power spectrum (PS) found in the image. The image is a reproduction the camera under test has made of the dead leaves target. The PS of the target is known, so by simply dividing these two PS, one gets the SFR. As it could be shown (3) that this algorithm is influenced by camera noise, it was extended (5) by a reference measurement on a gray patch that has the same intensity as the mean value of the dead leaves structure.

SFR(f) = sqrt( (PS_image(f) − PS_noise(f)) / PS_target(f) )    (1)

The calculation is done in these steps, assuming the camera under test has reproduced the dead leaves target and a reference patch which has the mean value of the dead leaves structure and is homogeneous. Additional gray patches with a known reflectance are available to obtain an opto-electronic conversion function (OECF) for linearization, as cameras do not deliver a linear OECF.

  • Calculate PStarget(f) (PS of the dead leaves target) using the meta information from the chart production process.
  • Read the ROIs of the dead leaves patch, the reference patch and the gray patches.
  • Calculate the OECF from the image data of the gray patches and the known reflectance of these patches. The OECF here is a function of reflectance vs. Y (Y is a weighted sum of R, G and B).
  • Calculate the Y image from the RGB image of the dead leaves patch and the reference patch.
  • Linearize using the inverse of the OECF.
  • Calculate PSimage(f) (from the dead leaves patch) and PSnoise(f) (from the reference patch).
  • Calculate SFR(f) using Equation 1.

The calculation of the PS includes a reduction process from the 2D spectrum to 1D data, and the calculation of the SFR includes a normalization process for presentation purposes. This algorithm is applied to the gray dead leaves target as well as to the colored dead leaves target.
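The steps above can be sketched as follows. This is an illustrative implementation of the noise-corrected SFR and the 2D-to-1D reduction, not the exact Image Engineering code; the presentation normalization is omitted:

```python
import numpy as np

def radial_average(ps2d, n_bins=64):
    """Reduce a 2D power spectrum to 1D by averaging over rings of equal
    spatial frequency (in cycles/pixel, up to the Nyquist limit 0.5)."""
    h, w = ps2d.shape
    f = np.hypot(np.fft.fftfreq(h)[:, None], np.fft.fftfreq(w)[None, :]).ravel()
    bins = np.linspace(0.0, 0.5, n_bins + 1)
    idx = np.digitize(f, bins)
    ps = ps2d.ravel()
    out = np.array([ps[idx == i].mean() if np.any(idx == i) else 0.0
                    for i in range(1, n_bins + 1)])
    return bins[:-1], out

def dead_leaves_sfr(y_image, y_reference, ps_target_2d):
    """Noise-corrected dead-leaves SFR following Equation 1:
    SFR(f) = sqrt((PS_image(f) - PS_noise(f)) / PS_target(f)).
    y_image and y_reference are already linearized Y patches."""
    ps_image = np.abs(np.fft.fft2(y_image - y_image.mean())) ** 2
    ps_noise = np.abs(np.fft.fft2(y_reference - y_reference.mean())) ** 2
    f, p_img = radial_average(ps_image)
    _, p_noise = radial_average(ps_noise)
    _, p_tgt = radial_average(ps_target_2d)
    sfr = np.sqrt(np.clip(p_img - p_noise, 0.0, None) /
                  np.where(p_tgt > 0.0, p_tgt, np.inf))
    return f, sfr
```

For an ideal camera (image identical to the target, noiseless reference patch) this yields an SFR of 1 at every frequency where the target has power, which is the expected sanity check.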

graph_sample

Figure 6. Comparison of SFR; Nikon D7000 (D-SLR, ISO 800) and Fujifilm S2800HD (Compact, ISO 100);
Colored Dead Leaves and Gray Dead Leaves; the colored Dead Leaves leads to a significantly lower SFR;
the effect is stronger for the compact camera than for the D-SLR



4. FROM SFR TO A SINGLE VALUE
A plotted SFR contains a lot of information that a trained person can interpret. But in a lot of cases, the complete SFR is too much information, as it makes it hard to compare different cameras and to make a simple statement about which result is better than the other. So for purposes of ranking and comparison, the SFR needs to be reduced to a single value. This is a difficult task, as it needs an algorithm that throws away a huge amount of data while preserving the essential information. Based on the SFR obtained from a variety of cameras, we tried three different methods to simplify an SFR to one number and applied these to the data.

"MTF10" Based on the Rayleigh Criterion, the limiting resolution of a camera system is reported as the highest spatial frequency that results in a modulation or spatial frequency response of ≤ 10%. This value is suitable for checking the optical performance of a camera system and can be interpreted as the limiting resolution. The higher the value, the better the limiting resolution.

"MTF50" The "MTF50" value is the highest spatial frequency that results in a modulation or spatial frequency response of ≤ 50%. If checking for lens performance, this value is more related to the performance in the mid frequencies the lens-camera system delivers. The higher the value, the better.

"Acutance" This value needs a more complex calculation. The idea is to take the Contrast Sensitivity Function (CSF) of the human visual system into account, so to weight the performance of a system against the importance of the spatial frequencies for the perception. The implementation we have chosen was discussed among imaging experts at working groups of ISO and I3A. The obtained SFR is fi ltered with the CSF and the integral of the resulting function is divided by the integral of an ideal MTF fi ltered with the CSF in the same spatial frequency range. The higher, the better the performance of the camera under test. As the CSF needs to be calculated for a speci c viewing condition, we have chosen to calculate this value for two diff erent scenarios: (A) 100% view on a 96ppi display in 0.5m distance eye-display and (B) a 40cm height print (not limited by print technology) and a viewing distance which is equal to the diagonal of the print.

table direct compare

Figure 7. Table of measurement results; (-) Calculation not possible (X) Data not available

 

As can be seen in Figure 6, the SFR is lower when using a colored dead leaves target. This reflects the behavior seen in all cameras and shown as a sample in Figure 5.

The difference between the Nikon D7000 and the Pentax K-5 is visible in natural images, but it could not be shown on the gray dead leaves target. In Figure 8 one can see that on the colored dead leaves target, this difference in texture loss is visible. As shown in Figure 7, the numeric results also reflect this. While on the gray dead leaves target the results are not significantly different, they are on the colored dead leaves target. So the advantage of the colored dead leaves target is obvious.

DL SLR sample

Figure 8. Image Sample: Detail of Colored Dead Leaves structure;
Pentax K-5 (left) vs. Nikon D7000 (right); both ISO800


The MTF10 value is not a good way to show the differences in texture loss performance, as in a lot of cases the SFR does not have values below 10%, so a calculation is not possible.

The performance of compact cameras and mobile phones can be very poor and the numerical results very low. The filtering with the CSF for 100% view on a display, where high spatial frequencies get a higher weight, is misleading and results in values too low to represent the actual behavior.

Compared to the few subjective judgements we had from editors and our own evaluation, the acutance approach results in a different ranking of the numerical results. Evaluating this in more detail should be a topic of future work.

 

5. CONCLUSION AND FUTURE WORK
We could show that the type of target has an important influence on the measurement results. The usage of gray targets is not suitable for the measurement of the so-called "texture loss", which means the loss of low contrast, fine details. In our experience, the colored version of the dead leaves target is a good test chart, as cameras treat it in a very similar way to real, natural objects.
The algorithm to obtain the SFR can be applied to both versions of the dead leaves target (gray and colored). The resulting SFR is a good representation of the observed behavior of the camera under test.
The reduction of the complex data of an SFR to a single number is a difficult task, and we have seen that neither the MTF50 nor the acutance approach could show significant advantages in the data set we have used for this work. We have decided to use the MTF50 value as the single value for ranking and comparison, as we could see these advantages over the acutance approach:

  • The MTF50 value gives larger differences between different cameras, so ranking and comparison is easier.
  • The MTF50 value is easier to understand and to communicate to non-technical readers.
  • Different shapes of SFR can result in similar values for the acutance approach. Even if this has not been finalized yet, we have the feeling that the MTF50 value correlates better with subjective image quality rating.

The reduction to a single value is subject to ongoing work on this topic.

 

REFERENCES

  1. Artmann, Wueller, "Noise Reduction vs. Spatial Resolution", Electronic Imaging Conference, Proc. SPIE, Vol. 6817, 68170A (2008)
  2. Artmann, Wueller, "Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing", Electronic Imaging Conference, Proc. SPIE, Vol. 7250, 72500I (2009)
  3. Artmann, Wueller, "Differences of digital camera resolution metrology to describe noise reduction artifacts", Electronic Imaging Conference, Proc. SPIE, Vol. 7529, 75290L (2010)
  4. Phillips, Jin, Chen, Clark, "Correlating Objective and Subjective Evaluation of Texture Appearance with Applications to Camera Phone Imaging", Electronic Imaging Conference, Proc. SPIE, Vol. 7242, 724207 (2009)
  5. Jon McElvain, Scott P. Campbell, Jonathan Miller and Elaine W. Jin, "Texture-based measurement of spatial frequency response using the dead leaves target: extensions, and application to real camera systems", Proc. SPIE 7537, 75370D (2010); doi:10.1117/12.838698
  6. Cao, Guichard, Hornung, "Measuring texture sharpness", Electronic Imaging Conference, Proc. SPIE, Vol. 7250, 72500H (2009)
  7. Uwe Artmann, "Noise Reduction vs. Spatial Resolution", diploma thesis at the University of Applied Sciences Cologne, Germany, downloadable via www.image-engineering.de
  8. International Organization for Standardization, "ISO 12233 Photography - Electronic still picture imaging - Resolution measurements"
  9. Image Engineering, "White Paper: Camera Test", downloadable via www.image-engineering.de

    Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing

    PDF (6.6 MB)

    Uwe Artmann and Dietmar Wüller
    Image Engineering · Augustinusstraße 9d · 50226 Frechen · Germany
    Electronic Imaging Conference 2009

    ABSTRACT
    We present a method to improve the validity of noise and resolution measurements on digital cameras. If non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for image noise and spatial resolution can be good, while the image quality is low due to the loss of fine details and a watercolor-like appearance of the image. To improve the correlation between objective measurement and subjective image quality, we propose to supplement the standard test methods with an additional measurement of the texture preserving capabilities of the camera. The proposed method uses a test target showing white Gaussian noise. The camera under test reproduces this target and the image is analyzed. We propose to use the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis is a statistical measure of the closeness of a distribution to the Gaussian distribution. It can be shown that the distribution of digital values in the derivative of the image showing the chart becomes the more leptokurtic (increased kurtosis) the stronger the impact of the noise reduction on the image.
    Keywords: Noise, Noise Reduction, Texture, Resolution, Spatial Frequency, Kurtosis, MTF, SFR

    ColorFoto is a German photography magazine with a focus on objective and complex tests of digital still camera systems. Since we started testing in 1997, the tests had to be adjusted from time to time to keep pace with the development of the camera market, so that the test results correlate with the subjective image quality experienced by the user. In the last year we more often had the problem that cameras had better results in the resolution and noise tests than those of competitors, but the images didn't look better. An example is shown in Table 1: the results of the Sony α350 compared to the Pentax K20D. Both devices are digital SLR cameras with a comparable sensor pixel count of 14 and 14.5 million pixels respectively on a 23.5 mm x 15.7 mm (23.4 mm x 15.6 mm) sensor.(7) The results of the noise measurement do not show a significant advantage of one camera over the other.

    Camera                 Sony α350                            Pentax K20D
    Pixel count            4592 x 3056                          4672 x 3104
    Sensor size [mm]       23.5 x 15.7                          23.4 x 15.6
    Image                  JPEG                                 JPEG
                           ISO100   ISO400   ISO800   ISO1600   ISO100   ISO400   ISO800   ISO1600
    MTF10 Center [LP/PH]   1476     1427     1422     1112      1329     1295     1293     1294
    SNR (ISO 15739)        45.9     32.7     23.7     17.7      41.9     29.3     19.1     15.2
    Visual Noise           1.1      1.7      3.1      5.5       0.8      1.5      2.4      4.1

    Table 1. Results of the resolution and noise measurements,(6) published in the German magazine ColorFoto (7)
    Resolution: limiting resolution (MTF10) in the image center, SFR Siemens (3)
    Noise: SNR calculated according to ISO 15739, plus Visual Noise to describe the human perception of the noise (8)


    The measured limiting resolution of the Sony α350 is higher than that of the Pentax K20D as long as the sensitivity does not exceed ISO 800. So, just reading the numerical results, one would say the images will look comparable, with a slight advantage in resolution for the Sony from ISO 100 to ISO 800; at higher sensitivity the Pentax will outperform the Sony in terms of resolution.
    A closer look at various test images revealed that the reality looks different. The α350 failed to properly reproduce fine low contrast details, so images showed strong so-called texture blur. Figure 1 shows a comparison of the same real scene, taken with both cameras at different sensitivity settings. One can see that at ISO 400 and ISO 800, the Pentax shows more details than the Sony, even though the measured limiting resolution of the Sony is higher for these settings.
    It could be shown that resolution measurement methods other than the SFR_Siemens (3) used here (e.g. the ISO 12233 chart or SFR_Edge (9)) also fail to describe texture blur.(2)

    verglSony_Pentax_real

    Figure 1. Detail of a real scene, showing pavement and soil (200% view).
    Top: Pentax K20D Bottom: Sony α350  /  Left: ISO 400  /  Center: ISO 800  /  Right: ISO 1600


    2. ALGORITHM
    The proposed method to describe this effect is based on a test chart showing white Gaussian noise. These structures have been combined with other structures used for resolution measurement; Figure 2 shows the complete arrangement. An array of nine sinusoidal Siemens stars is used to measure the system MTF at four different image heights. In the unused space between the stars, the structures B and C have been added. B is used for the SFR_Edge algorithm and will not serve that purpose in this algorithm; the edges are used for a comparison in section 3.3. These structures can also be read as a ten-step gray scale, and this part of the chart is used for linearisation. The noise patches shown in C consist of eight patches with four different noise variances, in this chart 1/2, 1/4, 1/8 and 1/16 (mean = 1). All calculations based on these noise patches are performed on the two corresponding noise patches and the average is calculated.

    noiselabchart

    Figure 2. The NoiseLab chart used, with different structures. A - SFR Siemens, B - SFR Edge, C - Gaussian White Noise

    The camera under test reproduces the homogeneously illuminated test chart and the resulting image is analyzed. This method can be performed on so-called black box systems; no information in addition to the image is needed. This makes it useful for mobile phone cameras and all other cameras without RAW data access. After reading the image, the RGB data is transformed to intensity Y using equation 1.

    Y = 0.299 R + 0.587 G + 0.114 B    (1)

    Loss of low contrast fine detail is the result of non-linear filtering, mostly used for noise reduction in the image. Linear filtering would influence all structures in an image in the same way, so its influence could be measured on edges or Siemens stars and therefore in the resolution measurement methods. The non-linear response of the camera to spatial frequencies can be shown in the distribution of the pixel values Y(x,y) of the reproduced white noise in the chart. The distribution in the target is Gaussian; linear filtering would change its variance but not the shape itself. So the shape of the distribution is an indicator of whether the filtering in the signal processing is highly non-linear or not.

    To normalize the distribution to a mode of 0 while conserving its shape, we use the first derivative, calculated by convolving the image with the kernel [-0.5 0.5]. Since the first derivative of normally distributed noise is again normally distributed, the distribution in the processed image can be checked in this way.

    To describe the shape of the distribution, the excess kurtosis (also called Fisher's gamma) is calculated. (12) Its value is 0 for a normal distribution and increases for leptokurtic distributions. The kurtosis is calculated as the fourth moment divided by the square of the second moment of the distribution; the second moment is the variance.

    γ₂ = μ₄ / μ₂² − 3        (2)

    A distribution is called leptokurtic if it is more peaked about the mode than the normal distribution. Figuratively, the probability of pixels that differ little or not at all from their neighbors becomes higher (loss of low contrast fine detail), while large differences (edges, high spatial frequencies) are maintained.
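    The pipeline of equations (1) and (2) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the ITU-R BT.601 luma weights for equation (1) and the function name are our assumptions.

    ```python
    import numpy as np

    def excess_kurtosis_of_derivative(rgb):
        """Excess kurtosis (Fisher's gamma) of the first derivative of the
        intensity image, as a texture-blur indicator. `rgb` is an (H, W, 3)
        array; the BT.601 luma weights for equation (1) are an assumption."""
        rgb = np.asarray(rgb, dtype=np.float64)
        # equation (1): intensity Y from RGB
        y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        # first derivative: equivalent to convolving each row with [-0.5, 0.5]
        d = 0.5 * (y[:, 1:] - y[:, :-1])
        # equation (2): excess kurtosis = mu4 / mu2^2 - 3 (0 for a normal distribution)
        d = d - d.mean()
        mu2 = np.mean(d ** 2)
        mu4 = np.mean(d ** 4)
        return mu4 / mu2 ** 2 - 3.0
    ```

    For unfiltered Gaussian noise the result is near 0; heavier-tailed derivative distributions, as produced by adaptive denoising, yield positive values.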


    3. ANALYSIS
    To prove the usability of the kurtosis as a measure of texture blur, we made the following assumptions:

    1. As defocus of the lens in the camera system can be described as linear filtering with the PSF, the kurtosis should not change with focus. (Section 3.1)
    2. If the kurtosis reflects the noise reduction, it should be low in unprocessed images and increase in processed, denoised images. (Section 3.2)
    3. The kurtosis can describe texture blur where resolution tests fail to do so. (Section 3.3)



    3.1. Kurtosis vs. Focus
    The chart shown in Figure 2 is reproduced using a Nikon D300 SLR camera. While keeping all other camera settings constant, 16 images are taken, slightly changing the focus from image to image. Each image is analyzed for its limiting resolution using the Siemens star in the image center and the SFR_Siemens algorithm, and the kurtosis is calculated for the parts of the image showing the noise patches with different variances.
    The results are presented in Table 2 and Figure 3. The kurtosis does not change significantly with focus, although there are slight differences: the maximum Δ kurtosis found is 0.12 over a resolution range of 0.39 to 0.45 lp/pix. Since the accuracy of autofocus systems is higher than the range tested here, (7) the Δ kurtosis will be even lower in camera tests.

    MTF10   Kurt. 1/2   Kurt. 1/4   Kurt. 1/8   Kurt. 1/16
    0.39    0.54        0.19        0.24        0.21
    0.40    0.48        0.16        0.22        0.17
    0.42    0.52        0.23        0.22        0.17
    0.43    0.52        0.23        0.22        0.16
    0.44    0.42        0.21        0.20        0.12
    0.45    0.43        0.21        0.20        0.11
    Δ       0.12        0.07        0.04        0.10

    Table 2. Numerical results of the focus vs. kurtosis comparison; graphical results in Figure 3.

    Figure 3. Kurtosis as a function of focusing. Nikon D300, standard camera JPEG; kurtosis measured on four different patches with noise variances of 1/2, 1/4, 1/8 and 1/16.

    3.2. JPEG vs. RAW
    We selected four digital SLR cameras for this test: the Canon 1Ds MkII, Nikon D300, Pentax K20D and Sony α350. With all cameras we took images in the proprietary RAW file format and in JPEG mode. The JPEG images have been analyzed directly, while the RAW files have been processed in a very basic way. We used dcraw (11) to extract the basic image information from the files, selecting the "document mode", which results in a readable 16-bit intensity image (TIFF). For demosaicing we used gradient-corrected linear interpolation in the MathWorks MATLAB implementation; the resulting 16-bit RGB images have been loaded into Adobe Photoshop, adjusted using "Auto Levels" followed by "Auto Curves", and converted from 16-bit to 8-bit RGB.

    Figure 4 shows a small detail of these images, comparing the Pentax K20D and the Sony α350. The detail shows the noise patch with a variance of 1/8 (contrast enhanced and enlarged for presentation). The texture blur effect is visible in the camera JPEG images only; the RAW images show a strong noise overlay.

     

    Figure 4. Reproduction of white noise (200% view, contrast enhanced). Top: Pentax K20D; bottom: Sony α350.
    Left to right: ISO 100 camera JPEG / ISO 1600 camera JPEG / ISO 100 basic RAW / ISO 1600 basic RAW.


    All images have been taken at different camera sensitivities, while illumination and all other camera settings have been kept constant. We have calculated the signal-to-noise ratio, the MTF and the kurtosis for all images.
    The signal-to-noise ratio has been calculated as stated in (3) on four homogeneous neutral gray patches. Figure 5 compares the results of the four cameras for JPEG and basic RAW processing.

    SNR = 20 · log₁₀(μ / σ)  [dB]        (3)

    In the basic RAW processed images the relation of SNR to log(ISO speed) is linear. In general, the camera JPEGs show a much better SNR for all cameras and all ISO settings. The ranking of the cameras changes between the two processings; the Sony reduces the noise more strongly than its competitors.
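    Assuming equation (3) is the common 20·log₁₀(mean/standard deviation) form on a uniform patch, the SNR computation can be sketched as follows (function names are ours):

    ```python
    import numpy as np

    def snr_db(patch):
        """SNR in dB on one homogeneous gray patch, assuming the common
        SNR = 20*log10(mean / standard deviation) form of equation (3)."""
        patch = np.asarray(patch, dtype=np.float64)
        return 20.0 * np.log10(patch.mean() / patch.std())

    def mean_snr_db(patches):
        """Average SNR over several patches, e.g. the four neutral gray
        patches mentioned in the text."""
        return float(np.mean([snr_db(p) for p in patches]))
    ```

    A patch with mean 100 and noise standard deviation 1 yields roughly 40 dB.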

    To condense the amount of data, the MTF is reduced to MTF50 and MTF20, i.e. the spatial frequencies at which the modulation drops to 0.5 and 0.2. These measures are presented in Figure 8 as a function of the sensitivity. The resolution does not change significantly between basic RAW and camera JPEG processing for all cameras and settings, except for the Sony at ISO 1600.
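    MTF50 and MTF20 can be read off a measured MTF curve by interpolating between the sampled frequencies. A minimal sketch (function name is ours, not part of the SFR algorithms):

    ```python
    import numpy as np

    def mtf_at(modulation_target, freqs, mtf):
        """Spatial frequency at which the MTF first drops to the given
        modulation (0.5 for MTF50, 0.2 for MTF20), by linear interpolation
        between samples. Assumes `mtf` starts above the target and
        decreases overall."""
        freqs = np.asarray(freqs, dtype=float)
        mtf = np.asarray(mtf, dtype=float)
        below = np.nonzero(mtf <= modulation_target)[0]
        if below.size == 0:
            return float(freqs[-1])  # curve never drops below the target
        i = below[0]
        if i == 0:
            return float(freqs[0])
        # linear interpolation between the two bracketing samples
        f0, f1 = freqs[i - 1], freqs[i]
        m0, m1 = mtf[i - 1], mtf[i]
        return float(f0 + (m0 - modulation_target) * (f1 - f0) / (m0 - m1))
    ```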

    Figure 6 illustrates the kurtosis as a function of ISO speed. In the basic RAW images, the kurtosis is slightly below 0 for all cameras and all sensitivity settings. This platykurtic distribution (a kurtosis below zero) may be the result of the very basic demosaicing algorithm. The kurtosis increases towards zero with increasing sensitivity, but the changes are very small.

    The results for the camera JPEG images differ markedly from the RAW processed images: for all cameras, the kurtosis increases with increasing ISO speed.

    Figure 5. Comparison of basic RAW processing (a) to camera JPEG processing (b). Signal-to-noise ratio [dB], measured on a homogeneous gray patch, for ISO 100 (Nikon: ISO 200), ISO 400, ISO 800 and ISO 1600.

    Camera            ISO     JPEG Kurt. 1/4   JPEG Kurt. 1/8   RAW Kurt. 1/4   RAW Kurt. 1/8
    Canon 1Ds MkII    100     0.71             0.59             -0.11           -0.22
                      400     0.77             0.68             -0.06           -0.16
                      800     0.85             0.72             -0.08           -0.17
                      1600    0.92             0.75             -0.06           -0.15
    Nikon D300        200     0.19             0.22             -0.14           -0.22
                      400     0.19             0.20             -0.11           -0.22
                      800     0.31             0.29             -0.14           -0.18
                      1600    0.28             0.31             -0.14           -0.19
    Pentax K20D       100     0.04             0.02             -0.10           -0.11
                      400     0.04             0.03             -0.07           -0.10
                      800     0.03             0.06             -0.05           -0.09
                      1600    0.15             0.16             -0.03           -0.02
    Sony α350         100     0.68             0.74             -0.07           -0.15
                      400     1.05             1.13             -0.11           -0.14
                      800     1.53             1.76             -0.06           -0.14
                      1600    2.31             2.91             -0.12           -0.17

    Table 3. Numerical results of the kurtosis vs. ISO speed and processing comparison; graphical results in Figure 6.

     

    Figure 6. Comparison of basic RAW processing (a) to camera JPEG processing (b). Kurtosis, calculated from the image reproducing a white noise patch with a variance of 1/4 (second row: 1/8), for ISO 100 (Nikon: ISO 200), ISO 400, ISO 800 and ISO 1600. Note the difference in y-axis scaling.

    3.3. Kurtosis vs. Resolution measurement
    We compared two different resolution measurement methods with the kurtosis. All information has been extracted from the same images, using the center Siemens star for the SFR_Siemens approach, an edge with 60% modulation for the SFR_Edge method, and the noise patches with variances of 1/4 and 1/8. The results can be seen in Table 4 and Figure 7.

    The kurtosis increases dramatically with increasing sensitivity. The values calculated using the SFR_Siemens approach indicate a significant loss of resolution at the ISO 1600 setting. This loss of resolution is visible in test images, but at the lower sensitivities we already observe greatly increased texture blur. The SFR_Edge method does not indicate any texture blur or loss of resolution, which is not surprising, as an edge is exactly the kind of structure the noise reduction tries to conserve as well as possible.

    ISO     Kurt. 1/4   Kurt. 1/8   SFR_Siemens MTF50   SFR_Siemens MTF20   SFR_Edge MTF50   SFR_Edge MTF20
    100     0.68        0.74        0.32                0.43                0.34             0.44
    400     1.05        1.13        0.29                0.41                0.34             0.46
    800     1.53        1.76        0.30                0.41                0.36             0.49
    1600    2.31        2.91        0.23                0.32                0.35             0.46

    Table 4. Numerical results of the kurtosis vs. SFR_Siemens and SFR_Edge comparison; graphical results in Figure 7.



    4. CONCLUSION
    The three assumptions made could be confirmed by tests on digital still cameras. We could show that the focus has little influence on the shape of the pixel value distribution when the camera reproduces a white noise target. The comparison of a basic RAW processing with the complex JPEG processing in the camera revealed a significant increase of the kurtosis in the presence of noise reduction. Tests on more than 30 current digital SLR cameras on the German market (7) have shown a good correlation between the kurtosis and the loss of low contrast fine detail in the test images, especially at high ISO settings.
    Furthermore, it could be shown in this paper and in previous work (1) that the standard resolution measurement methods fail to describe the effect of texture blur.

    Figure 7. Comparison of kurtosis against SFR_Siemens and SFR_Edge. Sony α350, NoiseLab chart as shown in Figure 2.

    Figure 8. Comparison of basic RAW processing (a) to camera JPEG processing (b). MTF20 (second row: MTF50), calculated using the sinusoidal Siemens star (SFR_Siemens), for ISO 100 (Nikon: ISO 200), ISO 400, ISO 800 and ISO 1600.


    As the kurtosis itself is more an indicator of non-linear processing and of differences in the processing of edges and texture, we propose to use this measure in addition to resolution and noise measurements. With this additional information, good results in resolution and noise tests can be put into perspective against texture blur.



    REFERENCES

    1. U. Artmann, "Noise Reduction vs. Spatial Resolution", diploma thesis, University of Applied Sciences Cologne, Germany, downloadable via www.image-enginering.de
    2. Artmann, Wueller, "Noise Reduction vs. Spatial Resolution", Electronic Imaging Conference 2008, 6817-9
    3. Loebich, Wueller, Klingen, Jaeger, "Digital Camera Resolution Measurement Using Sinusoidal Siemens Stars", Electronic Imaging Conference 2007, SPIE Vol. 6502, 65020N
    4. International Organization for Standardization, "ISO 12233 Photography - Electronic still picture imaging - Resolution measurements"
    5. International Organization for Standardization, "ISO 15739 Photography - Electronic still picture imaging - Noise measurements"
    6. Image Engineering, "White Paper: Camera Test", downloadable via www.image-engineering.de
    7. Stechl, "Karten neu gemischt, 29 digital SLR-Kameras im neuen Testverfahren", ColorFoto 07/2008
    8. Kleinmann, Wueller, "Investigation of two Methods to quantify Noise in digital Images based on the Perception of the human Eye", Electronic Imaging, SPIE Vol. 6494, 64940N
    9. Williams, Wueller, Matherson, Yoshida, Hubel, "A Pilot Study of Digital Camera Resolution Metrology Protocols Proposed Under ISO 12233, Edition 2", Electronic Imaging Conference 2008, EI08_6808_3
    10. University of Cambridge, http://thesaurus.maths.org/, entry "Leptokurtic", 17.11.2007
    11. D. Coffin, "Decoding raw digital photos in Linux", http://www.cybercom.net/~dcoffin/dcraw/
    12. Fahrmeir, Kuenstler, Pigeot, Tutz, "Statistik", Springer Verlag, 2007 (ISBN 978-4-540-69713-8)

    CONFERENCE PAPERS

    Description of texture loss using the dead leaves target:
    Current issues and a new intrinsic approach
    Electronic Imaging Conference 2014
    Leonie Kirk, Philip Herzer, Uwe Artmann (Image Engineering)
    and Dietmar Kunz (Cologne University of Applied Sciences)

    PDF (1.3 MB)

    Abstract: The computing power in modern digital imaging devices allows complex denoising algorithms. The negative influence of denoising on the reproduction of low contrast, fine details is also known as texture loss. Using the dead leaves structure is a common technique to describe the texture loss, which is currently discussed as a standard method in workgroups of ISO and CPIQ. We present our experience using this method. Based on real camera data of several devices, we can point out where the weak points in the SFRDeadLeaves method are and why results should be interpreted carefully.

    The SFRDeadLeaves approach follows the concept of a semi-reference method, so statistical characteristics of the target are compared to statistical characteristics in the image. In the case of SFRDeadLeaves, the compared characteristic is the power spectrum. The biggest disadvantage of using the power spectrum is that phase information is ignored, as only the complex modulus is used.

    We present a new approach, our experience with it and compare it to the SFRDeadLeaves method. The new method follows the concept of a full-reference method, which is an intrinsic comparison of image data to reference data.
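    The semi-reference idea described in this abstract, comparing the power spectrum of the captured patch to that of the known target, can be sketched as follows. This is purely illustrative: the function names and the simple radial power ratio are ours, not the exact SFRDeadLeaves mathematics, and it makes visible why phase is discarded (only the squared modulus of the FFT enters).

    ```python
    import numpy as np

    def dead_leaves_sfr(image_patch, target_patch):
        """Illustrative semi-reference SFR estimate: square root of the
        ratio of radially averaged power spectra (captured vs. target).
        Phase information is lost because only |FFT|^2 is used."""
        def radial_power(img):
            img = img - img.mean()
            # power spectrum: only the complex modulus survives
            p = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            h, w = img.shape
            y, x = np.indices((h, w))
            r = np.hypot(y - h / 2, x - w / 2).astype(int)
            counts = np.bincount(r.ravel())
            return np.bincount(r.ravel(), weights=p.ravel()) / counts
        pi, pt = radial_power(image_patch), radial_power(target_patch)
        n = min(len(pi), len(pt))
        return np.sqrt(pi[:n] / np.maximum(pt[:n], 1e-12))
    ```

    Feeding the target itself as the "captured" patch returns a flat response of 1 at all non-zero radial frequencies.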


    Low Light Performance of Digital Still Cameras
    Electronic Imaging Conference 2013
    Dietmar Wueller (Image Engineering)

    PDF (1.1 MB)

    Abstract: The major difference between a dSLR camera, a consumer camera, and a camera in a mobile device is the sensor size. The sensor size is also related to the overall system size, including the lens. As sensors get smaller, the individual light-sensitive areas also get smaller, leaving less light falling onto each pixel.

    This effect requires higher signal amplification, which leads to higher noise levels or other problems that may occur due to denoising algorithms. These problems become more visible in low light conditions because of the lower signal levels.

    The fact that the sensitivity of cameras decreases makes customers ask for a standardized way to measure the low light performance of cameras. The CEA (Consumer Electronics Association) together with ANSI has addressed this for camcorders in the CEA-639 standard. The ISO technical committee 42 (photography) is currently also considering a potential standard on this topic for still picture cameras. This paper is part of the preparation work for this standardization activity and addresses the differences compared to camcorders as well as potential additional problems with noise reduction that have occurred over the past few years.

    The result of this paper is a proposed test procedure with a few open questions that have to be answered in future work.


    Image Quality Evaluation Using Moving Targets
    Electronic Imaging Conference 2013
    Uwe Artmann (Image Engineering)

    PDF (1.3 MB)

    Abstract: The basic concept of testing a digital imaging device is to reproduce a known target and to analyze the resulting image. This semi-reference approach can be used for various different aspects of image quality. Each part of the imaging chain can have an influence on the results: lens, sensor, image processing and the target itself. The results are valid only for the complete system. If we want to test a single component, we have to make sure that we change only one and keep all others constant. When testing mobile imaging devices, we run into the problem that hardly anything can be manually controlled by the tester.

    Manual exposure control is not available for most devices, the focus cannot be influenced, and hardly any settings for the image processing are available. Due to the limitations in the hardware, the image pipeline in the digital signal processor (DSP) of mobile imaging devices is a critical part of the image quality evaluation. The processing power of the DSPs allows sharpening, tonal correction and noise reduction to be non-linear and adaptive. This makes it very hard to describe the behavior for an objective image quality evaluation. The image quality is highly influenced by the signal processing for noise and resolution, and the processing is the main reason for the loss of low contrast, fine details, the so-called texture blur. We present our experience in describing the image processing in more detail. All standardized test methods use a defined chart and require that the chart and the camera are not moved in any way during the test. In this paper, we present our results investigating the influence of chart movement during the test. Different structures, optimized for different aspects of image quality evaluation, are moved with a defined speed during the capturing process.

    The chart movement changes the input for the signal processing depending on the speed of the target during the test. The basic theoretical change in the image is the introduction of motion blur. With the known speed and the measured exposure time, we can calculate the theoretical motion blur. We compare the theoretical influence of the motion blur with the measured results. We use different methods to evaluate image quality parameters vs. motion speed of the chart. Slanted edges are used to obtain an SFR and to check for image sharpening. The aspect of texture blur is measured using dead leaves structures. The theoretical and measured results are plotted against the speed of the chart and allow an insight into the behavior of the DSP.
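    The theoretical motion blur mentioned in this abstract follows directly from the chart speed and the exposure time; a trivial first-order sketch (the function name is ours):

    ```python
    def motion_blur_px(chart_speed_px_per_s, exposure_time_s):
        """Theoretical motion-blur length in pixels: the distance the
        moving chart travels on the sensor during the exposure."""
        return chart_speed_px_per_s * exposure_time_s
    ```

    For example, a chart moving at 100 px/s captured with a 1/50 s exposure smears each point over 2 pixels.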

    Improving texture loss measurement: spatial frequency response based on a colored target
    Electronic Imaging Conference 2012
    Uwe Artmann and Dietmar Wüller (Image Engineering)
    Article »

    Abstract: The pixel race in the digital camera industry and for mobile phone imaging modules has made noise reduction a significant part of the signal processing. Depending on the algorithms used and the underlying amount of noise that has to be removed, noise reduction leads to a loss of low contrast fine details, also known as texture loss. The description of these effects has become an important part of objective image quality evaluation in recent years, as the established methods for noise and resolution measurement fail to do so. Different methods have been developed and presented, but could not fully satisfy the requested stability and correlation with subjective tests. In our paper, we present our experience with the current approaches for texture loss measurement. We have found a critical issue within these methods: the targets used are neutral in color. We could show that the test-lab results do not match the real-life experience with the cameras under test. We present an approach using a colored target and our experience with this method.
    PCP Tour 2011: The Future of Digital Photography, Current DSLR Problems and Their Solutions
    Dietmar Wüller (Image Engineering)
    PDF (4.4 MB)

    From May 3 to 13, 2011, the Photo Competence Partners (PCP) held the 5th PCP Tour.
    Dietmar Wüller gave talks focusing on the following topics:
    · The state of digital camera technology.
    · What is the current status?
    · Where are the problems?
    · What can still be expected?
    What if the image quality analysis rates my digitization system a “no go”?
    Archiving Conference 2011
    Dietmar Wüller (Image Engineering)
    Article »

    Abstract: As solutions for quality assurance (QA) like UTT (Universal Test Target) and 'golden thread' have been introduced, it is now easy to monitor the quality of a digitization system. But several questions remain unanswered; among these, the following three are the most important:
    1. How do I calibrate the system step by step to meet given specifications, e.g. Metamorfoze?
    2. Are the existing specifications suitable for my individual application?
    3. What do I do if one of the image quality aspects is out of range?
    This paper tries to answer these questions and to provide guidance for the daily use of the QA solutions.
    Differences of digital camera resolution metrology to describe noise reduction artifacts
    Electronic Imaging Conference 2010
    Uwe Artmann and Dietmar Wüller (Image Engineering)
    Article »

    Abstract: Noise reduction in the image processing pipeline of digital cameras has a huge impact on image quality. It may result in loss of low contrast fine details, also referred to as texture blur. Previous papers have shown that the objective measurement of the statistical parameter kurtosis in a reproduction of white Gaussian noise with the camera under test correlates well with the subjective perception of these ramifications. To get a more detailed description of the influence of noise reduction on the image, we compare the results of different approaches to measure the spatial frequency response (SFR). Each of these methods uses a different test target, therefore we get different results in the presence of adaptive filtering. We present a study on the possibility of deriving a detailed description of the influence of noise reduction on the different spatial frequency sub-bands, based on the differences of the SFR measured using several approaches. Variations in the underlying methods have a direct influence on the derived measurements, therefore we additionally checked for the differences between all the methods used.
    Keywords: image quality evaluation, texture, siemens star, resolution, spatial frequency response, modulation transfer function, kurtosis, MTF, SFR
    Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing
    Electronic Imaging Conference 2009
    Uwe Artmann and Dietmar Wüller (Image Engineering)

    Article »

    Abstract: We present a method to improve the validity of noise and resolution measurements on digital cameras. If non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for image noise and spatial resolution can be good, while the image quality is low due to the loss of fine details and a watercolor-like appearance of the image. To improve the correlation between objective measurement and subjective image quality we propose to supplement the standard test methods with an additional measurement of the texture-preserving capabilities of the camera. The proposed method uses a test target showing white Gaussian noise. The camera under test reproduces this target and the image is analyzed. We propose to use the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis is a statistical measure for the closeness of a distribution compared to the Gaussian distribution. It can be shown that the distribution of digital values in the derivative of the image showing the chart becomes more leptokurtic (increased kurtosis) the stronger the impact of the noise reduction on the image.
    Keywords: Noise, Noise Reduction, Texture, Resolution, Spatial Frequency, Kurtosis, MTF, SFR
    In Situ Measured Spectral Radiation of Natural Objects
    17th Color Imaging Conference Final Program and Proceedings
    Dietmar Wüller (Image Engineering)

    Article »

    Abstract: The only commonly known source for some in situ measured spectral radiances is ISO 17321-1. It describes the principle of how the color characterization of a digital camera works and provides spectral radiances for 14 common objects.
    This paper summarizes the results of a project that was started to collect several hundred measurements of all different kinds of objects under various illuminations, keeping in mind typical scenes and objects that people photograph. In many cases the spectral radiation of objects is not only that of the reflected light: sometimes the light coming from objects such as leaves is a mixture of reflected and transmitted light. In other cases inter-reflections between the objects modify the spectral radiance in scenes, and some objects such as human skin appear totally different in real life compared to the skin tones of a reflective color target.
    The collected data can be used as a scientific database for different studies related to natural objects. But the main reason to collect the data was to provide training data for the color characterization of digital cameras. Future work will show whether a carefully collected subset of the database is sufficient to create an ideal matrix or look-up table for a digital camera, but for the time being all approx. 2,500 measurements are available and used to calculate camera matrices.
    Noise Reduction vs. Spatial Resolution
    Uwe Artmann and Dietmar Wüller (Image Engineering)
    Article »

    Abstract: In modern digital still cameras, noise reduction is an increasingly important part of the signal processing, as customers demand higher pixel counts and increased light sensitivity. Recently, with pixel counts of ten or more megapixels in compact cameras, the images lack more and more fine detail and appear degraded. The standard test methods for spatial resolution measurement fail to describe this phenomenon because, due to extensive adaptive image enhancement, the camera cannot be treated as a linear, position-invariant system. In this paper we compare established resolution test methods and present new approaches to describe the spatial frequency response of a digital still camera. A new chart is introduced which consists of nine Siemens stars, a multi-modulation set of slanted edges and Gaussian white noise as camera target. Using this set, the standard methods known as SFR-Siemens and SFR-Edge are calculated together with additional information like edge width and edge noise. Based on the Gaussian white noise, several parameters are presented as an alternative to describe the spatial frequency response on low-contrast texture.
    Keywords: Noise reduction, SFR-Siemens, SFR-Edge, MTF, Spatial resolution, Noise, resolution measurement
    Statistic Analysis of Millions of Digital Photos
    Dietmar Wueller (Image Engineering) · Reiner Fageth (CeWe Color AG)
    Article »

    Abstract: The analysis of images has always been an important aspect in the quality enhancement of photographs and photographic equipment. Due to the lack of meta data it was mostly limited to images taken by experts under predefined conditions, and the analysis was also done by experts or required psychophysical tests. With digital photography and the EXIF meta data stored in the images, a lot of information can be gained from a semiautomatic or automatic image analysis if one has access to a large number of images. Although home printing is becoming more and more popular, the European market still has a few photofinishing companies who have access to a large number of images. All printed images are stored for a certain period of time, adding up to several million images on servers every day. We have utilized these images to answer numerous questions and think that the answers are useful for increasing image quality by optimizing the image processing algorithms. Test methods can be modified to fit typical user conditions, and future developments can be pointed in ideal directions.
    Keywords: Image Quality, Exposure Value, EXIF, Meta Data, Image Analysis
    A Pilot Study of Digital Camera Resolution Metrology Protocols Proposed Under ISO 12233, Edition 2

    Don Williams (Image Science Associates) · Dietmar Wüller (Image Engineering) ·
    Kevin Matherson (Hewlett Packard) · Hideaka Yoshida (Olympus Imaging Corp.) ·
    Paul Hubel (Foveon, Inc.)

    Article »

    Abstract: Edition 2 of ISO 12233, Resolution and Spatial Frequency Response (SFR) for Electronic Still Picture Imaging, is likely to offer a choice of techniques for determining spatial resolution for digital cameras different from the initial standard. These choices include 1) the existing slanted-edge gradient SFR protocols but with low contrast features, 2) polar coordinate sine wave SFR technique using a Siemens star element, and 3) visual resolution threshold criteria using a continuous linear spatial frequency bar pattern features. A comparison of these methods will be provided. To establish the level of consistency between the results of these methods, theoretical and laboratory experiments were performed by members of ISO TC42/WG18 committee. Test captures were performed on several consumer and SLR digital cameras using the on-board image processing pipelines. All captures were done in a single session using the same lighting conditions and camera operator. Generally, there was good conformance between methods albeit with some notable differences. Speculation on the reason for these differences and how this can be diagnostic in digital camera evaluation will be offered.
    Keywords: Resolution, spatial frequency response, imaging performance, image quality, imaging standards

    Investigation of two Methods to quantify Noise in digital Images based on the Perception of the human Eye
    Electronic Imaging Conference 2007
    J. Kleinmann · D. Wüller

    Article »

    Abstract: Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may be appropriate to quantify noise perception in a physiological manner: the model of visual noise measurement proposed by Hung et al. (as described in the informative annex of ISO 15739:2002), which tries to simulate the process of human vision by using the opponent space and contrast sensitivity functions and uses the CIE L*u*v* 1976 colour space to determine a so-called visual noise value; and the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al., which simulates human vision in approximately the same way but afterwards applies an image comparison based on CIEDE2000. With a psychophysical experiment based on the just noticeable difference (JND), threshold images could be defined with which the two approaches mentioned above were tested. The assumption is that if a method is valid, the different threshold images should yield the same 'noise value'. The visual noise measurement model results in similar visual noise values for all threshold images; the method is reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches in images, the S-CIELab model can also be used on images with spatial content. The S-CIELab model likewise results in similar colour difference values for the set of threshold images, but with some limitations: for images which contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
    Keywords: Signal to noise ratio, noise measurement, contrast sensitivity function, visual noise value, S-CIELab, CIEDE2000, CIE L*u*v* 1976, just noticeable difference, OECF
    Measurement Method for Image Stabilizing Systems
    Electronic Imaging Conference 2007
    B. Golik · D. Wüller
    Article »

    Abstract: Image stabilization in digital imaging continuously gains in importance. This fact is responsible for the increasing interest in the benefits of the stabilizing systems. The existing standards provide neither binding procedures nor recommendations for the evaluation. This paper describes the development and implementation of a test setup and a test procedure for qualitative analysis of image stabilizing systems under reproducible, realistic conditions. The basis for these conditions is provided by the studies of physiological properties of human handshake and the functionality of modern stabilizing systems.
    Keywords: Image Stabilizing Systems, Tremor, Handshake, OIS, VR, Anti-Shake, Shake Reduction
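A reproducible handshake test of the kind the abstract describes needs a synthetic tremor signal. One simple way to generate such a signal is band-limited random noise scaled to a target amplitude; the 2-12 Hz band and 0.1 degree RMS used below are illustrative assumptions, not the values measured in the paper.

```python
import numpy as np

def handshake_trace(duration=2.0, fs=1000.0, f_lo=2.0, f_hi=12.0,
                    amplitude_deg=0.1, seed=0):
    """Synthetic angular handshake trace as band-limited noise.

    The 2-12 Hz band and the 0.1 degree RMS amplitude are illustrative
    assumptions, not measured tremor parameters.
    """
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # random complex spectrum, then keep only the tremor band
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    trace = np.fft.irfft(spectrum, n)
    rms = np.sqrt(np.mean(trace ** 2))
    return amplitude_deg * trace / rms  # scale to the requested RMS
```

Feeding such a trace to a motion platform carrying the camera gives repeatable shake, so that stabilized and unstabilized exposures can be compared under identical conditions.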
    Digital Camera Resolution Measurement Using Sinusoidal Siemens Stars
    Electronic Imaging Conference 2007
    C. Loebich · D. Wüller · B. Klingen · A. Jaeger

    Abstract: The resolution of a digital camera is defined as its ability to reproduce fine detail in an image. To test this ability, methods such as the slanted-edge SFR measurement developed by Burns and Williams1 and standardized in ISO 122332 are used. Since this method is, in terms of resolution measurement, only applicable to unsharpened and uncompressed data, the additional method described in this paper had to be developed. It is based on a sinusoidal Siemens star which is evaluated radius by radius, i.e. frequency by frequency. For the evaluation, a freely available runtime program developed in MATLAB is used which computes the MTF of a camera system as contrast over frequency.
    Keywords: modulated Siemens star, MTF, SFR, contrast function, camera resolution measurement
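The radius-by-radius evaluation the abstract describes can be sketched with a least-squares sinusoid fit: pixel values sampled along a circle of constant radius on the star are fitted with a sine of the star's angular frequency, and the modulation is the fitted amplitude over the fitted mean. This is only an assumed sketch of the principle, not the MATLAB program mentioned in the paper.

```python
import numpy as np

def modulation_at_radius(values, angles, cycles):
    """Contrast of a sinusoidal Siemens star along one circle.

    values: pixel intensities sampled along a circle of constant radius.
    angles: corresponding polar angles in radians.
    cycles: number of sine periods per full revolution of the star.
    Fits c + a*sin(cycles*phi) + b*cos(cycles*phi) by least squares and
    returns the modulation sqrt(a^2 + b^2) / c.
    """
    design = np.column_stack([np.ones_like(angles),
                              np.sin(cycles * angles),
                              np.cos(cycles * angles)])
    c, a, b = np.linalg.lstsq(design, values, rcond=None)[0]
    return np.hypot(a, b) / c

def frequency_at_radius(radius_px, cycles):
    """Spatial frequency in cycles per pixel at the given radius."""
    return cycles / (2.0 * np.pi * radius_px)
```

Plotting the modulation against the frequency for a range of radii gives the MTF-like contrast function: small radii probe high frequencies, large radii low ones.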
    The Usage of Digital Cameras as Luminance Meters
    Electronic Imaging Conference 2007
    D. Wüller · H. Gabele

    Abstract: Many luminance measuring tasks require the luminance distribution of the total viewing field. The approach of image-resolving luminance measurement, which benefits from the continual development of position-resolving radiation detectors, represents a simplification of such measuring tasks. Luminance measurement cameras specially manufactured for tasks with very high accuracy requirements already exist. Because of their high-precision design, these cameras are very expensive and are not economically viable for many image-resolving measuring tasks. It is therefore desirable to measure luminance with digital still cameras, which are freely available at reasonable prices. This paper presents a method for using digital still cameras as luminance meters independent of the exposure settings. The camera is calibrated with the help of an OECF (opto-electronic conversion function) measurement, and the luminance is calculated from the camera's digital RGB output values. The test method and the computation of the luminance value irrespective of exposure variations are described. The error sources which influence the result of the luminance measurement are also discussed.
    Keywords: digital camera, luminance, OECF measurement, exposure value
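The calibration described in the abstract can be sketched as a two-step calculation: invert the measured OECF to recover the focal-plane exposure from a digital value, then convert that exposure to scene luminance using the simplified camera equation H = pi * L * t / (4 * N^2). The sketch below assumes that simplified equation and ignores lens transmission, vignetting and the focus factor, which the paper's error discussion would cover.

```python
import numpy as np

def luminance_from_dn(dn, oecf_exposure, oecf_dn, f_number, exposure_time):
    """Estimate scene luminance (cd/m^2) from a camera digital value.

    oecf_exposure / oecf_dn: measured OECF as focal-plane exposure
    (lux-seconds) versus output digital number, monotonically increasing.
    Assumes the simplified camera equation H = pi * L * t / (4 * N^2),
    ignoring lens transmission, vignetting and the focus factor.
    """
    h = np.interp(dn, oecf_dn, oecf_exposure)   # invert the OECF table
    return 4.0 * f_number ** 2 * h / (np.pi * exposure_time)
```

Because the f-number and exposure time enter the conversion explicitly, the same OECF calibration can serve any exposure setting, which is the point of the exposure-independent method.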
    Evaluating Digital Cameras
    Electronic Imaging Conference 2006
    D. Wüller

    Abstract: The quality of digital cameras has undergone a remarkable development during the last 10 years, and so have the methods to evaluate the quality of these cameras. When the first consumer digital cameras were released in 1996, the first ISO standards on test procedures were already on their way. At that time, quality was mainly evaluated by visual analysis of images taken of test charts as well as natural scenes. The ISO standards led the way to a number of more objective and reproducible methods to measure characteristics such as dynamic range, speed, resolution and noise. This paper presents an overview of the camera characteristics, the existing evaluation methods and their development during the last years. It summarizes the basic requirements for reliable test methods and answers the question of whether it is possible to test cameras without taking pictures of natural scenes under specific lighting conditions. In addition to the evaluation methods, this paper discusses past problems of digital cameras concerning power consumption, shutter lag, etc. It also states existing deficits which need to be solved in the future, such as optimized exposure and gamma control, increasing sensitivity without increasing noise, and the further reduction of shutter lag.
    Keywords: digital photography, image quality, noise, dynamic range, resolution, ISO speed, shutter lag, power consumption, SFR
    Proposal for a Standard Procedure to Test Mobile Phone Cameras
    Electronic Imaging Conference 2006
    D. Wüller

    Abstract: Manufacturers of mobile phones are seeking a standard procedure to test the quality of mobile phone cameras. This paper presents such a procedure, based as far as possible on ISO standards and supplemented with additional useful information obtained from easy-to-handle methods. In addition to this paper, which summarizes the measured values with a brief description of the methods used to determine them, a white paper covering the complete procedure will be available.
    Keywords: digital photography, image quality, noise, dynamic range, resolution, ISO speed, shutter lag, power consumption, SFR
    Measuring Scanner Dynamic Range
    IS+T PICS Conference 2002
    D. Wüller

    The use of scanners to provide digital image files is growing rapidly. Currently there is no standardized method to determine the dynamic range of scanners, so the data reported in technical specifications can be determined using different methods. ISO 21550, a standard for measuring the ability of scanners to reproduce tones, especially in the dark areas of the original, is currently under development (in an early working-draft stage).
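A common way to express a scanner's dynamic range, and the general idea behind the developing standard mentioned above, is the density span over which the scanner still distinguishes tones from noise. The sketch below uses an incremental signal-to-noise threshold as the criterion; this is an assumed simplification, and the exact definition in ISO 21550 differs in detail.

```python
import numpy as np

def scanner_dynamic_range(density, signal, noise, snr_threshold=1.0):
    """Density-based dynamic range estimate for a scanner.

    density: step densities of the test target (ascending).
    signal:  incremental signal (gradient of the OECF) at each step.
    noise:   measured noise at each step.
    Returns the density span over which the incremental SNR stays at
    or above snr_threshold (an assumed criterion, simpler than the
    definition in ISO 21550).
    """
    snr = np.asarray(signal, dtype=float) / np.asarray(noise, dtype=float)
    usable = np.asarray(density, dtype=float)[snr >= snr_threshold]
    if usable.size == 0:
        return 0.0
    return float(usable.max() - usable.min())
```

With such a criterion, two scanners quoting the same "Dmax" in marketing material can still show very different usable density spans once noise is taken into account.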
    Three Years of Practical Experience in Using ISO Standards for Testing Digital Cameras
    IS+T PICS Conference 2001
    C. Loebich · D. Wüller

    In 1997 we started to build our test booth for testing digital still cameras. We decided on this kind of camera testing because simply looking at pictures on the screen or analysing printed pictures produced different results depending on the test person and the surrounding conditions. We designed the testing equipment using ISO standards, except for some parameters which are discussed on the following pages. To date we have been able to measure more than 150 different cameras and have also tested up to 40 cameras of the same type, which provides a good basis for drawing conclusions about this way of camera testing.
    Practical Scanner Tests Based on OECF and SFR Measurements
    IS+T PICS Conference 2001
    D. Wüller · C. Loebich

    The technical specification of scanners has been used as a marketing instrument ever since the introduction of commercially available scanners. The scan resolution specified is, in some cases, an interpolated sampling rate, and the color depth is 'improved' by using 'bit depth enhancement' technologies. However, these numbers do not tell the customer anything about the quality of images that can be achieved by a particular scanner and are, more often than not, misleading. We were asked by German photographic and computer magazines to develop a method to evaluate the overall quality of scanners. We based our tests on developing ISO standards and procedures for digital still cameras and modified these to fit the specific characteristics of scanners. In this paper, we outline our methodology and discuss our results.
    Colour Characterisation of Digital Cameras by analysing the Output Data for Measuring the Spectral Response
    IS+T PICS Conference 1999
    M. Ritter · D. Wüller

    The background for this work was the wish of a German photographic magazine to have a method for measuring the colour reproduction quality of digital still picture cameras, as well as their ability to be integrated into a colour management workflow. With scanners, characterisation is less difficult because they always use the same light source, and the colours in the photographic materials being reproduced have very similar characteristics. In digital photography, the lighting conditions change with each scene, and the colours in a scene can be completely different from the colours inherent in photographic material. Therefore, the best way to characterise a digital still picture camera is to measure its spectral response. It should then be possible to calculate the RGB values just from the spectral illumination of the sensor. That is exactly what the IEC TC100/61966-9 "Colour Measurement and Management in Multimedia Systems and Equipment Part 9: Digital Cameras" working draft proposes. The working draft of this standard was published after we had already started our work to find out whether this is a viable way to characterise the colour reproduction of a digital camera.
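The prediction step described above, calculating RGB values from the spectral illumination of the sensor, can be sketched as a simple linear model: each channel value is the integral of the spectral illumination weighted by that channel's measured spectral response. This assumes a purely linear camera, as the characterisation approach does; any non-linear in-camera processing is ignored.

```python
import numpy as np

def predict_rgb(illumination, responses, wavelengths):
    """Predict linear RGB from the spectral illumination of the sensor.

    illumination: spectral radiation at the sensor, sampled at
    `wavelengths` (nm).
    responses: 3 x N array of the camera's measured spectral responses.
    Uses rectangle-rule integration, assuming uniform wavelength spacing.
    """
    d_lambda = wavelengths[1] - wavelengths[0]
    # per-channel integral of response(lambda) * illumination(lambda)
    return (responses * illumination).sum(axis=1) * d_lambda
```

If the predicted and the actually measured RGB values agree across many test illuminations, the spectral-response characterisation is validated; systematic deviations point to non-linear processing inside the camera.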