Introduction

“Cameras are everywhere” is a claim that is easy to justify: look around and you will find cameras in everything from consumer electronics to surveillance systems and cars. The cameras in a car, which assist the driver with tasks such as driving at night and reversing, are part of the array of sensors that make advanced driver assistance systems (ADAS) possible.

The image quality of a camera must always be assessed with respect to its intended function. For example, if a camera is intended to deliver an image to a human observer, the observer should be able to perceive all of the necessary information in a “clean-looking image.” What counts as “clean looking” is, of course, highly subjective, and this subjectivity is the main challenge of image quality testing. Nevertheless, many objective measurements are available that correlate well with how a human observer judges the quality of an image.

An automotive camera treated as a sensor for an ADAS, however, has very different image quality requirements, and its output is not expected to be “clean looking.” Nonetheless, it is extremely important that we analyze the image quality of ADAS cameras to ensure their safety and effectiveness.

Image quality is FUN – Fidelity, Usefulness, and Naturalness. While fidelity matters little when the recipient is an algorithm (as in an ADAS) rather than a human observer, usefulness becomes the most important aspect. An image without content or information is not useful for an ADAS. It is therefore imperative that we find a way to evaluate how much relevant information an image contains.

Spatial Frequency Response

An ADAS has to be able to detect objects at a defined distance, and a spatial frequency response (SFR) measurement provides the corresponding information about the optical performance of a camera. In other words: what level of detail can the device under test reproduce in a way that an algorithm can still detect something? From the SFR, different objective metrics can be derived that show how spatial frequencies in object space are reproduced. ISO 12233:2014 describes some of the most common ways to measure this. Even though the standard is aimed at digital photography, it is widely accepted in other fields, such as ADAS and other industrial applications that use cameras.

Many engineers have a specific chart in mind when they hear “ISO 12233” (see Figure 1), a chart that is based on subjective evaluation by an observer. The chart in Figure 1, however, is not recommended because it only provides the resolution limit, not the entire SFR. It is much better to use the s-SFR method described in the standard, which is based on a sinusoidal Siemens star. Another very popular method is the e-SFR method, which is based on the reproduction of a slanted edge (see the sketch below). Keep in mind that the e-SFR method is easily influenced by sharpening algorithms. Sharpening is a popular image enhancement method that improves the visual sharpness of an image for a human observer. Unfortunately, what benefits the human observer is counterproductive for object detection and classification, a typical task for an ADAS.

[Figure 1: TE170 chart (the classic ISO 12233 resolution chart)]
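
To make the e-SFR idea more concrete, here is a minimal sketch of the slanted-edge computation in Python/NumPy: locate the edge with sub-pixel accuracy, build an oversampled edge spread function, differentiate it to the line spread function, and take the FFT. It deliberately omits the refinements that ISO 12233 specifies (edge angle constraints, binning and windowing details), so the parameter choices are illustrative assumptions, not the normative procedure.

```python
import numpy as np

def esfr(roi, oversample=4):
    """Simplified slanted-edge SFR (the idea behind the ISO 12233 e-SFR).

    roi: 2-D grayscale array containing a near-vertical dark/bright edge.
    Returns (spatial frequency in cycles/pixel, normalized SFR).
    """
    rows, cols = roi.shape
    roi = roi.astype(float)

    # 1. Locate the edge in every row: centroid of the row's derivative.
    deriv = np.abs(np.diff(roi, axis=1))
    x = np.arange(deriv.shape[1])
    centroids = (deriv * x).sum(axis=1) / deriv.sum(axis=1)

    # 2. Fit a straight line to the per-row edge positions (edge slant).
    slope, intercept = np.polyfit(np.arange(rows), centroids, 1)

    # 3. Bin every pixel by its distance to the fitted edge into an
    #    oversampled edge spread function (ESF).
    esf_sum = np.zeros(cols * oversample)
    esf_cnt = np.zeros_like(esf_sum)
    for r in range(rows):
        edge = slope * r + intercept
        for c in range(cols):
            b = int(round((c - edge + cols / 2) * oversample))
            if 0 <= b < esf_sum.size:
                esf_sum[b] += roi[r, c]
                esf_cnt[b] += 1
    esf = esf_sum / np.maximum(esf_cnt, 1)  # crude: empty bins stay zero

    # 4. Differentiate to the line spread function (LSF), window, and FFT.
    lsf = np.diff(esf) * np.hanning(esf.size - 1)
    sfr = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)
    return freq, sfr / sfr[0]
```

Reference implementations of the full normative procedure exist (e.g., the well-known sfrmat scripts); the sketch above only illustrates why projecting pixels along the slanted edge yields an ESF sampled finer than the pixel pitch.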

HDR & Noise

Common scenarios for an automotive camera tend to have a very high dynamic range, i.e., a large difference between the darkest and the brightest regions of a scene. A typical example is a car approaching the entrance or exit of a tunnel. The ADAS needs to be able to detect objects such as lines or other cars both inside and outside the tunnel, across a huge difference in luminance. To provide information on all parts of the image, it is very common for cameras in an ADAS to combine over- and underexposed captures into a high dynamic range (HDR) image. Using this technology, which is performed at the sensor level in the latest systems, the camera can potentially reproduce a contrast of 120 dB or more; 120 dB corresponds to a contrast of 1:1,000,000 between the brightest and the darkest region of a scene.
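
The dB figure follows from the usual definition for amplitude ratios, dB = 20 · log10(ratio); a quick check in Python:

```python
import math

# Dynamic range in dB for a luminance contrast ratio: dB = 20 * log10(ratio)
print(20 * math.log10(1_000_000))  # -> 120.0: a 1:1,000,000 contrast is 120 dB
print(10 ** (140 / 20))            # -> 10,000,000.0: 140 dB would be 1:10,000,000
```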

As always, nothing comes easy, and the HDR algorithms may introduce artifacts that are typical for these kinds of cameras: SNR (signal-to-noise ratio) drops. The higher the SNR, the lower the disturbing impact of noise. When plotting SNR versus scene light intensity, the SNR normally increases with intensity, so we can assume low SNR in low light and good SNR at all intensities above that. When the camera merges several captures into a single HDR image, this assumption no longer holds, and intensity ranges in the midtones can also show poor SNR. A low SNR leads to a loss of information and serious image quality problems for an ADAS.
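
A toy model makes the effect visible. In the sketch below, a shot-noise-limited sensor with some read noise merges a long and a short exposure into one HDR image; all sensor numbers (read noise, full well, exposure ratio) are illustrative assumptions. Where the merge switches from the long to the short exposure, the SNR drops abruptly:

```python
import numpy as np

# Illustrative sensor model: photon shot noise plus read noise.
read_noise = 5.0      # e- RMS (assumed)
full_well = 10_000.0  # e- saturation capacity (assumed)
ratio = 16.0          # long / short exposure-time ratio (assumed)

def snr(signal):
    """Single-capture SNR in linear units."""
    return signal / np.sqrt(signal + read_noise**2)

# Merge rule: use the long exposure until it saturates, then fall back to
# the short exposure, which collects only 1/ratio of the electrons.
scene = np.logspace(0, 5.5, 400)  # scene intensity as electrons in the long exposure
hdr_snr = np.where(scene < full_well, snr(scene), snr(scene / ratio))

# SNR just below vs. just above the switch point: the "SNR drop".
print(snr(full_well - 1))      # ~ 100
print(snr(full_well / ratio))  # ~ 24.5
```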

[Figure: SFR curve]

[Figure: TE253 chart]

The SNR is a very common metric for all kinds of signals, but what an SNR value means for a given application is limited. In photography, the plain SNR has largely been replaced by the “visual noise” metric of ISO 15739:2013, which provides a much better correlation with the human observer's perception.

A new approach for evaluating a camera system with a direct correlation to the usability of the image is contrast transfer accuracy (CTA). Unlike the SNR, CTA can directly tell whether the system under test can detect a specific object contrast or not. Essentially, it can also answer the question of how serious an observed “SNR drop” really is. Engineers from BOSCH have presented the CTA measurement, and it is currently under discussion within the IEEE-P2020 initiative to become an official industry standard.
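
Since the official definition is still under discussion in IEEE-P2020, the following is only a toy illustration of the underlying idea, not the actual CTA metric: compare the contrast the camera reproduces between two patches with the known scene contrast, and check whether noise leaves that contrast detectable at all. The Michelson contrast formula and the 3-sigma threshold are illustrative choices.

```python
import numpy as np

def contrast_transfer(patch_a, patch_b, scene_contrast):
    """Toy contrast-transfer check (NOT the official CTA definition).

    patch_a, patch_b: pixel arrays of two gray patches from the image.
    scene_contrast:   known Michelson contrast of the patches in the scene.
    """
    mean_a, mean_b = patch_a.mean(), patch_b.mean()
    measured = abs(mean_a - mean_b) / (mean_a + mean_b)  # Michelson contrast
    noise = 0.5 * (patch_a.std() + patch_b.std())
    # Crude detectability criterion: the mean difference must clearly
    # exceed the noise level, otherwise the contrast drowns in noise.
    detectable = abs(mean_a - mean_b) > 3.0 * noise
    return measured / scene_contrast, detectable

rng = np.random.default_rng(0)
a = rng.normal(100.0, 8.0, 10_000)  # noisy patch around level 100
b = rng.normal(110.0, 8.0, 10_000)  # noisy patch around level 110
print(contrast_transfer(a, b, scene_contrast=0.05))
# -> (~0.95, False): the mean contrast is transferred almost perfectly,
#    yet the noise makes it effectively undetectable.
```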

[Figure: TE269 chart]

Color

Cameras for photography are designed to mimic the human perception of color, while cameras for an ADAS are not bound by the same constraints. Cameras used as detectors of objects, rather than as a source of video or images for a human observer, do not need to focus much on color. Nevertheless, these cameras need at least a basic understanding of the difference between a white line and a yellow line on the road. It also does not hurt to know whether a traffic light is red or green, but the colors do not need to be reproduced very accurately.

Some ADAS do provide an image to the human observer (such as a bird's-eye view for parking assistance) that is merged from the signal streams of several cameras. While it is not very important that the colors are perfect, it is important that all cameras are nearly equal in their color reproduction. If the manufacturer fails to calibrate the various cameras consistently, it will be very obvious where the stitching between the different images occurs.
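
One common way to bring several cameras close to a shared color reproduction is to fit a per-camera 3×3 color correction matrix against a common reference, e.g., from captures of the same color chart. The sketch below shows such a least-squares fit; the patch data is simulated and the whole setup is an illustration, not a specific manufacturer's calibration procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
ref_rgb = rng.uniform(0.05, 0.95, size=(24, 3))  # reference patch colors (stand-in)

# Simulate a camera whose colors deviate by an unknown channel mix plus noise.
mix = np.array([[0.90, 0.10, 0.00],
                [0.05, 0.90, 0.05],
                [0.00, 0.15, 0.85]])
cam_rgb = ref_rgb @ mix + rng.normal(0.0, 0.005, size=(24, 3))

# Least-squares fit of a 3x3 correction matrix M such that cam_rgb @ M ~= ref_rgb.
M, *_ = np.linalg.lstsq(cam_rgb, ref_rgb, rcond=None)

corrected = cam_rgb @ M
print(np.abs(corrected - ref_rgb).max())  # small residual after correction
```

Applying each camera's own correction matrix before stitching makes the seams far less visible, even if none of the cameras is colorimetrically perfect.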

Flicker

Light sources based on LED technology have manifold advantages over traditional technologies and are spreading rapidly in various applications. Headlamps have become a design element and brand differentiator, while LED traffic lights require far less maintenance and use less energy. The problem for camera systems lies in how these LED light sources are controlled. Pulse width modulation (PWM) turns the LED on and off at a defined frequency and for a defined fraction of each cycle (the “duty cycle”). Because the human visual system integrates intensity over time, the driver does not perceive any of this and sees a light source of constant intensity. A camera system, however, integrates light only during its exposure time, and if this exposure time is short, the camera can show an artifact known as “flicker.”
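
The interaction between PWM and a short exposure is easy to simulate. In the sketch below, all values are illustrative assumptions: a 100 Hz PWM lamp with a 25 % duty cycle is captured at 30 fps with a 0.5 ms exposure, and the lamp appears at full brightness in some frames and completely dark in others:

```python
import numpy as np

# Illustrative values: 100 Hz PWM, 25 % duty cycle, 30 fps, 0.5 ms exposure.
pwm_freq, duty = 100.0, 0.25
fps, exposure = 30.0, 0.0005

def lamp_energy(frame_start):
    """Fraction of the exposure during which the LED was on."""
    t = frame_start + np.linspace(0.0, exposure, 1_000)
    return float(np.mean((t * pwm_freq) % 1.0 < duty))

frames = [lamp_energy(n / fps) for n in range(9)]
print([round(f, 2) for f in frames])
# -> [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
# The lamp "blinks" at the 10 Hz beat between PWM and frame rate, even
# though its average intensity looks perfectly constant to the human eye.
```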

Camera systems that use a short exposure time suffer the most from this effect, and because HDR technologies are often based on combining captures with different exposure times, highlights can be visibly affected. The artifact shows up as a low-frequency blinking of visible light sources, e.g., the headlamps of approaching cars or the brake lights of cars ahead. This is visually annoying and, more importantly, can feed wrong information to the ADAS. An ADAS must be able to differentiate between brake lights and a turn indicator, and between a normal car and an emergency vehicle approaching from behind. Because flicker can make us lose information or receive wrong information, today's automotive cameras require strategies (in hardware or software) that suppress the flicker effect.

To effectively test a camera system for flicker, the device has to be tested in an environment that can produce a large variety of PWM frequencies and duty cycles and that allows shifting the phase between the camera capture frequency and the PWM frequency. The IEEE-P2020 initiative has a workgroup developing a standardized way to measure and benchmark the flicker behavior of camera systems used in ADAS.

IEEE-P2020*

The IEEE-P2020 initiative is a joint effort of experts from various fields of image quality in the automotive environment. The workgroup has given itself the task of creating a standard that describes how camera systems shall be benchmarked and evaluated to ensure that these devices deliver the performance an ADAS needs to make the best possible decisions. Members of the workgroup come from all parts of the typical automotive supply chain (OEM, Tier 1, Tier 2, ...), from test institutes (like Image Engineering), and from academia. The first output of this workgroup will be a whitepaper containing an overview of the requirements for a test system for automotive cameras and a gap analysis of existing standards.

*Disclaimer: This presentation and the materials and opinions contained within it solely represent the views of this Working Group and do not necessarily represent the position of either the IEEE or the IEEE Standards Association.