
Camera phone principle

By Philip Perez, 2018-01-01 00:53:00

Behind every beautiful photo from a camera phone, a great deal of electronic, optical, and mechanical processing "magic" is at work. Users usually pay no attention to this processing because it happens quietly and unnoticed. This article discusses the challenges of generating a superior image with the CMOS sensor inside a camera phone.

Figure 1: A mechanical diagram of a digital camera, whose optical system is basically identical to that of a conventional film camera.

Image Generation

In a film camera, light collected through an optical system shines on a piece of film, which undergoes exposure and subsequent chemical development. In a digital camera, light still passes through an optical system with a multi-element lens and a lens barrel, but it now falls on rows and columns of sensors made up of millions of tiny picture elements, known as pixels. Figure 1 is a mechanical diagram of a digital camera.

When light hits the pixel array, it passes through a color filter array that ensures only blue, red, or green light actually reaches the appropriate pixel. Each pixel generates an analog signal, which is then converted into a digital signal by an ADC. The signal then passes through a component known as an image pipeline, or I-Pipe: a series of electronic filters that make the signal look like a real photo.
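
As a rough sketch of this analog-to-digital step, the following Python snippet models an ideal ADC that maps each pixel's analog voltage to a digital code. The bit depth and reference voltage are illustrative assumptions, not the values of any particular sensor.

    import numpy as np

    def adc_quantize(voltages, bits=10, v_ref=1.0):
        """Model an ideal ADC converting analog pixel voltages to codes.

        bits and v_ref are illustrative; real sensor ADCs are
        typically in the 10-14 bit range.
        """
        levels = 2 ** bits - 1
        codes = np.round(np.clip(voltages / v_ref, 0.0, 1.0) * levels)
        return codes.astype(np.uint16)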

The I-Pipe adjusts white balance and color and eliminates some of the anomalies introduced by the capture process itself, including lens shading, geometric distortion, blurring away from the center of the lens, and digital sensor noise. Agilent's I-Pipe also compresses the image into JPEG format, producing a small, accurate compressed image that can be written quickly to a storage medium.

Preprocessing the light

An absorptive or reflective infrared filter blocks infrared radiation above 780 nanometers, allowing only the visible part of the spectrum to pass. This ensures that the image sensor captures only what the human eye would see and preserves color integrity. If infrared light were not cut off in this way, it would cause blurring and reduce the sharpness of the image formed by the lens.

Figure 2: The human eye is twice as sensitive to green as it is to red and blue. The Bayer color filter alternates a line of blue and green filters with a line of red and green filters, resulting in twice as many green pixels as blue or red ones.

A microlens is also used to preprocess the incoming light so that it is refracted as vertically as possible into the pixel. The microlens, usually located directly above the color filter array, improves the sensitivity of the pixel.

Color Filter Array: The Bayer Filter

Photodiodes are sensitive to brightness but not to color. Some mechanism must therefore make each photodiode respond to a specific color so that those colors can eventually be reproduced for the viewer. A color filter array ensures that each sensor pixel receives only one color of light: typically red, blue, or green.

Several different patterns are available for color filter arrays. Because the human eye perceives color in a way that makes it twice as sensitive to green as it is to red and blue, the camera needs more green pixels in order to mimic the eye. In the Bayer pattern, shown in Figure 2, a line of blue and green filters alternates with a line of red and green filters, with the result that there are twice as many green pixels as blue or red. The raw output of the Bayer filter is a mosaic of blue, green, and red pixels of varying brightness, depending on the light falling on each particular pixel.
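
To make the mosaic concrete, here is a minimal Python/NumPy sketch that simulates a Bayer filter by keeping, at each location of a full-color image, only the channel that the filter at that position would pass. The RGGB ordering is an assumption for illustration; real sensors vary in layout.

    import numpy as np

    def bayer_mosaic(rgb):
        """Simulate a Bayer color filter array (RGGB layout assumed).

        rgb: (H, W, 3) float array. Returns an (H, W) mosaic in which
        each pixel keeps only the color its filter would pass.
        """
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (red rows)
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (blue rows)
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
        return mosaic

Half of the surviving samples are green and a quarter each are red and blue, matching the 2:1:1 ratio described above.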

Demosaicing and white balance

When the color filter array generates an image, four independent pixels determine the color of a single output pixel. This yields a discrete color mosaic that does not look like a real image unless a demosaicing algorithm recovers the true color of each target pixel by averaging the color values of the several pixels closest to it.
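
A naive bilinear demosaic in the same spirit, assuming the RGGB layout from the previous sketch: each missing color sample is estimated as the average of the nearest recorded samples of that channel, via standard interpolation kernels. Real pipelines use edge-aware methods; this version only illustrates the averaging idea.

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """Bilinear demosaic of an RGGB Bayer mosaic (assumed layout)."""
        h, w = mosaic.shape
        r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
        r[0::2, 0::2] = mosaic[0::2, 0::2]   # scatter samples by channel
        g[0::2, 1::2] = mosaic[0::2, 1::2]
        g[1::2, 0::2] = mosaic[1::2, 0::2]
        b[1::2, 1::2] = mosaic[1::2, 1::2]
        # Kernels keep recorded samples and average neighbors elsewhere.
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])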

Figure 3: A microlens refracts the light entering a pixel as vertically as possible. Higher-end sensors use an auxiliary microlens to bend the light a second time as it travels further down into the pixel, minimizing the chance of it straying into a neighboring pixel and causing interference noise.

Without any correction, an image taken under fluorescent light may look too green, while an image taken outdoors at sunset may look slightly orange. Automatic white balance (AWB) correction ensures that the white in the image looks truly white to the viewer.
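
The exact AWB algorithm varies by vendor; one classic heuristic is the gray-world assumption, sketched below, which rescales each channel until the average scene color is neutral.

    import numpy as np

    def gray_world_awb(rgb):
        """Gray-world automatic white balance (a common heuristic).

        Scales each channel so its mean matches the overall mean,
        pushing the average scene color toward neutral gray.
        Assumes rgb is an (H, W, 3) float array in [0, 1].
        """
        means = rgb.reshape(-1, 3).mean(axis=0)  # per-channel mean
        gains = means.mean() / means             # per-channel gain
        return np.clip(rgb * gains, 0.0, 1.0)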

Image restoration: removing harmful interference

In a CMOS or CCD sensor, several noise sources contaminate the image and must be removed or at least attenuated. These noise sources are:

1. Fixed-pattern noise, which produces the same noise pattern in every picture. It can be reduced by having the camera take a "dark exposure" reading (that is, an exposure without light) and subtracting it from a normal exposure (a minimal sketch of this dark-frame subtraction appears after this list). The dark-state output is the average output current produced without illumination and includes the leakage current of the photodiodes.

2. Random noise, which may be caused by changes in ambient temperature. Higher temperatures typically free more electrons, generating a random noise signal in the sensor; heat dissipated by the sensor circuitry makes it worse. If a camera phone is left in a car during the summer, its photos will be much noisier than photos taken in an air-conditioned building.

3. Pixel crosstalk, in which light intended for one pixel strays into a neighboring pixel, producing a "muddying" effect. For example, when red light meant for a red pixel leaks into a neighboring blue pixel, the blue pixel's signal is abnormally boosted while image information from the red pixel is lost.

High-end sensors use an auxiliary microlens to bend the light a second time as it travels down into the pixel, minimizing the chance of it straying into a neighboring pixel and causing interference noise.
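
As referenced in item 1 above, here is a minimal sketch of dark-frame subtraction; it assumes the dark frames and the exposure are float arrays of the same shape.

    import numpy as np

    def subtract_dark_frame(exposure, dark_frames):
        """Reduce fixed-pattern noise by dark-frame subtraction.

        dark_frames: one or more exposures taken without light;
        their average estimates the per-pixel offset (including
        photodiode leakage), which is subtracted from the exposure.
        """
        dark = np.mean(dark_frames, axis=0)  # fixed-pattern estimate
        return np.clip(exposure - dark, 0.0, None)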

Figure 4: A block diagram of a complete imaging system showing various forms of processing of the original output from the image sensor.

The number of pixels is only one measure of a sensor's ability to capture information. In general, a larger pixel has a higher signal-to-noise ratio than a smaller one because it has more area over which to collect light, capturing more photons and thus producing a more useful signal relative to the total noise present.
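
Because photon shot noise grows only as the square root of the signal, the shot-noise-limited SNR of a pixel scales as the square root of the photons it collects. A quick numeric check with made-up photon counts shows that doubling the light-gathering area buys roughly 3 dB:

    import math

    for photons in (10_000, 20_000):        # small pixel vs. double the area
        snr = photons / math.sqrt(photons)  # shot noise ~ sqrt(signal)
        print(f"{photons} photons -> SNR {snr:.0f} ({20 * math.log10(snr):.1f} dB)")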

Defective pixel correction

A sensor will always contain pixels that fall outside manufacturing tolerances and have optical or electrical defects. These pixels respond inconsistently or nonlinearly to incident light, producing undesirable visual artifacts.

The I-Pipe determines whether a pixel is defective by measuring its output and comparing it to the average of several nearby pixels. If the difference exceeds a specified tolerance, the pixel is "marked" as defective and its output is no longer used. A replacement output value for the defective pixel is then obtained by interpolating and averaging the output values of its adjacent pixels.
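
A minimal sketch of this detect-and-replace logic, using the eight immediate neighbors and an illustrative tolerance; the actual I-Pipe implementation is hardware-specific.

    import numpy as np

    def correct_defective_pixels(img, tolerance=0.2):
        """Flag pixels that stray too far from their neighborhood mean
        and replace them with that mean (illustrative, single channel)."""
        out = img.copy()
        h, w = img.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                patch = img[y - 1:y + 2, x - 1:x + 2]
                neighbors = (patch.sum() - img[y, x]) / 8.0
                if abs(img[y, x] - neighbors) > tolerance:
                    out[y, x] = neighbors  # substitute interpolated value
        return out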

Improving degraded resolution and sharpness

So far, we have seen how light passes through an optical lens, a Bayer array, and one or more microlenses, and how a familiar image is then reconstructed by techniques such as demosaicing, interpolation, anti-aliasing, and defect correction. Performing so many "unnatural" electronic operations leaves the final image less sharp than the original scene. The original, more realistic sharpness can be restored by adding a portion of the high-pass signal (that is, only the high frequencies) back to the output. Depending on how much high-frequency signal is actually added, sharpening can amplify noise as well as detail.
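
Adding back a scaled high-pass component is the classic unsharp-mask technique, sketched below; the blur radius sigma and the gain amount are illustrative parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, sigma=1.0, amount=0.5):
        """Sharpen by adding a fraction of the high frequencies back.

        Larger `amount` gives a sharper result but also amplifies
        whatever noise survives in the high-pass residual.
        """
        low = gaussian_filter(img, sigma)  # low-pass copy
        high = img - low                   # high-pass residual
        return np.clip(img + amount * high, 0.0, 1.0)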

Vignette Correction

Vignetting is a darkening toward the edges of a picture caused by the lens and lens barrel. Vignetting correction compensates by adjusting the brightness of the corners relative to the brightness at the center of the picture.
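
Production pipelines correct vignetting with per-lens calibrated gain tables; the sketch below substitutes a simple quadratic radial gain to illustrate the idea of brightening the corners relative to the center.

    import numpy as np

    def correct_vignetting(img, strength=0.3):
        """Compensate vignetting with a radial gain (illustrative).

        Gain is 1.0 at the image center and grows quadratically
        with normalized distance toward the corners.
        """
        h, w = img.shape[:2]
        y, x = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r2 = ((y - cy) / cy) ** 2 + ((x - cx) / cx) ** 2
        gain = 1.0 + strength * r2
        if img.ndim == 3:
            gain = gain[..., None]  # broadcast over color channels
        return np.clip(img * gain, 0.0, 1.0)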

Post-processing enhancements

Image quality can be improved by increasing contrast in light and dark areas, improving the image's perceived fullness and color quality. Agilent uses a technique called adaptive tone mapping within its CMOS image sensors to produce more realistic, richer colors by extending the dynamic range. By automatically adjusting the tone-mapping curve, the technique improves brightness and contrast in light and dark areas and repairs under- and overexposed images, resulting in brighter, more realistic color reproduction.
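
Agilent's adaptive tone-mapping algorithm is proprietary; as a stand-in, the simple global contrast stretch below illustrates the basic idea of expanding the range between dark and bright regions. The percentile bounds are illustrative.

    import numpy as np

    def contrast_stretch(img, low_pct=2.0, high_pct=98.0):
        """Global contrast stretch (a stand-in for adaptive tone mapping).

        Maps the low_pct..high_pct percentile range onto [0, 1],
        lifting underexposed regions and recovering highlight headroom.
        """
        lo, hi = np.percentile(img, [low_pct, high_pct])
        return np.clip((img - lo) / (hi - lo), 0.0, 1.0)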

Complete imaging system

In a complete imaging system (Figure 4), the various stages of I-Pipe image processing can be seen: the image signal must be purified, shaped, and amplified before the image can be output to a display or stored in memory.