Dispelling the Top 10 Myths of IP Surveillance: Myth No. 10

Dec. 20, 2005
Myth #10: Network Video Image Quality Is Not as Good as Analog

Image quality is one of the most important features of any camera. This is especially true in security, surveillance and remote monitoring applications, where lives and property may be at stake. While analog cameras are often thought to have higher image quality than network cameras, this is a myth. Advancements have been made in the past few years that have allowed network cameras' image quality to equal - and in some cases surpass - that of analog technology.

When comparing network and analog cameras, it is best to look at professional, high-quality network cameras. Professional network cameras should not be confused with the lower-end network or Web cameras used as website attractions; those cameras cannot deliver the image quality required for security and surveillance applications. Even among professional network cameras, however, image quality can vary considerably, depending on factors such as the choice of optics and image sensor.

Image Sensors

A good image sensor and good optics are the most important factors in producing high-quality images. Network cameras now use image sensors and optics that are the same as, or better than, those in analog security cameras, and they can also take advantage of progressive scan and megapixel sensors that are not available to analog technology.

The image sensor of the camera is responsible for transforming light into electrical signals. When building a camera, there are two possible technologies for the image sensor: the Charge-Coupled Device (CCD) or the Complementary Metal Oxide Semiconductor (CMOS). Analog cameras use only CCD sensors, while network cameras can be built with either type. This provides further flexibility for tailoring the network camera to the installation.

CCD sensors use a technology developed specifically for the camera industry. They are more light sensitive than CMOS sensors, which means they produce better images in low-light conditions. CCD sensors are typically more expensive and more complex to incorporate into a camera because they produce an analog signal that must be converted into a digital signal.

CMOS sensors are based on a standard technology already used extensively in memory chips, such as those inside PCs. Recent advances in CMOS sensors have brought them closer to their CCD counterparts in terms of image quality. CMOS sensors tend to cost less than CCD sensors and integrate all the components needed to generate digital signals. Because no separate conversion circuitry is required, CMOS technology has also made it possible to produce smaller network cameras with fewer components.

The Interlacing Issue

At a high 4CIF resolution, the clarity of rapidly moving objects - such as a person running or a speeding car - has long been problematic in security and surveillance applications. In an analog environment, a rapidly moving object will appear blurry. This is because an analog video signal, even when recorded by a DVR, uses interlacing to create its images. Interlacing was developed for analog TV displays, which build a picture from visible horizontal lines across the screen. It divides those lines into odd and even fields and refreshes them alternately. The slight delay between the odd and even refreshes creates distortion: only half the lines keep up with the moving object while the other half waits to be refreshed. This causes moving objects to blur (see Illustration 1).

A network camera, on the other hand, uses progressive scan technology to capture moving objects. Progressive scan captures the whole image at one time, scanning the entire picture line by line every 1/16th of a second. This eliminates the delay between odd and even line refreshes and prevents the picture from being split into separate fields. Images from network cameras are also displayed on computer monitors, which, unlike TV screens, do not interlace: they draw the image one line at a time in order, so there is virtually no "flickering."
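
To make the difference concrete, the short Python sketch below is purely illustrative, not drawn from any camera's firmware: it simulates a bright bar moving across a tiny frame. Weaving two fields captured at different instants produces the comb-like blur described above, while a progressive capture keeps the edges sharp.

    # Illustrative sketch: why interlacing smears motion.
    # A bright vertical bar moves right between the two field captures; weaving
    # the odd and even fields into one frame leaves a comb/blur artifact, while
    # a progressive frame captures every line at the same instant.
    import numpy as np

    HEIGHT, WIDTH = 8, 16          # toy frame size
    BAR_WIDTH = 3                  # width of the moving object in pixels
    MOTION_PER_FIELD = 4           # pixels the object moves between field scans

    def snapshot(x):
        """Full frame of the scene with the bar at horizontal position x."""
        frame = np.zeros((HEIGHT, WIDTH), dtype=int)
        frame[:, x:x + BAR_WIDTH] = 1
        return frame

    def interlaced_frame(x):
        """Weave two fields captured at different times into one frame."""
        frame = np.zeros((HEIGHT, WIDTH), dtype=int)
        frame[0::2] = snapshot(x)[0::2]                      # even lines, time t
        frame[1::2] = snapshot(x + MOTION_PER_FIELD)[1::2]   # odd lines, one field later
        return frame

    progressive = snapshot(4)          # every line captured at the same instant
    interlaced = interlaced_frame(4)   # odd/even lines captured at different instants

    print("Progressive frame (sharp edges):")
    print(progressive)
    print("Interlaced frame (comb artifact on the moving bar):")
    print(interlaced)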

Resolution

Analog and digital resolution are similar, but there are some important differences in how each is defined. In analog video, the image consists of interlaced lines, as described above. In a digital system, the picture is made up of picture elements, also called pixels. In either system, higher resolution provides more visible detail. This is a very important consideration in surveillance applications, where a high-resolution image can enable a license plate to be read or a person to be identified.

When analog video is digitized, the maximum number of pixels that can be created is dictated by the number of available TV lines. Based on NTSC standards, the maximum resolution of an analog system is about 400,000 pixels, or 0.4 megapixel, once the video is digitized by a DVR or video server.

Network camera technology renders the NTSC limit irrelevant and makes higher resolutions possible. Megapixel network cameras today produce images of at least one megapixel, 2.5 times the resolution of the best digitized analog image, and cameras with two- and three-megapixel sensors are also available.
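
As a rough back-of-the-envelope check of these figures, the short calculation below uses frame sizes that are common digitization settings and a typical early megapixel sensor; the exact numbers are illustrative examples rather than specifications of any particular product.

    # Rough check of the resolution figures above; frame sizes are
    # common, illustrative examples, not taken from the article.
    resolutions = {
        "Digitized analog, 4CIF NTSC": (704, 480),
        "Digitized analog, 4CIF PAL":  (704, 576),
        "Megapixel network camera":    (1280, 1024),
    }

    for name, (width, height) in resolutions.items():
        pixels = width * height
        print(f"{name}: {width}x{height} = {pixels:,} pixels "
              f"(~{pixels / 1_000_000:.2f} megapixel)")

    # Comparing a nominal 1.0-megapixel image with the ~0.4-megapixel
    # digitized analog maximum gives the "2.5 times" figure cited above;
    # a 1280x1024 sensor actually delivers roughly 3x the pixels of 4CIF PAL.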

Even with megapixel resolution, it is still possible to generate lower-resolution images in order to save bandwidth. In this case, low-resolution images are sent over the network until a trigger prompts the camera to send images with more detail. That way, the most significant images are delivered with the highest possible level of detail.
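
In practice this can be as simple as a polling loop that switches the stream resolution when an event fires. The sketch below illustrates the idea with a hypothetical Camera class; the class, its methods and the chosen resolutions are placeholders for illustration, not a real camera API.

    # Sketch of trigger-driven resolution switching, as described above.
    # `Camera` and its methods are hypothetical placeholders, not a real API.
    import time

    class Camera:
        """Stand-in for a network camera that can stream at different resolutions."""
        def set_resolution(self, width, height):
            print(f"Streaming at {width}x{height}")

        def motion_detected(self):
            # A real camera would report motion from built-in analytics;
            # here we simply pretend motion occurs during part of the cycle.
            return time.monotonic() % 10 > 7

    LOW_RES = (320, 240)     # saves bandwidth while nothing is happening
    HIGH_RES = (1280, 1024)  # full detail once an event is triggered

    camera = Camera()
    camera.set_resolution(*LOW_RES)

    for _ in range(5):                        # poll loop; would normally run continuously
        if camera.motion_detected():
            camera.set_resolution(*HIGH_RES)  # send detailed images during the event
        else:
            camera.set_resolution(*LOW_RES)   # drop back to save bandwidth
        time.sleep(1)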

In addition to providing clearer images, megapixel network cameras also offer different aspect ratios. Standard TVs use a 4:3 aspect ratio, while movies and wide-screen TVs use 16:9. The advantage of 16:9 is that it omits the upper and lower parts of the image, which consume pixels, bandwidth and storage space but rarely contain critical information.
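
A quick calculation shows why: at the same horizontal resolution, a 16:9 frame simply contains fewer rows than a 4:3 frame. The figures below are illustrative, not taken from the article.

    # Illustrative arithmetic: at the same width, a 16:9 frame has fewer
    # rows than a 4:3 frame, so it needs fewer pixels - and therefore less
    # bandwidth and storage - per image.
    WIDTH = 1280

    rows_4_3 = WIDTH * 3 // 4    # 960 rows
    rows_16_9 = WIDTH * 9 // 16  # 720 rows

    pixels_4_3 = WIDTH * rows_4_3
    pixels_16_9 = WIDTH * rows_16_9

    saving = 1 - pixels_16_9 / pixels_4_3
    print(f"4:3  frame: {WIDTH}x{rows_4_3} = {pixels_4_3:,} pixels")
    print(f"16:9 frame: {WIDTH}x{rows_16_9} = {pixels_16_9:,} pixels")
    print(f"Pixel (and roughly bandwidth/storage) saving: {saving:.0%}")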

Image Degradation

When an analog installation is spread out over a long distance, the length of the cable influences image quality: the farther the viewer is from the video source, the lower the image quality becomes. IP-Surveillance does not have this problem. Viewing video from a network camera is just like viewing images from a Web site; because the camera produces digital images, there is no quality loss due to physical distance.

IP-Surveillance images are also digitized once and then stay digital throughout transport and viewing. In an analog system, by contrast, the signal may pass through several analog-to-digital and digital-to-analog conversions, for example when it is recorded by a DVR and then played back to a monitor, and quality can be lost with each conversion (see Illustration 2).

Network video resolution has increased steadily over the past few years. Now that technological developments allow network cameras to deliver image quality equal to or better than that of analog technology, the security market has a strong incentive to push ahead into the digital future.

About the author: As the general manager for Axis Communications, Fredrik Nilsson oversees the company's operations in North America. In this role, he manages all aspects of the business, including sales, marketing, business expansion and finance.