Key considerations when selecting a video compression algorithm: Part 2

In the first article of this series, we covered the basic differences between temporal and frame-based compression and identified nine factors a decision maker should consider when in the market for a high-quality network video surveillance system. The nine factors are:

  1. RESOLUTION [User Requirement] (Article 1)
  2. FRAME RATE [User Requirement] (Article 1)
  3. WEATHER [Video Environment]
  4. LIGHTING [Video Environment]
  5. SCENE MOTION [Video Environment]
  6. OBJECT SPEED [Video Environment]
  7. CAMERA MOTION [Video Environment]
  8. RECORDING [User Requirement] (to be discussed in article 3 of this series)
  9. LIVE VIEWING [User Requirement] (to be discussed in article 3 of this series)

The considerations fall into two categories: "User Requirement," which will vary depending on the customer's preferences, and "Video Environment," which covers the variables in an application that are likely to have an impact on your video.

In this article, we will review considerations 3 through 7, which comprise all of the video environment factors and their impact on system variability. To frame the discussion, it is our position that a video system should always be optimized to achieve a desired image quality first, and then adjusted for frame rate to maximize bandwidth and storage efficiencies. When dealing with MJPEG, we encourage a compression setting no lower than medium (MJPEG = 50). With H.264, we recommend nothing lower than the MAIN profile and that you allow the bit rate to vary (VBR) to ensure delivery of the best quality image. Remember, the customer is paying for a video recording system that should meet their expectations in all conditions, not just optimal ones. As system designers and architects, we must work diligently to ensure that the choices made during the design and configuration of the system do not degrade the video's quality during the times when events of interest occur.
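As a rough illustration of that starting point, the settings above might be captured in a small configuration sketch like the one below. The parameter names and the checking function are hypothetical and for illustration only; they are not any particular camera's API.

    # Illustrative only: these parameter names are hypothetical, not a real camera API.

    mjpeg_settings = {
        "codec": "MJPEG",
        "quality": 50,          # medium; we recommend going no lower than this
    }

    h264_settings = {
        "codec": "H.264",
        "profile": "Main",      # nothing lower than the MAIN profile
        "rate_control": "VBR",  # let the bit rate vary to protect image quality
    }

    def meets_recommended_floor(cfg):
        """Check a configuration against the floor recommended in this article."""
        if cfg["codec"] == "MJPEG":
            return cfg["quality"] >= 50
        if cfg["codec"] == "H.264":
            return (cfg.get("profile") not in ("Baseline", "Constrained Baseline")
                    and cfg.get("rate_control") == "VBR")
        return False

    print(meets_recommended_floor(mjpeg_settings), meets_recommended_floor(h264_settings))  # True True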

3. Weather [Video Environment]

Weather can have many characteristics, but the four elements that have the largest impact on video are rain, snow, changing light and wind. In this section we will consider the impact of rain and snow; we will address changing light and wind in subsequent sections. Video compression treats rain and snow like motion and will do its best to try to reproduce it accurately. With MJPEG, rain and snow have no impact on image quality and very little impact on bandwidth and storage, as MJPEG handles each image separately. However, the steady-state bandwidth and storage consumption of MJPEG video will be higher than H.264 when there is no rain or snow. Rain and snow will have a significant impact on H.264 bandwidth and storage, as the encoder interprets them as a 100 percent scene change from one frame to the next.

This means a well-designed network that properly accounts for the worst-case scenario will budget enough bandwidth for these events -- possibly the same amount of bandwidth as you would with an MJPEG system. Essentially, you will need to factor into your bandwidth and storage calculations the number of days out of the year you can expect rain or snow. The impact on image quality with H.264 can also be quite profound, as the cameras are forced to predict not only the motion associated with the rain and snow, which is typically vertical, but also the motion of any subjects of interest, which is often horizontal or diagonal. These extraordinary processing demands can sometimes be too much for a camera to handle, and the resulting images will have substantial compression artifacts, often described as blurriness or blockiness.
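As a back-of-the-envelope sketch of that calculation, the snippet below estimates annual storage for a single H.264 camera when a certain number of days per year run at a worst-case (rain or snow) bit rate. The bit rates and day counts are assumed example figures, not measurements.

    SECONDS_PER_DAY = 86_400

    def annual_storage_gb(clear_mbps, bad_weather_mbps, bad_weather_days):
        """Annual storage for one camera, in GB, treating rain/snow days as worst case."""
        clear_days = 365 - bad_weather_days
        total_megabits = (clear_days * clear_mbps
                          + bad_weather_days * bad_weather_mbps) * SECONDS_PER_DAY
        return total_megabits / 8 / 1000   # megabits -> megabytes -> gigabytes

    # Example figures (assumed, not measured): 2 Mbit/s on clear days,
    # 6 Mbit/s during rain or snow, 60 wet days per year.
    print(round(annual_storage_gb(2.0, 6.0, 60)))   # ~10,476 GB
    print(round(annual_storage_gb(2.0, 2.0, 0)))    # ~7,884 GB if weather is ignored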

4. Lighting [Video Environment]

There are two characteristics of lighting that need to be considered when selecting a video compression algorithm. The first is the amount of change in light over time. For static lighting scenes, H.264 can offer excellent efficiencies over MJPEG with minimal image degradation. Scenes where the light is changing, like outdoor scenes, can present some challenges for H.264. Similar to rain and snow, changing light -- whether headlights in the field of view or clouds passing in front of the sun -- can represent large frame-to-frame scene changes. H.264 cameras will respond by increasing the bandwidth, in some cases dramatically, and all but the highest broadcast-quality HDTV cameras, like those used for professional sports, will degrade the image quality noticeably.

The other characteristic of lighting that will have an impact is low light. When lighting decreases, cameras have to amplify the signal to try to reproduce the image, and anytime you amplify a signal, you introduce noise. For those of you old enough to remember TV before cable, weak TV signals meant a lot of noise or "snow". Today it is no different. While more mature camera manufacturers are very effective at reducing low-light noise in the camera, at some point noise will appear, and when it does, the compression treats it exactly like real snow.

Overall, a temporal compression scheme such as H.264 will compound the problems of video degradation and bandwidth increases in low-light environments. MJPEG compression can also produce slight increases in bandwidth as it tries to compress video with this additional noise, but the variability will be much less pronounced. Some of the negative impacts seen in low-light conditions with H.264 encoding can be mitigated by specifying a day/night camera (one with a removable IR cut filter) rather than a standard camera. Negative impacts may also be reduced by decreasing the camera's sharpness setting, although this will make the overall video less crisp during normal lighting conditions.

5. Scene Motion [Video Environment]

Scene motion, or the amount of motion within the field of view, is one of the most important elements to take into account when selecting a video compression algorithm. H.264 and MPEG-4 are what are called temporal compression schemes. If you remember your Star Trek episodes, you will recall that the term temporal refers to time. With H.264 compression, the more things change over time (i.e., motion in the field of view), the more difficult it is to compress the video while maintaining high quality and minimizing bandwidth and storage.

For frame-based compression like MJPEG, scene motion will not impact image quality, bandwidth, or storage. With H.264, scene motion such as trees blowing in the wind can have a dramatic impact on bandwidth and storage requirements. Factoring the frequency of windy days into your bandwidth and storage calculations is very difficult and typically yields inaccurate estimates.

Like wind, vehicle and pedestrian traffic will have a similar impact on compression. Consider a camera system installed in a school's hallways. During class there is virtually no motion, which would be ideal for H.264 compression. However, if an emergency situation such as a fire alarm occurs, the hallways are suddenly filled with rapidly moving students, resulting in nearly 100 percent scene motion and causing large spikes in network bandwidth and storage. If the network wasn't designed with these incidents in mind, these large bandwidth spikes can cause network data losses, which can translate into corrupted video. This is not something you want to try to explain to your customer. Since scene motion can impact the overall image quality of the video, we recommend utilizing H.264 compression only for camera installations where no more than 20 percent of the field of view will contain motion at any given point in time.
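One rough way to check whether a candidate view is likely to stay under that 20 percent threshold is to difference consecutive frames and count how many pixels change noticeably. The sketch below assumes grayscale frames and an arbitrary change threshold; it is a crude estimate, not a substitute for testing with the actual camera.

    import numpy as np

    def scene_motion_percent(prev_frame, curr_frame, threshold=25):
        """Rough percentage of pixels that changed between two grayscale frames.

        Frames are 2-D uint8 arrays; `threshold` (0-255) is an arbitrary cutoff
        for what counts as a changed pixel.
        """
        diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
        changed = np.count_nonzero(diff > threshold)
        return 100.0 * changed / diff.size

    # Synthetic example: the top 30 percent of the image changes brightness.
    prev = np.zeros((480, 640), dtype=np.uint8)
    curr = prev.copy()
    curr[:144, :] = 200
    print(scene_motion_percent(prev, curr))   # ~30.0 -> above the 20 percent guideline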

6. Object Speed [Video Environment]

The faster objects are moving through the field of view, the more distance they cover between frames. The more distance they cover, the more changes there are between frames. The more changes there are between frames, the more a temporal encoding scheme (H.264/MPEG-4) must estimate where the objects are moving. Fast-moving objects have a negative impact on bandwidth, storage, and video quality when using temporal compression. The pain is compounded at lower frame rates (there are more changes between key frames) and with lower-performance H.264 profiles (see article 1).

Frame-based compression like MJPEG suffers no video quality degradation with fast-moving objects. If you are using temporal compression like H.264 to capture fast objects, then choose your H.264 profile wisely, because not all H.264 profiles are created equal. There is a direct relationship between the video quality of moving objects and the H.264 profile chosen. Lower quality H.264 cameras use a very "light" profile such as the one known as "Constrained Baseline," and this can result in substantially lower video quality for moving objects. As with many technologies, there is no free lunch; less computing power in the camera equals lower performance.

To illustrate, consider a camera with a 20-foot field of view. A normal person running past that camera would be in the field of view for about 2 seconds (100-meter world record holder Usain Bolt could cross the camera's field of view in 0.6 seconds!). A low-quality H.264 camera running at 7.5 frames per second (fps) may not capture a single true image (key frame) of that normal person, instead producing 15 "estimated" frames, making clear video of the subject improbable. For this reason, we recommend considering H.264 only when you need 15 fps video or higher. For those applications where the surveillance system needs to capture even faster objects, such as moving cars or fast-moving people, the frame rate should be even higher if the video system is to do its job appropriately. Of course, these higher frame rates will increase the bandwidth and storage requirements of the overall system.
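To put rough numbers on that scenario, the sketch below works through the arithmetic. The crossing speeds come from the example above; the two-second key-frame interval is an assumption for illustration, not a measured camera specification.

    def crossing_stats(fov_feet, speed_ft_per_s, frame_rate_fps, key_frame_interval_s):
        """Rough counts of total frames and key (I) frames while a subject is in view.

        key_frame_interval_s is an assumed GOP length, for illustration only.
        """
        time_in_view = fov_feet / speed_ft_per_s
        total_frames = time_in_view * frame_rate_fps
        expected_key_frames = time_in_view / key_frame_interval_s
        return time_in_view, total_frames, expected_key_frames

    # 20-ft field of view, 7.5 fps, assumed 2-second key-frame interval.
    print(crossing_stats(20, 10, 7.5, 2.0))   # runner:   (2.0 s, 15 frames, ~1 key frame)
    print(crossing_stats(20, 33, 7.5, 2.0))   # sprinter: (~0.6 s, ~4.5 frames, <1 key frame)
    print(crossing_stats(20, 10, 15, 2.0))    # at 15 fps the runner yields 30 frames to work with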

7. Camera Motion [Video Environment]

Camera motion is the worst-case scenario for temporal compression algorithms because it amounts to 100 percent scene motion at all times (see Consideration #5). Camera motion can be caused by wind or vibration, a swaying pole, or by panning, tilting, or zooming (PTZ) a camera. It can also be the result of a mobile application, such as cameras mounted in trains, buses, police cars or emergency vehicles. H.264 video from an actively panning PTZ camera is very poor quality, which underscores the critical role scene motion plays in the selection of H.264 compression for a given application. For this reason, we recommend a frame-based compression like MJPEG for any mobile or PTZ camera application, regardless of the impact of the other considerations we have outlined above.

In closing, here are some important rules of thumb based on the considerations outlined above (restated as a short decision sketch after the list):

  • When using H.264 compression, be sure to factor in weather and lighting conditions when calculating bandwidth and storage needs.
  • Select H.264 when there will be less than 20 percent maximum motion in the scene.
  • Select MJPEG or utilize high frame rate H.264 compression when attempting to capture faster moving objects.
  • Select MJPEG if the camera is on a mobile platform or in a PTZ unit.
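The decision sketch below simply restates these rules of thumb in code form; the thresholds mirror the guidance in this article and nothing more.

    def recommend_compression(max_scene_motion_pct, mobile_or_ptz, fast_objects, frame_rate_fps):
        """Toy decision helper echoing the rules of thumb above.

        Thresholds mirror this article's guidance; this is not a substitute for a
        real site survey or bandwidth calculation.
        """
        if mobile_or_ptz:
            return "MJPEG"  # camera motion means 100 percent scene motion
        if max_scene_motion_pct > 20:
            return "MJPEG"  # too much scene motion for efficient H.264
        if fast_objects and frame_rate_fps < 15:
            return "MJPEG (or raise the frame rate to 15+ fps for H.264)"
        return "H.264 (Main profile or better, VBR)"

    # Example: fixed camera, mostly static scene, occasional fast vehicles, 15 fps.
    print(recommend_compression(15, mobile_or_ptz=False, fast_objects=True, frame_rate_fps=15))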

In the third and final article of this series, look for a discussion of our final two considerations -- the user requirements for recording and live viewing -- as well as more recommendations, based on all nine considerations, for which compression methodology is best for your application.
 

About the authors:

Peter DeAngelis is president and chief executive officer of megapixel surveillance camera manufacturer IQinVision. Before joining IQinVision, Peter was co-founder, Vice President of Engineering, and Chief Technical Officer of San Diego-based Rokenbok Toy Company. Previously, he served as Director of New Products at Newpoint Corporation, a division of Proxima Corporation. Mr. DeAngelis’ successful career in start-up organizations began with PC Devices Inc., a company he founded in the early 1990s to market and sell PC-based audio products. He received a Bachelor of Science in Electrical Engineering from the University of Maine and holds numerous US and foreign patents.

Paul Bodell is chief marketing officer for IQinVision. He has spent over 15 years in the security industry with senior management positions at Sensor/HID, Silent Knight, and Philips CCTV. Paul is a regular contributor to top industry magazines and is active in SIA, the IP UserGroup, and other industry groups. He holds undergraduate degrees in Engineering from the University of Connecticut and Mathematics from Fairfield University, and an MBA from the University of New Haven.
