Key considerations when selecting a video compression algorithm: Part 3

In the first two articles of this series, we covered the basic differences between temporal and frame-based compression and identified nine elements that should be considered when designing, specifying, or buying a high-quality network video surveillance system. Those articles reviewed the first seven of our nine considerations; in this article, we will address the last two. To review, the nine considerations are:

  1. RESOLUTION [User Requirement] (Article 1)
  2. FRAME RATE [User Requirement] (Article 1)
  3. WEATHER [Video Environment] (Article 2)
  4. LIGHTING [Video Environment] (Article 2)
  5. SCENE MOTION [Video Environment] (Article 2)
  6. OBJECT SPEED [Video Environment] (Article 2)
  7. CAMERA MOTION [Video Environment] (Article 2)
  8. RECORDING [User Requirement]
  9. LIVE VIEWING [User Requirement]

Our considerations fall into two categories: "User Requirement," covering items that vary with the customer's preferences, and "Video Environment," covering the variables in an application that are most likely to affect video quality.

To frame the discussion of our final two considerations, it is our position that a video system should always be optimized first to achieve the desired image quality and then adjusted for frame rate to maximize bandwidth and storage efficiency. With MJPEG, we encourage a compression setting no lower than medium (MJPEG quality = 50). With H.264, we recommend nothing lower than the main profile, with the bit rate allowed to vary (VBR) to ensure delivery of the best quality image. Remember, the customer is paying for a video recording system that meets their expectations in all conditions, not just the optimal ones; that is a big difference. System designers and architects must work diligently to ensure that the choices made during the design and configuration of the system do not degrade video quality at the very times when events occur.
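For illustration, here is a minimal sketch, assuming ffmpeg with libx264 is installed and using placeholder file names, of what a quality-first H.264 configuration looks like: main profile combined with a quality-driven (variable) bit rate. In a real deployment these settings live in the camera's own configuration interface rather than in a re-encoding step.

```python
# A minimal sketch of quality-first H.264 encoding with ffmpeg/libx264.
# File names are placeholders; this is illustrative, not a deployment recipe.
import subprocess

def encode_h264_vbr(src: str, dst: str, crf: int = 23) -> None:
    """Encode with the main profile and a quality-targeted (variable) bit rate.

    CRF is libx264's constant-quality mode: the encoder holds visual quality
    steady and lets the bit rate float, which mirrors the VBR recommendation
    above. Lower CRF means higher quality and a higher bit rate.
    """
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-profile:v", "main",   # main profile or higher, per the recommendation
        "-crf", str(crf),       # quality-driven VBR instead of a capped bit rate
        dst,
    ]
    subprocess.run(cmd, check=True)

encode_h264_vbr("camera_clip.mp4", "camera_clip_main_vbr.mp4")
```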

RECORDING [User Requirement]

For many recording applications, users like to change frame rate and resolution when there is an event, such as motion. If you are using H.264, it is better to vary frame rate and image quality at the camera, using the camera's own motion detection, than to stream all the compressed video back to your server to detect motion there. Running motion detection at the server takes a lot of processing power and can bring even a high-performance machine to its knees. Motion detection must run on uncompressed video, which is readily available inside a smart camera, and it is easy to do there because each camera watches only its own field of view. For a server to detect motion on every camera, it has to decode every stream, and decoding multiple H.264 streams requires substantial processing. Even if the server has dedicated H.264 decoding hardware (as many video cards do), it will probably struggle with multiple H.264 video streams. So check with your NVR supplier and see how they do it. If they use server-based motion detection for "RECORD ON MOTION" or "CHANGE FRAME RATE/IMAGE QUALITY ON MOTION", make sure you understand the processing requirements and select a server that can handle the load.
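To make the processing argument concrete, the following is a minimal sketch, assuming OpenCV is available and using a placeholder RTSP URL and arbitrary thresholds, of what server-side motion detection involves: every frame must first be decoded to raw pixels before it can be compared with the previous one. Real NVRs use more sophisticated detectors, but the decode-everything cost is the same.

```python
# A minimal sketch of server-side motion detection with OpenCV (cv2).
# The RTSP URL and thresholds are placeholders. Note that every frame is
# decoded to raw pixels, motion or not -- that is the server-side CPU cost.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # decoding happens here
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read from stream")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()          # every frame must be decoded
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)  # simple frame differencing
    mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(mask) > 0.01 * mask.size:   # >1% of pixels changed
        print("motion detected")    # e.g. raise frame rate / start recording
    prev = gray

cap.release()
```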

For determining frame rate, a general rule of thumb for compression schemes such as H.264 is that the higher the frame rate, the better the video quality, albeit at the expense of higher bandwidth and storage. Keep in mind that many high-resolution megapixel sensors are limited in the frame rate they can deliver (some top out at 10 frames per second), so make sure the specified compression scheme and the cameras' maximum frame rates are a sensible match.
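As a quick sizing aid, the back-of-envelope sketch below shows how average bit rate translates into storage; the bit rate, recording schedule, and retention period are illustrative assumptions, not measurements from any particular camera.

```python
# A rough sizing sketch (illustrative numbers only): storage scales directly
# with average bit rate, which in turn rises with frame rate and quality.
def storage_gb(avg_bitrate_mbps: float, hours_per_day: float, days: int) -> float:
    """Approximate storage for one camera over a retention period."""
    seconds = hours_per_day * 3600 * days
    bits = avg_bitrate_mbps * 1_000_000 * seconds
    return bits / 8 / 1_000_000_000   # bits -> gigabytes

# e.g. an assumed 4 Mbps average stream, recorded 24/7 for 30 days:
print(f"{storage_gb(4.0, 24, 30):.0f} GB per camera")   # ~1296 GB
```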

LIVE VIEWING [User Requirement]

When H.264 compression is implemented properly to deliver good quality video for security/surveillance, it will take more processing to display live video than it would using a frame-based compression like MJPEG. For this reason, you must carefully consider the maximum number of simultaneous cameras you wish to view and make sure the server is up to the task or budget for the additional hardware that may be required. The industry consensus is that displaying an H.264 stream (main profile or higher) requires about twice the processing power of equivalent quality MPEG-4 or MJPEG video.
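As a rough budgeting aid, the sketch below applies that approximate two-to-one ratio to a live viewing wall. The per-stream cost units are placeholders; benchmark your actual server and viewing client to calibrate them.

```python
# A back-of-envelope sketch applying the rough "H.264 main profile needs about
# twice the decode effort of MJPEG" figure cited above. Cost units are
# placeholders, not measurements.
MJPEG_COST = 1.0   # relative decode/display cost per live MJPEG stream
H264_COST = 2.0    # ~2x MJPEG for an equivalent-quality H.264 stream

def viewing_load(n_mjpeg_views: int, n_h264_views: int) -> float:
    """Total relative decode load for a live viewing wall."""
    return n_mjpeg_views * MJPEG_COST + n_h264_views * H264_COST

# A 16-camera H.264 display wall costs roughly what 32 MJPEG views would:
print(viewing_load(n_mjpeg_views=0, n_h264_views=16))   # 32.0
print(viewing_load(n_mjpeg_views=16, n_h264_views=0))   # 16.0
```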

Latency is also an important consideration. In video terms, latency is the delay between when something happens in real time and when you see it on your monitor. The most common way to test it is to wave your hand in front of the camera and time how long it takes for the wave to appear on the monitor. Factors that affect latency with H.264 include the profile used, how the manufacturer has designed the decoder, and the amount of buffer memory allocated for the video. For the same reason YouTube video is "buffered", many decoders try to "smooth" video, which can add 3 to 4 seconds of latency. Latency does not matter much when viewing recorded video, but too much of it can be unacceptable for live viewing and can make focusing a camera or operating a mechanical pan/tilt/zoom camera nearly impossible. Most security professionals are accustomed to minimal latency, and most video systems keep it well under one second.
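If buffering is the culprit, one practical knob on the viewing side is to shrink the decoder's frame queue. The sketch below assumes OpenCV and a placeholder RTSP URL; not every capture backend honors the buffer-size property, so treat it as a starting point rather than a guaranteed fix.

```python
# A minimal sketch (OpenCV assumed, placeholder URL) of trimming display-side
# buffering, one of the latency contributors discussed above.
import cv2

cap = cv2.VideoCapture("rtsp://camera.example/stream")
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)   # keep at most one frame queued
                                      # (some backends ignore this property)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("live", frame)         # display as soon as a frame is decoded
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```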

Last, pay attention to how the video is displayed on your monitor. A 1600x1200 pixel image displayed on an 800x600 monitor will always appear crisper and clearer than if you showed it on a 1600x1200 monitor. That is because the computer will "down-sample" the image to fit it on the screen and down-sampling masks many undesirable compression artifacts, making the live viewing experience better than displaying the image at 1:1. When you need to display the image at 1:1 (like during forensic analysis of video), inadequate compression techniques or sub-standard H.264 compression profiles will not meet your expectations for megapixel video quality.
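You can see this effect for yourself with the short sketch below (OpenCV assumed; the file name and crop coordinates are placeholders): the same decoded megapixel frame is shown once down-sampled to fit a smaller window and once as a 1:1 crop, and artifacts hidden by the down-sampling reappear at 1:1.

```python
# A small sketch comparing a decoded megapixel frame down-sampled to fit a
# smaller display against a 1:1 crop of the same frame. File name and crop
# coordinates are placeholders.
import cv2

frame = cv2.imread("decoded_frame_1600x1200.png")
assert frame is not None, "replace the placeholder file name with a real frame"

# Down-sampled view, similar to fitting the image onto a smaller monitor:
fitted = cv2.resize(frame, (800, 600), interpolation=cv2.INTER_AREA)
cv2.imshow("fitted (down-sampled)", fitted)

# 1:1 crop of a region of interest, as you would inspect it forensically:
crop = frame[400:1000, 500:1300]
cv2.imshow("1:1 crop", crop)

cv2.waitKey(0)
cv2.destroyAllWindows()
```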


H.264 Profiles Review

Not all H.264 profiles are created equal. Each profile defines the maximum feature set available in that category of the standard, but it is up to each manufacturer to decide which of those features to use in their implementation. Those decisions can greatly affect the video quality and bandwidth performance of any given H.264 encoder.


This comparison table shows 27 of 51 identified key features supported by the different H.264 profiles. Another source that helps convey the complexity and widely varying implementations of the standard is Wikipedia's H.264 page, where you can see that the 13 vendors listed as supporting H.264 have 13 different implementations of the feature set.


Comparison chart of H.264 video compression profiles

The message here is simple: there is much more to the term "H.264" than meets the eye. As the chart above shows, there are thousands of possible feature sets, and every one of them can be called H.264, even though image quality, bandwidth, and storage demands vary greatly with the features actually implemented. You must know your video compression encoder and design the system accordingly.
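A practical first step in knowing your encoder is simply to check which profile and level a camera actually streams. The sketch below assumes ffprobe is installed and uses a placeholder RTSP URL.

```python
# A hedged sketch for checking which H.264 profile and level a camera streams,
# using ffprobe. The RTSP URL is a placeholder.
import json
import subprocess

def stream_profile(url: str) -> dict:
    out = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=codec_name,profile,level",
            "-of", "json",
            url,
        ],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

print(stream_profile("rtsp://camera.example/stream"))
# e.g. {'codec_name': 'h264', 'profile': 'Main', 'level': 40}
```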

Summary Chart for Article Series

The following chart is intended to help you design an optimal IP video surveillance system by highlighting the major differences between the H.264/temporal and MJPEG/frame-based compression schemes for considerations 1 through 7 covered in this article series. The highlighted regions show the most important areas to take into account when designing the most robust video surveillance system.

Comparison chart of H.264 and MJPEG video compression formats


Summary Recommendations

In the interest of full disclosure, it is important to note that IQinVision offers a complete line of both H.264 and MJPEG cameras, so it is not our intention to promote one over the other. We are simply providing a navigational aid through the vast amount of misinformation in the industry and, we hope, enough information to make the best choice for your application. Just as megapixel cameras are not all things to all people, neither is H.264/temporal video compression. H.264 is a very complex subject, impossible to boil down to simple rules that fit 100 percent of applications. Nevertheless, the following rules of thumb should help guide you toward better decisions when choosing your compression methodology:

  • It is all about the environment and the requirement. The application should determine the selection of a video compression scheme. One size does not fit all.
  • The main benefit of H.264 is lower bandwidth and storage under appropriate conditions, which in general are well lit, high-frame-rate (15+ fps) installations with little scene motion (<20%).
  • To maximize H.264 image quality, we recommend using the main profile (or higher) and setting the encoder to variable bit rate (VBR).
  • If using H.264, be sure to design the network to handle high bandwidth spikes and select a server with the necessary processing power for handling increased decoding and display loads.
  • Be sure to factor in weather and lighting conditions when calculating bandwidth and storage needs if specifying H.264 compression.
  • Select a non-temporal compression scheme like MJPEG if the camera is on a mobile platform or in a PTZ unit.

With H.264/temporal compression, system variability increases and predictability decreases, complicating overall system design. These complications can be accounted for and designed around when you are armed with accurate information. Our company places high value on providing all the information you need to build a system that exceeds your customers' expectations. We trust these articles have exceeded your expectations in providing such information.

References

1. Wikipedia - H.264/MPEG-4 AVC
2. Adobe Developer Connection - Encoding Options for H.264 Video
3. IEEE Transactions on Circuits and Systems for Video Technology - Overview of the H.264/AVC Video Coding Standard

About the authors:

Peter DeAngelis is president and chief executive officer of megapixel surveillance camera manufacturer IQinVision. Before joining IQinVision, Peter was co-founder, Vice President of Engineering, and Chief Technical Officer of San Diego-based Rokenbok Toy Company. Previously, he served as Director of New Products at Newpoint Corporation, a division of Proxima Corporation. Mr. DeAngelis' successful career in start-up organizations began with PC Devices Inc., a company he founded in the early 1990s to market and sell PC-based audio products. He received a Bachelor of Science in Electrical Engineering from the University of Maine and holds numerous US and foreign patents.

Paul Bodell is chief marketing officer for IQinVision. He has spent over 15 years in the security industry with senior management positions at Sensor/HID, Silent Knight, and Philips CCTV. Paul is a regular contributor to top industry magazines and is active in SIA, the IP UserGroup, and other industry groups. He holds undergraduate degrees in Engineering from the University of Connecticut, Mathematics from Fairfield University, and an MBA from University of New Haven.
