In the first two articles of this series, we covered the differences between temporal and frame-based compression and identified nine elements that should be considered when designing, specifying, or buying a high-quality network video surveillance system, reviewing the first seven of the nine considerations along the way. In this article, we will address the last two. To review, the nine considerations are:
- RESOLUTION [User Requirement] (Article 1)
- FRAME RATE [User Requirement] (Article 1)
- WEATHER [Video Environment] (Article 2)
- LIGHTING [Video Environment] (Article 2)
- SCENE MOTION [Video Environment] (Article 2)
- OBJECT SPEED [Video Environment] (Article 2)
- CAMERA MOTION [Video Environment] (Article 2)
- RECORDING [User Requirement]
- LIVE VIEWING [User Requirement]
Our considerations fall into two categories: "User Requirement," which varies depending on the customer's preferences, and "Video Environment," the variables in an application that are likely to have an impact on video quality.
To frame the discussion of our final two considerations, it is our position that a video system should always be optimized to achieve a desired image quality first, and then adjusted for frame rate to maximize bandwidth and storage efficiency. When dealing with MJPEG, we encourage a compression setting no lower than medium (MJPEG = 50). With H.264, we recommend nothing lower than the main profile, and that you allow the bit rate to vary (VBR) to ensure delivery of the best quality image. Remember, the customer is paying for a video recording system that meets their expectations in all conditions, not just the optimal conditions; that is a big difference. System designers and architects must work diligently to guarantee that the choices made during the design and configuration of the system do not degrade the video's quality during those times when events occur.
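The "quality first, then frame rate" guidance above can be captured as a simple configuration sketch. The parameter names and dictionary shape below are illustrative placeholders, not a real camera API; only the recommended values (MJPEG quality of 50, H.264 main profile with VBR) come from the article.

```python
def recommended_settings(codec: str) -> dict:
    """Return a baseline encoder configuration per the article's guidance.

    Hypothetical parameter names; real cameras expose vendor-specific
    equivalents. Image quality is fixed first; frame rate is then tuned
    separately for bandwidth and storage.
    """
    if codec == "MJPEG":
        # Compression no lower than medium (a quality setting of 50).
        return {"codec": "MJPEG", "quality": 50}
    if codec == "H264":
        # Main profile or higher, with variable bit rate (VBR) so the
        # encoder can spend bits when the scene demands it.
        return {"codec": "H264", "profile": "main", "rate_control": "VBR"}
    raise ValueError(f"unsupported codec: {codec}")

print(recommended_settings("MJPEG"))
print(recommended_settings("H264"))
```

Frame rate is deliberately absent here: it is the knob you turn afterward, once image quality is locked in.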
RECORDING [User Requirement]
For many recording applications, users like to change frame rate and resolution when there is an event, such as motion. If you are using H.264, it is better to vary frame rate and image quality at the camera, using the camera's motion detection, than to stream all the compressed video back to your server to detect motion. Running motion detection at the server takes a lot of processing power and can even bring a high-performance machine to its knees. Motion detection has to run on uncompressed video, which is readily available inside the camera; it is easy to do in a camera because the camera is detecting only the motion in its own field of view. For a server to detect motion on every camera, it has to decode every stream, and decoding multiple H.264 streams requires substantial processing. Even if the server has special H.264 decoding hardware (as many video cards do), it will probably struggle with multiple H.264 video streams. So, check with your NVR supplier and see how they do it. If they are using "server-based" motion detection for "RECORD ON MOTION" or "CHANGE FRAME RATE/IMAGE QUALITY ON MOTION", make sure you understand the processing requirements and select a server that can handle the load.
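To make the camera-side approach concrete, here is a minimal frame-differencing sketch of the kind of motion detection a camera can run on its own uncompressed frames. Real cameras use more sophisticated algorithms; the thresholds here are made-up illustrative values.

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, area_thresh=0.01):
    """Flag motion when enough pixels change between consecutive frames.

    pixel_thresh: per-pixel intensity change (0-255) that counts as "moved".
    area_thresh: fraction of the frame that must change to signal motion.
    Both thresholds are illustrative placeholders.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > pixel_thresh).mean()  # fraction of pixels that moved
    return changed > area_thresh

# Two synthetic 8-bit grayscale frames: a static background, then a bright
# "object" appearing in one corner of the second frame.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[0:60, 0:60] = 200  # a 60x60 region changes (~4.7% of the frame)

print(motion_detected(prev, curr))  # True
print(motion_detected(prev, prev))  # False
```

Note that this operates directly on raw pixel arrays: no decoding step is needed in the camera, which is exactly why the workload is trivial there and expensive on a server that first has to decompress every stream.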
For determining frame rate, a general rule of thumb for compression schemes such as H.264 is that the higher the frame rate, the better the video quality, albeit at the expense of higher bandwidth and storage. Keep in mind that many high-resolution megapixel sensors are limited in the frame rate they can deliver (some can deliver no more than 10 frames/second maximum), so make sure the specified compression scheme and the cameras' maximum frame rates are compatible.
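The bandwidth/storage side of that trade-off is easy to estimate. The bit rates below are hypothetical example figures for a 1080p H.264 stream, not vendor data; only the conversion arithmetic is fixed.

```python
def storage_gb_per_day(bitrate_mbps: float) -> float:
    """Convert a stream's bit rate (megabits/s) to storage in GB per day."""
    # Mbps -> MB/s (divide by 8) -> MB/day (x 86,400 s) -> GB (divide by 1,000)
    return bitrate_mbps / 8 * 86_400 / 1_000

# Hypothetical example: suppose a given scene needs ~4 Mbps at 30 fps
# and ~2 Mbps at 10 fps for comparable per-frame quality.
print(storage_gb_per_day(4.0))  # 43.2 GB/day at the higher frame rate
print(storage_gb_per_day(2.0))  # 21.6 GB/day at the lower frame rate
```

Running this kind of estimate per camera, per retention period, is how you turn the "adjust frame rate last" rule into a concrete storage budget.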
LIVE VIEWING [User Requirement]
When H.264 compression is implemented properly to deliver good quality video for security/surveillance, it will take more processing to display live video than it would with a frame-based compression scheme such as MJPEG. For this reason, you must carefully consider the maximum number of cameras you wish to view simultaneously, and either make sure the server is up to the task or budget for the additional hardware that may be required. The industry consensus is that displaying an H.264 stream (main profile or higher) requires about twice the processing power of equivalent-quality MPEG-4 or MJPEG video.
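That rule of thumb can be folded into a rough sizing sketch for a viewing workstation. The per-stream CPU percentage and the CPU budget below are made-up placeholder values; only the roughly-2x H.264 decode factor comes from the article.

```python
def max_live_streams(cpu_budget_pct: float, mjpeg_cost_pct: float, codec: str) -> int:
    """Estimate how many live streams fit in a CPU budget.

    Applies the article's rule of thumb: decoding H.264 (main profile or
    higher) costs roughly twice as much as equivalent-quality MJPEG.
    """
    cost_per_stream = mjpeg_cost_pct * (2 if codec == "H264" else 1)
    return int(cpu_budget_pct // cost_per_stream)

# Hypothetical figures: one MJPEG stream costs 5% CPU, and we reserve
# 80% of the machine for decoding/display.
print(max_live_streams(80, 5, "MJPEG"))  # 16 streams
print(max_live_streams(80, 5, "H264"))   # 8 streams
```

The halving of viewable streams when moving from MJPEG to H.264 is the practical consequence to plan for: either cap the number of simultaneous live views or specify a beefier viewing station.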