Our Man in the Field: The Process of the IP Solution, Part IX

Welcome back to our series! If your memory is good, you will recall that at the end of my last column, I mentioned that we would take a look at video compression and the differences between video surveillance and video monitoring. So, without any further dilly-dallying, let's get started.

The first question most people never think to ask about compression is "Why bother?", as in "If compression messes up my picture, why bother?"

Almost everyone I speak with thinks that he or she understands compression factors and such, but in the end most are reciting words or processes that they have read on the Internet and/or heard around the water cooler at the office. OK, this sounds a bit over the top or conceited, but it is true. The worst part about it is that most of these folks read my words on the Internet or were listening to me around the water cooler, so I'm the one to blame for what they're just "repeating." So let's take a good look at the problem, do the math and then move into the solutions ... if there are any.

Problem number one is that the average analog video image, once digitized into pixels, will require between 1 and 1.5 megabytes of storage space on a hard drive. That's just one image. Now, multiply that by 30 and you have one second of required hard drive space, based upon good old NTSC standards. Multiply that by 60 and you have one minute; multiply again by 60 and you have one hour; multiply that by 24 and you have one day. Do the math and you will find that one camera, uncompressed, requires a whole lot of storage space over a day's time, not to mention transmission bandwidth. If we averaged 1.2 megabytes per frame from this 30 fps camera, we would need about 3 terabytes of storage from that camera by the end of the day.
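For readers who like to see the arithmetic spelled out, here is a quick back-of-the-envelope sketch in Python, using the same figures as above (1.2 MB per frame, 30 fps; decimal megabytes are assumed for simplicity):

```python
# One uncompressed camera, one day of recording.
frame_size_mb = 1.2             # average digitized NTSC frame, per the column
fps = 30                        # NTSC frame rate
seconds_per_day = 60 * 60 * 24  # 86,400 seconds

mb_per_day = frame_size_mb * fps * seconds_per_day
tb_per_day = mb_per_day / 1_000_000
print(f"One camera, one day: {mb_per_day:,.0f} MB (~{tb_per_day:.1f} TB)")
```

That works out to roughly 3.1 TB per camera, per day, before a single byte of compression is applied.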

That being the case, we are actually faced with two dilemmas. The first is that storage space is sold by the byte. I remember, because it was not so long ago, when a hard drive could cost $100 per megabyte. Obviously, it's not that bad today, but when you consider that we are now using terabytes (approximately 1 trillion bytes) and petabytes (equal to 1,024 terabytes), the cost potential is still there. The second half of the consideration is the actual size and space required by the storage unit(s). The more storage you require, the more physical space the equipment takes up. So the idea of compression is to reduce cost and space. So we "compress."

As discussed in many of the articles that I have written in the past, there are several ways to compress information. The first (and sometimes the best) method of compression is to reduce the actual number of images that you record in a second. Do you really require 30 images per second, or would one or five per second fit your needs? "Real" time, after all, is what you "really need" as it applies to your application. Look back a column or two and you will find this discussion.

In the world of computers, we have two types of compression, "lossless" and "lossy," and the latter is often referred to as lousy. Lossless compression is a format primarily seen in the medical industry. Just as the name implies, no information or detail is lost when using this form of compression. This is very good for X-rays and such, but not very cost effective for long-term video storage. Lossy compression is used where the playback or restoration of the information can be "good enough" when compared to the original. Lossy compression lets us store the maximum amount of video in the minimum amount of disk space. Again, this is very good for space and cost savings, but not necessarily good for reproduction.
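The lossless/lossy distinction is easy to demonstrate in a few lines of Python. This sketch uses the standard zlib module for the lossless side, and a crude quantization step (collapsing nearby values; an assumption for illustration only, not any real video codec) for the lossy side:

```python
import zlib

# Hypothetical "pixel" data with lots of near-repetition.
data = bytes([10, 10, 10, 11, 12, 200, 201, 202] * 100)

# Lossless: zlib round-trips to the exact original bytes.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data      # every byte comes back

# Lossy (illustration): quantize each byte to the nearest lower multiple
# of 8 before compressing. The file shrinks, but the detail is gone forever.
quantized = bytes((b // 8) * 8 for b in data)
lossy_packed = zlib.compress(quantized)
restored = zlib.decompress(lossy_packed)
print(len(data), len(packed), len(lossy_packed))
assert restored != data                     # "thrown out" detail can't be retrieved
```

Notice that the lossy version decompresses perfectly well; it just decompresses to something that is no longer the original.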

In most "lossy" compression schemes, we reduce the file size of an image by throwing away specific bits of information. The most common approach is to eliminate repetition. If the background of an image doesn't change, why continuously store it? If I have 26 shades of red, can I compress them into five or six shades of red? The key is not to throw the baby out with the bath water. All compression schemes are accomplished by specified algorithms. An algorithm is nothing more than a fancy name for a mathematical formula. After all, we are only dealing with zeros and ones, the binary language of computers.

We refer to these algorithms as "codecs" (short for coder/decoder). A codec may be a device or a program. In essence, we first use an encoder to compress the information and then a decoder to restore it to the nearest possible reproduction of the original. Since this is a two-sided, often proprietary process, we adopted the idea of calling our codecs "engines." We give each engine a name, something like JPEG, Wavelet, MPEG, MPEG-4, H.263, etc. The key here is that all compression engines are some form or variation of an original scheme.

Everyone has their own idea and method for compressing information, and of course, every new engine that comes along is "the best." If you don't believe me, just ask a manufacturer which compression engine is the best and you will promptly be told that theirs is. You may even be blessed with a 45-minute technical explanation about the benefits of the manufacturer's compression engine as opposed to all the others. In the end, however, all compression schemes are, one way or another, just another imperfect, problematic way to store an image. But it's a problem we deal with because we don't have the luxuries of unlimited storage and unlimited transmission bandwidth.

As I implied earlier, each compression scheme approaches the problem of reducing information from a different angle. One engine may compress colors. That's not a problem unless you go overboard and your playback ends up looking like a child's drawing done with only four or five primary colors. The next engine might record a single full frame of information and then only those things that change in the image from that point on. Again, this is not bad unless you get carried away by not refreshing the full frame often enough. A third engine may attack the resolution or detail of the image. This is easy if you remember that all images are made up of dots or squares of color. If I throw out every fourth square of color, I have a 25 percent reduction. Again, this isn't a bad idea unless you get carried away.
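The "full frame plus changes" idea can be sketched as frame-differencing. The ten-pixel "frames" below are hypothetical stand-ins, not any real video format:

```python
# Store one full "key" frame, then only the pixels that change afterward.
def diff_frame(previous, current):
    # Record position -> new value for every pixel that changed.
    return {i: v for i, (p, v) in enumerate(zip(previous, current)) if p != v}

def apply_diff(previous, delta):
    frame = list(previous)
    for i, v in delta.items():
        frame[i] = v
    return frame

key = [50] * 10                       # full reference (key) frame
next_frame = key[:]
next_frame[3] = 200                   # one pixel changed between frames
delta = diff_frame(key, next_frame)
print(delta)                          # {3: 200} -- far smaller than a full frame
assert apply_diff(key, delta) == next_frame
```

You can also see the pitfall the column warns about: if the key frame is never refreshed, any error in it, or any missed delta, contaminates every frame that follows.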

The newest compression schemes work from a truly digital format. They look at each individual pixel point, redesign the image into complex digital codes or equations, and store them in a text format. Text requires practically no storage space compared to an actual image. I've probably got the technicalities off just a little, but the gist is correct for this emerging "text-based" compression scheme.

Still, the key with any compression factor is to remember five things:

1. Once compressed, there is no retrieving the "thrown out" information.

2. If it looks or sounds too good to be true, it probably is (spoken like a true pragmatist).

3. There is no such thing as a "best" compression scheme or engine.

4. There are no standards for compression in the CCTV industry, so whatever you end up with will probably be proprietary. Buyer beware. If you have multiple locations and you expect to be able to view all of your video from one central point, you must be careful to use only those compression schemes that can be decoded by your software at the central location. In other words, be careful how you mix and match.

5. Lastly and most importantly, the compression engine or scheme that is right for you is the one that reproduces the information that you need to view, in a way that is beneficial to your application. In other words, don't be afraid to ask for an on-site test for a week or two under your control, rather than relying on a 15-minute demonstration by a sales guy or gal who knows how to strut their stuff.

In my next column, we will pull the most common engines apart and see how they work. We will look at the benefits and the pitfalls. So until then, avoid tight places and try not to give up too many squares.

About the Author: Richard R. "Charlie" Pierce has been an active member of the security industry since 1974. He is the founder and past president of LRC Electronics Company, a full-service warranty/non-warranty repair center for CCTV equipment. In 1985, Charlie founded LeapFrog Training & Consulting (formerly LTC Training Center), a full-service training center specializing in live seminars, video-format certification training programs, plain-language technical manuals and educational support on CCTV. He has also recently become the director of integrated security technologies for IPC International, Corp., a firm which provides major retail (mall) surveillance solutions. He is an active member of ASIS, ALAS, CANASA, NBFAA, NAAA and SIA. He is the recipient of numerous security industry awards, and is a regular contributor to Security Technology & Design magazine. Look for his columns to also appear regularly via SecurityInfoWatch.com and this website's e-newsletters. He can be contacted via email at charliep@ipcinternational.com.