Remember the days of sports entertainment before HDTV? Spotting a golf ball that landed in the rough was eye-squintingly difficult, and nearly impossible if it found the bunker. NFL and soccer fans alike missed a significant portion of the action because broadcasts framed for the 4:3 aspect ratio of old tube TVs stayed tight on the ball and sacrificed the rest of the field. Hockey was so difficult to watch on TV that FOX tried a gimmick that virtually highlighted the puck with a blue circle so viewers could better follow it around the rink.
Once fans experienced the in-your-face clarity and widescreen format of HDTV, there was just no going back. The same is true in the world of surveillance.
The HDTV journey began in 1998 with the first NFL game broadcast in high-definition. Within a decade, standard-definition NFL broadcasts were no more. In 2005, if you journeyed to Tokyo’s Akihabara district — known to some as the consumer electronics capital of the world — it was clear that analog was dead and 720p/1080p HDTV were the new preferred formats.
At the same time, the point-and-shoot digital camera market was saturated with relatively inexpensive megapixel models. From the early to mid-2000s, resolution advanced rapidly: the best-selling cameras went from 1MP to 8MP and on up to 12MP, at which point the lens became the limiting factor. But then something really interesting happened: the original iPhone launched in 2007 with a 2.0MP camera. From 2008 to 2010, the "more megapixels" trend in the point-and-shoot market took a step back, and sales were soon dominated by camera phones in the 2MP to 5MP range. But even as the point-and-shoot megapixel race slowed down, camera phones put megapixel technology at the fingertips of millions.
Security personnel who had been enjoying HDTV in their homes and megapixel on their phones started clamoring for the same superior viewing experience and performance from their surveillance systems. Fortunately, surveillance manufacturers had already set the R&D wheels in motion.
Megapixel vs. HDTV: What’s the difference?
High-resolution in security did not begin with the first HDTV surveillance camera — high-resolution cameras have actually been on the market since around 2003. But those precursors to HDTV had all been megapixel, just like their digital still photography counterparts. Yes, there is a big difference between HDTV and megapixel.
Megapixel strictly describes one aspect of the image: the number of pixels in the field of view. It says nothing about the frame rate, aspect ratio or color fidelity of the video. So whether the camera is 3, 5 or even 10 megapixels, a higher megapixel count does not necessarily mean it will provide better overall usable video quality than a lower-resolution camera running in HDTV.
True, the more pixels you have, the more detail you capture. This is great for forensic searches, provided the lens is rated to match the camera's megapixel sensor; otherwise the camera never achieves its true megapixel rating. But because a high-megapixel camera captures so much pixel detail, bandwidth consumption rises accordingly. If you are facing bandwidth or storage constraints, the frame rate of the megapixel video must be dialed back to fit the limits of the pipeline.
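To see why frame rate has to give when resolution climbs, consider a rough back-of-the-envelope sketch. The numbers below are illustrative assumptions (a 5MP camera at 2560x1920, 24-bit color, uncompressed bitrates); real surveillance streams are heavily compressed with H.264 or similar, but the relative scaling between resolution and frame rate is the same:

```python
# Rough illustration: raw (uncompressed) video bitrate scales linearly with
# pixel count times frame rate. Resolutions and 24-bit color depth here are
# illustrative assumptions, not figures from any specific camera.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# 1080p HDTV vs. a hypothetical 5MP camera (2560x1920), both at 30 fps:
hdtv = raw_bitrate_mbps(1920, 1080, 30)     # ~1493 Mbps raw
five_mp = raw_bitrate_mbps(2560, 1920, 30)  # ~3539 Mbps raw

# To squeeze the 5MP stream through the same pipeline as 1080p at 30 fps,
# its frame rate must drop in proportion to the extra pixels:
equivalent_fps = 30 * (1920 * 1080) / (2560 * 1920)
print(round(equivalent_fps, 1))  # prints 12.7
```

In other words, under these assumptions a 5MP stream in the same pipeline as 1080p/30 would have to run at roughly 13 frames per second, which is exactly the dialing-back described above.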
Some people confuse the issue by referring to a megapixel camera as "HD" — but it is not HDTV. HDTV is a standards-based format defined by the Society of Motion Picture and Television Engineers (SMPTE) that guarantees not only resolution (720p, 1080i or 1080p) but also the 16:9 widescreen aspect ratio, frame rate and color fidelity of the video. HDTV therefore provides a much better overall video viewing experience. This is why your smartphone takes 8MP snapshots but records in 720p or 1080p HDTV when you switch to video recording.
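The pixel arithmetic behind those HDTV labels is worth spelling out, because it shows that HDTV's value is not raw pixel count. A quick sketch (the resolutions are the standard 720p and 1080p frame sizes):

```python
# 1080p HDTV is only about 2 megapixels -- well below many "megapixel"
# surveillance cameras. The format's guarantees (frame rate, 16:9 aspect
# ratio, color fidelity) are what set HDTV apart, not pixel count.

def megapixels(width, height):
    return width * height / 1e6

print(megapixels(1280, 720))    # 720p  -> prints 0.9216
print(megapixels(1920, 1080))   # 1080p -> prints 2.0736

# Both HDTV resolutions are exactly 16:9 widescreen:
print(1280 / 720 == 16 / 9, 1920 / 1080 == 16 / 9)  # prints True True
```

So an 8MP still sensor comfortably out-pixels 1080p video, which is precisely why a smartphone can shoot 8MP photos yet record video at the 2MP (1080p) HDTV format.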