Editor's Note: Seeing is Not Believing

March 11, 2021
The rise and ease of deepfake video creation calls for increased vigilance among security pros
Paul Rothman is Editor-in-Chief of Security Business magazine. Email him your comments and questions at prothman@securitybusinessmag.com. Access the current issue, full archives and apply for a free subscription at www.securitybusinessmag.com.
This article originally appeared in the March 2021 issue of Security Business magazine. When sharing, don’t forget to mention @SecBusinessMag on Twitter and Security Business magazine on LinkedIn.


Just about two years ago, in the PC (pre-COVID) days, I wrote about one of the largest budding threats to the video surveillance industry, deepfake videos.

Back then, deepfakes were new and feared. Today, as you may have expected, the technology has evolved. Last March, I received a promotional email touting Impressions, "the world's first app for creating deepfakes on mobile." With deepfakes now taking minutes instead of days to produce, users had created more than 30,000 of them in the previous 30 days, most of them impersonations of Jim Carrey.

Using the app, users select the celebrity face they want to use and record a video selfie. The app uploads the video to its servers, and after a few minutes, the servers send the final face-swapped video back to the user's device. "While deepfakes have been an area of concern for governments and corporate entities, consumers have flocked to the technology to create entertaining and humorous content," the release says.

Wait...let’s do a Jim Carrey super-slow-motion replay back to those “areas of concern for corporate entities.”

Video surveillance manufacturer IDIS recently sent out an alert, stating that “growing concern about deep-fake videos will make it increasingly important to be able to demonstrate the integrity of video evidence.”

IDIS warns that rapid advances in these digital video manipulation techniques, along with the rise in deepfake celebrity videos illustrated above, will put pressure on both video technology users and prosecutors to demonstrate the integrity of any video surveillance footage they use.

“As we look ahead, wherever video is presented for use as legal evidence, or as part of internal disciplinary proceedings, we will see more attempts to assert that footage is not genuine,” says Dr. Peter Kim, Global Technical Consultant, IDIS. “Courts will dismiss evidence where tampering cannot be ruled out. Any challenge to the integrity of video evidence, if not countered, risks undermining the value of an entire video solution. This is particularly true in applications where investigating and prosecuting wrongdoing is a key function of the camera system.”

As I wrote in 2019, digital evidence protection and chain of custody have long been major concerns for the video surveillance industry; in fact, most leading VMS platforms feature video encryption by default, and it usually takes a special player just to review exported surveillance footage.

IDIS, for its part, coupled the deepfake alert with a promotion of its "Chained Fingerprint" algorithm, which its NVRs use to ensure the integrity of video data. The system "assigns a unique numerical 'fingerprint' to each frame, calculated by relating its own pixel value to the fingerprint of the previous frame…If any part of the image frame is tampered with, the fingerprint chain will be broken and will not match the chain value calculated at the time of video export," prompting a flag, the release explains.
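IDIS has not published the internals of its Chained Fingerprint algorithm, but the general technique it describes – a hash chain, where each frame's fingerprint incorporates the previous frame's fingerprint – can be sketched in a few lines. The function names, the SHA-256 choice, and the byte-string "frames" below are illustrative assumptions, not IDIS's implementation:

```python
import hashlib

def chain_fingerprints(frames, seed=b"export-session"):
    """Build a hash chain over frames: each fingerprint depends on the
    frame's own bytes AND the previous fingerprint, so altering any
    frame breaks every subsequent link in the chain."""
    fingerprints = []
    prev = hashlib.sha256(seed).digest()
    for frame in frames:
        fp = hashlib.sha256(prev + frame).digest()
        fingerprints.append(fp)
        prev = fp
    return fingerprints

def verify_chain(frames, fingerprints, seed=b"export-session"):
    """Recompute the chain at export time; any mismatch flags tampering."""
    return chain_fingerprints(frames, seed) == fingerprints

# Hypothetical frame data standing in for raw pixel buffers
frames = [b"frame-0-pixels", b"frame-1-pixels", b"frame-2-pixels"]
fps = chain_fingerprints(frames)
assert verify_chain(frames, fps)            # untouched footage passes
tampered = [frames[0], b"edited-frame", frames[2]]
assert not verify_chain(tampered, fps)      # any edit breaks the chain
```

The design point is that chaining, unlike hashing each frame independently, also detects frame deletion, insertion, or reordering, since each fingerprint is bound to its predecessor.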

“As organizations look to upgrade or invest in new video solutions, protecting themselves against claims of video evidence tampering should be high on their priority list,” Kim adds.

With COVID foremost on the minds of most people, security pros included, what was once a dire threat has moved to the back burner; however, as it has with so many other things, COVID has opened up a new dimension even for deepfakes.

“With increased video conferences and remote work collaboration, attackers applying deepfake technologies on live, real-time collaboration is a very real possibility,” says Kowsik Guruswamy, CTO of Menlo Security, who warns that deepfakes are the new “phishing lures” for unsuspecting workers. “With more distributed team members who may be less familiar with their fellow co-workers, this is a ripe opportunity for threat actors to extrapolate confidential information in what seems like a real video call.”
