How to Avoid Bias in AI-Based Video Analysis

Aug. 9, 2023
Anomaly detection software eliminates PII and identifies incidents based solely on statistical data

This article originally appeared in the August 2023 issue of Security Business magazine.

The news cycle is buzzing with artificial intelligence (AI) and the power it holds for society, simplifying everything from finding the shortest commute to solving complex problems. As applications of AI and machine learning continue to expand, organizations interested in deploying the technology are asking themselves: How do we leverage AI and machine learning in an ethical way?

This is an especially important question as it relates to AI applications in video security. AI has the potential to offer extensive security advantages in both public and private safety applications. Facial recognition, for example, uses AI to help locate known criminals or persons of interest captured in video streams. However, these solutions constantly walk the line between ethics and crime deterrence. When an AI application (sometimes referred to as “the machine”) consistently gives different outputs for one group of people compared to another, this is known as bias in AI, and it is more common than you’d think.

Risks of Bias in Video Security AI

While AI and machine learning have the potential to bring new security and business intelligence opportunities to both consumers and businesses, the historical data on which these algorithms are often built carries inherent biases. In the case of facial recognition, inequity in recognition accuracy and other biases are well documented. This leads to biased outcomes for different groups, particularly those often underrepresented in AI training data, such as minority groups and women. These biases have prompted many cities and states to ban the use of facial recognition in government applications over the past few years.

The stakes of such biases in AI systems are high, posing real risk to the people the technology is applied to. Examples include false arrests due to mistaken identity and profiling within airports and other public venues. Many AI-powered applications also store personally identifiable information (PII) to compare against current findings, which, when not properly secured, can be a treasure trove for bad actors. For organizations deploying these technologies, such “mistakes” or database breaches could be costly, ranging from hefty financial penalties to damaging public scrutiny, lawsuits, and legal judgments.

These risks have many security decision-makers reconsidering the use of AI in their security plans, particularly facial recognition, despite the security value it may have. While these concerns are not unfounded, it is important not to condemn AI or machine learning entirely when considering security applications.

Anomaly Detection

Artificial intelligence takes many forms, and leveraging it in a way that is both ethical and low-risk is not only possible but potentially revolutionary for security operations.

Anomaly detection uses AI in a way that is not subject to bias or profiling. Rather than matching people against a database, AI-powered anomaly detection employs statistics-based machine learning algorithms that continuously adapt to the events in a specific video stream.

For example, if a person falls or is lying on the ground in an office, retail outlet, or other location where this type of activity is statistically uncommon, the software identifies it as an atypical event and, in real time, automatically notifies designated personnel or triggers a Standard Operating Procedure (SOP) to address the potential problem.
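To make the statistical approach concrete, here is a minimal Python sketch of one way a frequency-based detector could flag rare events in a single stream. The class name, thresholds, and event labels are illustrative assumptions, not the actual ASTRA implementation; the point is that the baseline of “normal” is built from event statistics alone, with no stored images or personal identifiers.

```python
from collections import Counter

class FrequencyAnomalyDetector:
    """Flags event types that are statistically rare in a given stream.

    Illustrative only: the baseline is learned from event counts,
    not from stored images or personal identifiers.
    """

    def __init__(self, warmup=500, rarity_threshold=0.01):
        self.counts = Counter()     # how often each event type has been seen
        self.total = 0              # total events observed in this stream
        self.warmup = warmup        # observations before alerts are raised
        self.rarity_threshold = rarity_threshold  # relative-frequency cutoff

    def observe(self, event_type: str) -> bool:
        """Record one event; return True if it is anomalous for this stream."""
        self.counts[event_type] += 1
        self.total += 1
        if self.total < self.warmup:
            return False            # still learning what "normal" looks like
        frequency = self.counts[event_type] / self.total
        return frequency < self.rarity_threshold


detector = FrequencyAnomalyDetector()

# Simulated event labels produced by an upstream video-analytics model.
stream = ["person_walking"] * 400 + ["person_standing"] * 200 + ["person_prone"]

for event in stream:
    if detector.observe(event):
        # In a deployed system this would notify personnel or trigger an SOP.
        print(f"Anomaly detected: {event}")
```

A deployed system would model far richer signals (object tracks, dwell time, time of day), but the output would still be a statistical outlier in the scene, not an identified person.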

AI-powered anomaly detection is completely unbiased, with no profiling or infringement on personal privacy, because it does not record or store any images or personal information as a point of reference. It does not use complexion, gait, or perceived gender to establish its baseline of normal activity.

This unique form of artificial intelligence for security applications is focused on detecting anomalous conduct and behavior based on statistical data analysis, not personally identifiable information. As such, anomaly detection occurs without the need for human judgment, which historically has also been a major source of bias. With no stored database of identifiable information, the risk and liability associated with data breaches are nearly eliminated.

Anomaly detection solutions lend themselves well to nearly any public or private environment – including manufacturing and other business intelligence applications – while removing the potential for personal privacy issues and liabilities. They enable an organization to confidently walk the line between ethics and deterrence, and between privacy and security, without fear that its system is unjustly biased or increasing its exposure to risk.

Ken LaMarca is CEO of Active Intelligence, a provider of the ASTRA anomaly detection solution. Learn more about the product at www.securityinfowatch.com/21296023.