The security industry's AI conundrum

July 29, 2019
Why balancing innovation and ethics in today's video surveillance landscape is paramount

There’s no way around it: innovation in security centers on intelligence gathering. Achieved through the analysis and sorting of various data points, this intelligence is used not only to indicate potential risk factors for an organization (and, in many cases, anticipate threats), but also to streamline processes that can bog management down. None of this is possible, however, without applying rules to the collected data in an effort to make sense of it all.

Enter artificial intelligence (AI). AI refers to systems designed to take cues from their environment and, based on those inputs, solve problems, assess risks, make predictions and take actions. In the security world, much of the information used to make decisions and take action comes from Internet of Things (IoT) devices, including video cameras and access control systems.

AI innovation, however, is not without controversy. Recently, San Francisco passed a ban that prohibits law enforcement and other government agencies from using facial recognition technologies, citing potential abuse and privacy concerns. Data scientists have also raised concerns about bias, pointing to real-world examples of racial and gender bias in AI-assisted hiring decisions. The ethical implications of the technology, and of the data it analyzes, are now central to the broader discussion of innovation in the field.

It is possible, however, to balance innovation with the ethical use of AI in security software development by understanding the technology’s potential, determining what is considered “fair game” when collecting data points, and establishing clear rules to govern the use of that data.

The Potential of AI in Security Applications

AI can be used to help make sense of data feeds from many sources. In security, this data can come from anywhere, including video and other IoT devices, and can be used to ensure that security operators receive only actionable alarms instead of having to manually sift through everything coming in. Not only does this save a significant amount of time and resources, it also improves an operator's efficiency and response time. To be more specific, say many people enter and exit through a set of doors in a building. In one instance, three people enter but only two badges are scanned. By analyzing and correlating multiple sensors, the system can tag this event with a higher urgency for an operator, helping to identify potential threats.
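To make that example concrete, here is a minimal sketch in Python of how such a cross-sensor check might work. The event fields, threshold, and priority labels are all hypothetical, not taken from any particular product:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event combining two sensor feeds at the same door:
# a video analytic that counts people passing through, and the access
# control system reporting how many badges were scanned.
@dataclass
class DoorEvent:
    door_id: str
    timestamp: datetime
    people_counted: int   # from video analytics
    badges_scanned: int   # from the access control system

def alarm_priority(event: DoorEvent) -> str:
    """Escalate when more people pass through than badges were scanned
    (possible tailgating); otherwise treat the event as routine."""
    unbadged = event.people_counted - event.badges_scanned
    if unbadged <= 0:
        return "routine"
    return "high" if unbadged >= 2 else "elevated"

# The example from the text: three people enter, two badges scanned.
evt = DoorEvent("lobby-east", datetime.now(), people_counted=3, badges_scanned=2)
print(alarm_priority(evt))   # -> "elevated"
```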

AI can also correlate seemingly unrelated events to surface insights or discover patterns with a broader scope. For example, certain traffic patterns at some locations in a city could be matched with anomaly conditions that occur irregularly at other locations. AI systems can discover these correlations and make predictions that, for example, better facilitate traffic control.
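As a rough illustration of the underlying idea, the sketch below computes a simple Pearson correlation between two invented event series, the kind of signal a system might surface before building a full predictive model. All names and numbers here are assumptions for illustration only:

```python
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equally long series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented hourly figures: vehicle counts at one intersection, and
# anomaly alerts logged at a different location across the city.
traffic = [120, 180, 240, 310, 290, 150]
alerts = [2, 3, 5, 7, 6, 2]

# A strong correlation suggests the two feeds are worth modeling
# together when predicting congestion or planning traffic control.
print(f"correlation = {pearson(traffic, alerts):.2f}")
```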

In a broader sense, AI can be leveraged to identify potential diseases in screening tests, such as magnetic resonance imaging (MRI) scans, or to predict the position and movement of objects.

What Data Can be Used?

One of the biggest ethical questions being raised about AI concerns how the collected data is used. Video data in particular is being gathered constantly by both public and private entities: a person's presence in a retailer, hotel, airport terminal, or any number of other locations is being recorded. Much of this data is used strictly for investigative purposes in the event of an emergency or threat; however, more and more businesses are leveraging the information for additional insight, such as how individuals move around a location, how they interact with displays, and what share of the people who enter a store go on to make a purchase (conversion rate). All of this has the potential to be gathered through video data.

That being said, companies collecting this data for purposes beyond investigation have to treat the outputs of AI models ethically and ensure privacy safeguards are in place. This is the key to public acceptance of AI and data privacy: too many breaches, or the use of information to profile or target specific groups, will lead to distrust in the technology. If that trust is broken, organizations (and the developers working on the algorithms that make these determinations) risk being unable to realize the value AI can bring to any number of use cases.

Software developers typically build applications around models that have already been trained to harness the intelligence in various pieces of data. Even so, the data being collected and fed into these algorithms must be handled with care to ensure legal and privacy concerns are properly addressed.
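One common safeguard, sketched below under assumed names, is to pseudonymize direct identifiers with a keyed hash before records ever reach an analytics model, so events can still be correlated without exposing who they belong to. This is one illustrative technique, not a complete privacy program:

```python
import hashlib
import hmac

# Secret key kept outside the analytics pipeline (the name and storage
# are assumptions); rotating it severs any long-term link between
# pseudonyms and real identities.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (badge ID, plate number, etc.) with a
    keyed hash so records can be correlated without exposing the person."""
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"badge_id": "B-10442", "door": "lobby-east", "event": "entry"}
record["badge_id"] = pseudonymize(record["badge_id"])
print(record)   # the model sees a stable pseudonym, never the raw badge ID
```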

Establishing Rules and Regulations

With the exception of San Francisco's recent ban on facial recognition, regulation, oversight, and the development of ethics boards have largely happened on an ad hoc basis. Google and Microsoft, for example, have been at the center of discussions about AI ethics, conducting internal ethical reviews but operating without much external oversight of how their AI is used.

In an effort to address the ethics of AI development, higher education institutions such as Harvard University have worked ethical boundaries into students' training curricula. Harvard Law School's Berkman Klein Center, together with the MIT Media Lab, created the Ethics and Governance of AI Initiative to integrate an ethics curriculum. But as with any new and emerging technology, this is only the beginning. Tech companies are beginning to see the value in standards of practice, entering into code-of-conduct agreements that govern the use of data in AI development.

As AI matures, more ethics codes will be formulated to help reinforce trust in companies that use identifiable information. These practices and guidelines are often validated through process reviews, internal and external ethics reviews and panels, and personnel training. Organizations that do this can harness the power of AI for the business while still adhering to best practices and the ethical use of data.

Broader state and federal regulation will come in time, but in the meantime, it's up to private entities to take the ethical handling and treatment of data into consideration when implementing AI-driven technology.

Moving Into the Future with AI Development

Businesses stand to gain from the day-to-day use of AI-driven data points, which can bring insights such as anomalies and unusual patterns to the forefront of decision-making. Powerful algorithms can be thoughtfully designed and implemented, but they are still trained on imperfect, flawed and unpredictable real-world examples, which can make them biased.

Despite AI's potential, the risks grow when companies deploy systems that identify patterns using these biased models. Unethical use of AI can reduce efficiency, increase costs, damage a company's reputation and, even worse, adversely affect people's interests and welfare.
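One concrete way to begin auditing for such bias, sketched below with invented data, is to compare a model's flag rates across groups, a rough disparate-impact style check. Real audits would go much further, but even this simple ratio can surface a model that merits closer review:

```python
from collections import defaultdict

# Invented records: each has a demographic group label and whether the
# model flagged the event for operator review.
records = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "A", "flagged": False}, {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
]

def flag_rates(rows):
    """Fraction of events flagged, broken out per group."""
    totals, flags = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r["group"]] += 1
        flags[r["group"]] += r["flagged"]
    return {g: flags[g] / totals[g] for g in totals}

rates = flag_rates(records)
# A ratio far below 1.0 means one group is flagged disproportionately
# often, a signal that the model and its training data need an audit.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio = {ratio:.2f}")
```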

While security stands to benefit from the development of this technology, a critical first step for any manufacturer performing this kind of data analysis is to establish a code of conduct that draws on ethical norms and standards to govern the development of software platforms. With that guidance, software developers and practitioners must strive to strike a balance between innovation and ethics; by adhering to privacy laws, avoiding bias, and establishing rules and internal ethics reviews, this can be accomplished now and into the future.