This article originally appeared in the January 2024 issue of Security Business magazine.
Nobel Peace Prize recipient Christian Lous Lange said: “Technology is a useful servant but a dangerous master.” That phrase is being put to the test today, as artificial intelligence and biometrics are just two of the many technologies reshaping professions and everyday life.
My law firm receives daily solicitations from companies seeking to sell us the latest legal technology, such as AI-powered legal research, drafting tools, and e-discovery platforms. In mid-2023, a New York lawyer used the AI tool ChatGPT for legal research. While unconventional, there is nothing inherently wrong with using such a tool for research, provided the lawyer confirms the accuracy of what it produces. He did not.
The case involved a man suing an airline over an alleged personal injury. The lawyer submitted a brief citing several prior court decisions. Defense counsel for the airline alerted the judge that they could not locate several of the cited cases, and the court determined that at least six were not real: they were fabricated by ChatGPT, complete with fictitious quotations. The brief thus contained fabricated information that could not properly be submitted in a court filing.
The lawyer claimed he was unaware that content generated by ChatGPT could be false. That was foolishly naïve, and it resulted in disciplinary proceedings against him for failing to meet professional standards.
EU Leads the Way on Regulation
The security industry – even more than the legal industry – is heavily reliant on technology. AI and biometrics present great opportunities, but also great risks. How will these technologies advance in the security industry? How can we regulate these technologies without hindering innovation?
In the U.S., regulation of AI has not taken shape; however, regulation of biometrics is developing, with various states and cities limiting the use of biometric technologies.
The U.S. is lagging behind Europe, which has been working since 2021 to devise a regulatory framework for AI and biometrics. In December 2023, the European Union made notable progress, passing a set of provisional rules governing the use of AI in biometric surveillance and AI systems such as ChatGPT. The rules are not specific to the security industry, but they undoubtedly encompass it. While many details remain to be resolved, this makes the EU the first major world power to enact laws governing AI.
Timothy J. Pastore, Esq., is a Partner in the New York office of Montgomery McCracken Walker & Rhoads LLP (www.mmwr.com), where he is Vice-Chair of the Litigation Department. Before entering private practice, Mr. Pastore was an officer and Judge Advocate General (JAG) in the U.S. Air Force and a Special Assistant U.S. Attorney with the U.S. Department of Justice. Reach him at (212) 551-7707 or by e-mail at [email protected].