Legal Brief: Generative AI and Privacy Clash

Records of conversations with AI assistants could be used as evidence, creating a new data risk for you and your company.
Jan. 16, 2026

Key Highlights

  • Your AI chatbot conversations may not be private: A federal magistrate ordered OpenAI to produce 20 million anonymized ChatGPT logs in the New York Times copyright lawsuit—raising questions about whether personal or business discussions with AI assistants are discoverable in litigation or government investigations.
  • NYT lawsuit tests "fair use" vs. copyright infringement: The Times accuses OpenAI and Microsoft of training chatbots on millions of articles without permission, producing near-verbatim excerpts that threaten journalism revenue—defendants claim fair use, but discovery rulings suggest chat logs are fair game for evidence.
  • Security executives should audit AI privacy policies now: Even deleted or anonymized conversations may be preserved and discoverable—review how AI tools store, access, and protect data before revealing personal or company details to assistants like ChatGPT, Alexa, or work-related chatbots.

 

This article originally appeared in the January 2026 issue of Security Business magazine.

Technology is so amazing that you can have legitimate, nuanced conversations with computers every day. I use Alexa for home automation, general reference (news, sports, etc.), and sometimes silly discussions about my family and daily life.

It is striking how smart this technology has become, and how quickly. But are our private conversations and messages with Alexa or other chatbots, such as ChatGPT, subject to discovery by the government or by parties in civil litigation? The answer may be yes…and the privacy implications could be profound.

The New York Times vs. ChatGPT

In 2023, The New York Times sued ChatGPT creator OpenAI and Microsoft, accusing them of using millions of the newspaper’s articles without permission to help train chatbots to provide information to readers.


The lawsuit cited several instances in which OpenAI and Microsoft chatbots gave users near-verbatim excerpts of NYT articles, and it claimed that such infringements threaten journalism by reducing readers’ need to visit its website – reducing traffic and potentially cutting into revenue. It also said the defendants’ chatbots make it harder for readers to distinguish fact from fiction, including when ChatGPT falsely attributes information to the newspaper.

OpenAI and Microsoft claim that using copyrighted works to train AI products amounts to “fair use,” a legal doctrine governing the unlicensed use of copyrighted material.

For now, the merits of the lawsuit remain undecided; no winner or loser has been declared. However, the court's underlying discovery rulings are making waves. In a decision rendered in early December 2025, a federal magistrate judge ordered OpenAI to produce 20 million anonymized chat logs from ChatGPT users in the litigation.

The Times argues that the logs are necessary to determine whether ChatGPT reproduced copyrighted content. OpenAI counters that turning over the logs would disclose confidential user information.

The presiding magistrate judge concluded that 20 million logs maintained by OpenAI were relevant, and that handing them over would not risk violating users’ privacy; in fact, the magistrate ordered OpenAI to produce the logs after removing users’ identifying information. OpenAI adamantly asserts that the demand for the chat logs violates well-settled privacy protections and disregards common data security protocols, and it is appealing the magistrate’s decision.

Privacy and Risk Implications

While these issues play out in this litigation (and others to come), generative AI users should consider whether their otherwise private interactions with chatbots are truly private – even after they are deleted, and even when they are anonymized.

Do you reveal personal details to your AI assistant? How about company details? Consult the privacy policy of the AI tools you use, and understand how your data is stored, accessed, and protected. Even if you are not involved in a lawsuit or criminal investigation, records of your digital conversations might be preserved and could be discovered in a court of law.

For now, uncertainty prevails. Your data could be at risk, and the conversations you have with your favorite AI tool – from silly to meaningful – may extend beyond the confines of your phone or computer.

About the Author

Timothy J. Pastore, Esq.

Timothy J. Pastore, Esq., is a Partner in the New York office of Montgomery McCracken Walker & Rhoads LLP (www.mmwr.com), where he is Vice-Chair of the Litigation Department. Before entering private practice, he was an officer and Judge Advocate General (JAG) in the U.S. Air Force and a Special Assistant U.S. Attorney with the U.S. Department of Justice. [email protected]  •  (212) 551-7707

