Using Natural-Language Filtering to Speed Incident Response
Key Highlights
- Speed matters early: Natural-language filtering can help incident responders reach relevant data more quickly during the initial stages of an incident.
- Less friction, clearer decisions: Translating plain-language questions into structured, auditable filters reduces manual query building under pressure.
- Knowledge scales across teams: Visible, shareable filters support handoffs and help teams build familiarity with complex systems over time.
Cyber attackers are moving faster, often boosted by recent advances in artificial intelligence (AI). Defenders need to move just as fast, especially when responding to an ongoing security incident.
Generative AI and large language models (LLMs) get most of the spotlight, but a quieter shift has taken place: natural-language filtering (NL-filtering) has matured after years of steady work. Instead of asking a general chat model to guess, NL-filtering maps plain human language to each platform’s schema with precision you can audit. The payoff is attention spent on containment, not syntax.
Why NL-filtering now
In the first minutes of any incident, the key is to gather relevant data and take action. It is easy to lose focus as teams bounce between consoles, trying to recall how each field is labeled in each system and which values mean what. Stitching filters together demands context switching at a time when the pressure is on. It is just more toil.
NL-filtering lagged because early natural language processing matched words, not intent. Keyword rules broke on ambiguity, synonyms and domain terms: a query for “GilTab” would produce no match, even though you almost certainly meant “GitLab.” Given those limits and the cost of better models, teams mostly stuck with manual filters, which are cumbersome but predictable.
The foundation models underneath NL-filtering have changed. Retrieval keeps language grounded in live schemas, and useful slices of data can then be turned into shared views that assist the whole team.
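To make “grounded in live schemas” concrete, here is a minimal sketch, assuming a schema pulled from the platform and simple fuzzy matching standing in for a fuller retrieval pipeline. The field names and values are illustrative, not any particular product’s API.

    # Rough sketch: grounding free-form terms in a live schema before filtering.
    # The schema, field names and values here are assumptions for illustration.
    from difflib import get_close_matches

    LIVE_SCHEMA = {
        "integration": ["GitLab", "GitHub", "Bitbucket"],
        "environment": ["production", "staging", "development"],
        "severity": ["low", "medium", "high", "critical"],
    }

    def ground_term(term):
        """Map a possibly misspelled term to a known field and value, if one is close enough."""
        for field, values in LIVE_SCHEMA.items():
            match = get_close_matches(term, values, n=1, cutoff=0.6)
            if match:
                return field, match[0]
        return None

    print(ground_term("GilTab"))   # ('integration', 'GitLab')
    print(ground_term("critcal"))  # ('severity', 'critical')

A real system would pair this grounding step with a language model, but even this much is enough to turn “GilTab” into a usable filter term.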
From plain language to schema
NL-filtering is not a general chatbot. It is an interface that compiles intent to the fields, operators, and values each platform understands. State the need in everyday terms, as you would to a coworker. The system parses each piece, aligns it with the given schema, and shows you the translation.
As an example, imagine you are responding to a repository leak. You ask to see “high or critical incidents tied to GitLab repositories in production.” The system interprets your request and maps it to integration:GitLab, environment:production, and severity:high|critical.
You do not recall whether one console stores severity as numbers and another as strings. You still get a result you can defend, with the exact translation on screen and room to refine.
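To show what that translation can look like under the hood, here is a minimal sketch of compiling a parsed request into structured, auditable filter clauses. The schema, the compile_filter helper, and the pre-parsed intent are assumptions for illustration, not a specific platform’s implementation.

    # Minimal sketch: compiling a parsed request into structured, auditable filters.
    # The schema, field names and values are assumptions for illustration only.
    SCHEMA = {
        "integration": ["GitLab", "GitHub", "Bitbucket"],
        "environment": ["production", "staging", "development"],
        "severity": ["low", "medium", "high", "critical"],
    }

    def compile_filter(parsed_intent):
        """Turn {field: [values]} into filter clauses, validating each against the schema."""
        clauses = []
        for field, values in parsed_intent.items():
            if field not in SCHEMA:
                raise ValueError(f"Unknown field: {field}")
            unsupported = [v for v in values if v not in SCHEMA[field]]
            if unsupported:
                raise ValueError(f"Unsupported value(s) for {field}: {unsupported}")
            clauses.append({"field": field, "operator": "in", "values": values})
        return clauses

    # "high or critical incidents tied to GitLab repositories in production"
    parsed = {
        "integration": ["GitLab"],
        "environment": ["production"],
        "severity": ["high", "critical"],
    }
    for clause in compile_filter(parsed):
        print(clause)

Because the output is plain data, it can be displayed next to the results, which is what makes the translation auditable.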
Speed under pressure
Speed is not negotiable during a response. You are often racing an adversary to limit the blast radius they can reach. The faster you can filter down to the known dangers lurking in a breached system, the sooner you can make informed judgment calls on which actions to take and in what order. NL-filtering helps responders reach the first useful slice, then keeps momentum as you pivot.
Refinement stays in plain language. Each adjustment updates the visible translation so you remain oriented.
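A rough sketch of that refinement loop, assuming filters are held as a simple mapping and each plain-language adjustment arrives already parsed; the names and structure are illustrative only.

    # Sketch of refinement in plain language: each parsed adjustment merges into the
    # current filter set and the visible translation is re-rendered. Names are assumptions.
    def refine(filters, adjustment):
        """Merge a newly parsed constraint into the current filter set."""
        merged = dict(filters)
        merged.update(adjustment)
        return merged

    def render(filters):
        """Render the visible translation so the responder stays oriented."""
        return " AND ".join(f"{field}:{'|'.join(values)}" for field, values in filters.items())

    current = {"integration": ["GitLab"], "severity": ["high", "critical"]}
    # "only production, from the last 24 hours" arrives as a parsed adjustment (assumed)
    current = refine(current, {"environment": ["production"], "age": ["<24h"]})
    print(render(current))
    # integration:GitLab AND severity:high|critical AND environment:production AND age:<24h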
Speed compounds through learning. Because the translations are visible, you see which fields were invoked and how values were picked. Teams absorb the schema by doing the work. Future prompts get cleaner, and fewer edits are needed to reach the same clarity. The path from question to containment gets shorter.
Shareable, teachable workflows
Good incident work is a team sport with handoffs. A good platform should help with that by keeping histories and supporting shared views. When a useful view emerges, save it with a clear name and share it in the incident workspace. The saved view captures the compiled filters so teammates can pick up where you left off.
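One way a saved view might be captured, assuming a simple JSON shape with a clear name, the original prompt, and the compiled filters; the structure is illustrative rather than any specific platform’s format.

    # Sketch of a saved, shareable view: a clear name, the original prompt, and the
    # compiled filters, serialized for the incident workspace. The shape is assumed.
    import json
    from datetime import datetime, timezone

    saved_view = {
        "name": "prod-gitlab-high-critical",
        "created_at": datetime.now(timezone.utc).isoformat(),
        "prompt": "high or critical incidents tied to GitLab repositories in production",
        "filters": [
            {"field": "integration", "operator": "in", "values": ["GitLab"]},
            {"field": "environment", "operator": "in", "values": ["production"]},
            {"field": "severity", "operator": "in", "values": ["high", "critical"]},
        ],
    }
    print(json.dumps(saved_view, indent=2))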
Adoption lands best when it fits current habits. NL-filtering normally appears in the same filter bars a team uses today. Many platforms seed a small prompt library from real cases, encouraging teams to focus on the questions that reliably anchor scope. The goal is not to replace structured filters, just to reach them faster with less friction.
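For a sense of what a seeded prompt library can contain, here are a few assumed starter prompts of the kind that reliably anchor scope; the wording is illustrative, not shipped defaults.

    # Illustrative seed prompts for a prompt library; assumed examples, not shipped defaults.
    SEED_PROMPTS = [
        "open incidents in production from the last 24 hours",
        "high or critical incidents tied to GitLab repositories in production",
        "incidents assigned to the on-call team that are still unacknowledged",
        "repositories touched by the compromised account since the first alert",
    ]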
A better interface for judgment
Incidents vary, but the work is constant. Find the signal. Prove it is the right signal. Act. NL-filtering gives responders a better interface for that sequence. You talk to the system as you would a colleague, not to get a chatty answer, but to drive the console to the exact data you need. Plain language in. Auditable filters out. Rights enforced. The time you save in the first mile returns where it counts, in containment. That is the mark of maturity. The technology is no longer the story. The outcomes are.
About the Author

Dwayne McDaniel
Senior Developer Advocate at GitGuardian
Dwayne McDaniel is a senior developer advocate at GitGuardian. He has worked in developer advocacy roles since 2014 and has been involved in technology communities since 2005. His work focuses on helping technical teams better understand tools, workflows and emerging practices. He has presented at industry events worldwide, including academic and international venues.
