How to Deploy Self-Learning AI Behavioral Analytics Ethically — And Why
The Skinny
- Self-learning AI improves threat detection by adapting to evolving user behavior and uncovering anomalies missed by traditional static models.
- Scalability and efficiency are key benefits, as these AI systems can analyze large volumes of data without manual rule updates.
- Ethical oversight is essential, with a focus on minimizing bias, ensuring transparency, and protecting data privacy in AI-driven security tools.
It’s clear that organizations need artificial intelligence to fight AI in 2025, but what does this actually mean? Security teams need advanced tools to combat sophisticated threats leveraging AI to generate malicious code, craft convincing social engineering attacks and automate targeted campaigns.
These AI-powered attacks are not only striking organizations rapidly but are becoming increasingly tailored to individual companies and their unique vulnerabilities, amplifying their threat level.
Self-learning AI behavioral analytics offers a way to combat these evolving threats, but security teams must be mindful of its ethical risks and ensure these systems are properly vetted before widespread implementation.
Understanding traditional behavioral analytics
Many, if not all, cybersecurity professionals are familiar with user and entity behavioral analytics (UEBA). Behavioral analytics provides a people-centric defense, using machine learning algorithms to analyze user and entity data across an organization and establish a baseline of normal behavior.
That baseline is built from user profiles assembled from various sources of data, such as devices, IP addresses, login times and files accessed. This enables behavioral analytics to provide context-dependent security, drawing on environmental factors such as time of day, user job roles, collaboration patterns and location-specific behavior.
Traditional behavioral analytics rely on static baselines that require manual updates. When properly defined and maintained, they can quickly identify anomalous behavior and detect sophisticated attacks, such as insider threats.
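As a rough illustration of how such a static baseline works, the sketch below compares a login event against a manually maintained per-user profile. The field names, thresholds and profile values are hypothetical, not taken from any particular UEBA product.

```python
from datetime import datetime

# Hypothetical, manually maintained profile for one user: the "static baseline".
BASELINES = {
    "alice": {
        "usual_devices": {"laptop-0142", "phone-8821"},
        "usual_login_hours": range(7, 19),   # 07:00-18:59 local time
        "usual_countries": {"CA"},
    }
}

def deviations_from_baseline(user, event):
    """Return the ways an event departs from the user's static baseline."""
    profile = BASELINES.get(user)
    if profile is None:
        return ["no baseline exists for this user"]
    reasons = []
    if event["device"] not in profile["usual_devices"]:
        reasons.append("unfamiliar device")
    if datetime.fromisoformat(event["time"]).hour not in profile["usual_login_hours"]:
        reasons.append("login outside usual hours")
    if event["country"] not in profile["usual_countries"]:
        reasons.append("unusual location")
    return reasons  # an empty list means the event matches the baseline

print(deviations_from_baseline(
    "alice",
    {"device": "kiosk-77", "time": "2025-03-02T03:14:00", "country": "CA"},
))
```

The limitation is visible in the code itself: every set and range has to be edited by hand whenever a user's legitimate behavior changes.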
The evolving challenge of insider threats
Insider threats are becoming harder to detect as attackers increasingly combine human creativity with AI to blend malicious activity into normal behavior. Research puts the average annual financial impact of insider threats at $16.2 million per organization.
Traditional behavioral analytics built on static baselines can catch some of these hidden threats, but they struggle to detect subtle, evolving or first-of-a-kind insider attacks. In contrast, AI that learns on its own can adjust to changing behavior, reducing false positives and uncovering anomalies that static models might miss.
The power of self-learning AI for behavioral analytics
Self-learning AI is more than the standard adaptive learning that occurs within machine learning methods. Instead, it refers to a system's ability to react to the shifting baselines of any complex enterprise. Employees regularly change jobs, bringing new tools into use and new information into reach, while organizational divisions may change their strategic goals, making new behaviors the new normal. A self-learning system can adapt to these real-world changes without the manual tweaks and rule updates that traditional systems require (a minimal sketch of this idea follows the list of benefits below).
This adaptability unlocks the following benefits of self-learning AI in behavioral analytics:
Enhanced threat detection — Self-learning AI improves detection accuracy over time by constantly evolving with an organization’s unique environment and threat landscape. This allows for the detection of more complex risks that traditional systems might miss.
Scalability and efficiency — Behavioral analytics powered by self-learning AI can handle massive volumes of data at scale, whereas traditional behavioral analytics may struggle with scalability, especially as more rules are added. Hands-free anomaly detection enables organizations to reduce the resource costs that traditional systems typically require.
Bias mitigation — Self-learning AI better equips behavioral analytics to mitigate biases that can emerge from static datasets and rules. Baselines are personalized to evolve with the user, reducing the risk of unintended bias, discrimination and other limitations.
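One simple way to picture a self-adjusting baseline is an online model that re-estimates what "normal" looks like with every new observation, for example using exponentially weighted statistics. The sketch below is a minimal illustration of that idea; the feature, decay rate and scoring are invented for the example and do not describe any specific product's algorithm.

```python
import math

class AdaptiveBaseline:
    """Tracks a running mean and variance of one numeric feature per user
    (e.g., megabytes downloaded per day) and scores new values against it."""

    def __init__(self, decay=0.05):
        self.decay = decay   # how quickly the baseline follows new behavior
        self.mean = None
        self.var = 1.0       # scores are only meaningful once some history has been seen

    def score(self, value):
        """Return a z-score-like anomaly score, then fold the value into the baseline."""
        if self.mean is None:
            self.mean = value          # the first observation just seeds the baseline
            return 0.0
        anomaly = abs(value - self.mean) / math.sqrt(self.var + 1e-9)
        # Exponentially weighted update: gradual changes become the new normal,
        # with no manual rule edits.
        delta = value - self.mean
        self.mean += self.decay * delta
        self.var = (1 - self.decay) * (self.var + self.decay * delta * delta)
        return anomaly

downloads = AdaptiveBaseline()
for mb in [120, 130, 125, 118, 600]:   # the last value is a sudden spike
    print(round(downloads.score(mb), 2))
```

In practice the baseline would be warmed up on historical data before its scores are trusted, and a production system would track many features per user and entity rather than a single number.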
Ethical considerations for self-learning AI
While self-learning AI enhances behavioral analytics, its reliability depends on how well it is trained and governed. Without proper safeguards, AI can reinforce existing security blind spots and introduce risks such as bias, lack of transparency and data privacy concerns. As a result, ethical AI is becoming a global priority, as highlighted by the recent Hiroshima AI Process (HAIP) Reporting Framework, which promotes safe, secure and trustworthy AI models.
As with all AI models, biased training data will produce biased results. Skewed initial training data causes the model to develop a distorted understanding of normal behavior, creating ripple effects throughout its evolution and compromising accuracy over time. Human biases can also be inadvertently transferred to a model during training, leading to inaccurate or discriminatory outputs.
AI models may also lack transparency and explainability, which introduces additional risk. Many AI-driven security models function as "black boxes": it is nearly impossible to trace how they reach a decision, and that lack of visibility is especially dangerous in high-stakes security environments.
A lack of explainability leads to real-world ethical and compliance challenges, such as an inability to diagnose or explain why a security tool incorrectly flagged normal user behavior as malicious. Organizations need AI solutions that provide clear reasoning behind security decisions, enabling teams to validate outputs rather than blindly trust them.
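As one hedged illustration of what "clear reasoning behind security decisions" can look like, the sketch below attaches the top contributing features to an anomaly score so an analyst can see why a user was flagged. The feature names and contribution values are made up for the example and are not the output of any specific model.

```python
# Hypothetical per-feature contributions produced by an anomaly model for one flagged event.
contributions = {
    "login_hour": 0.05,
    "bytes_uploaded": 2.40,
    "new_destination_host": 1.10,
    "failed_logins": 0.15,
}

def top_reasons(contributions, top_n=2):
    """Return the features that contributed most to the anomaly score, with their share."""
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, round(value / total, 2)) for name, value in ranked[:top_n]]

# An alert that carries its own reasoning, so analysts can validate it rather than trust it blindly.
alert = {
    "user": "alice",
    "anomaly_score": round(sum(contributions.values()), 2),
    "top_reasons": top_reasons(contributions),
}
print(alert)
```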
Selecting AI solutions that prioritize ethics
To mitigate these risks, organizations should look for solutions that incorporate ethical principles, such as data minimization and privacy by design. AI models should only collect and process the data they truly need, reducing unnecessary exposure. Additionally, AI should be built with tiered access levels to ensure sensitive data is handled securely and in compliance with privacy regulations.
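A minimal way to express those two principles in code is to whitelist only the fields a model actually needs and to gate more sensitive fields behind access tiers. The sketch below assumes hypothetical field names and role tiers rather than any specific product's schema or any particular regulation's requirements.

```python
# Data minimization: the model only ever sees this small set of fields.
MODEL_FIELDS = {"user_id", "event_time", "device_id", "action"}

# Privacy by design: which fields a person can view depends on their access tier.
TIER_FIELDS = {
    "analyst": MODEL_FIELDS,
    "responder": MODEL_FIELDS | {"source_ip"},
    "auditor": MODEL_FIELDS | {"source_ip", "full_name", "email"},
}

def minimize(event, allowed_fields):
    """Drop every field that is not explicitly allowed."""
    return {k: v for k, v in event.items() if k in allowed_fields}

raw_event = {
    "user_id": "u-1042",
    "event_time": "2025-03-02T03:14:00",
    "device_id": "laptop-0142",
    "action": "file_download",
    "source_ip": "203.0.113.7",
    "full_name": "Alice Ng",
    "email": "alice@example.com",
}

print(minimize(raw_event, MODEL_FIELDS))              # what the model is trained on
print(minimize(raw_event, TIER_FIELDS["responder"]))  # what a responder can view
```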
As threat actors grow more sophisticated and machine learning algorithms continue to advance, behavioral analytics will become more essential and refined. While self-learning AI enhances traditional behavioral analytics by improving accuracy, scalability and bias mitigation, it's crucial to remain aware of the ethical concerns surrounding these models.
By responsibly leveraging self-learning AI within behavioral analytics tools, organizations can enhance proactive security measures and effectively detect insider threats and other anomalies.

Stephan Jou | Senior Director of Security Analytics
Stephan Jou is Senior Director of Security Analytics at OpenText Cybersecurity, where he leads efforts to apply AI and analytical methods to cybersecurity use cases. Jou was CTO and co-founder of Interset, where he developed a leading-edge, In-Q-Tel funded cybersecurity project using machine learning and behavioral analytics, before Interset was acquired by Micro Focus and then OpenText. Before OpenText, Jou was at IBM and Cognos, where he led the development of over ten products in the areas of cloud computing, mobile, visualization, semantic search, data mining and neural networks.

Maria Pospelova | Senior Manager of AI & Data Science
Maria Pospelova is Senior Manager of AI & Data Science at OpenText Cybersecurity, where she serves as Principal Data Scientist. She was previously a Senior Security Data Scientist at Interset, which was acquired by Micro Focus and later became part of OpenText. Before joining OpenText, Pospelova was a developer at Bedarra Research Labs, where she supported both front- and back-end development. With deep expertise in applying data science to the cybersecurity domain, she takes an active role in the development and innovation of OpenText's technology, authoring several patents and research papers in both fields.