Why human oversight is essential to mitigate emerging risks

May 1, 2025
As organizations race to integrate AI, security leaders must ensure human control remains central to prevent sabotage, privacy breaches, and system failures.

Artificial Intelligence is moving faster than any technology we’ve seen. From weather forecasts to healthcare to the employee experience, AI is being integrated into our lives from every direction. It’s also a catalyst for huge investments in the tech landscape: OpenAI recently poured almost $12 billion into AI infrastructure startup CoreWeave, and Nvidia is investing several hundred billion dollars in AI chipmaking in the US.

With all this momentum, taking a step back and assessing the less favorable effects of AI implementation can be challenging. But that’s precisely what I’m asking you to do. This new technology is advancing at an unforgiving pace and has the potential to snowball out of control, especially where security is concerned.

The key to utilizing AI safely is ensuring that humans retain ultimate control over every aspect of it. We have a way to go there: In 2024, Workday reported that 70% of leaders believe AI should be developed in a way that easily allows for human review and intervention, yet 42% of employees think companies don’t have a clear understanding of which systems should be fully automated and which require human intervention.

Security Pitfalls of AI Implementation

As many have discussed, hackers use AI to aid in their traditional malware, ransomware, and phishing attacks. The difference is that these attacks are now more advanced and much faster, leaving cybersecurity teams playing catch-up. However, an often-overlooked issue is how threat actors use the AI tools within their targets' systems to their advantage. Some find their way around guardrails meant to contain AI by using tactics like prompt injection attacks, tricking the AI into revealing restricted information or producing other undesirable outcomes.
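One basic layer of defense against prompt injection is screening user input before it ever reaches the model. The sketch below is a minimal, illustrative pattern filter; the pattern list and routing function names are my own invention, and a production guardrail would combine this with model-based classifiers and strict separation of system instructions from user content.

```python
import re

# Illustrative denylist of phrases commonly seen in prompt-injection attempts.
# This is a sketch, not an exhaustive or production-grade list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system )?prompt",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def route(user_input: str) -> str:
    """Send flagged inputs to human review instead of the model."""
    return "human_review" if looks_like_injection(user_input) else "model"
```

The important design choice is the fallback: suspicious input is escalated to a person, keeping a human in the loop rather than letting the filter silently decide.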

Model poisoning is also a concern. Model poisoning is when cybercriminals or insider threats intentionally sneak corrupted or misleading data into an AI’s training process, essentially sabotaging it. This prompts the AI to make serious errors and produce incorrect output, while people using the AI have no idea it has been corrupted. In addition to hindering decision-making, this could seriously undermine an organization’s cybersecurity if the tool supports proactive threat detection and response capabilities.
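One simple human-oversight control against poisoning is to have a person review and approve a training dataset, record its digest, and then refuse to train on data that no longer matches. The function names below are illustrative; this is a minimal integrity-check sketch, not a full data-provenance system.

```python
import hashlib

def sha256_of(records: list[bytes]) -> str:
    """Hash training records in order, so any tampering changes the digest."""
    h = hashlib.sha256()
    for record in records:
        h.update(record)
    return h.hexdigest()

def verify_dataset(records: list[bytes], trusted_digest: str) -> bool:
    """Refuse to train unless the data matches the digest recorded when
    a human reviewed and approved the dataset."""
    return sha256_of(records) == trusted_digest
```

A training pipeline would call `verify_dataset` before every run and halt, alerting a human, on any mismatch; detecting subtler statistical poisoning requires additional outlier analysis beyond a checksum.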

Privacy is another often-overlooked aspect of AI. How many people read the privacy agreements before checking the box? I’d assume not many. It’s always important to know where your data is stored and who can access it, but even more so if it is sensitive, such as trade secrets, medical records, or Personally Identifiable Information (PII). Just as important as checking what AI tools are doing with your data is checking on the cloud providers that are storing related data.

Moving Forward with AI Safely

AI can be a wonderful and productive technology if we don’t let the moment’s excitement overshadow precautionary common sense. Moving forward with AI implementation securely and sustainably is possible if humans are included from start to finish. 

We cannot predict everything AI can or will be able to do. Even if we set guardrails on the agency of AI tools, it is still essential to include a fail-safe mechanism with human control to prevent disasters and unwanted surprises. This could look like a hard-wired “turn off” button managed by an experienced employee with full veto power over the AI agent or tool, available around the clock.
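In software terms, that “turn off” button is a kill switch the agent consults before every action, with no ability to override it. The sketch below is one minimal way to structure this; the class and function names are illustrative, not a reference to any particular agent framework.

```python
import threading

class KillSwitch:
    """A hard stop that a designated human operator can flip at any time."""
    def __init__(self):
        self._stop = threading.Event()

    def engage(self):
        """The operator's veto: no conditions, no appeals."""
        self._stop.set()

    def engaged(self) -> bool:
        return self._stop.is_set()

def run_agent(actions, kill_switch: KillSwitch) -> list:
    """Execute agent actions, checking the kill switch before each step."""
    done = []
    for action in actions:
        if kill_switch.engaged():
            break  # halt immediately; no further actions are taken
        done.append(action())
    return done
```

The key property is that the check happens before each step and the switch is owned by a human, so the agent cannot reason its way past it.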

Another side effect of AI implementation is a massive increase in data utilization and storage needs to train and house all these AI models. This is causing out-of-control and unpredictable cloud storage costs for enterprise IT teams. To compensate for this, organizations are cutting costs by resorting to subpar storage practices, such as frequently placing critical data in cold storage tiers without enabling Object Lock for protection. This opens organizations to downtime and data loss from cyberattacks or other outages: 98% of organizations leveraging cold storage tiers face performance degradation and penalties.
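For teams that do keep backups in object storage, enabling Object Lock makes those objects immutable for a fixed retention window, so ransomware cannot encrypt or delete them. The sketch below builds the extra arguments that an S3-compatible client (for example, boto3's `put_object`) accepts for this; the bucket name, key, and retention period are placeholder values for illustration.

```python
from datetime import datetime, timedelta, timezone

def object_lock_params(bucket: str, key: str, retention_days: int) -> dict:
    """Build upload parameters that make a backup object immutable
    until the retention date passes (S3 Object Lock compliance mode)."""
    return {
        "Bucket": bucket,
        "Key": key,
        # COMPLIANCE mode: the lock cannot be shortened or removed, even
        # by the account administrator, until the retention date passes.
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": datetime.now(timezone.utc)
        + timedelta(days=retention_days),
    }
```

Note that Object Lock must also be enabled on the bucket at creation time; these per-object parameters alone are not sufficient.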

Instead, IT teams should consider moving some of this data back on premises, avoiding hidden cloud costs and those pesky access barriers. They should also strongly consider applying Zero Trust principles to their backup environments, just as they would with the rest of the IT infrastructure, to ensure data remains uncorrupted and recoverable. With these tips, businesses can free up IT resources to meet AI pressures while ensuring resilience and keeping pace with the boom in AI technology.

 

About the Author

Geoff Burke | Community Manager at Object First

Geoff Burke is the Community Manager at Object First and has over 20 years of experience in IT, with 12 years of experience in Data Protection. He is VMCE and VMCA certified, as well as Kubernetes CKA and CKAD. Burke is active in IT Communities as a Veeam Vanguard, Veeam Legend, Tanzu Vanguard, and Calico Big Cat Ambassador. Geoff’s enthusiasm for technology is contagious, and his energy and passion for the subject inspire others to expand their knowledge and interests.