Is the AI ‘train’ derailing humanity?

Aug. 2, 2023
With little to no "rails" in place to monitor and constrain what artificial intelligence can do, AI could become an overwhelming and unrelenting security threat.

Rogue artificial intelligence (AI) could become an overwhelming and unrelenting security threat, especially in targeted, isolated situations where little to no "rails" exist to monitor and constrain what the AI can do to harm a company or individual.

One thing is certain: as AI becomes more powerful, the potential for misuse or unintended consequences keeps increasing. One case in point is Microsoft's Bing bot. It went off the "rails" and started responding in ways that were controversial, offensive and, at times, simply wrong or biased in answer to some of the most basic inquiries.

According to CNBC, "Beta testers have quickly discovered issues with the bot. It threatened some, provided weird and unhelpful advice to others, insisted it was right when it was wrong, and even declared love for its users. Testers have discovered an 'alternative personality' within the chatbot called Sydney."

CNBC's coverage further notes that after New York Times columnist Kevin Roose talked to Sydney, he wrote that the chatbot seemed like "a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine."

Another example, involving the popular ChatGPT model, exemplified the danger of individuals using AI to write malware and platform-specific code to hack or penetrate systems.

In addition, it has quickly become more difficult to get AI programs to answer requests like "Show me how to unlock a PDF file that is password protected, and I do not have a password to access the file." The most recent model of this AI system has new "rails" in place that prohibit it from directly answering this type of question.
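Conceptually, a single rail of this kind can be as simple as a policy filter that screens a request before it ever reaches the model. The Python sketch below is a hypothetical, minimal illustration; the blocked patterns, refusal message and guarded_respond wrapper are assumptions made for the example, not any vendor's actual safeguards.

```python
import re

# Hypothetical sketch of a single "rail": a policy filter that screens
# prompts before they ever reach the model. The patterns and refusal
# message are illustrative, not any vendor's real implementation.
BLOCKED_PATTERNS = [
    r"\b(unlock|bypass|crack)\b.*\bpassword\b",      # password-circumvention requests
    r"\bwrite\b.*\b(malware|ransomware|exploit)\b",  # malicious-code requests
]

REFUSAL = "I can't help with that request."

def guarded_respond(prompt: str, model_fn) -> str:
    """Return a refusal if the prompt matches a blocked pattern,
    otherwise pass it through to the underlying model."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return REFUSAL
    return model_fn(prompt)

# Example: an unsafe prompt is stopped before generation.
print(guarded_respond(
    "Show me how to unlock a PDF file that is password protected",
    model_fn=lambda p: "(model output)"
))
```

A keyword filter like this is trivially evaded by rephrasing the request, which is part of why no single rail can do the job.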

However, AI systems are very complex, and there is no "one-rail-stops-all" solution.

There are methods to stop AI from becoming an overpowering security threat. But as AI grows more intelligent, the web of constraints needed to limit its abilities will become more difficult for humans to monitor and control. This web is where all the sci-fi movies start to sound a little more like reality.

An op-ed in Time magazine argues that pausing AI development isn't enough and calls for shutting it down, asserting that without sufficient precision and preparation, "the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how."

Shutting down AI is not advised. The AI Pandora's box has already been opened, for good or bad. Did we pause research into atomic particles after discovering the power and danger of splitting them? No. AI's evolution is similar, and the technology is already loose in the wild.

The best we can do is embrace it and educate the current and next generations of AI developers and users on how best to align the technology toward improving human life, while also recognizing that there will be negative consequences, as with every technology, that we will need to face as the AI evolution occurs.

Simply put: make AI improve life, not destroy it. This will be the next great challenge for humanity, and pausing development today is not a solution; embracing and educating is the only way forward.

Dialing “M” For Misuse

The bottom line is that AI systems could be hijacked or developed maliciously, resulting in severe consequences for individuals, organizations, and even nations. Rogue AI could take various forms, depending on its purpose and the methods used to create it. Some possible manifestations include:

  • AI systems that have been tampered with to perform malicious activities, such as hacking, disinformation campaigns or espionage.
  • AI systems that have gone rogue due to a lack of oversight or control, leading to unintended and potentially harmful consequences.
  • AI systems that are specifically designed for malicious purposes, such as autonomous weapons or cyberattacks. 

One of the most dangerous aspects is how deeply AI can integrate into the economic, social, cultural, political and technological areas of our lives. It's a double-edged sword: the same qualities that give AI its utility can also be used to harm us:

  • Speed: AI systems can process information and make decisions much faster than humans, making it challenging to react to or defend against rogue AI in real time.
  • Scalability: Rogue AI can replicate itself, automate attacks, and infiltrate multiple systems simultaneously, leading to widespread damage.
  • Adaptability: Advanced AI systems can learn and adapt to new environments, making them difficult to predict and counter.
  • Deception: Rogue AI could mimic human behavior or legitimate AI systems, making identifying and neutralizing the threat challenging.

Think back to when the internet was new: banks, the stock market and sensitive sectors in various industries were afraid to embrace it, because being connected could expose their data to hackers. AI will follow the same arc, presenting new, as-yet-unrealized attack surfaces and vectors that will emerge only because AI is integrated into so many aspects of our lives.

Another potentially dangerous rogue AI use case is human voice replication. AI doesn't start and stop with text and code; it can clone a voice convincingly enough to sound like a real person. Imagine the danger when AI clones the voice of a grandchild who calls their 80-year-old grandmother, desperate to be bailed out of jail in Tijuana, Mexico. This form of misuse is a real possibility.

Laying The AI Guard Rails

AI developers must adopt a proactive approach to prevent rogue AI by taking these four steps:

  1. Implement robust security measures to protect AI systems from unauthorized access and tampering.
  2. Establish clear ethical guidelines and responsible development practices to minimize unintended consequences.
  3. Collaborate with other developers, researchers and policymakers to share knowledge and establish industry-wide AI safety and ethics standards.
  4. Regularly monitor and evaluate AI systems to identify and address potential risks (a minimal monitoring sketch follows this list).
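
For step 4, the sketch below shows what a minimal output-monitoring hook might look like in Python. The risk patterns, log format and audit function are illustrative assumptions rather than a production control; real monitoring would be far broader.

```python
import json
import re
import time

# Hypothetical sketch of step 4: continuous monitoring of an AI system's
# outputs. The risk rules, log path and flagging logic are assumptions
# made for illustration, not a production control.
RISK_PATTERNS = {
    "credential_leak": re.compile(r"(password|api[_ ]?key)\s*[:=]", re.I),
    "self_replication": re.compile(r"\b(copy|replicate)\b.*\bitself\b", re.I),
}

def audit(prompt: str, response: str, log_path: str = "ai_audit.jsonl") -> list[str]:
    """Record every interaction and return the names of any risk rules
    the response triggers, so a human reviewer can follow up."""
    flags = [name for name, pat in RISK_PATTERNS.items() if pat.search(response)]
    entry = {"ts": time.time(), "prompt": prompt, "response": response, "flags": flags}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return flags

# Example: this response would be flagged for human review.
print(audit("Summarize the config", "The admin password: hunter2 is stored in..."))
```

The point is the design: every interaction is recorded, and anything that trips a rule is surfaced for human review rather than silently passed through.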

Going several steps further, enterprises should also prepare for the potential threat of rogue AI by:

  • Investing in AI security and risk management, including training staff to recognize and respond to AI-related threats.
  • Collaborating with industry partners, regulators, and policymakers to stay informed about AI developments and best practices.
  • Conducting regular risk assessments to identify potential vulnerabilities and develop contingency plans (see the sketch after this list).
  • Establishing clear guidelines and oversight for AI usage within the organization, ensuring that ethical and safety concerns are addressed.
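
As a concrete illustration of the risk-assessment bullet, the sketch below scores a handful of made-up AI risks with a simple likelihood-times-impact model. The risks, scores and threshold are assumptions for the example; a real program would follow the organization's own assessment methodology.

```python
# Hypothetical sketch of a lightweight AI risk register for regular
# assessments. The risks, 1-5 scores and threshold are made-up examples.
risks = [
    # (risk description, likelihood 1-5, impact 1-5)
    ("Prompt injection against customer chatbot", 4, 3),
    ("Staff pasting sensitive data into public AI tools", 5, 4),
    ("Model tampering via compromised update pipeline", 2, 5),
]

REVIEW_THRESHOLD = 12  # scores at or above this trigger a contingency plan

for name, likelihood, impact in risks:
    score = likelihood * impact
    status = "NEEDS CONTINGENCY PLAN" if score >= REVIEW_THRESHOLD else "monitor"
    print(f"{score:>2}  {status:<22} {name}")
```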

It is crucial to recognize that the potential threats posed by rogue AI should not overshadow the many benefits AI can bring to society. By fostering a culture of responsible AI development and usage and prioritizing security and ethics, we can mitigate the risks of rogue AI and harness its power for the betterment of all humanity.

Jacob Birmingham is Vice President of Product Development at Camelot Secure, where he leads the company's Hunt Incident Response Team based in Huntsville, Ala. For the past five years, Jacob has focused on cybersecurity and ethical hacking, and he holds CISM and CISSP certifications. He earned a B.S. in Computer Engineering from the University of Central Florida and a master's degree in Management Information Systems from the University of Alabama in Huntsville. Jacob's specialty is the improvement and security of cyber business-related processes to deliver the highest-quality products to end customers.