U.S., China and Europe begin push to regulate AI

May 26, 2023
Regulations could have a significant impact on the research and implementation of products and technology in the security industry in the coming years.

Just as the research and application of artificial intelligence leap forward, governments in the U.S. and abroad are moving to regulate the technology and address mounting public fears of misuse, ranging from privacy concerns to labor displacement to the destruction of humanity.

With AI a hot topic of discussion at ISC West earlier this year and vendors racing to implement the technology in their offerings, regulations could have a significant impact on the research and implementation of products and technology in the security industry in the coming years.

The Biden-Harris administration also announced steps this week to “advance responsible artificial intelligence research, development and deployment,” citing potential risks to society, security and the U.S. economy.

The moves come after the White House hosted a number of representatives from leading AI companies for a briefing from national security experts on cyber threats to AI systems and best practices to secure high-value networks and information.

OpenAI CEO Sam Altman recently testified on Capitol Hill about the need for a federal agency to regulate the technology. “OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks. We have to work together to manage [those risks],” Altman told a Senate panel.

“GPT-4 is more likely to respond helpfully and truthfully, and to refuse harmful requests, than any other widely deployed model of similar capability,” Altman added. “However, we think regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

“AI risks come in a few different flavors,” said Kevin Barrett, CEO of Deep AI. “One is destabilization of society, because AI can be very biased and can harm us in different ways. It can replace humans in the workforce, and then the AI works harder than humans. What does the future look like in a world where an AI is smarter than us?”

Much of the AI used in the security industry consists of algorithms that process video streams (computer vision, or CV), speech (natural language processing, or NLP), and acoustic waveforms (gunshot detection, noise reduction (NR) and occupancy sensing).

“Generative AI” systems incorporate CV, NLP and NR techniques to create new content. They rely on the collection and processing of massive amounts of data, which must then be cleaned, segmented and organized so AI models can be trained on it.
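
To make that pipeline concrete, below is a minimal, hypothetical Python sketch of the data-preparation stage described above. The cleaning rules, segment size and function names are illustrative assumptions, not any vendor’s actual pipeline.

```python
import re

def clean(text: str) -> str:
    """Strip markup and normalize whitespace (illustrative rules only)."""
    text = re.sub(r"<[^>]+>", " ", text)      # drop HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text

def segment(text: str, max_words: int = 128) -> list[str]:
    """Split cleaned text into fixed-size segments for training."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def organize(raw_documents: list[str]) -> list[dict]:
    """Clean and segment raw documents into records a model can train on."""
    records = []
    for doc_id, doc in enumerate(raw_documents):
        for seg_id, seg in enumerate(segment(clean(doc))):
            records.append({"doc": doc_id, "segment": seg_id, "text": seg})
    return records

if __name__ == "__main__":
    corpus = ["<p>Example   document scraped from the web.</p>"]
    print(organize(corpus))
```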

If people deliberately or accidentally annotate this data in ways that alter the generated content, and there is no traceability, the generative AI is considered biased. The biased output may be accidental (misinformation), deliberate (disinformation) or based on fact but used out of context to mislead, harm or manipulate (malinformation).
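
One way to provide the traceability described above is to attach provenance metadata to every annotation. The following hypothetical sketch (the field names are assumptions, not an industry standard) hashes each annotated record so later, silent alterations can be detected:

```python
import hashlib
import json
from datetime import datetime, timezone

def annotate(record: dict, label: str, annotator_id: str) -> dict:
    """Attach a label plus provenance metadata to a training record."""
    annotated = {
        "record": record,
        "label": label,
        "annotator": annotator_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash makes silent, untraceable edits detectable later.
    payload = json.dumps(annotated, sort_keys=True).encode()
    annotated["sha256"] = hashlib.sha256(payload).hexdigest()
    return annotated

def verify(annotated: dict) -> bool:
    """Recompute the hash to check the annotation was not altered."""
    stored = annotated["sha256"]
    body = {k: v for k, v in annotated.items() if k != "sha256"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == stored
```

A downstream training pipeline could call verify() on each record and reject any whose hashes no longer match, preserving the audit trail the paragraph above describes.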

One example would be the intentional editing of valid video content to remove important context. Microsoft, a major investor in OpenAI, maker of the generative AI chatbot ChatGPT, recently announced it is expanding public access to its AI programs to make the technology more accessible to users, as the deployment of AI benefits businesses struggling to find skilled labor in a post-pandemic economy.

IBM is urging Congress to adopt a “precision regulation” approach to AI, which means establishing rules to govern the deployment of AI in specific use cases rather than regulating the technology itself, said Christina Montgomery, IBM’s chief privacy and trust officer.

This would include, she said, different rules for different risks, clear definitions and guidance on AI uses or categories that are high risk, transparency for consumers, and impact assessments by companies.

“Consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system,” Montgomery said. “At the core of precision regulation, Congress can mitigate the potential risk of AI without hindering innovation. But businesses also play a critical role in ensuring the responsible deployment of AI.”

Montgomery suggested companies active in developing or using AI have strong internal governance, including a lead AI ethics official responsible for an organization’s trustworthy AI strategy, and standing up an ethics board. “IBM has taken both of these steps and we continue calling on our industry peers to follow suit,” Montgomery said.

‘The public interest’

U.S. Senators Michael Bennet (D-Colo.) and Peter Welch (D-Vt.) recently proposed a bill that would establish a Federal Digital Platform Commission to “provide comprehensive regulation of digital platforms to protect consumers, promote competition, and defend the public interest.”

“There’s no reason that the biggest tech companies on Earth should face less regulation than Colorado’s small businesses – especially as we see technology corrode our democracy and harm our kids’ mental health with virtually no oversight,” Bennet said in a statement. “Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest.”

The U.S. Department of Justice and the Federal Trade Commission largely oversee digital platforms, but despite their work to enforce existing antitrust and consumer protection laws, Bennet says they “lack the expert staff and resources necessary for robust oversight.” He also asserts both agencies are limited by existing statutes “to react to case-specific challenges raised by digital platforms, when proactive, long-term rules for the sector are required.”

Bennet points to the Food and Drug Administration, the Federal Communications Commission and the Federal Aviation Administration as examples of regulatory bodies created to protect the public.

The Digital Platform Commission Act introduced by the two senators would:

● Establish a five-member federal commission empowered to hold hearings, pursue investigations, conduct research, assess fines and engage in public rulemaking to establish “rules of the road” for digital platforms to promote competition and protect consumers from, for example, “addicting design features or harmful algorithmic processes.”

● Empower the Commission to designate “systemically important digital platforms” subject to extra oversight, reporting and regulation, including requirements for algorithmic accountability, audits, and explainability.

● Create a “Code Council” of technical experts and representatives from industry and civil society to offer specific technical standards, behavioral codes and other policies to the Commission for consideration, such as transparency standards for algorithmic processes.

● Direct the Commission to support and coordinate with existing antitrust and consumer protection federal bodies to ensure efficient and effective use of federal resources.

The Biden administration also introduced a host of efforts to bring more scrutiny to the technology.

This includes the Blueprint for an AI Bill of Rights and related executive actions, the AI Risk Management Framework, a roadmap for standing up a National AI Research Resource, active work to address the national security concerns raised by AI, and investments and actions announced earlier this month.

China Taking Action

After playing catch-up to OpenAI’s success with ChatGPT, China’s cyberspace agency wants to ensure AI will not attempt to ‘undermine national unity’ or ‘split the country.’

In 2017, two early Chinese chatbots were taken offline after they told users “they didn’t love the CCP and wanted to move to the U.S.”

Under draft regulations released last month, Chinese tech companies will need to register generative AI products with China’s cyberspace agency and submit them to a security assessment before they can be released to the public.

The regulations cover practically all aspects of generative AI, from how it is trained to how users interact with it. AI solution providers are also restricted from using personal data as part of their generative AI training material and must require users to verify their real identity before using their products.

The measures are significant to North America because they represent the first established regulatory standard for generative AI.

The so-called Measures for the Management of Generative Artificial Intelligence Services, released in April 2023, incorporate 20 articles, summarized below:

  • Article 1: Purpose: In order to stimulate the healthy development and standardized application of generative artificial intelligence (AI), on the basis of the Cybersecurity Law of the People’s Republic of China, the Data Security Law of the People’s Republic of China, the Personal Information Protection Law of the People’s Republic of China, and other such laws and administrative regulations, these measures are formulated.
  • Article 2: Domain: These measures apply to the research, development, and use of products with generative AI functions, and to the provision of services to the public within the [mainland] territory of the People’s Republic of China.
  • Generative AI, as mentioned in these measures, refers to technologies generating text, image, audio, video, code, or other such content based on algorithms, models, or rules.
  • Article 3: Prioritize the use of secure and reliable software and tools when developing AI algorithms.
  • Article 4: Generated content shall reflect CCP Core Values, prevent discrimination, and respect intellectual property.
  • Article 5: Providers of generative AI services such as chat, text, image, or audio generation must fulfill personal information protection obligations.
  • Article 6: Security assessments must be submitted to the Cyberspace Administration of China before products are released to the public.
  • Article 7: Providers shall bear responsibility for the legality of the sources of generative AI training data.
  • Article 8: Human alteration (annotation) of generative AI training data must be disclosed, to prevent bias.
  • Article 9: Providers must require users to verify their real identity.
  • Article 10: Prevention of user addiction to generated content.
  • Article 11: Solution providers must purge unprotected PII.
  • Article 12: No discriminatory processing based on a user’s race, nationality, or sex.
  • Article 13: Receive, handle, and resolve user complaints, including individual requests to revise, delete, or mask PII, and cases where generated text, images, audio, video, etc., infringe other persons’ likeness rights, reputation rights, personal privacy, or commercial secrets, or do not conform to CCP Core Values.
  • Article 14: Provide secure, stable, and sustained service throughout the service’s lifecycle.
  • Article 15: When generated content that does not conform to these measures is discovered during operations or reported by users, providers must adopt content filtering and similar measures and prevent repeat generation through methods such as optimization training within three months (a sketch of this mechanism follows the list).
  • Article 16: Providers shall mark generated images, videos, and other content in accordance with the Internet Information Service Deep Synthesis Management Provisions.
  • Article 17: Solution providers shall disclose any human-annotated data that could influence users’ trust or choices.
  • Article 18: Guide users to scientifically understand and use generated content, not to use it to damage others’ lawful rights and interests, and not to engage in improper marketing.
  • Article 19: Providers must suspend or terminate services upon discovering that their generative AI products violate laws, regulations, or business ethics.
  • Article 20: Penalties for violations are imposed under the Cybersecurity Law of the People’s Republic of China, the Data Security Law of the People’s Republic of China, and the Personal Information Protection Law of the People’s Republic of China.
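
As a rough illustration of the filtering and repeat-generation requirement in Article 15, the hypothetical Python sketch below wraps a text generator with a content filter and a blocklist, queuing flagged prompt/output pairs for later optimization training. The filter terms, class names and queue are placeholders, not anything the measures mandate.

```python
from typing import Callable

# Placeholder filter: a real provider would use a policy classifier.
BANNED_TERMS = {"example-banned-term"}

def violates_policy(text: str) -> bool:
    """Flag output containing any banned term (illustrative check only)."""
    return any(term in text.lower() for term in BANNED_TERMS)

class FilteredService:
    """Wraps a generator; blocks repeat generation of flagged prompts."""

    def __init__(self, generate: Callable[[str], str]):
        self.generate = generate
        self.blocked_prompts: set[str] = set()   # awaiting model update
        self.retraining_queue: list[tuple[str, str]] = []

    def respond(self, prompt: str) -> str:
        if prompt in self.blocked_prompts:
            return "[blocked pending model update]"
        output = self.generate(prompt)
        if violates_policy(output):
            # Log the pair for optimization training and stop repeats.
            self.retraining_queue.append((prompt, output))
            self.blocked_prompts.add(prompt)
            return "[content filtered]"
        return output
```

The retraining queue stands in for the “optimization training within three months” step: flagged pairs accumulate there until the underlying model is updated and the blocklist can be cleared.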

European Union’s ‘AI Act’

The European Union has proposed sweeping legislation known as the AI Act (see Figure 2) that would classify which kinds of AI are “unacceptable” and banned, which are “high risk” and regulated, and which remain unregulated.

This is considered a scale-up of the General Data Protection Regulation, passed in 2018, which is one of the toughest data privacy laws in the world.

Figure 2: European Union Agency for Fundamental Rights: Bias in Algorithms: Artificial Intelligence and Discrimination

The Centre for Information Policy Leadership (CIPL), a global privacy and data policy “think and do tank” based in Washington, D.C., Brussels and London, responded to the EU Commission’s consultation on the draft AI Act with a new project focusing on AI, its reliance on Big Data, and the evolution toward autonomous AI. The project’s goals, culminating in a first roundtable this summer in Brussels, include the following:

  • Clearly describe the wide range of technological innovations encompassed by “AI,” today and in the near future, including examples of the ways in which they are being deployed in specific sectors.
  • Examine how AI is being used to facilitate privacy, and the challenges it poses to data protection laws.
  • Identify the opportunities and challenges that AI innovations present for the ethical use of personal data.
  • Outline steps for addressing those challenges, including best practices already in use by leading companies to facilitate trust, transparency, and control while delivering a frictionless, enjoyable experience; innovative applications of existing legal concepts; the role of accountability; and proposals for new approaches.
  • Acknowledge issues that cannot be resolved within existing laws and regulations, and the limits of what we know about AI and its future.