
U.S. Artificial Intelligence Regulations and National Security Risks

Updated: Nov 24

By: Phillip Paddock


Introduction

The following scenario is hypothetical, but it illustrates what the current lack of oversight of AI and deepfake media could make possible:

The President of the United States delivers a televised speech from the Oval Office, notifying the American public that there has been egregious election tampering in the latest presidential election. Supporters from both parties condemn the speech, and mass panic ensues across the country. Law enforcement agencies nationwide are deployed to keep the peace. The governors of several states call in their National Guard units to assist. Clashes occur between protestors, rioters, law enforcement personnel, and uniformed service members. The public reaction is very real, but the televised speech that instigated the fury never actually happened.

 

This deepfake scenario is entirely hypothetical but is becoming increasingly plausible as artificial intelligence (AI) systems such as OpenAI’s ChatGPT remain unregulated and available for anyone to use. Sora, an AI video-generation system from the same U.S.-based company, could allow the scenario to become a reality. There are currently no effective AI-related laws that safeguard the American public. Left unregulated, AI could pose a national security threat to the United States and its allies and embolden U.S. adversaries, and ineffective legal standards will allow domestic and foreign actors, both state and non-state, to produce disinformation using AI.

 

Additionally, bad actors will be able to carry out cyberattacks targeting U.S. entities more effectively. AI regulations should be implemented at the federal level so that private and public organizations have legal parameters within which to operate. Although there are no effective AI-specific laws, as noted above, 18 U.S. Code § 1030 (“Fraud and related activity in connection with computers”) criminalizes unauthorized access to computer systems, governmental and non-governmental, and sets out the corresponding punishments. This provision can be used to deter adversaries from disseminating disinformation in the United States and will be discussed later in this article. Deepfakes present their own unique set of AI-related issues in the national security field, which makes understanding these capabilities crucial to addressing the disinformation campaigns and cyberattacks that threaten U.S. national security.

 

AI Laws

The Biden administration developed a "Blueprint for an AI Bill of Rights," which aims to curb the threat AI poses to the American public. Specifically, the Blueprint focuses on five main pillars: (1) Safe and Effective Systems; (2) Algorithmic Discrimination Protections; (3) Data Privacy; (4) Notice and Explanation; and (5) Human Alternatives, Consideration, and Fallback. While it is encouraging to see the Biden administration combating the dangers of AI through the Blueprint and through Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, AI is likely advancing faster than federal government practices and procedures can adapt to mitigate its implications.

 

Moore's Law, the observation that the processing capacity of computer chips doubles roughly every two years, is becoming less relevant over time; many experts have stated that it is coming to an end as obstacles in AI development and chip manufacturing become more evident. Even a slowing pace of development remains a national security concern, because U.S. legislation tends to be reactive rather than proactive. A slowdown could, however, give the U.S. government a chance to “catch up” in mitigating the national security implications and ramifications of AI usage. The United States must capitalize on this potential slowing of rapid growth in the AI technology sector by initiating and passing legislation centered on the concerns listed below, which are reflected in the proposed bipartisan legislation, H.R. 7532, the Federal AI Governance and Transparency Act (a short sketch of the compounding Moore's Law implies follows the list):

 

  • Define federal standards for responsible AI use

  • Strengthen governmentwide federal AI use

  • Establish agency AI governance charters

  • Create public accountability

  • Repeal repetitive law

  • Harmonize new efforts with existing law
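
To give a concrete sense of the compounding growth Moore's Law describes, the short Python sketch below projects chip capacity under a two-year doubling period. The starting figure and time span are illustrative assumptions, not measured data.

```python
# Illustrative only: the compounding implied by Moore's Law
# (chip capacity doubling roughly every two years).

def projected_capacity(initial: float, years: float, doubling_period: float = 2.0) -> float:
    """Project capacity after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Hypothetical example: a chip with 1 billion transistors,
# projected forward one decade under an unbroken Moore's Law.
print(f"{projected_capacity(1e9, 10):,.0f}")  # 32,000,000,000 -> a 32x increase
```

A 32-fold increase in a decade is the pace legislators have been asked to keep up with; even a modest slowdown in that curve buys meaningful time for the lawmaking process.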

 

The provisions of H.R. 7532 could help mitigate potential U.S. national security risks. For example, establishing agency AI governance charters would set out clear guidance and help ensure that U.S. entities comply with regulations set forth by Congress and the executive branch. Defining federal standards for responsible AI use is of the utmost importance: without federal standards, the U.S. government has no binding authority to enforce proper AI activity and therefore cannot ensure that actors will use AI in the best interest of the American people. The United States has been without federal AI standards for too long and is beginning to see the ramifications, including a breadth of national security risks ranging from data breaches to disinformation campaigns.

 

Legal Avenues to Mitigate Future Disinformation and Cyberattacks in the Homeland

Perhaps the most significant law for discouraging disinformation campaigns and cyberattacks is 18 U.S. Code § 1030, “Fraud and related activity in connection with computers.” The statute provides for punishment of up to ten years of imprisonment for those convicted of a violation. Subsection (a)(7) applies to ransomware attacks, but it is more difficult to apply to disinformation campaigns. Another provision, 18 U.S. Code § 35, does address disinformation, but it is colloquially known as the “Bomb Hoax” statute and focuses on “non-malicious false reports.” These statutes should be used in tandem to mitigate cyberattacks that rely on AI-driven disinformation campaigns. As will be discussed later, current U.S. statutes do not seem to deter adversaries (as seen with the Colonial Pipeline attack in 2021). Without more robust legislation targeting disinformation campaigns specifically, these types of attacks will likely continue to occur.

 

National Security Risks

Legislation can take months or even years to move through Congress, and only a fraction of bills ever become law at all. This is by design and has its benefits, but the United States is experiencing, and will continue to experience, the consequences of that process when it comes to AI. As seen in the lead-up to the 2016 and 2020 U.S. presidential elections, disinformation campaigns spearheaded by foreign state actors can be quite successful if deployed correctly. Adversaries can, and have, used social media accounts to poison conversations online and stoke fear within the American public, deepening divisions within the United States. AI models only make it easier for these adversarial state actors to create and control a narrative that could effectively destroy all public confidence in the U.S. government. And when people lose confidence in their government, that tends to lead to "reduced support for government action to address a range of domestic policy concerns."

 

The ramifications of leaving AI unregulated continue to surface. In July 2024, the U.S. Department of Justice announced that it had disrupted a “Russian propaganda campaign using fake social media accounts” that was powered by AI. The goal of the campaign, which was approved and funded by the Kremlin and run by a Russian intelligence officer, was to spread disinformation throughout the United States. These types of campaigns will continue until the federal government implements regulations specific to AI detection technology. And while this campaign was a foreign-sponsored effort, proper AI regulation could also ensure that domestic actors cannot effectively use AI to conduct a similar campaign against the American public.
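
One building block such regulation could mandate is content provenance: AI providers cryptographically tag the media they generate so that platforms and investigators can verify its origin and detect tampering. The Python sketch below is a minimal illustration of that idea under an assumed shared-key scheme; the scheme and names are assumptions made for exposition, not an existing standard (real provenance efforts, such as C2PA content credentials, are considerably richer).

```python
# Minimal sketch, assuming a hypothetical provenance scheme in which an AI
# provider attaches an HMAC tag to generated media and a platform or
# investigator later verifies it. Illustrative only; not a real standard.
import hashlib
import hmac

PROVIDER_KEY = b"shared-secret-for-illustration-only"

def sign_media(media_bytes: bytes) -> str:
    """Provider side: compute a provenance tag for AI-generated media."""
    return hmac.new(PROVIDER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Verifier side: check whether the tag matches the media."""
    expected = hmac.new(PROVIDER_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video = b"...generated video bytes..."
tag = sign_media(video)
print(verify_media(video, tag))         # True: provenance intact
print(verify_media(video + b"x", tag))  # False: media altered after generation
```

The policy point is that detection becomes tractable when provenance is attached at generation time rather than reverse-engineered after the fact.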

 

Deepfake Media

Scenarios like the hypothetical deepfake described earlier are already happening to some extent. Users have created videos of former President Barack Obama giving a speech he never gave. These AI-generated likenesses are uncannily close to their real-life counterparts and will only improve as the technology advances. This type of deepfake is particularly dangerous in the national security sector because malign actors can take any political official’s voice and likeness and make it appear as though that person is saying whatever serves the bad actor’s purposes. The capability is about to become widely accessible: Sora, developed by OpenAI (ChatGPT’s developer) and due for release later in 2024, will allow any user to generate a video from a text prompt. Toys “R” Us, for example, released a commercial that was created entirely with Sora.
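
To illustrate how low the barrier to entry is, the sketch below shows what a text-prompt-to-video workflow of the kind described above might look like to a user. Everything here is hypothetical: the endpoint, client, and parameters are invented for illustration and are not OpenAI’s actual Sora interface.

```python
# Hypothetical sketch of a text-to-video workflow. The URL, request fields,
# and response format below are invented for illustration; this is NOT the
# real Sora API or any other provider's actual interface.
import requests

def generate_video(prompt: str, api_key: str) -> bytes:
    """Submit a text prompt to a (hypothetical) video-generation endpoint."""
    response = requests.post(
        "https://api.example-video-provider.com/v1/videos",  # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "duration_seconds": 15, "resolution": "1080p"},
        timeout=300,
    )
    response.raise_for_status()
    return response.content  # raw video bytes

# A single sentence of text is the entire "cost" of producing a video:
# video = generate_video("A whimsical toy-store commercial", api_key="...")
```

The specific interface does not matter; what matters for policy is the economics, in which producing convincing synthetic video collapses from a production crew to one line of text.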

 

AI-generated images and videos have taken the internet by storm. Earlier this year, BBC News reported on an AI-generated picture of former President Trump at a cookout that tricked many people online into believing he had been grilling with members of an African-American community. In reality, the picture was entirely fabricated, and the event never occurred; it was generated to push a “strategic narrative” in favor of former President Trump, even though the people in the image do not exist. AI can be used to produce these fake pictures and videos at ever faster and more alarming rates. If AI tools are left unregulated, there is truly no limit to what foreign adversaries and domestic actors can create, whether pictures, videos, doctored text messages and emails, or other forms of communication.

 

Proposed Regulation and Hurdles

The Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security (DHS) is probably the agency best equipped to support private companies’ compliance with AI regulations. CISA has experience responding to a variety of data breaches, including the 2021 ransomware attack on Colonial Pipeline. AI feeds into these types of attacks by automating various stages, making the entire process faster and easier. Potential regulation could cover two main topics:

 

  1. Creating a DHS AI agency that oversees AI advancements and prepares for and responds to AI-related incidents; and

  2. Incorporating all elements of the Blueprint for an AI Bill of Rights into a bipartisan bill, similar to the recently proposed bill that outlines proper federal agency AI usage.

 

While DHS has various agencies that work on AI advancements, it would be more beneficial to create a small agency that specializes in preparing for and responding to AI attacks, with other DHS agencies transitioning to a support role as engaged stakeholders. This AI agency could also work with CISA to ensure that the AI programs private companies create for commercial use cannot be misused. If an American resident does misuse such a program, there should be penalties, such as fines, to deter others from committing the same acts.

 

As for the potential legislation or regulations themselves, the Blueprint for an AI Bill of Rights would be a useful guideline. Rather than focusing on the federal government’s use of AI, such legislation should focus on the American public’s proper use of AI, promoting safe and non-malicious activities. Additionally, the federal government should work closely with major corporations like Microsoft, Apple, and OpenAI to ensure that companies developing AI systems build them in ways that cannot be turned against the American public.

 

There are two main hurdles to establishing an AI agency. The first is ensuring that the American public’s First Amendment rights are not violated. The second is that Congress must establish the agency, and Congress will only do so if its members believe the issue needs to be addressed. Given how divided the two parties in Congress are, it is unrealistic to expect an AI agency to be created anytime soon.

 

The regulation proposed here is not a complete proposal; subject matter experts (SMEs), whether from government agencies or private entities, would need to provide their insight and expertise. Their input would be invaluable in developing and refining regulations aimed at ensuring proper AI usage, along with cybersecurity measures to prevent and quickly identify the risks associated with AI.

 

On October 24, 2024, the Biden administration released the Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence. The memo issues a broad set of directives to DHS, the Department of State, the Department of Defense, the Department of Energy, and other intelligence community (IC) and non-IC entities, directing them to use all legal authorities to work collaboratively on semiconductor design and production, AI development, information sharing, and many other objectives. The memo is a bright spot in the ever-evolving intersection of AI and national security, and it will hopefully help prevent and counter disruptive AI-related attacks in the homeland. The first step in solving this massive issue is recognizing the United States’ shortfalls; the Biden administration is clearly aware of them and has a comprehensive plan that draws on all necessary departments within the federal government.

 

Conclusion

As of the publication of this article, TIME has reported that the incoming Trump administration will likely repeal the Biden administration’s Executive Order on AI. Doing so would mark a stark difference between the two administrations’ stances on AI, and on national security as a whole. Hopefully, the Trump administration will develop another executive order on AI that focuses on themes similar to those in President Biden’s order. To ensure that companies and individuals in the United States use emerging AI tools responsibly, regulation needs to be implemented at the federal level. While the AI Blueprint and the newly introduced H.R. 7532, the Federal AI Governance and Transparency Act, show promise in ensuring a safe environment, the act remains a proposal that has not been enacted into law. The legislation outlined above has the potential to ensure that the United States remains an economic and technological powerhouse. Until AI regulation passes, the ramifications outlined above will continue to pose a threat to U.S. national security for the foreseeable future.


