Legal Means to Prosecute Actors Behind Deepfakes

By Daniel de Zayas

In June 2019, many U.S. government officials learned that nothing was real about their recent LinkedIn connection Katie Jones, not even her profile picture. Experts believe that, in addition to the typical fake employment history, the suspected foreign-state LinkedIn account featured a “deepfake” profile picture generated by a computer, more specifically by generative adversarial networks, to further conceal the covert account. The Katie Jones “incident” exemplifies the potential threat of deepfakes and generative adversarial networks. Yet while these technologies pose new national security threats, this piece demonstrates that the actors behind them can be prosecuted under existing federal laws.

The Technology Underlying Deepfakes

Deepfakes are generally the product of ever-improving generative adversarial networks (GANs). Every GAN consists of two competing, or “adversarial,” algorithms (called neural networks or “networks”): a generator network and a discriminator network. In the illustrative context of image creation, in extremely simplified terms, the generator network takes random input variables and generates an image. The discriminator network, which has access to real images and therefore “knows” what an image is supposed to look like, analyzes the generated image and determines whether it is real or fake. When the discriminator identifies the generated image as fake, the generator “learns” what factors the discriminator used to make that determination and, in turn, deduces what a real image looks like. In evaluating the generated image, the discriminator likewise learns new factors for identifying fake images. Armed with this new knowledge, the two networks repeat the process, potentially millions of times, until the generator produces images that the discriminator generally identifies as real.
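
For readers curious how this adversarial loop looks in concrete form, the sketch below illustrates it in Python using the PyTorch library. It is a minimal, illustrative example only: the tiny network sizes, the random vectors standing in for “real images,” and the training settings are assumptions chosen for brevity, not a description of how any actual deepfake was produced.

# Minimal, illustrative GAN training loop (PyTorch).
# The "real" data here is just random vectors standing in for images;
# network sizes and hyperparameters are arbitrary choices for brevity.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed sizes for this toy example

# Generator: turns random input variables into a fake "image" (a vector here).
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# Discriminator: outputs a probability that its input is real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):  # in practice this loop runs far longer
    real = torch.randn(32, data_dim)     # stand-in for a batch of real images
    noise = torch.randn(32, latent_dim)  # random input variables
    fake = generator(noise)              # generator produces candidate images

    # Discriminator update: learn to label real images 1 and generated images 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: adjust so the discriminator labels the generated images as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

Each pass through the loop mirrors the description above: the discriminator refines its ability to tell real from generated, and the generator adjusts until its output is generally accepted as real.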

The Capabilities of GAN-Generated Content and Deepfakes

GAN-generated content is already of sufficient quality to threaten national security. In addition to the Katie Jones example, researchers have produced deepfakes of politicians, including Bernie Sanders, Elizabeth Warren, and Hillary Clinton. Some scholars caution that deepfakes could cause significant damage, including “distortion of democratic discourse on important policy questions; manipulation of elections; erosion of trust in significant public and private institutions; enhancement and exploitation of social divisions; harm to specific military or intelligence operations or capabilities; threats to the economy; and damage to international relations.” While researchers and DARPA continue to develop forensic techniques to identify and combat deepfakes, the U.S. legal system already provides several laws under which actors who use deepfakes to inflict some of the “damage” noted above can be prosecuted.

Prosecuting Malicious Actors Behind Deepfakes

The following scenarios are illustrative rather than exhaustive, and they assume that those responsible for deepfakes can be identified and brought before the U.S. criminal justice system.

If a deepfake is used to impersonate a federal employee and to obtain or demand intelligence products, prosecutors could bring charges under 18 U.S.C. § 912, which criminalizes “falsely assum[ing] or pretend[ing] to be” a federal employee and using that false identity to demand or obtain “any paper, document, or thing of value.”

If a deepfake is used to impersonate a company’s CEO to obtain trade secrets for the benefit of a foreign government, prosecutors could bring charges under 18 U.S.C. § 1831, which criminalizes knowingly or intentionally obtaining trade secrets “by fraud, artifice, or deception” for the benefit of a foreign government, instrumentality, or agent.

If two or more people conspire to use a deepfake to manipulate a federal election by impersonating a law enforcement authority to threaten individuals who vote, prosecutors could bring charges under 18 U.S.C. § 241, which prohibits conspiracy to “oppress, threaten, or intimidate” any individual “in the free exercise or enjoyment of any right . . . secured to him by the Constitution.” 

Moreover, 18 U.S.C. § 953 theoretically proscribes the use of a deepfake to impersonate a U.S. politician to communicate with a foreign government “with intent to influence the measures or conduct of any foreign government . . . or to defeat the measures of the United States.” 

Consequently, while technology typically outpaces the law, deepfakes may be an unexpected exception. These relatively new technological feats arguably fit within the existing legal framework. Congress should evaluate the adequacy of existing legal means to prosecute actors behind deepfakes before drafting new deepfake-specific legislation.
