by Hala Mounib


The fabrication and distortion of truth in the media are not new; they have existed since the dawn of the printing press and journalism, with new methods constantly being introduced to push certain agendas and narratives onto consumers.

Deep fakes, fake images and videos generated through deep learning, are the latest technology to undermine confidence in the authenticity of the media. Deep learning refers to a specialized form of machine learning in which algorithms are structured as neural nets, processing architectures that emulate biological neurons (John Fletcher, 2018).
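To make the "neural net" idea concrete, the building block of such an architecture is a single artificial neuron: a weighted sum of inputs passed through a nonlinear activation, loosely analogous to a biological neuron's firing. The sketch below uses arbitrary illustrative values; in a real network the weights are learned from data.

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1), loosely
    # analogous to a neuron's firing rate.
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum of inputs plus a
    # bias, passed through a nonlinear activation function.
    return sigmoid(np.dot(inputs, weights) + bias)

# Arbitrary example values; a trained network learns these weights.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.2])
activation = neuron(x, w, bias=0.1)
print(activation)  # a value strictly between 0 and 1
```

A deep network simply stacks many layers of such neurons, which is what allows it to learn the facial features that deep fakes exploit.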

The technique behind this technology is the Generative Adversarial Network (GAN): a deep neural net architecture in which two networks are pitted against each other, one generating synthetic data and the other trying to distinguish it from real data. This setup can mimic any data domain, whether images, video, speech, or music, with images and video currently the primary targets. The Artificial Intelligence (AI)-assisted technology analyzes images of a subject's face, manipulates them, and maps the result onto a different person's body (Rebecca A. Delfino, 2019).
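The adversarial setup can be sketched in a few lines of NumPy. This is a toy illustration, not a real deep-fake pipeline: a one-parameter-per-weight "generator" learns to shift random noise toward a target distribution (here, numbers centred on 4.0), while a logistic "discriminator" tries to tell real samples from generated ones. All distributions, learning rates, and step counts are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples centred on 4.0. The generator never sees
# these directly; it only gets feedback through the discriminator.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0  # generator g(z) = a*z + b, noise -> sample
w, c = 0.0, 0.0  # discriminator d(x) = sigmoid(w*x + c), P(x is real)

lr, steps, n = 0.05, 2000, 64
for _ in range(steps):
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)),
    # i.e. learn to score real samples high and fakes low.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log d(fake), i.e. move fakes
    # toward where the discriminator currently believes "real" lives.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(fakes)), 2))  # drifts toward the real mean
```

After training, the generator's output distribution has shifted toward the real data even though it never observed a real sample directly; scaled up to millions of parameters and trained on face images, the same dynamic produces deep fakes.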

The results rendered are becoming more refined and easier to acquire. Widely available apps that generate deep fakes contribute to this rapid evolution: the underlying models get to practice on users' pictures, and the technical threshold falls, since anyone who downloads such an app can produce deep fake media (Rebecca A. Delfino, 2019).

This has led to a rise in fake, and particularly defamatory, news. In 2017, TensorFlow, an open-source machine learning framework developed by Google, was used to superimpose celebrities' faces onto the bodies of pornographic actresses. The practice has spread to the political sphere as well, where videos of politicians and presidents have been doctored to alter their behaviour and speech.

While the technology remains in its early stages, and the falsehood of such videos is still relatively easy to identify, deep learning continues to advance; the once easily detected irregularities will soon become difficult to intercept.

The U.S. Congress has introduced two bills to combat incriminating deep fakes, both of which are in the first stage of the legislative process. In December 2018, the Malicious Deep Fake Prohibition Act was introduced by Senator Ben Sasse (R-NE). It aims to amend Title 18 (Crimes and Criminal Procedure) of the United States Code by adding a section to Chapter 47 (Fraud and False Statements), to be titled "fraud in connection with audiovisual records" and carrying a sentence of 2 to 10 years if violated.

The second bill is the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act, or simply the DEEP FAKES Accountability Act. Introduced by Representative Yvette D. Clarke (D-NY-9) in June 2019, it is far more thorough than the Malicious Deep Fake Prohibition Act of 2018, imposing many requirements to ensure that altered visual and audio elements are clearly disclosed through embedded digital watermarks and unobscured written statements at the bottom of the doctored media. It, too, carries prison sentences for violations.

While these acts are still in their primary stages, they face multiple challenges. From a libertarian perspective, prohibiting such media infringes on the 1st Amendment's guarantee of a free press. Freedom of expression is likewise curtailed when individuals must label or alter their work, even when that work makes no claim to authenticity. And while the DEEP FAKES Accountability Act allows the Attorney General to grant exceptions in cases where the production of such media is protected by the 1st Amendment, vetting and processing every single instance in which deep fake technology has been used is both time-consuming and unfeasible.

Moreover, both acts share a deeper weakness: the determination of intent. Between them, the acts cite four main forms of intent:

  1. Intent to facilitate criminal or tortious conduct.
  2. Intent to humiliate or otherwise harass the person falsely exhibited.
  3. Intent to cause violence or physical harm, incite armed or diplomatic conflict, or interfere in an official proceeding.
  4. Intent of influencing a domestic public policy debate, interfering in a Federal, State, local, or territorial election.

None of these intents can be proven unless it is confirmed by some form of media extracted from the defendant, whether textual, audio, or video. These happen to be the very types of media that GANs can alter, undermining intent as a justified basis for arrest.

Both acts are also tricky to enforce because of the difficulty of determining the origins of altered media; anyone can get away with producing deep fakes without embedding digital watermarks, provided they do not reveal their real identities while distributing the media online.

Finally, it is worth noting that under the "Exceptions" subsection of the DEEP FAKES Accountability Act, the requirements and penalties do not apply to government officers and employees acting "in furtherance of public safety or national security". This legalizes the doctoring of media for the very incriminating intents listed above, making the Act an effective tool for distributing false news and altering the population's perception.

While the acts may seem reasonable at first, they obscure a far more dystopian purpose than merely fighting cybercrime. From the infringement on the 1st Amendment to the lack of proper methods for determining intent, the policymaking process is fundamentally flawed. There are ways to formulate policies that protect citizens and their well-being from falsified media; the Malicious Deep Fake Prohibition Act and the DEEP FAKES Accountability Act are not examples of that.


Featured Image Credit: Council on Foreign Relations 2018

Hala Mounib is a Policy Research Fellow at the American Freedom Institute
