Course:JRNL 503B/Deepfakes

From UBC Wiki


3. External tools for spotting deepfakes

  • Google reverse image search - lets you find other copies of an image, if they exist, in a different context
  • YouTube Data Viewer or InVID - extracts the time/date at which the content was uploaded
  • Google Maps Street View, Google Earth - these can help corroborate locations by comparing background imagery
  • ExifData and ExifTool - these scrape the metadata from a video/image; by downloading the content and running it through the program, you can get an idea of who posted it (though metadata can be scrubbed or altered)
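As a minimal illustration of the kind of metadata these tools surface, the sketch below scans raw JPEG bytes for the APP1 (Exif) segment, the same header that ExifTool parses in full. The byte layout follows the standard JPEG marker format; the sample bytes at the bottom are fabricated purely for the demo, not a real photo.

```python
def find_exif_segment(data: bytes):
    """Scan JPEG bytes for the APP1 (Exif) segment and return its payload.

    A JPEG file is a sequence of marker segments: 0xFF, a marker byte,
    then a 2-byte big-endian length (which covers itself plus the payload).
    Exif metadata lives in the APP1 segment (marker 0xE1) and begins with
    the ASCII identifier b"Exif\x00\x00".
    """
    if data[:2] != b"\xff\xd8":          # missing SOI marker: not a JPEG
        return None
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + length]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            return payload[6:]            # Exif TIFF data starts here
        i += 2 + length
    return None

# Fabricated demo bytes: SOI, a tiny APP1 segment holding "Exif" data, EOI.
exif_payload = b"Exif\x00\x00DEMO"
segment = b"\xff\xe1" + (2 + len(exif_payload)).to_bytes(2, "big") + exif_payload
sample = b"\xff\xd8" + segment + b"\xff\xd9"
print(find_exif_segment(sample))  # b'DEMO'
```

Real tools then decode the returned TIFF block into camera model, GPS coordinates, timestamps and so on; this sketch only shows why a stripped or rewritten header makes that trail disappear.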

4. What are deepfakes used for?

Despite the alarm, the majority of deepfakes have so far been created for entertainment with no intent of sowing geopolitical discord.

There has not yet been an example of a deepfake convincing a newsroom that it is real. Although plenty have gone viral and been shared widely on social media, none has reached the point of influencing governments. Yet.

Entertainment

The majority of deepfakes that you find on YouTube and Reddit are funny face-swaps of celebrities. Most are marked with a [DeepFake] tag and are seemingly harmless forms of comedy, often involving the actors Nicolas Cage and Rowan Atkinson.

Some of the most infamous examples include:

  • Dr. Phil on Dr. Phil
  • Sylvester Stallone in Home Alone
  • Mr. Bean as Charlize Theron
  • Ron Swanson as Wednesday Addams
  • Nicolas Cage as just about everyone

Politics

Misrepresenting what someone says or does is nothing new, but deepfake technology makes the potential for harm higher than ever. There are many deepfake videos of Donald Trump, but none has yet fooled another head of state.

U.S. Senator Ben Sasse said that deepfakes were “likely to send American politics into a tailspin” and introduced a bill that would make it a crime to create or distribute deepfakes with malicious intent.

Despite the panic, there is no evidence that deepfakes were used in the 2020 U.S. elections to influence either campaign. Yet.

Several doctored videos have made headlines, including one of Nancy Pelosi altered to make her appear intoxicated, and one of journalist Jim Acosta, edited to make it seem as though he ‘chopped’ the arm of an intern who was taking a mic from him.

One of the more nefarious uses of deepfake technology to influence journalism is AI bots creating fake Twitter profiles of reporters.  

Business

There are now businesses that sell fake people.

The New York Times reports that, “on the website Generated.Photos, you can buy a ‘unique, worry-free’ fake person for $2.99. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com.”

5. Deepfakes and pornography

“Deepfakes are rarely used to help usher in a dystopian political nightmare where fact and fiction are interchangeable: They exist to degrade women.”

  • Aja Romano

In 2019, Sensity.ai released a report which found that 96% of deepfake videos online were non-consensual pornography simulating female celebrities. Most appeared on the subreddit r/deepfakes, which has since been banned for violating Reddit’s policy against sharing involuntary pornography.

Nina Jankowicz, author of “How to Lose the Information War,” says “This is all part and parcel of the broader abuse and harassment that women have to deal with in the online environment.”

It’s not just famous women anymore: an AI bot can turn a single image of any woman into pornographic images. In 2020, one such bot was found trawling Telegram for women’s profile images and turning them into porn.

This has opened the floodgates of deepfaked revenge porn.

Noelle Martin is a victim of deepfake porn. She was emailed graphic videos of a porn star with her face. “It was convincing, even to me,” she said. The ordeal led her to start a campaign pushing the Australian government to do more to tackle the issue.

Rana Ayyub, an investigative journalist and sexual violence activist, also became the victim of deepfake pornography bearing her image. The abuse, intimidation and sexual blackmail that deepfakes enable result in self-censorship and an atmosphere of fear around female public figures.

“Perhaps fake porn is not critically addressed in the news media because it does not dupe or disempower the editors and executives who set media agendas, which disproportionately focus on men. While political deep fakes threaten their individual truth-seeking power, fake porn appears to reinforce it.”

  • Sophie Maddocks

The greatest threat that deepfakes pose to journalism is in their ability to threaten and potentially silence female journalists.

6. Threats to truth and transparency

Experts like Ashish Jaiman, Director of Technology and Operations at Microsoft, call deepfakes an “imminent threat” to society. There are also concerns around deepfakes threatening national security as well as creating significant economic losses. The harm caused by deepfakes can be broadly categorized into:

  • Disinformation (and related impact on fake news, national security, economic conduct, social distrust and the liar’s dividend)
  • Weaponized harm (to blackmail, bully or create emotional duress for individuals, especially using deepfake porn)
  • Threats to cybersecurity (by creating synthetic media that exploits human error)
  • Intellectual property abuse  

Easier access to cloud computing and the democratization of AI allows individuals with minimal resources to create deepfakes. Improvements in Generative Adversarial Networks not only make for more realistic deepfakes but also outpace efforts to spot and regulate them.  

There are two major approaches to countering deepfakes created with intent to harm:

  • Detection

Detection software and tools aim to determine whether a piece of media is a deepfake after its creation. They check the media for audio inconsistencies, inaccurate shadows, variations in physical features and other differences that the human eye can’t see.

Due to the nature of GANs, the creation of detection tools requires large datasets and computing power and runs the risk of quickly becoming obsolete.

Major technology companies (like Facebook, Twitter, Google and Microsoft) and government organisations (like the U.S. Defense Advanced Research Projects Agency) are among the few places where the funding and creation of detection technology can be sustained in the long run.

  • Authentication and provenance

Authentication and provenance tools are considered a long-term solution to combat harmful deepfakes.

Using metadata, author tags and a shared certification database, platforms could spot a deepfake before it is published. These solutions rely on verifying the source and reliability of all published media, making a deepfake stand out.

Since these solutions do not require building computation-intensive software, they are less expensive and less resource-heavy. However, they will only be effective against deepfakes if a standardized framework is widely adopted by media creators, generators and publishers.

Some apps are also building databases of verified media which can be used to source and detect deepfakes.
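The core of such a verified-media database can be sketched very simply: register a cryptographic hash of each piece of media at publication time, then check later copies against the registry. This is a hypothetical toy (the `ProvenanceRegistry` class, author tags and sample bytes are invented for illustration); real systems such as Project Origin use signed metadata rather than a bare hash table.

```python
import hashlib

class ProvenanceRegistry:
    """Toy certification database mapping content hashes to author tags."""

    def __init__(self):
        self._db = {}

    def register(self, media: bytes, author: str) -> str:
        """Record media at publication time; returns its SHA-256 digest."""
        digest = hashlib.sha256(media).hexdigest()
        self._db[digest] = author
        return digest

    def verify(self, media: bytes):
        """Return the registered author, or None if unknown or altered."""
        return self._db.get(hashlib.sha256(media).hexdigest())

registry = ProvenanceRegistry()
original = b"newsroom video bytes"
registry.register(original, "CBC News")

print(registry.verify(original))                  # "CBC News": provenance checks out
print(registry.verify(b"deepfaked video bytes"))  # None: no certified source
```

Because any alteration to the bytes changes the hash, a deepfake (or even a re-edited original) fails verification, which is why the approach flags fakes without needing to analyze their content.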

7. What can journalists do about deepfakes?

Deepfakes pose a dual threat to journalists and to journalism as a whole. Their capacity for disinformation, combined with their technical sophistication, means journalists will have to develop new tools and processes to verify videos that could be deepfakes, or to cover deepfakes that become newsworthy themselves.

They will also need to investigate claims where true media is accused of being a deepfake.

Meanwhile, the threat of deepfakes could erode trust as a whole: if any piece of media could potentially be a deepfake, polarization increases and trust in the news media declines.

“We have to avoid playing into the hands of people who want to call everything ‘fake news’ and to technology solutions that will completely substitute a technical signal for human judgement, rather than complement human judgement. Yet we do have to prepare.”

- Sam Gregory, Program Director, WITNESS.org

While there have been no major incidents of inaccurate reporting due to a deepfake, or prolonged investigations into the authenticity of media suspected to be a deepfake, there is a need for newsrooms to build a collaborative approach to dealing with them.

Major newsrooms have signed up with technology companies to combat disinformation, specifically, deepfakes. The New York Times, CBC and the BBC have partnered with Microsoft’s Project Origin to test the company’s authenticity technology.

The same companies are also reportedly testing the use of deepfakes to protect journalists reporting in hostile environments. In a positive twist, a deepfake could be employed to change the journalist’s identity while allowing them to report through audio or video.

The Wall Street Journal has formed a committee of 21 newsroom members across different departments that will help identify disinformation, including “AI-synthesized media” (deepfakes).

8. Coverage of deepfakes

The mainstream media has been responsive to the threat of deepfakes with coverage appearing in The New York Times, The Wall Street Journal, The Washington Post, The Guardian, the BBC, and the CBC.

Most outlets focused on how much of a problem deepfakes could become, yet struggled to specify it. While accurately pointing out the potential harm, they were less certain about where the next realistic target, and the consequent damage, would be. More recent coverage discusses whether the media overblew the deepfake threat to the U.S. election.

The media has also done a reasonable job of covering the harm done by deepfakes against women, usually in response to inciting incidents.

While coverage focuses on the incidents and the enormity of the harm, it usually fails to provide context on solutions or policy redress in this area.