Believe Half of What You See? Maybe not.

In an era of rampant cries of fake news, the aphorism that instructs us to “believe half of what you see, and none of what you hear” has engendered an appropriate amount of healthy skepticism. However, the increasing use of deepfake technology makes such guidance seem a bit naive.

Deepfake technology combines computer vision, artificial intelligence, and machine learning to alter video and pictures in extremely convincing ways. We’ve all seen “Photoshopped” images online that attempt to combine images seamlessly so that the resulting mix looks like an original photo. Depending on the skill of the creator, the result can be thoroughly convincing or, perhaps more often, quite humorous. Photoshopping aims to change still imagery, but deepfake technology extends these techniques to animation and video. In 2017, for example, researchers at the University of Washington created the “Synthesizing Obama” project, which paired audio from the former President’s speeches with video footage of him, synthesizing lip movements so that he convincingly appeared to be delivering those remarks on camera. Over the past year, deepfake technology has been used to create everything from pornography to political propaganda, and new applications have made it easier for a wider audience to use.

A deepfake production made headlines this week when a video emerged on Instagram of Facebook founder Mark Zuckerberg supposedly boasting that he controls billions of people’s stolen data. It was not Facebook’s only high-profile run-in with manipulated video this month. Three weeks prior, Facebook had allowed a doctored video of House Speaker Nancy Pelosi that made her look drunk and stammering to remain on the site, even after verifying that the video was fake. Acting consistently, Facebook didn’t yank the Zuckerberg deepfake from its Instagram platform either, choosing instead to adjust its filters so that the video would appear less prominently. Thus far, Facebook has chosen to suppress rather than remove such content, and it still lacks the ability to identify it and block its publication proactively.

These incidents have raised grave concerns about the potential impact of deepfake videos on the public’s perception of truth, and the timing could hardly be worse. The coming election season will surely be marked by continuous controversy, acrid accusations, and sophisticated attempts to shape the narrative. Many of these efforts will again be waged by the foreign governments that interfered so successfully with our elections last time. If the public can be swayed by fake Twitter accounts and targeted ads on social media, how much more susceptible will they be to extremely realistic video simulations of their most despised political bogeymen?

We have to mobilize quickly in advance of the 2020 Presidential election to minimize the impact of deepfake video on public discourse. I see two approaches: one in the public relations sphere, the other a technical solution. First, we have to educate the public that deepfake technology exists and what it can do, so that people know to question the legitimacy of what they see online. The technology hasn’t received enough attention, either from the public or from social media platforms, which will likely be caught flat-footed as more and more of this manufactured content appears on their sites. People tend to believe what they want to believe, but they are left defenseless if they don’t recognize that there is a chance their eyes are deceiving them.

On the technical side, we need to extend the use of digital watermarking beyond embedding copyright information in digital assets to asserting the authenticity of the people in a video. A digital watermark is a code added to a data file that identifies its owner; even if that data is incorporated into other content, the watermark is preserved and can be checked. It is usually employed to enforce copyright, but it need not be limited to that. It could also attest that a particular person, say a political candidate, is actually the person who appears in a video. For example, when a candidate gives a press conference or other filmed event, they could digitally sign the footage to vouch for their involvement in it. Of course, that perhaps places too much control in the hands of the candidates, allowing them to suppress potentially damaging disclosures of actual gaffes or to treat certain outlets or audiences preferentially. Alternatively, or in combination with candidate-initiated watermarking, news agencies could digitally watermark the content they film. Then, when a deepfake artist tries to manipulate the video, the watermark is corrupted, a clear sign that the footage has been altered. That gives open-minded people solid grounds to discount propaganda that can be proven fake.
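To make the signing idea concrete, here is a minimal Python sketch of tamper-evident signing built on the third-party `cryptography` library and Ed25519 signatures. It is an illustration under assumptions, not a finished scheme: the stand-in footage bytes and key handling are hypothetical, and a real watermark would be embedded in the video stream itself rather than distributed alongside it. The tamper-evidence property, however, is the same: change the content, and the check fails.

```python
"""
Minimal sketch of tamper-evident signing for video footage.

Assumes the third-party `cryptography` package (pip install cryptography).
The "footage" here is a stand-in byte string; a real system would hash
the actual video file and embed or distribute the signature with it.
"""

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def digest(footage: bytes) -> bytes:
    """Reduce the footage to a fixed-size SHA-256 fingerprint."""
    return hashlib.sha256(footage).digest()


# The signer (a candidate's staff or a news agency) generates a key pair
# once and publishes the public key through a trusted channel.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the footage at the time of recording or release.
original = b"...raw bytes of the press-conference video..."
signature = private_key.sign(digest(original))

# Verification succeeds on the untouched footage...
public_key.verify(signature, digest(original))
print("Original footage: signature verified.")

# ...but fails on even a tiny manipulation, flagging the video as altered.
tampered = original.replace(b"press", b"fake!")
try:
    public_key.verify(signature, digest(tampered))
except InvalidSignature:
    print("Tampered footage: signature check failed.")
```

A fragile watermark goes a step further by hiding the authentication data inside the pixels themselves, so the footage carries its own proof of integrity, but the underlying mechanism, a cryptographic check that breaks under manipulation, is the well-established machinery shown above.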

The threat posed by deepfake technology is real and imminent. We have to act now to educate the public about its capabilities and to blunt its effectiveness with digital watermarking. Otherwise, propagandists will soon make the shenanigans of 2016 seem quaint by comparison.

About Ray Klump

Professor and Chair of Mathematics and Computer Science; Director, Master of Science in Information Security, Lewis University. http://online.lewisu.edu/ms-information-security.asp, http://online.lewisu.edu/resource/engineering-technology/articles.asp, http://cs.lewisu.edu. You can find him on Google+.
