Deepfakes are coming and lawmakers are looking for answers

Experts worry that false video and audio created by deep learning algorithms could wreak havoc in the next election, but finding and punishing those responsible is no easy task.

Deepfakes are coming, and policymakers and experts are still grappling with what to do about them. But some in Congress want to address the problem of highly realistic fake audio and video before the next election, and they want social media platforms to play a part.

"Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deep fakes have polluted the 2020 elections," House Intelligence Chair Adam Schiff (D-Calif.) said during a June 13 hearing. "By then it will be too late."

Rep. Yvette Clarke (D-N.Y.) introduced a bill this week that would impose a range of civil and criminal penalties on those who create or distribute such videos without clearly labeling them, and form a federal task force to examine ways to improve detection. Clarke's office told FCW that the legal requirement to label fake media would fall on the individual creating the manipulated video, not social media platforms.

Last year, Sen. Ben Sasse (R-Neb.) introduced legislation that would criminalize those who "create, with the intent to distribute, a deep fake with the intent that the distribution … would facilitate criminal or tortious conduct."

While deep learning algorithms are capable of sniffing out false videos, detection tools often rely on the same underlying technology that bad actors use, meaning each advance in detection can help forgers refine and craft even more convincing fakes over time.
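
That cat-and-mouse dynamic comes from how generative adversarial networks (GANs), the technology behind most deepfakes, are trained: a detector-style discriminator scores samples as real or fake, and the generator is updated specifically to defeat it. The sketch below is a minimal, hypothetical illustration in PyTorch; the toy vectors standing in for video frames, the network shapes and the training loop are assumptions for illustration, not a description of any particular detection system.

```python
# Minimal GAN-style training loop (hypothetical toy example).
# Illustrates why better detectors can beget better fakes: the
# generator's training objective is literally "fool the detector."
import torch
import torch.nn as nn

DIM = 64  # toy stand-in for real video-frame features

discriminator = nn.Sequential(  # the "detector"
    nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))
generator = nn.Sequential(      # the "forger"
    nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, DIM))

d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, DIM)            # placeholder "real" samples
    fake = generator(torch.randn(32, 16))  # synthesized samples

    # 1) Train the detector to separate real from fake.
    d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to be scored "real" by that same
    #    detector: every improvement in detection becomes a
    #    training signal for better fakery.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In a real system the toy vectors would be video frames and the networks convolutional, but the feedback loop is the same: publishing a stronger detector hands forgers a stronger training signal.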

Additionally, such audiovisual content can be uploaded and shared anonymously from around the world, creating problems of attribution. When the purveyors of such content are foreign, they may not be reachable through traditional law enforcement or regulatory mechanisms.

"You have to be able to find the defendant to still prosecute them, and you've got to have jurisdiction over them," Danielle Citron, a University of Maryland professor who has studied the potential implications of deepfakes, told lawmakers.

When the actors are domestic, regulation runs into the same First Amendment freedom of expression issues that plague the broader debate over disinformation campaigns. Several witnesses noted that manipulating content for the purposes of comedy or political satire has long been considered protected speech. Additionally, there often isn't a legally clear or meaningful way to distinguish and quickly remove content based on intent, particularly when a video or audio file can go viral and be shared millions of times before attribution can even be discerned.

An altered video of House Speaker Nancy Pelosi (D-Calif.), slowed down to make her speech appear slurred, set off a firestorm of debate over how policymakers and social media platforms should respond. While the video was a "cheapfake" rather than a deepfake (meaning it was likely made with traditional video editing software, not deep learning algorithms), it gave the public a taste of the widespread confusion such tactics can create.
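
The technical bar for a cheapfake is low. As a hypothetical illustration, the Python snippet below retimes a clip with the ffmpeg command-line tool, the kind of simple speed manipulation involved in the Pelosi case; the file names and speed factor are placeholder assumptions, and this is not a claim about how that specific video was produced.

```python
# Hypothetical sketch: slowing a clip to 75% speed with ffmpeg,
# the kind of trivial edit behind a "cheapfake." Requires ffmpeg
# on PATH; input/output names are placeholders.
import subprocess

SPEED = 0.75  # playback speed factor (assumption for illustration)

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    # setpts stretches the video timestamps; atempo slows the audio
    # without shifting pitch, which keeps the voice sounding natural.
    "-filter_complex",
    f"[0:v]setpts={1 / SPEED}*PTS[v];[0:a]atempo={SPEED}[a]",
    "-map", "[v]", "-map", "[a]",
    "output_slowed.mp4",
], check=True)
```

No machine learning is involved, which is part of what makes such content hard to police: even crude edits can spread faster than any attribution or labeling process.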

While Facebook eventually labeled the video as manipulated content, it had already been widely shared, and many called for the company to remove the video entirely, something Facebook executives declined to do.

"As awful as I think we all thought that Pelosi video was, there's got to be a difference if the Russians put that up, which is one thing, versus if Mad Magazine does that as a satire," Rep. Jim Himes (D-Conn.) said.