With the 2020 US presidential election looming, political leaders, presidential candidates and the country’s intelligence chief are worried about doctored videos being used to mislead voters.

One professor is building tools to detect faked videos of major political figures such as Donald Trump, Theresa May and Justin Trudeau, as well as the US presidential candidates. It could help fight off the next generation of misinformation, where artificial intelligence is likely to play an increasingly prominent role in engineering deceptive media.

Deepfakes — a combination of the terms “deep learning” and “fake” — are persuasive-looking but false video and audio files. Made using cutting-edge and relatively accessible AI technology, they purport to show a real person doing or saying something they did not.

They’ve already been used to embarrass celebrities and politicians, and the videos are easier and cheaper than ever to produce, and they look increasingly realistic. The seemingly endless real footage of politicians speaking on YouTube, including US presidential candidates, is a gold mine for anyone considering using this type of AI for election meddling.

Deepfakes are not yet pervasive, but the US government is concerned that foreign adversaries could use them in attempts to interfere with the 2020 election. In a worldwide threat assessment in January, Dan Coats, US Director of National Intelligence, warned that deepfakes or similar tech-driven fake media will probably be among the tactics used by people who want to disrupt the election. On Thursday, the House Intelligence Committee will hold its first hearing on the potential threats posed by deepfake technology.

In hopes of stopping deepfake-related misinformation from circulating, Hany Farid, a professor and image-forensics expert at Dartmouth College, is building software that can spot political deepfakes, and perhaps authenticate genuine videos called out as fakes as well.

With this new breed of falsified videos, it’s more difficult than ever to trust that what we see is real. Farid told CNN Business he is concerned that such videos could cause harm to citizens or democracies. “The stakes have gotten really high all of a sudden,” he said. Farid and a graduate student, Shruti Agarwal, are building what they call a “soft biometric” — a way to distinguish one person from a fake version of themselves.

The researchers are figuring this out by using automated tools to pore over hours of authentic YouTube videos of people like President Trump and former President Barack Obama, looking for relationships between head movements, speech patterns, and facial expressions.

For instance, Farid said, when Obama delivers bad news, he frowns and tends to tilt his head down; when giving happy news, he tends to tilt his head up. These correlations are used to build a model of an individual, such as Obama, so that when a new video surfaces, the model can determine whether the Obama pictured in it exhibits the speech patterns, head movements, and facial expressions that correspond to the former president.
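The idea behind such a "soft biometric" can be sketched in a few lines. The sketch below is a simplified illustration, not the researchers' actual system: it assumes per-frame facial features (head tilt, mouth openness, and so on) have already been extracted by some other tool, summarizes each clip by the correlations between those features, and flags a new clip whose correlation pattern sits too far from the speaker's profile. All function names and the distance threshold are hypothetical.

```python
import numpy as np

def correlation_signature(features: np.ndarray) -> np.ndarray:
    """Summarize one clip (frames x features) as the upper triangle of
    the Pearson correlation matrix between its per-frame features."""
    corr = np.corrcoef(features, rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

def fit_profile(authentic_clips):
    """Build a per-person profile from genuine clips: the mean
    correlation signature plus a simple 3-sigma distance threshold."""
    sigs = np.array([correlation_signature(c) for c in authentic_clips])
    center = sigs.mean(axis=0)
    dists = np.linalg.norm(sigs - center, axis=1)
    return center, dists.mean() + 3 * dists.std()

def looks_authentic(clip: np.ndarray, center: np.ndarray,
                    threshold: float) -> bool:
    """A clip whose signature lies within the threshold is consistent
    with the person's mannerisms; one far outside is suspect."""
    return np.linalg.norm(correlation_signature(clip) - center) <= threshold
```

A real system would use far richer features and a trained classifier rather than a fixed threshold, but the core logic is the same: genuine footage of one person yields stable correlations between mannerisms, and a face-swapped or synthesized clip tends to break them.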

Yet while other researchers think Farid’s approach is unique and could be useful for spotting deepfakes of celebrities, including politicians, of whom there is ample online footage, some are concerned about whether it can be generalized to help a larger group of people.

Farid said that as of April, his tool was 95% accurate at identifying deepfake videos of the famous people it has been trained on, and it can confirm about 95% of genuine videos as the real deal. He thinks he can reach 99% accuracy within the next six months, which would be just in time for a handful of primary debates.

For all the fuss, some say the threat of deepfakes is being blown out of proportion, pointing out that deepfake video is not pervasive and has yet to cause the chaos some have predicted. But Farid pointed out that, given the current disinformation landscape, the active foreign disinformation campaigns targeting the US, and a polarized electorate, it doesn’t take a wild stretch of the imagination to picture deepfakes being used.

Sam Gregory, program director at Witness, a nonprofit that works with human rights defenders, says it’s better to be proactive than reactive. “It’s clear,” he said, “seeing the response to previous misinformation and disinformation threats globally that we need to prepare better for this threat, rather than have the reactive, US-centric responses from platforms that took place after the 2016 elections. Even if the threat is less than anticipated, which would be good, it’s better to prepare than react.”

Farid noted that it took only a team of four at the University of Southern California (a graduate student, two postdoctoral researchers, and a professor) to create the SNL fakes. “So can a nation state that is highly motivated to do this do it? Absolutely. This technology is in the ether,” he said.








