
If you thought fake news was a problem, get ready for ‘deepfakes’


An attendee captures a video of U.S. President Donald Trump addressing the 'Face-to-Face With Our Future' event at the White House in Washington in June.


Recently, I was shown dozens of small pictures of Donald Trump, some real, others digitally created. I found it impossible to distinguish among them.

Asked to pick three that were possibly fake, I got only one right.

The exercise was an introduction to the looming security threat of “deepfakes”, the artificial intelligence-powered imitation of speech and images to create alternative realities, making someone appear to be saying or doing things they never said or did.

In their simplest form, deepfakes are created by feeding a computer images and audio of a person so that it learns to imitate that person’s voice and appearance (and possibly much more).

There is already an app for that, FakeApp, complete with video tutorials on how to use it, and an underground digital community that superimposes celebrity faces onto actors in porn videos.
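The face-swap technique behind tools such as FakeApp rests on a simple idea: train one shared encoder with a separate decoder per person, then encode one person’s face and decode it with the other person’s decoder. A minimal sketch of that idea follows, using toy linear layers and synthetic data rather than real images or deep convolutional networks, so every name and parameter here is illustrative, not FakeApp’s actual implementation:

```python
import numpy as np

# Toy sketch of the shared-encoder / two-decoder face-swap idea.
# Real systems use deep convolutional networks on face crops;
# here, linear layers and random low-rank "faces" stand in for them.
rng = np.random.default_rng(0)
d, k = 64, 8  # flattened "image" size, latent code size

# Synthetic stand-in datasets for person A and person B.
basis_a = rng.normal(size=(k, d))
basis_b = rng.normal(size=(k, d))
faces_a = rng.normal(size=(200, k)) @ basis_a
faces_b = rng.normal(size=(200, k)) @ basis_b

W_enc = rng.normal(size=(d, k)) * 0.1  # shared encoder
W_a = rng.normal(size=(k, d)) * 0.1    # decoder for person A
W_b = rng.normal(size=(k, d)) * 0.1    # decoder for person B

def step(X, W_dec, lr=1e-3):
    """One gradient step on the reconstruction loss ||X W_enc W_dec - X||^2."""
    global W_enc
    Z = X @ W_enc              # encode
    R = Z @ W_dec - X          # reconstruction residual
    W_dec -= lr * (Z.T @ R) / len(X)
    W_enc -= lr * (X.T @ (R @ W_dec.T)) / len(X)
    return float(np.mean(R ** 2))

# Train both autoencoders jointly; the encoder is shared,
# so it learns features common to both sets of faces.
for _ in range(500):
    loss_a = step(faces_a, W_a)
    loss_b = step(faces_b, W_b)

# The "swap": encode a face of A, but decode with B's decoder.
swapped = faces_a[:1] @ W_enc @ W_b
```

The shared encoder is the design choice that makes the swap possible: because both decoders read from the same latent space, a code extracted from person A’s face is meaningful input to person B’s decoder.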

Today’s deepfake productions are still imperfect and can be detected, but the technology is progressing rapidly.

Within two or three years we may be watching moving images and speeches without anyone being able to tell whether they are real or fabrications.

In a fascinating study, researchers at the University of Washington learnt to generate videos of Barack Obama from his voice and stock footage.

The shape of the former US president’s mouth was essentially modelled to create a “synthetic Obama”.

At Stanford University, researchers have manipulated head rotation, eye gaze and eye blinking, producing computer-generated videos that are hard to distinguish from reality.

The technology could do wonders for film editing and production and for virtual reality.

In the not-too-distant future, dubbing could be transformed: Mexican actors in a soap opera will appear as if they are speaking English (or Chinese or Russian) and look more authentic.

In business and world affairs, the technology could break the language barrier on video conference calls by translating speech and simultaneously altering facial and mouth movements so everyone appears to be speaking the same language.

But think also of the potential abuse, by individuals or state actors bent on spreading misinformation. Deepfakes could put words and expressions on to the face and mouth of a politician and influence elections.

Videos could fabricate a threat and spark a political crisis or a security incident.

“If the past few years are anything to go by, fake videos will be increasingly deployed to advance political agendas,” says Yasmin Green, director of research and development at Jigsaw, the Alphabet think-tank.

“Past efforts were not technically sophisticated and were therefore easily debunked, but the technology is developing . . . faster than our understanding of the threat it poses.”

There is already evidence of the problem.

In May last year, Qatar’s news agency and its social media accounts were hacked and statements were attributed to the emir that set off a diplomatic row.

Qatar’s neighbours used the remarks to justify an economic boycott of the emirate.

“The Qatar episode showed appetite for the use of fakes to pursue political agendas,” says Ms Green. “Imagine if they had the capabilities for deepfakes.”

More recently, ahead of local elections in Moldova, a video of a news segment from Al Jazeera was posted on the network’s Facebook page with Romanian subtitles.

It claimed to be about a mayoral candidate’s proposal to lease an island to the United Arab Emirates.

It was a fabrication, but the video went viral.

The damage from current fake news pales in comparison to the harm that could come from deepfakes. Aviv Ovadya, chief technologist at the Center for Social Media Responsibility at the University of Michigan, worries that deepfakes will not only convince people of things that are not real but also undermine people’s confidence in what is.

“This impacts everything in our society, from the rule of law to how journalism is done,” he says. Intelligence agencies and defence departments are well aware of the advances in computer-generated videos (and may be deep into the research themselves).

Some of the leading researchers in the field are also looking at detection solutions, as are tech companies and governments.

I hope they catch up to this problem a lot faster than they did in spotting and weeding out fake news.

THE FINANCIAL TIMES

 

ABOUT THE AUTHOR:

Roula Khalaf is Deputy Editor of the Financial Times.
