Facebook is making its own deepfakes and offering prizes for detecting them

Image and video manipulation powered by deep learning, or so-called “deepfakes,” represents a strange and horrifying facet of a promising new field. If we’re going to crack down on these creepy creations, we’ll need to fight fire with fire: Facebook, Microsoft, and many others are banding together to help make machine learning capable of detecting deepfakes — and they want you to help.

Though the phenomenon is still new, we are nevertheless in an arms race where the methods of detection vie with the methods of creation. Ever more convincing fakes appear regularly, and though they are frequently benign, the possibility of having your face flawlessly grafted into a compromising position is very much there — and many a celebrity has already had it done to them.

Facebook, as part of a coalition with Microsoft, the Partnership for AI, and several universities including Oxford, Berkeley, and MIT, is working to empower the side of good with better detection techniques.

“The most interesting advances in AI have happened when there’s a clear benchmark on a dataset to write papers against,” said Facebook CTO Mike Schroepfer in a media call yesterday. The dataset for object recognition might be millions of images of ordinary objects, while the dataset for voice transcription would be hours of different kinds of speech. But there’s no such set for deepfakes.

We talked about this challenge at our Robotics and AI event earlier this year in what I thought was a very interesting discussion:

Fortunately, Facebook is planning to dedicate around $10 million in resources to make this Deepfake Detection Challenge happen.

“Creation of these datasets can be challenging, because you want to make sure that everyone participating in it is clear and gives consent so they aren’t surprised by the usage of it,” Schroepfer continued. And since most deepfakes are made without any consent whatsoever, they’re not really permissible for usage in an academic context.

So Facebook and its partners are making the deepfake content out of whole cloth, he said. “You want a dataset of source video, and then a dataset of personalities you can map onto that. Then we’re spending engineering time implementing the latest most advanced deepfake techniques to generate altered videos as part of the dataset.”
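The article doesn’t spell out how those pairings are organized, but one way to picture the plan is a manifest that pairs each consented source clip with a target identity, and records the generated output alongside the untouched original. Here is a minimal, purely illustrative sketch in Python; all file names and fields are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional
import itertools

@dataclass
class ManifestEntry:
    source_clip: str                # original footage of a consenting, paid actor
    target_identity: Optional[str]  # actor whose likeness is mapped onto the source (None for unaltered clips)
    output_clip: str                # path of the resulting clip in the dataset
    label: str                      # "real" for originals, "fake" for generated swaps

def build_manifest(source_clips, identities):
    """Keep every original clip and plan one face-swapped variant per
    (source clip, target identity) pair."""
    entries = [ManifestEntry(c, None, c, "real") for c in source_clips]
    for clip, ident in itertools.product(source_clips, identities):
        swapped = clip.replace(".mp4", f"_swap_{ident}.mp4")
        entries.append(ManifestEntry(clip, ident, swapped, "fake"))
    return entries

# Hypothetical file names, purely for illustration
manifest = build_manifest(["actor01_scene1.mp4"], ["actor02", "actor03"])
for entry in manifest:
    print(entry)
```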

And while you’re entirely justified in wondering, no, they aren’t using Facebook data to do this. They’ve got paid actors.


This dataset will be provided to interested parties, who will be able to build solutions and test them, putting the results on a leaderboard. At some point there will be cash prizes given out, though the details are a ways off. With luck this will spur serious competition among academics and researchers.
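Facebook hasn’t said how entries will be scored, but benchmarks of this kind typically rank binary real-versus-fake predictions with a metric such as log loss. Here is a minimal sketch of that assumption in Python; the labels and predictions are made up for illustration:

```python
import math

def log_loss(labels, probs, eps=1e-15):
    """Binary cross-entropy: labels are 1 for fake, 0 for real;
    probs are the submitted probability that each clip is fake."""
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(labels)

# Example: three held-out clips (two fakes, one genuine)
labels = [1, 1, 0]
probs = [0.92, 0.60, 0.15]  # a hypothetical submission's predictions
print(f"log loss: {log_loss(labels, probs):.4f}")
```

Lower scores would be better under this metric, which rewards confident correct calls and heavily penalizes confident mistakes.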

“We need the full involvement of the research community in an open environment to develop methods and systems that can detect and mitigate the ill-effects of manipulated multimedia,” said the University of Maryland’s Rama Chellappa in a news release. “By making available a large corpus of genuine and manipulated media, the proposed challenge will excite and enable the research community to collectively address this looming crisis.​”

Initial tests of the dataset are planned for the International Conference on Computer Vision in October, with the full launch happening at NeurIPS in December.

Written by Devin Coldewey
This news first appeared on https://techcrunch.com/2019/09/05/facebook-is-making-its-own-deepfakes-and-offering-prizes-for-detecting-them/ under the title “Facebook is making its own deepfakes and offering prizes for detecting them”. Bolchha Nepal is not responsible for, or affiliated with, the opinions expressed in this news article.