OpenAI releases 'deepfake' detector to disinformation researchers

As experts warn that AI-generated images, audio and video could influence the fall elections, OpenAI is releasing a tool designed to detect content created by its popular image generator, DALL-E. But the prominent AI start-up recognizes that this tool is only a small part of what will be needed to combat so-called deepfakes in the months and years to come.

On Tuesday, OpenAI said it would share its new deepfake detector with a small group of disinformation researchers so they could test the tool in real-world situations and help identify ways it could be improved.

“It's about sparking new research,” said Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy. “It's really needed.”

OpenAI said its new detector could correctly identify 98.8% of images created by DALL-E 3, the latest version of its image generator. But the company said the tool was not designed to detect images produced by other popular generators such as Midjourney and Stability AI.

Because this type of deepfake detector is driven by probabilities, it can never be perfect. So, like many other companies, nonprofits, and academic labs, OpenAI is working to combat the problem in other ways.
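To make that limitation concrete, here is a minimal Python sketch of how any probability-based detector has to be used: the tool returns a score, and a human-chosen threshold turns that score into a verdict. The function and scores below are hypothetical illustrations, not OpenAI's actual API.

```python
# Hypothetical interface: OpenAI has not published its detector's API.
# The point is the structure of the problem, not any real endpoint.

def classify(score: float, threshold: float = 0.5) -> str:
    """Turn a detector's probability score into a verdict.

    Because the output is a probability, any fixed threshold trades
    false positives against false negatives; no setting is perfect.
    """
    return "likely AI-generated" if score >= threshold else "likely authentic"

# Scores near the threshold flip verdicts as the threshold moves,
# which is why such tools are advisory evidence, not proof.
for score in (0.12, 0.49, 0.51, 0.97):
    print(f"score={score:.2f} -> {classify(score)}")
```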

Like tech giants Google and Meta, the company is joining the steering committee of the Coalition for Content Provenance and Authenticity, or C2PA, an effort to develop credentials for digital content. The C2PA standard is a kind of “nutrition label” for images, videos, audio clips and other files that shows when and how they were produced or modified, including with artificial intelligence.
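To illustrate what such a “nutrition label” contains, here is a hand-written sample parsed in Python. The field names follow the public C2PA specification, but the manifest itself is fabricated for illustration; real manifests are cryptographically signed and embedded in the media file, and would be read with a C2PA library rather than as loose JSON.

```python
import json

# Hand-written sample manifest for illustration only.
sample_manifest = json.loads("""
{
  "claim_generator": "DALL-E 3",
  "assertions": [
    {"label": "c2pa.actions",
     "data": {"actions": [{"action": "c2pa.created",
                           "digitalSourceType": "trainedAlgorithmicMedia"}]}}
  ]
}
""")

print("Produced by:", sample_manifest["claim_generator"])
for assertion in sample_manifest["assertions"]:
    if assertion["label"] == "c2pa.actions":
        for act in assertion["data"]["actions"]:
            # "trainedAlgorithmicMedia" is the C2PA/IPTC term for AI-generated media.
            print(" ", act["action"], "-", act.get("digitalSourceType", "unspecified"))
```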

OpenAI also said it is developing ways to “watermark” AI-generated audio so it can be easily identified in the moment. The company hopes to make these watermarks difficult to remove.
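OpenAI has not described how its audio watermarks work. The sketch below is a textbook spread-spectrum approach, included only to show the general idea: mix a faint, key-seeded pseudorandom signal into the audio, then check for it later by correlating against the same keyed signal. All names and parameters are illustrative.

```python
import numpy as np

# Not OpenAI's (undisclosed) scheme: a classic spread-spectrum sketch.
KEY, STRENGTH = 42, 0.05  # shared secret seed; watermark amplitude

def embed(audio: np.ndarray, key: int = KEY) -> np.ndarray:
    mark = np.random.default_rng(key).standard_normal(audio.shape)
    return audio + STRENGTH * mark

def detect(audio: np.ndarray, key: int = KEY) -> bool:
    mark = np.random.default_rng(key).standard_normal(audio.shape)
    # Per-sample correlation hovers near STRENGTH when the mark is
    # present and near zero when it is not.
    return np.dot(audio, mark) / len(audio) > STRENGTH / 2

tone = np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)  # 1 s of A440
print(detect(embed(tone)))  # expected: True
print(detect(tone))         # expected: False
```

Simple marks like this one can be weakened by filtering or re-encoding, which is why making watermarks genuinely hard to remove, as OpenAI intends, is the difficult part.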

Led by companies like OpenAI, Google and Meta, the artificial intelligence sector is facing growing pressure to account for the content its products make. Experts are calling on the industry to prevent users from generating misleading and harmful material and to offer ways to trace its origin and distribution.

In a year full of important elections around the world, calls for ways to track the provenance of AI content are becoming increasingly urgent. In recent months, AI-generated audio and images have already influenced political campaigns and voting in places like Slovakia, Taiwan and India.

OpenAI's new deepfake detector can help stem the problem, but it won't solve it. As Ms. Agarwal said: In the fight against deepfakes, “there is no silver bullet.”
