YouTube expands AI deepfake detection for politicians, government officials, and journalists

YouTube is expanding its likeness detection technology, which identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI-generated content and lets them request its removal if they believe it violates YouTube policy.

The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests.

Similar to YouTube’s existing Content ID system, which detects copyright-protected material in uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. Bad actors sometimes use these tools to spread misinformation and manipulate viewers’ perception of reality, deploying deepfaked versions of notable figures, like politicians or other government officials, to make them appear to say and do things they never did in real life.

With the new pilot program, YouTube aims to balance users’ free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure.

“This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy, in a press briefing ahead of Tuesday’s launch. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it,” she noted.


Miller explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression.

The company noted it’s advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual’s voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works.

The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.


These AI videos will be labeled as such, but the label’s placement varies. In most cases it appears in the video’s description; for videos on more “sensitive topics,” the label is displayed prominently on the video itself. This is the same approach YouTube takes with all AI-generated content.

“There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,” said Amjad Hanif, YouTube’s Vice President of Creator Products, explaining the labels’ placement. “It could be a cartoon that is generated with AI. And so I think there’s a judgment on whether it’s a category that maybe merits from a very visible disclaimer,” he said.

YouTube isn’t currently sharing how many AI deepfakes creators have had removed through the detection technology, but noted that the amount of content removed so far has been “very small.”

“I think for a lot of [creators], it’s just been the awareness of what’s being created, but the volume of actually removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business,” Hanif said.

That may not be the case with deepfakes of government officials, politicians, or journalists.

In time, YouTube intends to bring its deepfake detection technology to more areas, including recognizable spoken voices and other intellectual property like popular characters.
