Meta removes AI deepfake video of Irish presidential candidate

Meta has removed a deepfake AI video of Irish presidential candidate Catherine Connolly, which falsely depicted the politician saying she was withdrawing from the election. According to The Irish Times, the AI-generated video was shared nearly 30,000 times on Facebook in the days before Ireland’s election on October 24 before it was taken down. Connolly called the video “a disgraceful attempt to mislead voters and undermine [Ireland’s] democracy” and assured voters that she was “absolutely still a candidate for President of Ireland.”

The video was posted by an account calling itself RTÉ News AI, which is not affiliated with the actual Irish public service broadcaster Raidió Teilifís Éireann. It copied the likenesses not just of Connolly, but also of legitimate RTÉ journalist Sharon Ní Bheoláin and correspondent Paul Cunningham. “It is with great regret that I announce the withdrawal of my candidacy and the ending of my campaign,” the AI version of Connolly said in the fake video. Ní Bheoláin was shown reporting on the announcement and confirming the candidate’s withdrawal from the race. The AI version of Cunningham then announced that the election had been cancelled and would no longer take place, with Connolly’s opponent Heather Humphreys automatically winning. Connolly, an independent candidate, is leading the latest polls with 44 percent.

Meta removed the RTÉ News AI account entirely after being contacted by the Irish Independent. The company told The Irish Times that it removed the video and account for violating its community standards, particularly its policy prohibiting content that impersonates or falsely represents people. Irish media regulator Coimisiún na Meán said it was aware of the video and had asked Meta what immediate measures it took in response to the incident. Meta has been struggling for years to keep deepfake and maliciously edited videos featuring celebrities and politicians under control. The company’s Oversight Board warned it earlier this year that it wasn’t doing enough to enforce its own rules and urged it to train content reviewers on “indicators” of AI-manipulated content.

Disclaimer: This story is auto-aggregated by a computer programme and has not been created or edited by DOWNTHENEWS. Publisher: engadget.com