
As AI use rises, Meta boosts efforts to crack down on disinformation ahead of EU elections


People walk by a sign on the Meta campus in Menlo Park, Calif., in 2022. Meta announced that it has expanded its election disinformation team in Europe ahead of parliamentary elections there. File Photo by Terry Schmitt/UPI

Feb. 26 (UPI) — Facebook’s parent company Meta said Monday that it has been working with numerous countries in Europe to root out disinformation ahead of their parliamentary elections this year, an effort made more pressing by the rise of artificial intelligence.

Marco Pancini, head of European Union Affairs at Meta, said in a statement that its Elections Operations Center soon will go live to hunt down potential threats and take mitigating actions in real time.

“We have the largest fact-checking network of any platform and are currently expanding it with three new partners in Bulgaria, France and Slovakia,” Pancini said. “We have committed to taking a responsible approach to new technologies like GenAI, and signed on to the tech accord to combat the spread of deceptive AI content in elections.”

GenAI is generative artificial intelligence that produces text and images, often in response to prompts.

Pancini said Meta will capitalize on its experience from last year, when the company activated its election team to develop a more tailored approach to help ensure the integrity of information appearing on its platforms, including Facebook, Instagram and Threads.

“While each election is unique, this work drew on key lessons we have learned from more than 200 elections around the world since 2016, as well as the regulatory framework set out under the Digital Services Act and our commitment in the EU Code of Practice on Disinformation,” Pancini said.

He said Meta has spent more than $20 billion and expanded its election security teams to 40,000 people, including 15,000 content reviewers who examine content on its platforms in more than 70 languages.

Meta started putting election guardrails on AI last November, when it announced a requirement that advertisers disclose the use of AI in all ads addressing social, election and political issues. The international policy went into effect in January.

Pancini said that when Meta’s fact-checkers debunk content, the company attaches a warning label to it and reduces its distribution in Feed so people are less likely to see it. He said that in 2023, when content received such a label, 95% of users did not click through to view it.

