
"Since 2016 we have been evolving our approach to elections to incorporate the lessons we learn and stay ahead of emerging threats," Meta officials wrote Tuesday. However, "we’ve seen a number of influence operations shift much of their activities to platforms with fewer safeguards than ours." File Photo by Terry Schmitt/UPI
“Since 2016 we have been evolving our approach to elections to incorporate the lessons we learn and stay ahead of emerging threats,” Meta officials wrote Tuesday. However, “we’ve seen a number of influence operations shift much of their activities to platforms with fewer safeguards than ours.” File Photo by Terry Schmitt/UPI | License Photo

Dec. 3 (UPI) — Meta on Tuesday unveiled its report on the 2024 elections, revealing, among other things, that the impact of AI on election results across its platforms was minimal at best and that the company removed at least 20 covert influence operations.

The company released the report detailing new information on its work to help secure this year’s elections across multiple nations, including the United States, after known interference in past U.S. elections by foreign actors such as Russia, which, according to Meta, remains the “number one source” of covert influence operations.

“Since 2016 we have been evolving our approach to elections to incorporate the lessons we learn and stay ahead of emerging threats,” Meta officials wrote in the report.

Meta added, however, that its teams have “seen a number of influence operations shift much of their activities to platforms with fewer safeguards than ours.”

It “closely monitored” the threat of generative AI being used by covert campaigns, which the social media company describes as Coordinated Inauthentic Behavior, or CIB, networks. Meta found these operations made only “incremental productivity” and “content-generation gains” using AI this election.

According to Meta, this “unprecedented” year was expected to see as many as 2 billion people vote in some of the world’s biggest democracies.

Meta said it has a “dedicated team” leading its “cross-company election integrity efforts,” drawing on “experts” from its intelligence, data science, product and engineering, research, operations, content, public policy and legal teams.

“Striking the balance between allowing people to make their voices heard and keeping people safe is one that no platform will ever get right 100% of the time,” the company stated. Meta acknowledged that its “error rates are too high” when enforcing content policies, “which gets in the way of the free expression we set out to enable.”

But company officials said Meta ran “a number of” what it called “election operations centers” around the world to “monitor and react swiftly to issues that arose” in major elections in the United States, Bangladesh, Indonesia, India, Pakistan, the European Parliament, France, Britain, South Africa, Mexico and Brazil.

In addition, Meta said it continued its 2020 policy of forbidding new political, electoral and social issue ads in the United States during the final week of the election campaign because “in the final days of an election there may not be enough time to contest new claims.”

It added that since January, Meta has in certain cases required advertisers to disclose when AI or other digital techniques are used to create or alter a political or social issue ad.

Meta’s Imagine AI image generator, meanwhile, reportedly rejected nearly 600,000 requests in the month leading up to election day to generate fake images of President-elect Donald Trump, Vice President Kamala Harris, Vice President-elect JD Vance, Minnesota Gov. Tim Walz and President Joe Biden.

“This has not impeded our ability to disrupt these influence operations,” Meta added, because “we focus on behavior when we investigate and take down these campaigns, not on the content they post — whether created with AI or not.”

According to Meta officials, the 20 new covert influence operations it took down were largely based in the Middle East, Asia, Europe and the United States.

Meanwhile, Russia remains the number one source of covert influence operations. Meta said it has disrupted 39 Russian CIB networks since 2017 and has taken multiple other steps in recent years to limit the spread of Russian disinformation on its social media platforms.

“With every major election,” Meta officials wrote Tuesday, “we want to make sure we are learning the right lessons and staying ahead of potential threats. Striking the balance between free expression and security is a constant and evolving challenge.”

Ahead of the U.S. election, Meta took steps to ban Russian state media outlets like Rossiya Segodnya and RT, despite Russia’s own ban of Meta platforms, including Facebook, inside the country.

The next most frequently cited source of foreign interference was Iran, with 31 CIB networks removed by Meta, followed by China with 11.

“The majority of the CIB networks we’ve disrupted have struggled to build authentic audiences, and some used fake likes/followers to appear more popular than they were,” Meta added.

As an example, Meta said it took down a CIB network that appeared to originate in the Transnistria region of Moldova, a small eastern European nation neighboring southwestern Ukraine, and that targeted only a Russian-speaking audience.

“We removed this campaign before they were able to build authentic audiences on our apps,” Meta said.

However, the “vast majority” of disrupted CIB networks, according to the company, also migrated to other digital platforms such as X, YouTube, TikTok, Telegram, Reddit, Medium and Pinterest.

Likewise, the “vast majority” of CIB networks run their own websites, likely so they can withstand takedowns by any one company, Meta said. The “largest and most persistent” of these, known as “Doppelganger,” has “struggled to get through on our apps and largely abandoned” its schemes, according to Meta.

“The vast majority of Doppelganger’s attempts to target the U.S. in October and November were proactively stopped before any user saw their content,” it added.

Meta said Doppelganger relies on a “vast web of fake websites,” including sites spoofing legitimate news outlets and government agencies. The social media giant said it has exposed more than 6,000 of them and created the “largest public repository” of Doppelganger threat signals so investigators and researchers can take appropriate action, in addition to blocking them on all Meta platforms.

“These findings and policies are intended to give you a sense of what we’ve seen and the broad approach we have taken during this unprecedented year of elections, but they are far from an exhaustive account of everything we did and saw,” said Meta, adding “nor is our approach inflexible.”

Meta pledged to “take stock of what we’ve learned during this remarkable year” and to “keep our policies under review and announce any changes in the months ahead.”
