
The announcement by Mark Zuckerberg that Meta, the parent company of Facebook, Instagram and Threads, will dramatically overhaul its approach to content moderation on the eve of the second Trump inauguration comes as no surprise. Trump and major social media platforms have been warring for years over perceived anti-conservative bias, including Meta’s decision to shutter Trump’s accounts after the Jan. 6, 2021, Capitol riot.

While the latest moves look like corporate self-preservation in a vindictive political environment, they also reflect stubborn realities about stewarding a global public square — and they will reverberate around the world in ways that could fundamentally reshape the possibilities of a truly global digital platform.

The changes Zuckerberg announced are sweeping. Meta will do away with the U.S. third-party fact-checking program in favor of mimicking X’s “community notes” model of crowd-sourced corrections. Except for the most severe types of unlawful or dangerous content, such as “terrorism, child sexual exploitation, drugs, fraud and scams,” the company will drop reliance on automated systems to flag potential violations, instead leaving it up to users to report objections when they spot a policy breach. Meta will err on the side of leaving up content that does not violate the law or otherwise foster offline harm, and loosen speech restraints on political subjects including immigration and gender identity. Corporate teams that oversee trust and safety will relocate from California to Texas “and other U.S. locations,” presumably to reflect more mainstream — meaning red state — political sensibilities.

Meta has wasted no time revamping its rules in keeping with this shift. Updates on Tuesday to the company’s hateful conduct policy, part of its “Community Standards,” narrowed its definition of “dehumanizing speech.” They eliminated prior bans on, for example, likening people to feces and on speech denying the existence of protected characteristics such as religious affiliations and gender identities. After years of campaigning for more and updated internet regulations, Meta leadership is now pledging to work with the Trump administration to ward off guardrails around the world, repudiating Biden-era efforts to curtail disinformation and online abuse. Dana White, a Trump ally and the Ultimate Fighting Championship chief executive, is joining Meta’s board.

While the policy changes are tailored to U.S. politics, Meta’s community standards apply globally, with some variation to comply with local laws that require more content to be taken down than the company’s own rules do. International users and civil society organizations have long chafed at social media platforms’ orientation around American priorities. They have decried the scarcity of content moderators fluent in foreign languages and companies’ failure to invest adequately in local cultural competence. In adjudicating content amid conflicts including the Ukraine and Israel-Hamas wars, Meta is regularly accused of willful blindness; its platforms have been faulted for reducing engagement with Palestinian news outlets and for enabling the spread of Russian state-sponsored disinformation.

These latest changes confirm that while Meta claims billions of users worldwide, at least for now the audience that matters most resides at 1600 Pennsylvania Avenue. The relocation of trust and safety teams to Texas risks trading years of expertise in grappling with global moderation dilemmas for an uncertain bid to have a finger on the pulse of what’s assumed to be middle America.

Narrower hate-speech policies may make sense to enable more debate on hot-button U.S. issues including the border and gender identity. But they will also end up loosening the spigot on ethnic vilification in parts of the world including Myanmar, South Sudan and Ethiopia, where such hatreds can explode into uncontrolled violence. Meta has not explained how the new global standards will apply in other jurisdictions, including not just repressive societies but also liberal democracies such as the United Kingdom, Germany and Canada that define and prohibit hateful speech more broadly than does the United States.

Although the U.S. is rightly proud of its 1st Amendment tradition and wide berth for free speech, it should not force its standards on the world. Nor can the connection between online hate speech and physical violence be dismissed at home, at a time when the link between online extremism and real-world terror may be intensifying.

Zuckerberg’s announcement is likely to drive some Meta users off the platform. Those in the U.S. who aren’t interested in seeing more hateful speech or political content can simply abandon their accounts and stay informed and connected to friends through other online services.

But in the West Bank, the Maldives, some 30 countries in Africa and elsewhere, Meta platforms essentially are the internet. In Africa, Meta invested in a program known as Free Basics that gives users free online access through their phones, a controversial lifeline for individuals and businesses that also leaves them dependent on the platform.

Meta is not wrong to point out that the assertive policing of online speech is riddled with line-drawing exercises that are impossible to carry out at scale across billions of posts every day. Its choice to err on the side of more speech is a defensible, if imperfect, response to a political moment in the U.S. in which a large segment of the population feels constricted by the limits of acceptable public discourse.

Around the world, though, users will have little say over how these changes play out in vastly different contexts, with consequences that may reshape societies and even threaten lives. America first, indeed.

Suzanne Nossel is a member of Facebook’s Oversight Board and the author of “Dare to Speak: Defending Free Speech for All.”
