
Authors: Christopher Jackson and Aaron Spitler*

Digital technologies, particularly AI, are accelerating democratic backsliding and strengthening authoritarian governments. AI-focused companies have been forming close partnerships with government actors, often in ways that undermine democratic norms. Around the world, private firms are supplying or co-designing technologies that enhance mass surveillance, predictive policing, propaganda campaigns, and online censorship. In countries like China, Russia, and Egypt, the blurring of boundaries between the state and the technology industry has had serious consequences: this collusion has undercut privacy rights, stifled civil society, and diminished public accountability.

This dynamic is now playing out in the United States. Companies like Palantir and Paragon Solutions are providing government entities with powerful AI tools and analytics platforms, often under opaque contracts. In September, U.S. President Donald Trump approved the sale of TikTok to private American entities friendly to the administration. Unchecked public-private integration within the technology industry poses serious risks for democratic societies, chiefly by handing greater power to unaccountable actors. This article examines case studies of how these emerging alliances enable authoritarian practices, and what they might mean for the future of democratic societies.

Russia: Manipulating Digital Tools

In Russia, democratic norms have eroded under Vladimir Putin while Russian tech companies work hand in glove with state authorities. Sberbank, the country’s largest financial institution, illustrates this long-running trend through its development of Kandinsky 2.1, an AI-powered text-to-image tool.

Despite producing outputs of comparable quality to rivals like DALL-E, the tool came under fire in 2023 from veteran lawmaker Sergey Mironov, who argued that it generated images that cast Russia in a negative light. He went on to charge that Kandinsky 2.1 was designed by “unfriendly states waging an informational and mental war” against the country.

Not long after, observers in the tech community noticed that Kandinsky 2.1’s outputs had changed. For instance, while the tool previously churned out images of zombies when prompted with “Z Patriot,” users noted that it now repeatedly produced pictures of hyper-masculine figures. Critics claimed that this alteration represented not only an overt manipulation of the technology itself but also an attempt to curry favor with those in the government.
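
Steering of this kind does not necessarily require retraining the underlying model; it can be achieved with a thin intermediary layer that silently rewrites prompts before they reach the generator. The following minimal Python sketch is purely hypothetical — the function names, rewrite table, and stub model are assumptions for illustration, not Kandinsky’s actual design:

```python
# Hypothetical sketch of a prompt-rewrite layer placed in front of a
# text-to-image model. Nothing here reflects Kandinsky's real code;
# the rewrite table, names, and stub model are all assumptions.

SANCTIONED_REWRITES = {
    # politically sensitive prompt -> state-approved substitute
    "z patriot": "heroic, hyper-masculine patriotic figure",
}

def rewrite_prompt(user_prompt: str) -> str:
    """Silently swap flagged prompts for approved ones."""
    return SANCTIONED_REWRITES.get(user_prompt.strip().lower(), user_prompt)

def text_to_image(prompt: str) -> str:
    """Stand-in for the real diffusion model; returns a placeholder."""
    return f"<image generated from: {prompt!r}>"

def generate(user_prompt: str) -> str:
    # The user never sees the substitution happen.
    return text_to_image(rewrite_prompt(user_prompt))

print(generate("Z Patriot"))  # renders the approved substitute, not the request
```

Because any such rewrite would happen server-side, users would have no way to tell a retrained model from one whose inputs are quietly filtered.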

This episode shows how AI-powered tools can be specifically tailored to serve the needs of authorities. The modifications transformed the model into a resource the government could use to amplify its messaging. As a result, users are unlikely to keep seeing Kandinsky 2.1 as a tool for creativity, particularly if its outputs remain blatantly skewed. Developers in countries like Russia may treat this case as a template for how to succeed in restrictive political contexts.

United States: Supercharging Mass Surveillance

AI-centric firms in the United States have also taken note. Palantir Technologies stands as the most prominent example of how private technology firms can deepen government surveillance capabilities in ways that test the limits of democratic accountability. The firm, established in the wake of 9/11, has expanded its domestic footprint through lucrative contracts with local police departments and, most notably, Immigration and Customs Enforcement (ICE).

Investigations reveal that Palantir’s software enables ICE agents to compile and cross-reference vast amounts of personal data, from Department of Motor Vehicles (DMV) records and employment information to social media activity and utility bills. This capability lets the government build detailed profiles of individuals and their community networks, which has helped facilitate deportations and raids on immigrant communities. Critics argue that Palantir’s tools create a dragnet that vastly expands state power, all while shielding the company and its government clients from public oversight.
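
The pattern those investigations describe is, at bottom, ordinary record linkage: disparate datasets joined on a shared identifier until a single query yields a full profile. The sketch below is a deliberately simplified illustration with invented data and schemas; it is not Palantir’s software, only the general technique:

```python
# Minimal illustration of cross-referencing disparate records into one
# profile. All records, fields, and identifiers here are invented.

from collections import defaultdict

dmv_records = [
    {"ssn": "123-45-6789", "name": "J. Doe", "address": "12 Elm St"},
]
employment_records = [
    {"ssn": "123-45-6789", "employer": "Acme Corp"},
]
utility_records = [
    {"ssn": "123-45-6789", "address": "12 Elm St", "utility_account": "active"},
]

def link_records(*sources):
    """Merge every record that shares an identifier into one profile."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["ssn"]].update(record)  # join on the shared key
    return dict(profiles)

profiles = link_records(dmv_records, employment_records, utility_records)
print(profiles["123-45-6789"])
# One query now surfaces name, home address, employer, and utility status.
```

The power lies less in any single dataset than in the join itself: each additional source multiplies what one lookup can reveal about a person and their network.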

Beyond immigration enforcement, Palantir’s Gotham platform has been adopted by police departments for predictive policing initiatives, which attempt to forecast where crimes will occur and who might commit them. Civil liberties groups have warned that such uses reinforce systemic biases by encoding discriminatory policing practices into algorithmic decision-making. Predictive policing algorithms inherit bias because they train on historical data shaped by discriminatory over-policing of Black communities, among others. Scholars of “surveillance capitalism” also note that these partnerships normalize the commodification of personal data for state security purposes.
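
The feedback loop critics warn about is easy to demonstrate with a toy simulation; the numbers below are entirely invented and this is not any vendor’s actual model. Two districts have identical true crime rates, but one starts with more recorded arrests because it was historically over-policed:

```python
# Toy simulation of a predictive-policing feedback loop. Both districts
# have the same true crime rate by assumption; District A simply starts
# with more arrest *records* due to historical over-policing.

arrests = {"A": 100, "B": 50}   # recorded arrests, not actual crime
ARRESTS_PER_DEPLOYMENT = 10     # identical everywhere by assumption

for week in range(1, 6):
    hotspot = max(arrests, key=arrests.get)      # model flags the "riskiest" district
    arrests[hotspot] += ARRESTS_PER_DEPLOYMENT   # extra patrols log extra arrests
    print(f"week {week}: {arrests}")

# week 5: {'A': 150, 'B': 50} -- every deployment goes to District A,
# whose growing record "confirms" the model, while District B goes unmeasured.
```

Each cycle, the model’s prediction manufactures the very data that appears to validate it, so the initial disparity never corrects on its own.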

The deeper concern lies in how this public-private nexus erodes societal trust and transparency. Unlike government agencies bound by Freedom of Information Act (FOIA) requirements, companies like Palantir operate under corporate secrecy, limiting democratic oversight of technologies that profoundly affect civil rights. In this sense, the Palantir case illustrates how authoritarian-style practices, combined with technological breakthroughs, can be incubated within democratic societies and later contribute to their decline.

Challenging Anti-Democratic Alliances

The deepening collaboration between AI firms and state authorities on repressive technologies is alarming. Across the globe, these partnerships have flourished, often to the detriment of ordinary citizens. The examples of Russia and the United States underline how AI firms have proven willing and able to work with governments engaged in repression when it is convenient, leaving the public in the lurch.

Advocates for democracy must educate themselves on how to combat the misuse of AI. Leaders in civil society, for example, could build up their technical knowledge as a starting point. Capacity-building may also have the added benefit of enabling pro-democracy groups to create their own AI solutions that support civic accountability. Activities like these could counterbalance corporate-state collusion that places citizens at a disadvantage, and help ensure that AI tools are designed in ways that strengthen democracies rather than undermine them.

*Aaron Spitler is a researcher whose interests lie at the intersection of human rights, democratic governance, and digital technologies. He has worked with numerous organizations in this space, from the International Telecommunication Union (ITU) to the International Republican Institute (IRI). He is passionate about ensuring technology can be a force for good. You can reach him on LinkedIn.
