As powerful artificial intelligence systems improve their ability to create images, video and text, researchers are increasingly worried about the technology’s seemingly inevitable role in disinformation and propaganda campaigns.
Advancements in generative AI have the potential to radically alter our information environment, to the point where humans – and even machines – may be unable to distinguish AI-generated content from human-made content.
The proliferation of tools powered by generative AI is making disinformation easier to produce, paving the way for a host of new problems with no clear solutions for online content moderators. AI ethicists and others within the industry are calling for stronger regulatory measures.
And while many have hailed generative AI for its ability to provide highly personalised recommendations, the same online personal data that trains AI programmes could potentially be used by chatbots to manipulate people en masse, spreading conspiracy theories or foreign propaganda. How good will AI be at manipulating us, and what can be done to make the technology safer?
In this episode of The Stream, we’ll look at how AI could worsen the online disinformation landscape.
On this episode of The Stream, we speak with:
Henry Ajder, @HenryAjder
AI expert and broadcaster
Sam Gregory, @SamGregory
Executive director, Witness
Asra Nadeem, @asranadeem
Chief Operating Officer, Opus AI