
Bengaluru, India – On February 20, India’s chief opposition party, the Indian National Congress (INC), uploaded a video parodying Prime Minister Narendra Modi on Instagram that has amassed over 1.5 million views.

It is a short clip from a new Hindi music album named “Chor” (thief), in which Modi’s digital likeness is grafted onto the lead singer. The song’s lyrics were humorously reworked to describe a thief – in this case, a business tycoon – attempting to steal, with Modi handing over coal mines, ports, power lines and, ultimately, the country.

The video isn’t hyperrealistic, but a pithy AI meme that uses clones of Modi’s voice and face to drive home the nagging criticism of his close ties to Indian business moguls.

That same day, the official Bharatiya Janata Party (BJP) handle on Instagram, with over seven million followers, uploaded its own video. The one-minute clip is a supercut of Modi campaigning on the streets atop his car, spliced with real footage of beneficiaries of government schemes. What sets it apart is the background score: an old patriotic Hindi song by the legendary singer Mahendra Kapoor, who died in 2008, recreated in an AI voice.

The lyrics were modified to highlight Modi’s achievements over the past nine years, including helping farmers and enabling Indian scientists to land a rover on the moon. Audio forensic experts, who tested the clip at Al Jazeera’s request, confirmed that the vocals are AI-generated.

While AI-enabled meme wars have been playing out over the past year, this is the first time both the BJP and the INC have created and shared AI-crafted political content on their official party handles without explicit disclosures.

“This is at the inflexion point of an entirely new way of conducting visual politics and arguably one that will foundationally change the way we consume multimedia artefacts during political campaigns,” said Joyojeet Pal, an associate professor at the University of Michigan.

An Al Jazeera review and subsequent forensic testing found at least three instances of AI-created or altered content published on the official Instagram handles of both the INC and the BJP since February 20. Political parties are pushing the limits of AI’s use to both ridicule opponents and boost their own popularity on official pages, and in doing so, testing the boundaries of platform policies on labelling deceptive political content. (Some clips were cross-posted on YouTube and Facebook, as well.)

Meta’s current rules require advertisers to disclose when they use AI-edited political advertisements, but such provisions don’t apply to political pages and accounts.

“On top of their own labelling, when they detect signals showing AI usage, there’s no reason for an exception for this disclosure requirement around content on political pages and accounts,” Sam Gregory, executive director of non-profit Witness, which studies the use of deepfakes to defend human rights, told Al Jazeera. “Even more so than elsewhere on the web, there should be transparency on AI usage in political contexts, given the gaps in ability to detect its use and the risks of deception.”

Meta did not respond to Al Jazeera’s queries on AI-labelling policies.

Earlier this year, the independent Oversight Board that weighs in on significant content moderation decisions at Meta criticised the company’s manipulated media policy, saying it needs to be expanded to cover fake audio and to attach labels to misleading content. Meta subsequently announced in a blog post that it will label AI-generated images created using its own AI tools and that it is working with industry partners on technical standards to help identify and label AI audio and video created using other companies’ tools.

YouTube did not comment on the specific AI-enhanced videos in question. “We have started displaying labels for content created with YouTube generative AI features, like Dream Screen,” a YouTube spokesperson said in an emailed statement to Al Jazeera. “Creators will soon be required to disclose when they’ve created realistic altered or synthetic content,” the statement said, while linking to a November 2023 blog post.

Covert AI meme communications

In the Modi “Chor” video shared by the INC on its @INCIndia Instagram handle, the telltale flaws of a deepfake are easy to spot, such as mismatched lip-syncing and a mild glitch on AI-Modi’s face. The video nonetheless drew a wave of laughter online, generating more than 1.6 million views. “Don’t know about 2024 but clearly you guys have won the meme Fest !!,” commented historian Eshan Sharma.

Congress party leader Rahul Gandhi, centre, is one of the opposition leaders in India’s upcoming elections [File: AP Photo/Mahesh Kumar A]

Vaibhav Walia, chairman of the INC communications war room for the 2024 election, told Al Jazeera that meme-based communication is more effective at getting the party’s message across on social media. One reason for using AI, he said, is the need to create “standout content that can catch the eyes” of voters and to keep up with current social media trends.

“When everything is communicated in terms of memes and sarcasm, you can’t be really formal,” Walia said. “In India, official political party channels are also moving in a direction wherein the blows are getting nastier by the day. The Congress [INC] party … has some young blood in the party, and they are pretty nasty in giving it back to the BJP.”

Walia did not comment on the voluntary labelling of AI content.

The two suspected AI clips shared by the BJP’s official Instagram handle, @BJP4India, were harder to identify as AI-created. Al Jazeera shared them with two independent deepfake detection experts to test whether they were AI-enhanced. Both analysts agreed that the singing voice in the first video was an AI voice, but they were divided on the second, with one judging that the voice could be human “mimicry” of the politicians.

“We have run the two samples through several proprietary state-of-the-art research AI-voice detection algorithms [and found that] the voice sample is unanimously determined by these algorithms to be AI-generated with likelihood scores ranging from 60 percent to 99 percent,” Siwei Lyu, director of UB Media Forensic Lab, told Al Jazeera.

BJP spokesperson Gaurav Bhatia did not respond to Al Jazeera’s queries.

A synthetic transformation

The first video, which recreates a 1970s Hindi song in the voice of singer Mahendra Kapoor, was judged by both experts to be AI-generated.

Varshul Gupta, co-founder of the Indian voice AI startup Dubverse.ai, told Al Jazeera that the creators used a retrieval-based voice conversion (RVC) model to synthetically transform another voice into Kapoor’s.

“This is 100 percent RVC,” Gupta said.

India will hold national elections in the next few weeks [File: Deepak Sharma/AP Photo]

Since most of Kapoor’s hit songs date from the early 1970s, his voice would have been recorded on old equipment, using analogue mixing formats that are not suitable for training a modern RVC model. To create an AI-generated version of Kapoor’s voice, Gupta said, the creators are likely to have gathered high-quality voice data and “manually cleaned” it to remove the blemishes of the old recordings before training the AI model.

“Some audio filtering was also done to make it sound like an ‘old [music] record’,” Gupta said about the clip. He added that the creators also downgraded the final audio to make it sound “more natural”.
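Gupta’s description amounts to a post-processing recipe: band-limit a clean synthetic vocal and layer in noise so it passes for a vintage recording. The Python sketch below is a rough illustration of that idea; the file names, filter band and noise level are hypothetical assumptions, the RVC voice-conversion step itself is not shown, and this is not the creators’ actual pipeline.

```python
# A minimal sketch, assuming a clean synthetic vocal already exists on disk.
# It band-limits the audio and adds hiss so it resembles an "old record".
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("synthetic_vocal.wav")   # hypothetical input file
audio = audio.astype(np.float64)
if audio.ndim > 1:                                   # fold stereo down to mono
    audio = audio.mean(axis=1)
peak = np.max(np.abs(audio))
if peak > 0:
    audio /= peak                                    # normalise to [-1, 1]

# Band-pass roughly 300 Hz - 3.4 kHz to mimic the narrow frequency
# response of old analogue recording gear.
sos = butter(4, [300, 3400], btype="bandpass", fs=rate, output="sos")
vintage = sosfiltfilt(sos, audio)

# Add low-level broadband noise as a stand-in for tape hiss and surface noise.
rng = np.random.default_rng(0)
vintage = vintage + 0.01 * rng.standard_normal(len(vintage))

# "Downgrade" the result by quantising to 8-bit resolution before saving.
vintage = np.clip(vintage, -1.0, 1.0)
vintage = np.round(vintage * 127) / 127
wavfile.write("vintage_vocal.wav", rate, (vintage * 32767).astype(np.int16))
```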

UB Media Forensic Lab’s Lyu concurred: “Three out of four algorithms gave the singing sample as AI generated with a likelihood over 95 percent, but one deemed it real.”
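As a rough illustration of how such a verdict is reached, the sketch below combines per-detector likelihood scores with a simple majority vote. The scores are placeholders mirroring the figures quoted above; the actual detection algorithms Lyu refers to are proprietary and are not reproduced here.

```python
# A minimal sketch, not Lyu's actual system: combining per-detector
# "probability of AI" scores into a single verdict by majority vote.
from statistics import mean

def aggregate_verdict(scores, threshold=0.5):
    """Return (is_ai, mean_score, ai_votes) for a list of detector scores."""
    ai_votes = sum(score >= threshold for score in scores)
    is_ai = ai_votes > len(scores) / 2
    return is_ai, mean(scores), ai_votes

# Placeholder scores mirroring the article: three of four detectors above
# 95 percent, one judging the sample real.
scores = [0.97, 0.96, 0.99, 0.40]
is_ai, avg, votes = aggregate_verdict(scores)
print(f"AI-generated: {is_ai} ({votes}/{len(scores)} detectors, mean score {avg:.2f})")
```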

The second video features caricatures of opposition politicians, including Mamata Banerjee and Shashi Tharoor, reciting poetry in their own voices. It mocks the INDIA alliance, a new bloc of 28 opposition parties formed to counter the BJP in the upcoming elections. While Lyu and his team assessed that these voices were synthetically generated, Dubverse’s Gupta said they could be mimicry.

“Getting a mimicry artist would make more sense,” Gupta told Al Jazeera.

“It was inevitable for political parties to use AI-enhanced clips from official handles, but the strange thing was it was shared without anyone noticing,” said Sagar Vishnoi, a political consultant. “Both clips were so fine-tuned that no one noticed it.”

On February 24, the official X handle of the All India Anna Dravida Munnetra Kazhagam (AIADMK) party posted a minute-long audio clip of J Jayalalithaa, the icon of Tamil politics who died in 2016. Her voice was resurrected using AI by the party’s digital media wing to seek support for her successor contesting the upcoming 2024 elections.

“This generation of party workers have grown up hearing Amma – to hear her voice is something like rocket fuel to us,” Raj Sathyan, the secretary of the party’s IT wing, told Al Jazeera. Jayalalithaa is fondly known as “Amma” or “mother” by followers. “To recreate Amma, and to hear voices is what almost 20 million [party] members would look for.”

This particular clip came with the disclaimer that it’s AI-generated, and Sathyan told Al Jazeera the party will utilise AI throughout the 2024 campaign trail to “reach out to more people who are the loyal followers of Amma”.

Need for mandatory disclosures

Political organisations leveraging social media and influencers is nothing new, but these examples of AI-enhanced memes and videos mark a unique moment in how parties communicate with the public.

“It will also change the way that technology companies and parties themselves manage the authentication process, since the future will require ways in which viewers will need to confirm where content is coming from, something that technology companies do a very poor job with now,” Pal of the University of Michigan said.

As more instances of subtle AI touches or enhancements get shared on official political handles, experts said that platforms should enforce mandatory disclosure of AI-edited content for political pages.

“This is especially important for AI-generated voices – unlike images or videos, we cannot tell artefacts from audios,” Lyu said. “Audiovisual media’s editing nature must be disclosed to the audience, not only AI-editing but also non-AI editing, such as Photoshopping. This is to avoid potentially misleading the audience.”

Misinformation researcher Tarunima Prabhakar concurred that as the ability of AI-generated content to mimic reality increases rapidly, disclosures on when AI is being used for creating or enhancing content become necessary.

Prabhakar is part of an effort to set up a “deepfake analysis unit” where users can flag deepfakes by sending them to a WhatsApp chatbot which will aim to debunk false claims. The helpline is expected to be live this month.

The Election Commission of India did not respond to queries on guidelines around political parties sharing AI-altered content.

AI humour isn’t always welcome

While political parties are getting by without disclosing the use of AI, a few Indian citizens using AI to parody the powerful have not been as lucky.

In January, a youth was arrested by police in the state of Tamil Nadu for sharing a satirical AI video on Twitter, now X. The 87-second video, reviewed by Al Jazeera, is a satirical poem reimagined in the voice of an influential local poet, Vairamuthu, humorously criticising the former chief minister of Tamil Nadu for financial corruption and dynastic politics.

“The alleged poem in the aforesaid video contains malicious, false, and defamatory statements about the former Chief Minister of Tamilnadu, with the sole intent of defaming DR. Kal[a]ignar, and using Artificial Intelligenc[e] technology to cause disturbance to the public order,” the complainant notes in the police report filed on January 15, a copy of which Al Jazeera has obtained.

The video was reposted multiple times and gained more than 19,500 views on X, the complainant noted, causing irreparable damage to the reputation and dignity of the poet and the politician.

Despite a clear disclaimer at the start of the video that “voices are AI-generated and are not real”, the police arrested an individual connected to the case.

“Even if one is to hold that there is ‘impersonation’, there’s no ‘cheating’ [as is] alleged in this complaint,” said Pranesh Prakash, affiliated fellow at the Yale Law School’s Information Society Project. “So how section 66D of the IT Act applies even prima facie is beyond me,” he said, referring to a law that pertains to punishing anyone who cheats by impersonation using a computer resource.

The forgery alleged in the complaint requires dishonest or fraudulent intent, “but in this case, there is a prominent disclaimer. So there’s no dishonesty and certainly no fraud,” Prakash said after reviewing the video and the police report.

“Protecting the ability to use generative AI and deepfakes for satire is an important dimension of free speech,” said Gregory of Witness. “A video and audio montage like this satirical poem clearly signals at the start that it is generated with AI and uses a stylised format rather than pretending to be realistic. Disclosure of the use of AI and a clear understanding through form that a work is satirical are two key components that should make this highly defensible as satirical speech.”

This is the second known instance of AI satire being punished in India. In 2021, a 28-year-old from Gujarat was arrested for creating a satirical shallowfake of then-Chief Minister Vijay Rupani singing ‘I Knew You Were Trouble’, a song by American singer Taylor Swift. The youngster created the video using Wombo.ai, an off-the-shelf AI app.

Even as platforms and governments advocate for disclosures on AI uses, experts caution that deepfakes used to create political satire need to be protected.

“Legislation worldwide is starting to consider how to label and disclose the use of AI, as well as limitations on deepfakes. In democratic contexts, these proposed regulations, like the [European Union] AI Act, make sure to account for satirical and parodic uses [by online creators],” Gregory said. “In nondemocratic contexts, like China, they push for disclosure of both the creator of AI-generated content, as well as restrictions on satire and parody.”

While deepfakes blur the line between satire and photorealistic or audio-realistic deception, there are global examples such as Brazilian deepfake satirist Bruno Sartori, who makes lifelike but clearly implausible pastiches of key political figures in Brazil, from President Luiz Inacio Lula da Silva to former President Jair Bolsonaro.

Gregory points out that satirical speech is also often pseudonymous, as witnessed in the case of a popular Instagram account that shares AI-generated images of British politicians performing menial jobs or as refugees.

“This is why it’s so critical that emerging infrastructure and laws that help us understand how AI was used in the making of the images, audio or video we consume should focus on the ‘how’ of the media-making, not the ‘who’ made it. Knowing that an image was made with AI, even if it’s photorealistic, can help us understand that this is satire, not disinformation,” he said.


