
On the Big Take podcast, a deep fake operation in India makes videos of candidates directly addressing voters — a trend that could soon spread globally.


(Bloomberg) — Listen to the Big Take podcast on iHeart, Apple Podcasts, Spotify and the Bloomberg Terminal.

Divyendra Jadoun, known as “The Indian Deepfaker,” is having a pretty busy year. The 31-year-old has built a business around making deep fakes for politicians in India — campaign-style videos where candidates appear to address voters by name.

On today’s Big Take podcast, Bangalore-based AI reporter Saritha Rai and EU policy reporter Jillian Deutsch join host David Gura to explore the growing demand for these campaign deep fakes, concerns about disinformation, and the challenges of regulating the technology as India goes to the polls in the world’s biggest election.


Listen to the Big Take podcast every weekday and subscribe to our daily newsletter

Here is a lightly edited transcript of the conversation:

David Gura: In the small town of Pushkar, in northwest India, Divyendra Singh Jadoun is hard at work in his studio.

The vibe is kinda similar to a Silicon Valley startup in its early days:

Jadoun has set up a small workshop, where he and a handful of co-workers create digital content. 

He’s gone from making Bollywood spoofs, to creating videos like this one: 

Deep Fake NARENDRA MODI: My dear Vohit Jain, my dear Manoj Bhagat, my dear Abhishek Samson, my dear God Chauhan   

Gura: It appears to be Narendra Modi, the prime minister of India. But it’s actually a deep fake. 

Jadoun, who calls himself “The Indian Deepfaker,” is using AI technology to make it seem like Modi is speaking directly to individual voters. 

In that clip, you hear him sub in four different names. He refers to each of them as “my dear.” 

The largest election in history just got underway in India, and Jadoun is in high demand.

He makes videos like these, and AI chatbots, for candidates looking to capitalize on a largely unregulated technology to reach a wider range of constituents quickly.


And what Jadoun is doing is changing how politicians campaign in the world’s largest democracy, and that could have implications worldwide as countries grapple with the disinformation risks that come with AI.

This year, the stakes are really high. There are national elections in more than 60 countries, including the US. And this technology doesn’t recognize borders. 

Here’s another video from Jadoun. You might recognize this voice, even if you’ve never heard him speak in Hindi.

Deepfake Trump: I am not the real Trump, Rajesh what are you doing? I am not the real Trump Deepak, what are you doing? Sunita, what are you doing? I am not the real Trump

Gura: That was a deep fake of Donald Trump, from the same public-facing highlight reel Jadoun made that featured the Modi video.

In it, “Donald Trump” is also addressing different people by name, and noting he is not the real Donald Trump.

On today’s show, what a deep-fake operation in India tells us about the year ahead, when there are elections all over the world, and the challenges of regulating this new and fast-growing technology. This is The Big Take, from Bloomberg News. I’m David Gura.


Gura: Divyendra Jadoun is having a pretty busy year. The 31-year-old has built a business around making deep fakes. At first, his work was mostly entertainment-focused.

Saritha Rai: During COVID, he told me, he started using a lot of these technology tools, making videos and superimposing celebrities onto Bollywood dance videos, or superimposing celebrities from Bollywood onto Hollywood movie clips.

Gura: That’s Bloomberg’s Saritha Rai. She covers AI in Asia, from her home base in Bangalore.

Rai: I mean, really clunky little videos. He showed me a few examples. They weren’t really very technically advanced.

Gura: Saritha says the advent of OpenAI’s ChatGPT made it easier for Jadoun to make higher-quality videos, and soon he saw an opportunity to branch into politics in India, where there’s big demand for AI-generated videos. There are, broadly speaking, two kinds of deep fakes we’re seeing in elections this year. There are deep fakes from campaigns that want the next-generation, AI-driven version of a text message tailored to each voter.


The other kind are intentionally misleading videos in which someone’s likeness is used without that person’s knowledge or consent to influence the outcome of an election. Jadoun told Saritha he has ground rules for what kind of work he takes on: 

Rai: He tells me that he doesn’t accept contracts and commissions that come to him directly from candidates. He would rather they come from a digital agency or an AI agency or a contractor, or even a political party. But he says usually they come through these intermediaries that are contracted by the mainstream parties.

He’s definitely not playing sides, in terms of his work. He’s told me that he’s working for practically all the big major political parties.

Gura: What does the operation that he’s built up look like?

Rai: He started off as a one-man shop, but he’s since expanded because of all the work that is coming to him. The first time I spoke to him, he said he had a couple of workers, and that was a few months ago. Then he told me he has five other employees who help him make these videos.

Gura: Making those personalized videos, like the ones of Modi and Trump, involves feeding an AI model audio or video of a person to capture their voice, facial expressions, and movements.


Rai: It takes a few days, up to a week, to train the AI models. But once trained, he can produce these very quickly, almost instantly; within minutes, he can make these videos.

Gura: Saritha says that, in India, politicians are using deep fake videos like Jadoun’s for voter outreach. 

It’s a relatively inexpensive way to appeal to people all over this huge country directly, in a way they can’t at big campaign rallies.

And that’s valuable in India, where there are some 968 million eligible voters:

Rai: It’s the level of hyper-personalization that is possible with generative AI. You can just train the model to say the main message and then feed it with thousands and thousands of names. And it will instantly reproduce that same video and sync it perfectly, with Narendra Modi calling out each person by name.

Gura: The videos are so good, Saritha says, it can be very difficult to discern they’re deep fakes. Especially in a country where technological literacy varies so widely. 

Rai: These are millions and millions of people whose first encounter with any technological device comes through a cheap smartphone.


They are people who had never experienced the internet until they accessed it through this device, for the first time, in the last few years. So the kind of impact that AI could have in India is certainly magnified, because people are so much more vulnerable to the kinds of deception AI can be used for.

I think that really is the power of these AI technologies colliding with the ubiquity of smartphones and cheap broadband in a country like India, where otherwise it would be a really expensive exercise to get out and get the message to the voters.

Gura: You said that he can work quickly, and that it’s not extremely expensive.

Rai: I’d certainly say it’s peanuts compared with what physical campaigning will cost some of these politicians. Think of what it costs, for instance, if Modi flies his private plane to get to a remote corner of the country.

Here, Modi can address all of those voters with a deep fake: a hyper-personalized, individualized message, for maybe $6,000 or $7,000.

Gura: Are you able to ascertain the effect that this kind of technology is having and indeed is likely to have on the election?


Rai: Elections were a magical time, when all kinds of performers, all kinds of theater, all kinds of stuff came by your neighborhood, because that was part of what we call in India the election tamasha, the election theater. Very theatrical stuff.

But when I look outside in my neighborhood today, there’s hardly any visible sign of an election coming up. And in fact, Bangalore, where I’m based, is going to vote in a few days, and there is absolutely no sign of any large-scale campaigning or any overt display of election material.

There are no banners, there are no posters, there is nothing of that sort, which used to be literally plastered all over the streets, hung on buildings and everything. None of that is happening today.

What I see is a change that technology is bringing about: a lot of the campaigning is going to be hyper-personalized to the electors.

Gura: On Friday, India began seven rounds of phased voting. The election will continue until June 1st, and we expect to learn the outcome three days after that. 


Modi is expecting to be re-elected, and Jadoun is anticipating a lot of his work will come days before voting opens in tighter state races.

Rai: A lot of it is going to reach them via their phones. And a lot of it will be AI-generated material.

Gura: After the break, the challenges of regulating the technology behind deep-fakes, not just in India, but all over the world. 

Gura: The explosive growth of generative AI and its potential have made it a challenge to regulate. 

Its proponents tout its potential to boost productivity, and in the case of elections, to reach more voters. But its critics note its pitfalls in an age of heightened disinformation and concerns about cybersecurity. 

And when it comes to regulation, many governments are unable or reluctant to move fast in the technology’s infancy.

In the US, AI legislation is tied up in Congress. The UK is also debating legislation. China does have rules in place for AI companies, but there’s little transparency into how they work in practice.

The European Union has passed some of the most comprehensive regulations, which could be a blueprint for lawmakers elsewhere. Last year, it adopted the Artificial Intelligence Act, which placed reporting requirements and restrictions on how companies operating in the EU are able to develop and use AI.


Bloomberg’s Jillian Deutsch covered the debate over that law from Brussels.

Jillian Deutsch: I think the best way to look at the way the EU is approaching AI is to see that they’re not really regulating the technology itself. They’re regulating the uses of the technology.

Companies are required to submit risk assessments to the European Union. And the more serious harms posed by AI, for example assigning social scores to citizens based on their behavior, or using emotion recognition technology in the workplace or in schools, are flat-out banned in the EU.

These are the kinds of rules meant to make sure that this scary and obviously very promising technology, generative AI, has the correct guardrails in place.

Gura: But, Jillian says, in spite of how this bill is framed, and its splashy rollout, the “AI Act” is still limited in scope.

There’s disagreement over how strict regulations should be, and there’s a fear that regulations will stifle innovation. 

Deutsch: There was also very quick, sudden pushback from countries like Germany and France, saying those are too many rules and Europe is going to shoot itself in the foot if it’s first out of the gate placing what they view as very strict requirements on generative AI companies.


Companies will just go elsewhere; they’re not going to invest on this continent. And we also need to gain the benefits of AI, not just over-regulate based on the concerns. So what ended up happening is that we have more just basic transparency requirements.

Gura: Nothing in the law is explicitly about how AI can or can’t be used in European elections. Other countries are studying Europe’s approach, but many of them are calling on companies to self-regulate. I asked reporter Saritha Rai about this:

Gura: What does the regulatory landscape look like when it comes to AI in India today?

Rai: India has no regulation at all when it comes to AI. Certainly there have been attempts to regulate deep fakes on social media; that’s kind of a roundabout way of getting there. What India’s government has done is ask social media companies such as Facebook and Google to regulate their own platforms: YouTube, for instance, WhatsApp, Facebook, Instagram. The government has told these platforms that when deep fakes or AI-generated content hit these platforms and people bring it to their notice, they have only 24 hours to take the content down.


Gura: Representatives from Meta, Alphabet and other tech companies say they’re trying to address AI’s potential problems. 

Jillian says experts assert it’s hard to know how big a role deep fakes will play in determining the outcome of elections in 2024.

Deutsch: We don’t want to overemphasize how widespread or how impactful deep fakes are. But they’ve obviously captured a lot of people’s attention, because this is something very new: they’re much cheaper to make, much faster to make, much more sophisticated than they used to be. And they don’t have to influence every single person in a country; a lot of these elections are decided on very small percentile differences.

Gura: For the time being, voters will need to bring a critical eye to everything they see online. Jillian points out some deep fakes are harder to detect as fakes than others. Many videos still have tell-tale signs: There could be too many fingers on someone’s hand, or inconsistencies in how clothing or accessories look.

Deutsch: Audio deep fakes are even trickier to identify. We have, you know, far fewer social cues to see that it’s incorrect or that it’s been faked.


Gura: Saritha says this played out in New Hampshire, during the run-up to the presidential primaries there, in January: 

Rai: There were robocalls using voice-cloning technology, impersonating Biden, asking people to vote for a candidate he certainly wasn’t endorsing.

Gura: Increasingly, deep fakes are being seen as a serious disinformation threat.

Gura: There is a fear about how this technology could be used or exploited in American politics. Is there the same apprehension or anxiety in India about the way that this is being used, or could be used? Or is there just a fascination with the whiz-bang nature of it?

Rai: Certainly, I think the world over there are apprehensions about how these deep fakes can be used to fool or deceive voters. We’ve seen it happen in multiple countries, from Indonesia to Pakistan, and even in the US.

Gura: But deep fake creators, like Divyendra Jadoun, see opportunity, Saritha says. His business is booming.

Gura: Does he have designs or a desire to take this work to other countries, uh, and get involved in other countries’ politics?

Rai: Divyendra has told me that he has agents working for him in multiple countries, such as Canada, which is going to an election in 2025. India has always been known for technology services outsourcing, but AI and election deep fake outsourcing may now see a new lease of life with the likes of Divyendra. I think he certainly is ambitious; this might be a new type of outsourcing.

So I think he’s building a new business model.
