
Deepfake Fraud Threatens CFOs: Protecting Corporate Finance

Multifactor verification and other precautions are becoming essential as AI enables more sophisticated scams.

Video and phone call freezes are typically attributed to poor service or some other external cause. But if you notice unusual white hairs around the edge of your CFO’s beard just before a freeze, and when the call resumes seconds later the beard is once again jet black, should you follow his instructions to transfer funds?

Perhaps, but not without further verification. Fraudsters, aided by AI applications, may one day—soon, even—perfect so-called deepfake audio and video calls. But even now, “tells” can indicate something is amiss, and the temporary freeze could actually be AI’s doing.

“I was recently testing a platform that had a feature designed to help hide artifacts, glitches, or syncing issues,” recalls Perry Carpenter, chief human risk management strategist at KnowBe4, a security awareness and behavior change platform. “The program would freeze the video on the last good deepfake frame to protect the identity of the person doing the deepfake. It’s clear that some attackers are using adaptive strategies to minimize detection when their deepfakes start to fail.”


“There should almost never be an immediate need to wire a large amount of money without first verifying [it].”

Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4


To what extent such attacks are successful, or even attempted, is unclear, since companies typically keep that information under wraps. One significant attack, reported last year by CNN and others, involved a Hong Kong-based corporate finance executive at UK-based engineering firm Arup who warily eyed an email requesting a secret $25 million payment. He sent the money anyway, after a video call with several people who looked and sounded like colleagues but were, in fact, deepfakes.

In another incident reported by The Guardian last year, scammers used a publicly available photo of Mark Read, CEO of advertising giant WPP, to establish a fake WhatsApp account. That account in turn was used to set up a Microsoft Teams meeting that used a voice clone of one executive and impersonated Read via a chat window to target a third executive, in an attempt to solicit money and personal details.

A WPP spokesperson confirmed the accuracy of The Guardian’s account but declined to explain how the scam was foiled, noting only, “This isn’t something we are eager to relitigate.”

Self-Correcting Deepfakes

Unlike deepfake video clips, which are extremely difficult to detect, real-time voice and video via social messaging platforms are still prone to errors, says Carpenter. Whereas earlier deepfakes had obvious tells, like facial warping, unnatural blinking, or inconsistent lighting, newer models are starting to self-correct those irregularities in real time.

Consequently, Carpenter doesn’t train clients on the often fleeting technical flaws, because that can lead to a false sense of security. “Instead, we need to focus on behavioral cues, context inconsistencies, and other tells such as the use of heightened emotion to try to get a response or reaction,” he says.

Rapid deepfake evolution poses an especially significant risk for corporate finance departments, given their control over the object of the fraudsters’ desire. Distributing a new code word to verify identities, perhaps daily or even per transaction, is one approach, says Stuart Madnick, professor of information technology at MIT Sloan School of Management. There are various ways to do so safely.
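
One way to realize such a rotating code word without distributing anything over the channel being tested is to derive it from a pre-shared secret. The sketch below is a hypothetical illustration, not a vetted protocol; the function name, word list, and secret are invented, and it assumes the secret was exchanged out of band.

```python
import datetime
import hashlib
import hmac

def daily_code_word(shared_secret: bytes, wordlist: list[str]) -> str:
    """Derive today's verification code word from a pre-shared secret.

    Both parties hold the same secret (exchanged out of band) and the
    same word list, so each can compute the day's word independently,
    with nothing sensitive sent over the call being verified.
    """
    today = datetime.date.today().isoformat().encode()
    digest = hmac.new(shared_secret, today, hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(wordlist)
    return wordlist[index]

# Demo only: a real deployment would use a high-entropy secret.
words = ["granite", "harbor", "meadow", "lantern", "orchard"]
print(daily_code_word(b"example-shared-secret", words))
```

Swapping the date for a transaction ID in the same construction would yield a per-transaction word rather than a daily one.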

When executives in corporate finance who deal with large fund transfers are well acquainted, they can test their voice or video counterparts by asking semi-personal questions. Madnick has asked alleged colleagues what their “brother Ben” thinks about an issue, when no such brother exists.

A clever but not permanent solution, Madnick cautions: “The trouble is that the AI will learn about all of your siblings.” Ultimately, all companies should use multifactor authentication (MFA), which bolsters security by requiring verification from multiple sources; most large companies have broadly implemented it. But even then, some critical departments may not consistently use MFA for certain tasks, notes Katie Boswell, US Securing AI leader at KPMG, leaving them susceptible.
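
The one-time codes behind most authenticator apps used in MFA are time-based one-time passwords (TOTP, RFC 6238). As a minimal sketch of how such a code is computed (the secret below is a demo value, not anyone's real key):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation picks 4 bytes of the MAC, then reduces to N digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; output changes every 30 seconds
```

Both the server and the app hold the same secret, so each can compute the code independently; an attacker who has merely cloned a voice or face holds neither.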

“It’s important for corporate leadership to collaborate with their IT and technology teams to make sure that effective cybersecurity solutions, like MFA, are in the hands of those most likely to be exposed to deepfake attacks,” she urges.

Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4

Identifying Multifaceted Scams

Even with MFA, devious fraudsters can mine social media and online resources, use AI to conjure authentic-looking invoices and other documents, and, combined with deepfake video or audio, create backstories persuasive enough to convince executives to make decisions they later regret. That makes training critical: conditioning executives who handle large sums of money to automatically pause when they receive unusual requests and to demand additional verification.

“There should almost never be an immediate need to wire a large amount of money without first verifying through a known internal channel,” says Carpenter. An interlocutor who communicates over a private phone or email account is also problematic, especially if they resist moving the conversation to the company’s secure systems. Ploys like adopting a tone of urgency, authority, or high emotion are also red flags, “so it’s critical that people give themselves permission to pause and verify,” he says.
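
That pause-and-verify discipline can be written into payment workflows as well as training. The following is a hypothetical sketch; the threshold, channel names, and directory are invented for illustration, not drawn from any vendor's product.

```python
# Hypothetical policy check: large transfers, or requests arriving over
# channels that can be spoofed (video calls, chat), are held until the
# requester is called back on a number from an internally maintained
# directory, never on a number supplied in the request itself.

APPROVED_DIRECTORY = {"cfo": "+1-555-0100"}  # maintained by IT
TRUSTED_CHANNELS = {"internal-erp"}
THRESHOLD = 10_000.00

def needs_callback(amount: float, channel: str) -> bool:
    """Flag any large or externally channeled request for verification."""
    return amount >= THRESHOLD or channel not in TRUSTED_CHANNELS

request = {"from": "cfo", "amount": 250_000.00, "channel": "video-call"}
if needs_callback(request["amount"], request["channel"]):
    number = APPROVED_DIRECTORY[request["from"]]
    print(f"Hold transfer; verify by calling back on {number}.")
```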

While two or more verifications help, companies must still ensure their verification sources are secure. Madnick recalls a client company losing money when a fraudster passed a phony check. Suspicious, the bank called the company’s corporate finance department to verify the transaction, but the fraudster had already instructed the phone company to reroute the department’s calls to a number he controlled, where the check was “validated.”

“Companies can set up procedures with their phone company that require them never to reroute calls without further verification with the company,” Madnick says. “Otherwise, it’s at the discretion of the phone company.”

Given corporate finance’s allure for fraudsters, KPMG’s Boswell stresses the importance of keeping abreast of emerging threats. Since CFOs and other top finance leaders must focus on their immediate duties, they can’t be expected to read the latest research on deepfake attacks. But companies can establish policies and procedures ensuring that IT or other experts regularly update them, raising finance’s awareness of the latest types of attacks, both internally and at other companies.

Madnick regularly asks corporate finance executives to raise their hands if they know their departments have faced cyberattacks. Many do not.

Katie Boswell, US Securing AI Leader, KPMG

“The trouble is that cyberattacks on average continue over 200 days before they’re discovered,” he says. “So, they may think they haven’t experienced an attack, but they’re just not aware of it yet.”

Corporate finance can also include deepfake scenarios in its risk assessments, including tabletop exercises incorporated in the company’s security initiatives. And employees should be encouraged to report even unsuccessful attacks, or what they believe may have been attacks, that they might otherwise dismiss, Boswell advises.

“That way, others in the organization are aware that it has potentially been targeted, and what to look out for,” she says.

In addition, while C-suite executives at large companies may have significant public profiles, information available externally about lower-level executives and departments such as accounts payable and accounts receivable should be limited. “Threat actors use that type of information more frequently, using AI to help manipulate targets through social engineering,” Boswell notes. “If they don’t have access to that data, they can’t incorporate it in attacks.”

Such precautions are only becoming more important as deepfake fraudsters broaden and deepen their reach. While such scams have been spreading fastest in major economies such as the US and Europe, even countries whose languages are less widely spoken are increasingly exposed.

“Most criminals may not know Turkish, but what’s great about AI systems is that they can speak just about any language,” Madnick cautions. “If I were a criminal, I would target companies in countries that have been targeted less in the past, because they are probably less prepared.”


3 things that should scare us about Trump’s fake video of Obama

On Sunday, our thoughtful and reserved president reposted on his Truth Social site a video generated by artificial intelligence that falsely showed former President Obama being arrested and imprisoned.

There are those among you who think this is high humor; those among you who find it as tiresome as it is offensive; and those among you blissfully unaware of the mental morass that is Truth Social.

Whatever camp you fall into, the video crosses all demographics by being expected — just another crazy Trump stunt in a repetitive cycle of division and diversion so frequent it makes Groundhog Day seem fresh. Epstein who?

But there are three reasons why this particular video — not made by the president but amplified to thousands — is worth noting, and maybe even worth fearing.

First, it is flat-out racist. In it, Obama is ripped out of a chair in the Oval Office and forced onto his knees, almost bowing, to a laughing Trump. That imagery isn’t hard to interpret: America’s most esteemed Black man — who recently warned we are on the brink of losing democracy — forced into submission before our leader.

The video comes as Trump claims that Tulsi Gabbard, director of national intelligence, has uncovered a “treasonous conspiracy in 2016” in which top Obama officials colluded with Russia to disrupt the election. Democrats say the claim is erroneous at best.

If you are inclined to give Trump the benefit of the doubt, consider that right before this scene of Obama forced to kneel, a meme of Pepe the Frog — an iconic image of the far right and white supremacy — flashes on the screen.

Not subtle. But also not the first time racism has come straight from the White House. On Monday, the Rev. Amos Brown, pastor of San Francisco’s Third Baptist Church and a student of Martin Luther King Jr., reminded me that not too long ago, then-President Woodrow Wilson screened the pro-KKK film “The Birth of a Nation” at the executive mansion. It was the first film screening ever held there, and its anti-Black viewpoint sparked controversy and protests.

That was due in no small part to a truth that Hollywood knows well — fiction has great power to sway minds. Brown sees direct similarities in how Wilson amplified fictional anti-Blackness then, and how Trump is doing so now, both for political gain.

“Mr. Trump should realize that Obama hasn’t done anything to him. But just the idea, the thought of a Black person being human, is a threat to him and his supporters,” Brown told me.

Brown said he’s praying for the president to “stop this bigotry” and see the error of his ways. I’ll pray the great gods give the reverend good luck on that.

But, on the earthly plane, Brown said that “the more things change, the more they remain the same.”

Trump courted the Black vote and has his supporters among people of all colors and ethnicities, but he’s also played on racist tropes for political success, from stoking fear around the Central Park Five, now known as the Exonerated Five, decades ago to stoking fear around Black immigrants eating cats and dogs in Ohio during the recent election. It’s an old playbook, because it works.

Reposting the image of Obama on his knees is scary because it’s a harsh reminder that racism is no longer an undercurrent in our society, if it ever was. It’s a motivator and a power to be openly wielded — just the way Wilson did back in 1915.

But the differences in media from back in the day to now are what should raise our second fear around this video. A fictional film is one thing. An AI-generated video that for many people seems to depict reality is a whole new level of, well, reality.

The fear of deepfakes in politics is not new. It’s a global problem, and in fairness, this isn’t the first time (by far) Trump or other politicians have used deepfakes.

Trump last year reposted an image of Taylor Swift endorsing him (which never happened). Also last year, during the election and the height of the Elon Musk-Trump bromance, the billionaire posted a fake photo of political challenger Kamala Harris dressed in what looked like a communist military uniform.

Trump himself has not been immune. In 2023, Eliot Higgins, the founder of the investigative outlet Bellingcat, said he was toying with an AI tool and created images of Trump being arrested, never thinking it would go viral (especially since one image gave Trump three legs).

Of course it did, and millions of people looked at these fake pictures, at least some assuming they were real.

The list of deepfake political examples is long and ominous. Which brings us to the third reason Trump’s latest use of one is unsettling.

He clearly sees the effectiveness of manipulating race and reality to increase his own power and further his own agenda.

Obama on his knees strikes a chord all too close to the image of Latino Sen. Alex Padilla being taken to the floor by federal authorities a few weeks ago during a news conference. It bears chilling resemblance to the thousands of images flooding us daily of immigrants being taken down and detained by immigration officers in often violent fashion.

Videos like this one of Obama are the normalization, the mockery, the celebration of the erosion of civil rights and of the violence we are currently seeing aimed at Black, brown and vulnerable Americans.

There is nothing innocent or unplanned about these kinds of videos. They are a political weapon being used for a purpose.

Because when repetition dulls our shock at them, how long before we are no longer shocked by real images of real arrests?


This entrepreneur spots celebrity deepfakes. Can he help average Joes too?

Celebrities are all too familiar with the world of deepfakes, the colloquial term for artificial intelligence-generated videos that depict actors and other Hollywood talent falsely doing or saying things that they never agreed to.

To protect themselves, actors such as Steve Harvey, as well as Beverly Hills talent agency WME and studios, have enlisted the help of Vermillio, a Chicago-based company that tracks famous people’s digital likenesses and intellectual property online. Depending on what its clients want, it can have the material taken down.

But as AI technology continues to improve and becomes more widely available to the general public, regular people are getting scammed too.

Now, Vermillio says it is offering a version of its service for free to everyone.

The move comes as ever more convincing deepfakes proliferate online, making it difficult for social media sites to police such activity. In 2019, there were about 18,000 deepfakes globally; this year, there have been roughly 2 trillion generative creations, said Vermillio Chief Executive and co-founder Dan Neely.

That leaves average Joes at a growing risk of being impersonated online, with little recourse.

“We can’t wait for governments to solve this problem,” Neely said. “We can’t wait for legislators to solve this problem. We can’t wait for other people to solve this problem. We just said it’s the right thing to do, so we should just be doing it.”

With this move, Vermillio is adopting a classic “freemium” model — offering partial service for no charge and upselling additional features.

Here’s how it works.

Using its TraceID technology, the company flags problematic content. For paying clients, Vermillio can send take-down requests to sites such as YouTube or Instagram. Additionally, Vermillio says clients can monetize their data by licensing it.

People who sign up for the free version enter information about themselves such as their name, date of birth and social media handles on sites including Instagram or YouTube.

Vermillio then uses that information to build a “likeness model” that scours the Internet for potential red flags involving the user’s identity, and alerts the user to what exists online. For example, if someone has created a fake Instagram account in the user’s name, Vermillio would flag it.

Users are notified of this type of content and can decide for themselves what they would like to allow, or take action to remove. If users want Vermillio to request take-downs of the inappropriate content, they need to upgrade to a paid account, which starts at $10 a month and includes five monthly take-down requests.
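
Vermillio has not published its API, so the following is only a hypothetical sketch of the free-versus-paid flow described above; every class, method, and field name is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class LikenessProfile:
    """Free-tier signup data: name, date of birth, social handles."""
    name: str
    date_of_birth: str
    handles: dict[str, str]                 # e.g. {"instagram": "@jane"}
    flagged: list[str] = field(default_factory=list)
    paid: bool = False
    takedown_quota: int = 0                 # paid tier includes 5 per month

def alert(profile: LikenessProfile, findings: list[str]) -> None:
    """Free tier: notify the user of suspect content found online."""
    profile.flagged.extend(findings)
    for item in findings:
        print(f"Alert for {profile.name}: {item}")

def request_takedown(profile: LikenessProfile, url: str) -> bool:
    """Paid tier ($10/month): escalate a take-down request."""
    if not profile.paid or profile.takedown_quota <= 0:
        print("Take-downs require a paid account with quota remaining.")
        return False
    profile.takedown_quota -= 1
    print(f"Take-down escalated for {url}")
    return True

user = LikenessProfile("Jane Doe", "1990-01-01", {"instagram": "@janedoe"})
alert(user, ["possible fake Instagram account: @jane.doe.official"])
request_takedown(user, "https://instagram.com/jane.doe.official")  # prompts upgrade
```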

While many social media platforms give users an option to flag problematic content, Vermillio said it is faster and more effective than having users go directly to YouTube or Instagram to rectify the situation. It has built a network of partners and can escalate take-downs in as little as an hour, the company said.

Vermillio executives said some real-life examples of deepfakes include celebrity voices used to raise money for fake charities or terrorist organizations, and high school students creating fake pornography of their classmates.

“It’s affecting regular people in the sense that they’re getting scammed by deepfakes, but it’s also affecting teenagers, so people need to understand where they stand,” said Kathleen Grace, Vermillio’s chief strategy officer. “This is an easy way for them to do that.”

While fake social media profiles have existed for years, “generative AI just poured gasoline on it,” Grace said.

The company said hundreds of people use Vermillio’s services, but didn’t specify numbers. By the end of the year, the company expects to have thousands.

Neely said the company isn’t profitable and declined to share revenue figures. Time magazine reported that revenue from Vermillio’s TraceID has increased tenfold from April 2023 to April 2024. The company makes money through the paid versions of its service and licensing. Vermillio has raised $24 million in funding.

Hollywood companies and talent are navigating artificial intelligence in different ways.

Groups such as performers guild SAG-AFTRA are pushing for more state and federal protections against deepfakes. Some celebrities, such as Academy Award-winning supporting actor Jamie Lee Curtis, have struggled to get fake ads taken down from Instagram, in her case one falsely showing her endorsing a dental product.

WME announced a partnership with Vermillio last year.

“The scale of the issue is extraordinary, so if you’re a rights holder, just trying to understand how much of these AI outputs are based on or utilized my data, my IP in some way, shape or form, is a massive need,” said Chris Jacquemin, WME’s head of digital strategy.

“They’ve obviously proven that TraceID can protect the most important, most high profile public figures in the world,” Jacquemin added. “Opening it up in a much broader application, I think is a huge step forward in really democratizing how anybody can start to police use of their likeness with respect to AI and AI platforms.”
