DEEPFAKES are now the “biggest evolving threat” when it comes to cyber-crime.

That’s what a leading cyber-expert told The U.S. Sun in a stark warning over the dangers of the face-faking technology.

Convincing deepfakes can be created very quickly – and require increasingly less technical knowledge. Credit: Getty

Deepfakes are fraudulent videos that appear to show a person doing (and possibly saying) things that they never did.

The technology uses artificial intelligence software to clone a person’s features – and map them onto something else.

Of course, AI is being used for plenty of sinister purposes – including making scams quicker to create and execute – but deepfakes are one of the most serious threats.

The U.S. Sun spoke to Adam Pilton, a UK-based cyber-security consultant at CyberSmart and a former Detective Sergeant who investigated cybercrime, about the threats we’re facing.

“AI can generate highly convincing phishing emails with ease and this means that unskilled cybercriminals are making hay while the sun shines,” Adam told us.

“The National Cyber Security Centre warned us in their latest annual report that cybercriminals are already using AI to develop increasingly sophisticated phishing emails and scams.

“The threat will continue to grow as the technology develops and the skills of those involved increase too.”

“Without a doubt, the biggest evolving threat is deepfakes,” he continued.

“Deepfake technology can create realistic video and audio impersonations of people.”

There are two key ways that criminals are using deepfakes, Adam explained.

SCAM SCHEMES

The first sinister use of deepfakes is to trick you into making some kind of security mistake.

This might be as simple as a crook using a deepfake to pretend to be a loved one – and convincing you to hand over some cash.

Deepfakes – what are they, and how do they work?

Here’s what you need to know…

  • Deepfakes are phoney videos of people that look perfectly real
  • They’re made using computers to generate convincing representations of events that never happened
  • Often, this involves swapping the face of one person onto another, or making them say whatever you want
  • The process begins by feeding an AI hundreds or even thousands of photos of the victim
  • A machine learning algorithm swaps out certain parts frame-by-frame until it spits out a realistic, but fake, photo or video (see the sketch after this list)
  • In one famous deepfake clip, comedian Jordan Peele created a realistic video of Barack Obama in which the former President called Donald Trump a “dipsh*t”
  • In another, the face of Will Smith is pasted onto the character of Neo in the action flick The Matrix. Smith famously turned down the role to star in flop movie Wild Wild West, while the Matrix role went to Keanu Reeves
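
For readers curious about the mechanics, the widely publicised face-swap approach works roughly as the bullets above describe: a shared encoder learns features common to both faces, each decoder learns to rebuild one person, and the "swap" comes from decoding person A's frames with person B's decoder. The minimal sketch below is purely illustrative and makes assumptions not in the article – it uses PyTorch and random tensors in place of real face photos, so it produces noise rather than anything resembling a deepfake.

```python
# Toy sketch of the shared-encoder / two-decoder idea behind classic face swaps.
# Hypothetical example: random tensors stand in for aligned face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the shared latent representation."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # learns to rebuild person A's face
decoder_b = Decoder()   # learns to rebuild person B's face

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

# Stand-ins for the "hundreds or even thousands of photos" mentioned above.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared latent.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode a frame of person A, but decode it with B's decoder,
# keeping A's pose and expression with B's appearance, frame by frame.
swapped = decoder_b(encoder(faces_a[:1]))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Real tools layer convolutional networks, face alignment and per-frame blending back into the video on top of this, but the core idea – a shared representation with person-specific decoders – is the same.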

Or they might pose as a colleague or even your boss to secure money or info from you.

“Scammers use deepfakes to create convincing fake videos or audio messages to manipulate victims into taking actions they wouldn’t normally take, and deepfake scams are already being used successfully in social engineering attacks,” Adam told us.

“At the start of the year, we saw cybercriminals use the deepfake of a Chief Financial Officer to trick an employee into transferring $25 million to them.

“This was initiated by a phishing email, which the employee was skeptical of.

“However, when the employee joined a virtual meeting and saw and heard the CFO as well as other people he recognized, all suspicions were gone.

“We are also seeing phone calls being used to create highly emotional responses in which loved ones speak to us, claiming that they have been kidnapped.

“In late 2023 there was an apparent spike in reporting across the US of such calls.

“It wasn’t just the familiar voices that created the emotional response.

“The voices of the purported attackers could be heard too, aggressively instructing the victim’s loved one to do as they are told before the kidnapper speaks directly to the victim and makes their ransom demand.”

DEFENCE AGAINST THE DEEPFAKES

Here’s what Sean Keach, Head of Technology and Science at The Sun and The U.S. Sun, has to say…

The rise of deepfakes is one of the most worrying trends in online security.

Deepfake technology can create videos of you even from a single photo – so almost no one is safe.

But although it seems a bit hopeless, the rapid rise of deepfakes has some upsides.

For a start, there’s much greater awareness about deepfakes now.

So people will be looking for the signs that a video might be faked.

Similarly, tech companies are investing time and money in software that can detect faked AI content.

This means social media will be able to flag faked content to you with increased confidence – and more often.

As the quality of deepfakes grows, you’ll likely struggle to spot visual mistakes – especially in a few years.

So your best defence is your own common sense: apply scrutiny to everything you watch online.

Ask whether the video is something that would make sense for someone to fake – and who benefits from you seeing this clip.

If you’re being told something alarming, a person is saying something that seems out of character, or you’re being rushed into an action, there’s a chance you’re watching a fraudulent clip.

BAD NEWS

The second way deepfakes are being used for nefarious purposes is to spread fake news.

This is particularly worrying as voters head to the polls for upcoming elections in the United States and the United Kingdom.

“The World Economic Forum has ranked misinformation and disinformation as the greatest global risk over the next two years,” Adam told The U.S. Sun.

“With a series of elections approaching for democracies around the world, it is easy to understand why.

“We continue to see the ability of AI to generate fake news articles, social media posts, and other content that spreads misinformation.”
