
LIVING totally offline is not enough to avoid being caught up in a deepfake attack, experts have warned.

It’s now so easy to create fraudulent videos – also known as deepfakes – using artificial intelligence that almost anyone is at risk.

It’s becoming increasingly simple (and quick) to create convincing deepfake videos. Credit: Getty

Deepfakes take advantage of AI apps to create videos that show people doing (or saying) things they didn’t.

Typically, an AI model is trained on a person’s likeness, and their face is then replicated on the body of someone else.

And now deepfakes are so advanced that they can be created from a single photo – transforming it into a video in minutes.

AI apps can even convincingly replicate your voice based on just a few seconds of audio.

The U.S. Sun spoke to cybersecurity expert Adam Pilton, who warned you risk “losing control” of your likeness.

“The risk has always existed with the internet that as soon as you upload material, whether that be a photo, text, audio or something else, you lose control of that,” said Adam, a cybersecurity consultant at CyberSmart and a former cybercrime detective.

“And anybody can do what they wish with it and display it in any context, positively or negatively.

“This is the case with deepfakes as well and the idea now of preventing or reducing the images we put online is too late, because that information for most people is already out there.

“And do we therefore need to adjust to a new method of acceptance that deepfakes will inevitably happen unless you live totally offline?

“And get better instead at spotting the signs you’re watching a deepfake.”


Deepfakes will only get easier to create in the future.

And so being able to spot the signs that you’re watching a fraudulent video will become key.

It might sound scary enough to force you offline – but even that might not save you.

“Even if you were to live offline, there’s no way of controlling the content that does appear online as anyone can upload what they so wish,” Adam warned.

Deepfakes – what are they, and how do they work?

Here’s what you need to know…

  • Deepfakes are phoney videos of people that look perfectly real
  • They’re made using computers to generate convincing representations of events that never happened
  • Often, this involves swapping the face of one person onto another, or making them say whatever you want
  • The process begins by feeding an AI hundreds or even thousands of photos of the victim
  • A machine learning algorithm swaps out certain parts frame-by-frame until it spits out a realistic, but fake, photo or video
  • In one famous deepfake clip, comedian Jordan Peele created a realistic video of Barack Obama in which the former President called Donald Trump a “dipsh*t”
  • In another, the face of Will Smith is pasted onto the character of Neo in the action flick The Matrix. Smith famously turned down the role to star in flop movie Wild Wild West, while the Matrix role went to Keanu Reeves

“The reality of living in the modern world with AI is that over time, the images, text and any information about you will be online, AI will consume this and AI will probably use it.”

FAKING IT

Adam said that deepfakes are more likely to target well-known people – like celebrities and politicians.

That’s because there’s more to gain from replicating the likenesses of these people.

So Adam says you’re better off worrying about the dangers of seeing deepfakes – rather than being turned into one.

DEFENCE AGAINST THE DEEPFAKES

Here’s what Sean Keach, Head of Technology and Science at The Sun and The U.S. Sun, has to say…

The rise of deepfakes is one of the most worrying trends in online security.

Deepfake technology can create videos of you even from a single photo – so almost no one is safe.

But although it seems a bit hopeless, the rapid rise of deepfakes has some upsides.

For a start, there’s much greater awareness about deepfakes now.

So people will be looking for the signs that a video might be faked.

Similarly, tech companies are investing time and money in software that can detect faked AI content.

This means social media will be able to flag faked content to you with increased confidence – and more often.

As the quality of deepfakes grows, you’ll likely struggle to spot visual mistakes – especially in a few years.

So your best defence is your own common sense: apply scrutiny to everything you watch online.

Ask if the video is something that would make sense for someone to have faked – and who benefits from you seeing this clip?

If you’re being told something alarming, a person is saying something that seems out of character, or you’re being rushed into an action, there’s a chance you’re watching a fraudulent clip.

Being cautious about what you see online could protect you from sinister deepfake scams and propaganda.

“In the short term, it’s less likely that the average person will be subject to a deepfake; we’re more likely to see well-known and influential people,” he told The U.S. Sun.

“The wider threat in the short term as a consumer of information online is to recognize what could be a deepfake.

“And if you are going to put any weight in the information that this potential deepfake has provided, or even if you will take action based on that information.”
