
Jihadists with ties to Al-Qaeda have warned followers that “deepfake” artificial intelligence technology could be used to infiltrate or influence their online discussions.

Security services, the message suggests, could use synthetic audio to issue fake commands while impersonating jihadist leaders, or disguise themselves by using the technology to generate authentic-sounding responses.

There are no indications that security services are actually using AI in this way. But experts say there is currently a discussion in the counterterrorism community about possible uses of artificial intelligence to more effectively track and identify jihadists in cyberspace.

The warning was circulated as part of instructions on communication discipline disseminated to Al-Qaeda followers through encrypted messaging services, including Telegram.

The message was forwarded to members of Ansaru, a Nigerian Al-Qaeda affiliate, via a Telegram bot named Alharb al-Ma’alumat (Information Warfare), which shares messages with jihadists on how to operate online.

The Arabic-language message instructed followers to be careful when using phones, as security services can trace people from an intercepted signal using voice recognition and location-finding software.

Risks

“New technologies have made it possible to create voices. Although they are yet to be as sophisticated as natural voices, they are getting better and can be used against jihadists,” the message said.

In recent years, the computing power of AI has been used to create “deepfakes”: manipulated videos or audio recordings that mimic real individuals, making it seem they have said or done something they did not.

The technology relies on machine learning and has already been used to spread disinformation and propaganda.

The message gives a glimpse into how terrorist groups perceive the threat AI could pose to them.

Terror groups operate in a volatile world of factions and doctrinal splits, in a landscape of incomplete information. One fear they might quite rationally hold is that a deepfake video of a jihadist leader, manipulated to give the impression that he is inciting violence against another faction or commander, could sow further discord among them.

Artificial intelligence has the potential not only to produce deepfakes but also to power chatbots that impersonate jihadist leaders or other members of the group, according to experts. Such chatbots could then be used to gather information from new recruits.

New weaknesses

While such deepfake plots might seem fanciful, some experts believe AI could be a valuable tool in the fight against terrorism.

There is a debate about whether these new technologies should be deployed against extremists at all, with worries that, as well as being a boon, they could expose new weaknesses.

AI could be used to further erode privacy and civil liberties, and to create problems for the legal system when prosecutions based on evidence extracted by AI are brought to court, experts say.

There is a concern that once artificial intelligence programmes are deployed in the real world, they may influence outcomes in ways not controlled by their human operators in the security services.

If human operators cannot fully understand how an artificial intelligence system identifies someone as a potential terrorist, there may be more false positives or false negatives in identifying threats.

That, in turn, could lead to more miscarriages of justice or missed opportunities to stop attacks.
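
To see why false positives matter at this scale, consider a simple base-rate calculation (a hypothetical sketch with invented numbers, not figures from any real system): even a screening model with 99 per cent sensitivity and a one per cent false-positive rate buries genuine threats under false alarms when those threats are rare in the screened population.

```python
# Illustrative only: back-of-the-envelope arithmetic showing why even an
# accurate classifier produces mostly false alarms when real threats are rare.
# Every number below is hypothetical, not drawn from any real system.

population = 10_000_000      # people screened (hypothetical)
actual_threats = 100         # genuine threats among them (hypothetical)
sensitivity = 0.99           # chance the model flags a genuine threat
false_positive_rate = 0.01   # chance the model flags an innocent person

true_alarms = actual_threats * sensitivity
false_alarms = (population - actual_threats) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"Total flags raised: {true_alarms + false_alarms:,.0f}")
print(f"Genuine threats among them: {true_alarms:.0f} ({precision:.2%})")
# Roughly 99 genuine threats end up buried among about 99,999 false alarms:
# fewer than 0.1% of the people flagged are actually threats.
```

In this illustration, fewer than one flagged person in a thousand is a genuine threat, which is why opaque classifiers worry civil liberties advocates and investigators alike.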

‘Cutting edge’

There are also concerns that terrorists could get their hands on the technology.

There is currently no widespread use of artificial intelligence by jihadi terrorists, according to researchers at the United Nations Office of Counter-Terrorism (UNOCT). But there is concern that, as they have done with other new technologies, they could abuse these latest innovations to create and disseminate terrorist content.

“The lack of evidence of the direct use of AI in terrorism should also not be interpreted as indicating that terrorists are indifferent or disinterested in the technology,” the researchers concluded. 

For many years, terrorists have consistently incorporated cutting-edge technologies into their operations, from sophisticated methods of bypassing social media detection to the use of AI-powered drones and GPS tools.

These technologies have improved their ability to conduct physical attacks, spread propaganda, recruit members, and execute other operational tactics.

In 2016, it emerged that Islamic State (ISIS) terrorists had been working on driverless cars for use as vehicle-borne improvised explosive devices. The revelation sent shockwaves around the world, highlighting the group’s willingness to embrace cutting-edge technology in its deadly operations.

New threat

Experts are concerned that ChatGPT and other language-generating artificial intelligence systems may be employed by terrorists to produce propaganda. There is a possibility that such large language models could be manipulated to disseminate terror messages, misinformation, and propaganda that may prove challenging to identify.

“Hundreds of millions of people across the world could soon be chatting to these artificial companions for hours at a time, in all the languages of the world,” said Jonathan Hall, the United Kingdom’s Independent Reviewer of Terrorism Legislation.

According to him, there is a real possibility that artificial intelligence chatbots could be programmed to spread extremist ideology to vulnerable people, or, worse, decide to do so on their own.

Steven Stalinsky, the executive director of the Middle East Media Research Institute (MEMRI), recently described how members and supporters of ISIS have begun to explore the use of generative AI in their operations.

While some generative AI systems, such as ChatGPT, refuse to answer questions that could lead to violent activities, others, such as Perplexity Ask, provide answers while warning users not to act on them.

In December 2022, an ISIS supporter on Rocket.Chat, a decentralised social media platform often used by jihadists, said he had prompted ChatGPT on how to support the caliphate, and that it gave him the answers he sought, which he shared with other members.

Another user reportedly used Perplexity Ask to produce messages promoting terrorism, and sources say several members share the belief that AI technology could be used to support their violent campaigns.


