Sun. Dec 22nd, 2024
Occasional Digest - a story for you

In an era when technology evolves at a breakneck pace, artificial intelligence (AI) has emerged as a double-edged sword. It is lamentable that, while AI has opened an astonishing panorama of possibilities across almost every sector, it has also found its way into the cybercriminal underworld, fueling newer, more professional, complex, and effective cyberattacks. It is difficult to overstate the threat that AI-based attacks pose to companies worldwide, and the defense strategy must therefore be integrated and multilayered.

Ever since AI began penetrating the dark web and other criminal ecosystems, the nature of the threat has shifted. Attacks that once required hands-on human operation of the attack tools are becoming more automated, more targeted, and more flexible. Where a human-operated attack depends on the attacker's experience and ability to research the target, an AI-operated attack can gather huge data sets in a very short time, adjust its operating parameters to suit the goal, such as launching a phishing campaign, and learn from the results of previous attempts and incorporate them into subsequent operations.

The COVID-19 pandemic, beginning in early 2020, escalated remote work to a new level and exposed businesses to new risks from cybercriminals. AI-backed cyberattacks started to appear, with attackers using machine learning and natural language processing to craft their messages and tailor their attacks. For example, the 2020 "SolarWinds"[1] attack saw hackers use AI to develop a customized backdoor through which they could reach sensitive information across the organization's systems without being spotted. This was a big step up from previous types of hacks, and it proved that AI can be used to improve the stealth and capabilities of adversarial campaigns.

During 2021, the use of AI to create malware and phishing schemes increased relative to the previous year. Threat actors began employing generative models to produce tailor-made malware and phishing lures that closely resembled legitimate messages. For instance, the "DarkSide"[2] ransomware attack, considered to have been deployed in May 2021, employed AI-generated emails to secure the installation of the ransomware. These developments pointed to the fact that AI tools had become easily available to hackers, who could now automate the production of numerous elaborate, sophisticated, individualized attack techniques at a scale that had not been possible before.

In 2022, social engineering attacks assisted by AI became more complex. Fraudsters used AI-generated deepfakes to impersonate the people with whom the target normally communicates. For instance, in August 2022, an employee of a software company fell victim to a social engineering scam that used an AI-generated deepfake audio impersonation.[3] The incident was a telling example of AI's growing capacity to deceive people and slip past old-fashioned security systems.

In 2023, more than half of surveyed cybersecurity professionals opined that generative AI would be more beneficial to cyber attackers than to defenders in the next two years.[4] Their biggest worry was what AI could do for adversaries in phishing, in the development of custom malware, and in the spread of fake news. Hackers were already using large generative models to build malicious chatbots, making it considerably simpler to write phishing emails and generate tailored malware. This shift in views among security specialists signaled the need for organizations to review and enhance their approaches to combating increasingly elaborate AI-based security threats.

Security leaders said in 2024 that the automation of cybercrime through artificial intelligence would be the order of the day within the next year. According to Netacea's research[5], 95% of security leaders felt that AI would be used to automate attacks, and 93% felt that AI would be used to generate deepfakes impersonating people who are actually trustworthy. The use of AI in offensive operations is anticipated to become more prevalent, with researchers believing that AI-controlled cyberattacks will be the most prominent in the near future. This is worrying for organizations because it underlines the importance of adopting a layered protection scheme to combat AI-assisted threats and exploitation: analyzing user behavior, training users, preparing for incidents, and sharing information.

The costs of cyber incidents are not limited to mere thousands of dollars; they can run into hundreds of thousands or millions. A report by Cybersecurity Ventures projected that global cybercrime costs would hit $10.5 trillion per annum by 2025, up from $3 trillion in 2015.[6] Other costs relate to organizational image, legal ramifications, and disruption of business. Many sectors, including the medical and financial fields, are especially vulnerable to penalties and acrimonious consumer backlash in the event of a data leak. This only underscores the need for investment in sound cybersecurity policy and practice.

Countering AI-driven threats in this enduring fight requires the use of AI in defense. AI-based security solutions can process large amounts of data in a matter of seconds and pinpoint the events and activities that require immediate attention as possible threats. Machine learning algorithms can continually learn from new data as threat patterns evolve. Threat intelligence solutions based on artificial neural networks are often capable of detecting and neutralizing threats before they do damage. For instance, AI can capture out-of-the-ordinary network patterns that could indicate a breach. Owing to their integration of advanced analytics and machine learning, these systems provide much faster and more accurate threat detection and response than conventional methods.
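The idea of flagging out-of-the-ordinary network patterns can be illustrated with a minimal statistical sketch. This is a deliberately simple stand-in for the neural-network-based detection described above; the hostnames, traffic figures, and z-score threshold are all illustrative assumptions, not taken from any real product.

```python
import statistics

def detect_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current outbound traffic deviates sharply
    from the baseline distribution, using a simple z-score test."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flagged = {}
    for host, value in current.items():
        z = (value - mean) / stdev
        if abs(z) > threshold:
            flagged[host] = round(z, 1)
    return flagged

# Baseline: typical outbound MB/hour observed across the fleet.
baseline = [48, 52, 50, 47, 53, 51, 49, 50, 46, 54]
# Current readings: one host is sending far more data than the norm.
current = {"ws-101": 51, "ws-102": 49, "ws-103": 480}
print(detect_anomalies(baseline, current))  # only ws-103 is flagged
```

A production system would model many features per host and learn the baseline continuously, but the principle is the same: quantify "normal" first, then surface the deviations for a human or automated responder.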

Besides detecting evident dangers, organizations need to watch for unfamiliar activity that implies an ongoing attack. Behavioral analysis entails observing users and systems and identifying anomalies. For instance, if an employee's account began to access a significant amount of data from the organization's protected network after hours, this could be considered suspicious. AI-based behavioral analysis tools develop standard behavioral models for an organization's users and systems. Because these tools monitor and analyze behavior day by day, they can identify a slight shift that may be the work of an attacker. This enables organizations to deal with a threat before it evolves into a critical one.
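A toy version of such a behavioral baseline can be sketched as follows. It tracks only one signal, the hour of day at which a user is active, and flags events in hours the user has rarely used; real tools model many more dimensions. The user name, observation counts, and rarity cutoff are illustrative assumptions.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Per-user baseline of the hours at which activity occurs.
    Activity in an hour the user has rarely been active in is flagged."""

    def __init__(self, min_observations=20):
        self.hour_counts = defaultdict(lambda: [0] * 24)
        self.totals = defaultdict(int)
        self.min_observations = min_observations

    def observe(self, user, hour):
        """Record one activity event for a user at a given hour (0-23)."""
        self.hour_counts[user][hour] += 1
        self.totals[user] += 1

    def is_anomalous(self, user, hour, rarity=0.02):
        """True if this hour accounts for under 2% of the user's history."""
        total = self.totals[user]
        if total < self.min_observations:
            return False  # not enough history to judge
        return self.hour_counts[user][hour] / total < rarity

baseline = BehaviorBaseline()
# Train on a month of activity during business hours (9:00 to 17:00).
for day in range(30):
    for hour in range(9, 18):
        baseline.observe("alice", hour)

print(baseline.is_anomalous("alice", 10))  # False: normal working hour
print(baseline.is_anomalous("alice", 3))   # True: 3 a.m. access is unusual
```

The design choice worth noting is the `min_observations` guard: flagging deviations before a stable baseline exists would drown analysts in false positives.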

That said, the human factor is still considered one of the most critical weaknesses exposing a system to hacking. The growing presence of AI in cybersecurity breaches does not remove the reliance on human targets, mainly through phishing and social engineering. Thus, there is a need to raise users' awareness of the dangers and of the measures they should take to secure their accounts from cybercriminals. Awareness programs and training sessions entail advising employees on various risks and how to handle them. For instance, workers should be taught to recognize a phishing email, to avoid clicking links from strangers, and to always report such events. Through training and a culture of cybersecurity consciousness, organizations can significantly reduce the likelihood of a successful cyberattack.
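One phishing tell that training programs often highlight, a link whose visible text shows one domain while the underlying target points somewhere else, can even be checked mechanically. The sketch below is a single illustrative heuristic, not a complete phishing filter, and the domains used are made up for the example.

```python
from urllib.parse import urlparse

def suspicious_link(display_text, href):
    """Flag a link whose visible text names one domain while the
    actual target resolves to a different one, a classic phishing trick."""
    if "://" not in display_text:
        display_text = "http://" + display_text  # let urlparse find the host
    shown = urlparse(display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# Visible text says the bank, but the link goes elsewhere: suspicious.
print(suspicious_link("www.mybank.com", "http://login.evil.example/phish"))  # True
# Text and target agree: nothing flagged by this heuristic.
print(suspicious_link("www.mybank.com", "https://www.mybank.com/login"))     # False
```

Real mail filters combine dozens of such signals, but showing employees even one concrete check makes the "hover before you click" advice far more tangible.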

However strongly preventive strategies for combating cybercrime have been implemented, organizations cannot rule out incidents altogether. Thus, having an incident response plan is essential to ensuring an organization is prepared when one occurs. This plan should set out how the threat will be discovered and isolated, how the bad actors will be removed from the network, and how recovery of the compromised networks and systems will be achieved. AI has applications throughout incident handling, helping to detect the incident, classify its type, and contain it. For instance, AI technologies are capable of determining where an attack originated and which systems have been penetrated, and of beginning the remediation process. Executed manually, such tasks are time consuming; automating them lets the organization handle the incident in minimal time, reducing the impact of the attack.
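The automated triage step described above can be sketched as a simple alert-to-playbook mapping. The incident types, action names, and alert fields here are hypothetical placeholders for whatever an organization's real response plan defines.

```python
# Hypothetical playbooks: ordered containment steps per incident type.
PLAYBOOKS = {
    "ransomware": ["isolate_host", "snapshot_disk", "notify_ir_team"],
    "phishing": ["quarantine_email", "reset_credentials", "notify_user"],
    "data_exfiltration": ["block_egress", "isolate_host", "notify_ir_team"],
}

def triage(alert):
    """Map an incoming alert to its containment steps, falling back
    to manual review for unrecognized incident types."""
    steps = PLAYBOOKS.get(alert["type"], ["escalate_to_analyst"])
    return {
        "host": alert["host"],
        "severity": alert.get("severity", "unknown"),
        "steps": steps,
    }

alert = {"type": "ransomware", "host": "ws-103", "severity": "critical"}
print(triage(alert))
```

The fallback branch matters as much as the happy path: an automated responder that silently drops unknown incident types would undercut the plan it is meant to execute.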

Cybersecurity is always a team play, and organizations often need the help of their counterparts in other organizations, government, and experts in the field to defend themselves against new threats. Good threat intelligence, delivered through such communities, puts an organization in a position to understand the dangers it faces and devise defensive measures that are effective at that given time. Industry cooperation and information-sharing programs such as the Cyber Threat Alliance (CTA) and Information Sharing and Analysis Centers (ISACs) provide good opportunities for this. Through such channels, organizations can learn about novel threats and draw on the collective experience of other members to improve their existing cybersecurity systems.

AI, including machine learning, has been on the rise, and cybercriminals are riding the same wave, using it to develop better, more automated, and more believable cyberattacks. Consequently, as AI-based threats persist and new ones are developed, it becomes necessary for organizations to employ a layered model of defense. With AI-driven threat identification, behavioral monitoring, user awareness, incident response, and cooperation, an organizational cyber defense can be built to meet the demands of the contemporary world. The war against AI-based cybercrime persists, but this all-encompassing, preventive strategy will help ensure the protection of organizational assets in the future.


[1] Harvard Business Review. “How SolarWinds Responded to the 2020 Sunburst Cyberattack.” Podcast audio, January 2024. https://hbr.org/podcast/2024/01/how-solarwinds-responded-to-the-2020-sunburst-cyberattack.

[2] Sangfor Technologies. “U.S. Colonial Oil Pipeline Hack: Shutdown Due to Ransomware Attack.” Last modified May 26, 2021. https://www.sangfor.com/blog/cybersecurity/us-colonial-oil-pipeline-hack-shutdown-due-ransomware-attack.

[3] Forbes Technology Council. “Deepfake Phishing: The Dangerous New Face of Cybercrime.” https://www.forbes.com/sites/forbestechcouncil/2024/01/23/deepfake-phishing-the-dangerous-new-face-of-cybercrime/.

[4] Sanjeev. “AI in Cybersecurity: Should We Be Excited?” Medium, April 18, 2023. https://sanjeev41924.medium.com/ai-in-cybersecurity-should-we-be-excited-d052d7fbc226.

[5] Netacea. Cyber Security in the Age of Offensive AI. April 24, 2024. https://netacea.com/reports/cyber-security-in-the-age-of-offensive-ai/.

[6] eSentire. “Cybersecurity Ventures Report on Cybercrime.” https://www.esentire.com/cybersecurity-fundamentals-defined/glossary/cybersecurity-ventures-report-on-cybercrime.
