The impact of AI on cybercrime

Post by Dario Colacicco
May 4, 2023

AI is quickly changing our lives.

Most of you reading this have likely used an AI assistant like ChatGPT, interacted with algorithms that predict your preferences, or asked Siri what the meaning of life is (spoiler alert - she doesn't know).

However, beyond our personal lives, AI is changing the workplace too. Recently, an AI model passed the US bar exam and performed at a passing level on the US medical licensing exam. Artificial intelligence may just transform the way we work and live forever.

This sounds like a good thing, and it is. But like many wondrous tools, AI can be used for good or evil.

In this blog, we’ll look at how cybercriminals can leverage AI, why this is a threat, and what you can do about it.

How AI can be used for cybercrime

The truth is that AI is not new - at least not for cybercriminals. It's already being used to improve the effectiveness of attacks, sharpen targeting, strengthen subterfuge capabilities, and more. In fact, Europol reports that AI is already breaking anti-bot measures like the CAPTCHAs found on most websites.

Cybercriminals are using AI to help automate and optimize their operations. Modern cybercriminal campaigns involve a cocktail of malware, DDoS-as-a-service delivered from the cloud, and AI-powered targeting.

Here are the ways cybercriminals will use AI - and, in many cases, likely already do.

Remote work targeting through social engineering

There is a remarkable amount of information about people available online these days, and remote workers are no exception. From social media activity to digital messaging and photographs, AI can be used to gather information on targets.

For example, AI can use LinkedIn to single out who works for a specific company and what data they are likely to hold. From there, data from Facebook and Instagram can help criminals reach those targets and compromise their accounts.

However, this is just the start. One tool, identified by Europol, performs real-time voice cloning. With a five-second voice recording, hackers can clone anyone's voice and use it to gain access to services or deceive other people. In 2019, the chief executive of a UK-based energy company was tricked into paying £200,000 by scammers using an audio deepfake.

All this happens in the background without the criminal in question breaking a sweat. 

Security evasion through monitoring and listening

If you think your existing security will hold up against AI attacks, you may be in for a disappointment. AI has already been shown to "listen" for the measures taken by cybersecurity software and companies. By discovering the moves security companies are making, AI can shift its tactics to remain undetected or automatically counter security measures.

Additionally, AI can leverage monitoring and listening to mimic real users on a system - copying their behavior, style of speech, and work processes to fool administrators.

More powerful DDoS attacks through AI coordination

Cybercriminals are already launching DDoS attacks through compromised IoT devices to flood services. However, these attacks have usually required some form of human input and effort: the 'bots' need to know who to target and which devices will make the most impact.

However, AI is quickly becoming capable of coordinating such attacks from inception to execution. According to Todd Wade, an interim CISO and author of BCS' book on cybercrime, this is already happening on some level. This means AI will enable criminals to use compromised IoT devices more effectively than ever, orchestrating multiple attacks on multiple targets at once.
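
From the defender's side, the most basic signal of such an attack is an abnormal surge in requests compared to normal traffic. As a purely illustrative sketch - not a description of any specific product, and with the window size and spike threshold chosen arbitrarily - here is how a minimal request-rate check against a rolling baseline might look in Python:

```python
from collections import deque

class RateSpikeDetector:
    """Flags intervals whose request count far exceeds a rolling baseline.

    Purely illustrative: real DDoS detection uses many more signals
    (source diversity, geo spread, protocol mix) than raw request counts.
    """

    def __init__(self, window=60, spike_factor=5.0):
        self.spike_factor = spike_factor          # how far above baseline counts as a spike
        self.history = deque(maxlen=window)       # request counts from recent intervals

    def observe(self, requests_this_interval):
        """Record one interval's request count and return True if it looks like a spike."""
        is_spike = False
        if len(self.history) >= 10:                # need some history before judging
            baseline = sum(self.history) / len(self.history)
            is_spike = requests_this_interval > baseline * self.spike_factor
        self.history.append(requests_this_interval)
        return is_spike

# Example: steady traffic around 100 requests per interval, then a sudden flood.
detector = RateSpikeDetector()
for count in [95, 110, 102, 98, 105, 99, 101, 97, 108, 103, 1200]:
    if detector.observe(count):
        print(f"Possible DDoS: {count} requests in one interval")
```

The point of the sketch is the gap it exposes: a static threshold like this is exactly what AI-coordinated botnets can learn to stay under or overwhelm, which is why defenders cannot rely on simple rate checks alone.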

Data-targeted BGP attacks

BGP attacks have always been a thorn in the side of organizations that use the public internet to transmit sensitive data. These attacks have typically required an educated guess on the criminals' part about where to intercept the data, followed by sifting through it to find valuable, ransomable information.

Those days are coming to an end. AI can now help malware search for specific pieces of information, such as employee data or protected intellectual property, and even target the routes that information is likely to travel through - making BGP attacks even more dangerous for organizations that want to protect their data.
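
A basic defensive counterpart to route hijacking is origin validation: checking that the prefixes you care about are only ever announced by the autonomous systems you expect. The sketch below is a simplified, hypothetical illustration of that idea in Python - the prefixes, AS numbers, and announcement records are made up, and real deployments rely on RPKI and dedicated BGP monitoring services rather than a hand-rolled check like this:

```python
# Hypothetical sketch: flag BGP announcements whose origin AS is not on an
# allow-list for a monitored prefix. All prefixes and AS numbers are made up.

EXPECTED_ORIGINS = {
    "203.0.113.0/24": {64500},          # this prefix should only originate from AS64500
    "198.51.100.0/24": {64500, 64501},  # a second prefix with two legitimate origins
}

def find_suspicious_announcements(announcements):
    """Return announcements for monitored prefixes with an unexpected origin AS."""
    suspicious = []
    for ann in announcements:
        expected = EXPECTED_ORIGINS.get(ann["prefix"])
        if expected is not None and ann["origin_asn"] not in expected:
            suspicious.append(ann)
    return suspicious

# Example: the second record simulates a hijack of 203.0.113.0/24 by AS64666.
observed = [
    {"prefix": "203.0.113.0/24", "origin_asn": 64500, "peer": "collector-1"},
    {"prefix": "203.0.113.0/24", "origin_asn": 64666, "peer": "collector-2"},
    {"prefix": "198.51.100.0/24", "origin_asn": 64501, "peer": "collector-1"},
]

for ann in find_suspicious_announcements(observed):
    print(f"Unexpected origin AS{ann['origin_asn']} for {ann['prefix']} (seen at {ann['peer']})")
```

Even with monitoring like this in place, detection only tells you a hijack has happened - it does not stop traffic from being redirected in the first place, which is why path-aware architectures are attracting attention.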

More attacks through more accessibility 

You no longer need to know how to code to be a cybercriminal - at least not with AI around.

Europol has warned that AI-powered software development tools, which businesses are beginning to use, could also be employed by hackers. These 'no-code' tools convert natural language into working code - and are now being used to create weaponized code for cyber attacks.

In essence, this could create a new generation of criminals with little technical knowledge but plenty of ideas and motivation for cybercrime, making attacks both more accessible and more common.

How to prepare for AI cybercrime

Shoring up your defenses in preparation for AI cybercrime is admirable but may prove futile. With cybercriminals consistently outpacing defenders, it's only a matter of time before AI-driven attacks become the norm and simply overpower conventional security measures.

However, there is one way out - taking your services off the public internet entirely.

SCION allows companies to take their services off the public internet and into a secure corner of the internet that cybercriminals simply cannot access or see. This means it won't matter what AI systems the criminals employ; they won't be able to reach your services.

If you want to learn more about SCION, book a meeting with one of our SCION experts here >