How AI is shaping the ransomware threat landscape

In 2024, the greatest contradiction could very well be Artificial Intelligence (AI). Although people are tired of the subject, it continues to dominate conversations. It's here to stay, so adapting is our only option. AI is transforming various digital sectors, including cybercrime. Nonetheless, it's important to move past the excitement and understand the actual data. There's been a lot of discussion about AI's potential effect on international ransomware dangers, yet the pivotal question remains: What is the actual extent of its influence?
AI is our biggest help and threat in the current age of cybersecurity

While the future potential of AI, on cybercrime and society in general, is immense (and a little scary), it's more helpful to focus on the here and now.

Currently, AI is just another tool at threat actors’ disposal, but it is quite a significant one because it lowers the barrier to entry for criminals.

The UK’s National Cyber Security Centre recently warned that AI will increase the ransomware threat by supporting ‘reconnaissance, phishing and coding’.

Using AI to assist with coding is already common among legitimate programmers. Even if it's just reviewing broken code or answering specific questions faster than Google, AI will support people hacking systems just as much as those developing them.

But while this might make ransomware gangs’ lives easier, it won’t necessarily make things worse for security teams.

The end result hasn’t changed - it’s still ransomware. Depending on who you ask, AI-assisted code might even be of lower quality.

AI makes phishing easier

However, the other current use cases are more consequential. AI algorithms can scan networks or environments to map architecture and endpoints and, crucially, spot vulnerabilities.

Threat actors will already do this manually, but AI will make it much easier and more effective. AI can also be used to automate information gathering for more targeted attacks.

These tools can scrape the internet (particularly social media) to collect as much information on a target as possible for phishing and social engineering.

This brings us to the last typical use of AI by cybercriminals. In a conversation full of hype, describing AI as ‘supporting phishing’ is probably underselling it.

At its most basic, even the most readily available AI tools can be used to craft better phishing emails - bridging the language barrier that often makes such scams easy to spot.

That’s another example of AI improving malicious activity that already exists, but deepfake voice cloning of specific people is something else entirely.

When combined with automated information gathering on a target, we’re looking at the next generation of social engineering.

What it means for security

While cybercriminals having more tools at their disposal is never going to feel great, there are two things to bear in mind: one, security teams have access to these tools as well, and two, AI is making existing attacks more sophisticated and effective rather than creating new ones.

For now, it isn’t introducing any brand-new or entirely novel threats, so there’s no need to tear up the playbook.

AI is already used on both sides of the battle line. It's probably fair to say that while ransomware gangs have access to their dark marketplaces of solutions and services, we normies have access to far more.

The ransomware industry was valued at a still-massive $14bn as of 2022, but the global security industry dwarfs it at $222bn.

On the security side, AI can be used for behavioural analytics, threat detection and vulnerability scanning to detect malicious activities and risks.

AI can be employed to monitor both the system itself (scanning for vulnerabilities and entry points) and activity on the system (behavioural analytics, data analysis, etc.).

AI-enabled security aims to predict and catch threats before they turn into breaches.
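The behavioural-analytics idea above can be sketched in a few lines. This toy example (the data, function name and threshold are all illustrative assumptions, not any vendor's method) flags days whose login volume deviates sharply from the historical baseline:

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.0):
    """Flag indices whose value sits more than `threshold` standard
    deviations from the mean - a toy stand-in for behavioural analytics."""
    mu = mean(daily_logins)
    sigma = stdev(daily_logins)
    if sigma == 0:
        return []  # a perfectly flat baseline has no outliers
    return [i for i, n in enumerate(daily_logins)
            if abs(n - mu) / sigma > threshold]

# A quiet baseline of daily logins with one suspicious spike on day 6.
history = [102, 98, 110, 105, 99, 101, 480, 103]
print(flag_anomalies(history))  # -> [6]
```

Real products layer far richer models on top of this idea (per-user baselines, time-of-day patterns, peer-group comparisons), but the principle is the same: learn what normal looks like, then surface what isn't.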

More advanced tools will automatically respond to these threats, alerting security teams or restricting access. Much like on the criminal side, most of these concepts exist now (such as firewalls and malware detectors), but AI is making them more efficient and effective.

You can’t beat basic principles

So, even though AI will be used on both sides, it's not a case of getting AI engines to battle each other in the cyber realm (although that does sound cool). Ransomware isn’t changing (for now, at least), and attackers' tactics aren't transforming.

Digital hygiene and zero trust still work. Security will need to keep up, sure. After all, social engineering only needs to work once, but ransomware prevention and resilience need to work every time.

Ultimately, the best practice remains the best practice. As AI-enabled ransomware becomes more common, having copies of your data becomes more critical than ever. When all else fails - you need backup and recovery.

All of these scary scenarios, even the most advanced phishing attack known to man (or machine), could end with: ‘thank god I had trusted backup and recovery’.

Backup is your last line of defence, so you must know you can rely on it.

Again, the best practice hasn’t changed here. You need multiple copies of your data, one offline and one off-site. You also need a well-rehearsed recovery strategy, including scanning backups for infection and setting up a recovery environment that is ready to rock.
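One small piece of that rehearsal can be automated cheaply: confirming that a backup copy is byte-identical to its source before you ever need it. A minimal sketch (file names are illustrative, and real backup products do far more, including scanning for infection):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> bool:
    """A backup you can't trust is no backup: confirm the copy matches
    the original bit for bit before relying on it for recovery."""
    return sha256_of(source) == sha256_of(backup)
```

Running a check like this on a schedule - and alerting when it fails - turns "I think we have backups" into "I know we can restore".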

Progression

It's less daunting than it seems. AI isn’t changing the game - it's just a natural progression.

Progression is the name of the game in cybersecurity - you can’t do everything, but you should do something.

The basic principles still get you pretty far, so keep following those, keep up to date on best practices, and make sure you can trust your backup when all else fails.

About Rick Vanover

Rick Vanover is VP of product strategy at Veeam