Terry Ray, Senior Vice President and Fellow at Imperva, tells us how AI-enabled cybercriminals will alter the threat landscape – and what this means for CISOs and their teams.
Artificial Intelligence (AI) – essentially advanced analytical models – is already a familiar presence in the cybersecurity landscape. It has given IT professionals the ability to predict and react to cyberthreats faster and more efficiently than ever before.
Surprisingly, the ‘good guys’ currently have the edge over the criminals: AI is being used to defend against cybercrime, but not yet to perpetrate it. This won’t last forever – in the hands of cybercriminals, AI will eventually be turned against us. Until then, the industry has some time to prepare itself for the rise of AI-enabled cybercriminals.
AI allows companies to take large volumes of information and find clusters of similarity. This has always been part of cybersecurity’s focus to a degree, but organisations are often unable to do it in sufficient depth because of time and resourcing constraints.
By contrast, AI can whittle down vast quantities of seemingly unrelated data into a few actionable incidents or outputs at speed, giving companies the ability to quickly pick out potential threats in a huge haystack.
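As a rough illustration of this whittling-down, the sketch below (pure Python, with entirely synthetic event data and an invented threshold) collapses thousands of simulated failed-login events into a short list of anomalous sources:

```python
from collections import Counter
import random

random.seed(1)

# Simulate 10,000 failed-login events spread across ~200 benign
# internal sources, plus one noisy attacker IP (both are assumptions
# invented for this illustration).
events = [f"10.0.0.{random.randint(1, 200)}" for _ in range(10_000)]
events += ["203.0.113.7"] * 500

counts = Counter(events)
mean = sum(counts.values()) / len(counts)

# Flag only sources far above the typical event rate: thousands of
# raw events collapse into a handful of actionable incidents.
incidents = [ip for ip, n in counts.items() if n > 5 * mean]
print(incidents)  # → ['203.0.113.7']
```

A real system would cluster on far richer features (user, resource, time of day, behaviour baselines), but the shape of the problem is the same: reduce the haystack to the few items worth a human analyst’s time.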
Replicating human hacking tactics
The ability to quickly turn large amounts of data into actionable insights is something that cybersecurity teams are going to need in the coming years, because AI could become a formidable enemy. Unlike malware, which is purely automated, AI is beginning to mimic humans to a worryingly accurate degree: it can draw pictures, age photographs of people, and write persuasively enough to convince people of truths – or lies. Most recently, it has been shown to impersonate human voices.
This means that AI could potentially replicate human hacking tactics, which are currently the most damaging but also the most time-consuming form of attack for hackers. The best, most difficult hacks to detect are those performed by humans – digging into systems, watching user behaviour and finding or installing backdoors. Attacks performed with tools are much easier to detect. They bang around, they hit things, they find the backdoor by knocking on every wall.
Hackers aren’t yet creating ‘AI-driven sneaky thieves’, but they could. AI could be used to build an independent, patient, intelligent and targeted attacker that waits and watches: an automated APT, if you will. That would be far more difficult to defend against than automated ‘splash’ tactics and it could be executed or industrialised on a very large scale.
What an AI cybercriminal would entail
The good news is that any such automated APTs will arrive slowly, because AI is complicated. AI algorithms aren’t usually designed to be user-friendly: instead of pointing and clicking, an attacker would have to customise the hacking tool to a degree that requires genuine AI expertise. Those skills are in short supply in the industry, let alone the hackersphere, so we’re likely to see this achieved first by nation states, not by hobbyists. This means the first likely targets are organisations of national interest.
Let’s look at some public examples. A while ago there were hacks on major healthcare providers in the US, all of which served large numbers of federal employees. Around the same time, organisations handling Class 5 security clearances were hacked, losing fingerprint and personal data for thousands of people.
One theory about these hacks is that a nation state stole the data. It never turned up for sale on the Dark Web – so where did it end up? If a nation state does now possess it, it has terabytes of healthcare, HR, federal background check and contractor data at its command. The sheer volume of that data would make relating one set to another extremely difficult and time-consuming by hand.
But an AI program could find clusters and patterns in the data set and use them to work out who could be a good target for a future attack. You could connect their families, their health problems, their usernames, their federal projects – there are lots of ways to use that information. Nation states steal data for a reason – they want to achieve something. So as AI matures, we could see far more highly targeted attacks taking place.
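To make that kind of linkage concrete, here is a toy sketch – fabricated records, with small Python dicts standing in for terabyte-scale data sets – of joining two breached data sets on a shared key to build a richer target profile:

```python
# Hypothetical, fabricated records for illustration only.
hr_records = {
    "a.smith@example.gov": {"project": "satellite comms"},
    "b.jones@example.gov": {"project": "payroll"},
}
health_records = {
    "a.smith@example.gov": {"condition": "chronic illness"},
}

# Join the two stolen sets on the shared key (email address),
# merging whatever fields exist for each person.
profiles = {
    email: {**hr, **health_records.get(email, {})}
    for email, hr in hr_records.items()
}
print(profiles["a.smith@example.gov"])
# → {'project': 'satellite comms', 'condition': 'chronic illness'}
```

At real scale the join keys are messier (name variants, partial identifiers), which is exactly where machine-learned entity matching would give an attacker the advantage over manual correlation.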
AI phishing
While it’s likely that AI-powered hacking will begin its life as the preserve of nation-states, it’s only a matter of time before this sort of attack becomes commonplace in the regular market. Let’s consider phishing as a case study for how this might look.
At the moment, it’s often easy to tell if an email is a phishing attempt from the way it’s written: misspelled words and odd grammar. AI could eliminate that. Let’s say that AI can write better than 60% of people, using colloquialisms and idiomatic phrasing – it’d be pretty hard to spot. And even if AI is only ‘as good’ as humans, it can be much faster and therefore more effective.
Phishing is one of the most lucrative forms of hacking – if AI can raise the rate of success from 12% to 15%, say, with half the human effort, then it could be worth it for hackers. We haven’t yet seen any truly malicious, AI-crafted spearphishing attempts, but it’s likely to be a very effective first step for AI cybercrime.
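The economics behind those hypothetical figures work out as follows (the rates and effort values are the illustrative numbers above, not measured data):

```python
# Illustrative figures: baseline phishing succeeds on 12% of targets;
# an AI-assisted campaign succeeds on 15% with half the human effort
# per message. These are assumptions, not measurements.
baseline_rate, ai_rate = 0.12, 0.15
baseline_effort, ai_effort = 1.0, 0.5  # relative human effort per target

# Successful compromises per unit of attacker effort.
baseline_yield = baseline_rate / baseline_effort
ai_yield = ai_rate / ai_effort

print(round(ai_yield / baseline_yield, 2))  # → 2.5
```

In other words, a modest three-point bump in success rate combined with halved effort would give an attacker roughly 2.5 times the return per unit of work – a strong incentive to adopt the technique.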
Building an effective defence
An effective defence comes down to having the right people and the right tools in place. Organisations have spent several years working to solve cybersecurity’s information-overload problem, yet most security teams still struggle to separate genuine data-theft incidents from the chaff.
Organisations have realised that monitoring user and application access to data is a cybersecurity responsibility. Now security teams are feeling the pain of trying to make sense of that vast data. The most successful teams are leveraging AI or Machine Learning to perform this analysis and meet both organisational and regulatory needs.
Companies should remember that not every attack can be prevented. The focus should shift to discovering where your critical resources are and what you can do to mitigate the risk to those resources specifically. If data is your most critical resource, what do you know about it?
Your databases are where your most valuable data resides, making them a prime target for hackers. It is therefore crucial for organisations to have visibility into their databases and files, and to apply appropriate security to key applications.
If you’ve been breached, it is vital that you can tell a regulator exactly what was taken; otherwise, the breach could end up costing the organisation hundreds of millions. AI cybercrime is coming. Make sure you can protect your data by knowing where it is.