Oliver Paterson, Director of Product Management, VIPRE Security Group, discusses how AI can truly play a game-changing role in cybersecurity.
Ensuring cybersecurity today requires limitless persistence. Losses from Business Email Compromise (BEC) scams, at US$2.7 billion, are 78 times higher than those from ransomware (US$34.3 million). Criminals are leaving no stone unturned – macro-less Malspam attacks are a new trend, phishing emails with QR codes are a fresh tactic to bait victims and adoption of the Qakbot malware family (an especially pernicious strain) is rising rapidly.
Behavioural-driven AI capability is going to be the game-changer in cybersecurity. That said, all the basic best-practice security measures remain essential: contrary to general perception, adopting behavioural-driven AI effectively demands strong foundational security built on already proven technologies.
Email is the number one threat vector
Email continues to be the preferred attack surface for criminals, so the strongest possible email protection is vital, supported by capability that can trace a message back to its source. Say an email with an attachment comes in: enterprises must have the wherewithal to dissect it down to its nucleus to ensure it is genuine. Is the format of the attachment legitimate? Where does the link in the email or attachment point? If the link redirects to an ad, is it a phishing link? If it goes to a website, is the site authentic or compromised? Is the website a Microsoft page? If so, it could be a red flag, as criminals can script very authentic-looking lookalike websites literally on the fly, in real time, as the page renders in the user's browser.
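To make that layered dissection concrete, here is a minimal sketch using only Python's standard library. The suspect-extension list and the lookalike-domain check are illustrative assumptions, not a production detection engine:

```python
import email
import re
from email import policy
from urllib.parse import urlparse

SUSPECT_EXTENSIONS = {".exe", ".js", ".vbs", ".iso", ".lnk"}  # illustrative list only
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def inspect_message(raw_bytes: bytes) -> list[str]:
    """Return human-readable findings for one inbound email."""
    findings = []
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)

    # Is the format of each attachment legitimate?
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in SUSPECT_EXTENSIONS):
            findings.append(f"suspect attachment type: {name}")

    # Where do the embedded links point?
    body = msg.get_body(preferencelist=("html", "plain"))
    text = body.get_content() if body else ""
    for url in URL_PATTERN.findall(text):
        host = urlparse(url).hostname or ""
        # Lookalike domains imitating well-known brands are a common red flag.
        if "microsoft" in host and not host.endswith("microsoft.com"):
            findings.append(f"possible lookalike domain: {host}")
    return findings
```

In practice each of these checks would sit behind further layers – sandbox detonation, reputation lookups, redirect following – but the principle of dissecting down to the nucleus is the same.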
Today, many enterprises are drawn to the numerous AI-led point email security solutions that have come to market. While good, they are exactly that – niche solutions with a narrow focus, each addressing only a small proportion of one particular type of problem. Threats, meanwhile, are ever-changing, as criminals constantly deploy fresh techniques to deceive.
Endpoint threats
Threats don’t come from email alone. Other surfaces – browsers and webpages, third-party document-sharing solutions, network access control systems and many more – can also be breached. So measures such as antivirus protection, network traffic monitoring, sandboxing, encryption, vulnerability management and so on are crucial.
In such situations, applying existing technologies such as Natural Language Processing and Machine Learning is extremely effective. In fact, these techniques are already widely deployed to help stop zero-day malware and ransomware activity that is not necessarily file-based. Data from these technologies is also needed to train the newer AI technologies.
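As one illustrative approach – not any particular vendor's implementation – the sketch below uses scikit-learn's IsolationForest to flag process behaviour that deviates from a learned baseline, which is how fileless activity can be caught without a file signature. The feature set and sample values are entirely hypothetical:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one process session:
# [child processes spawned, registry writes, outbound connections, bytes encrypted/sec]
baseline = np.array([
    [2, 5, 1, 0],
    [1, 3, 2, 0],
    [3, 6, 1, 0],
    [2, 4, 2, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# Ransomware-like behaviour: mass encryption and process spawning – no file needed.
suspect = np.array([[40, 120, 15, 5_000_000]])
print(model.predict(suspect))  # -1 flags an anomaly, 1 means in line with the baseline
```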
Behavioural-driven, AI-led security
Embedding behavioural-driven, AI-led security provides an effective foundation for a multi-layered approach to cybersecurity – from the first-stage reconnaissance activity that criminals initiate, i.e. phishing, through to ransomware and the signature-less, ‘never before seen’ zero-day attacks.
Fundamental to deploying such technology is access to vast volumes of data covering every aspect of security – from sandboxing, endpoint and process monitoring through to malware and deep-link phishing detection and user data baselining.
After all, at its core, what is AI? Essentially, it is a massive search engine that uses natural language to surface insights, providing a rounded picture of what is happening in an enterprise’s environment and where it needs to focus to pre-empt security breaches.
Enterprises looking to deploy behavioural-driven security must ensure that the data fed to the AI engine includes accurate internal and external data. For example, if employees’ email behaviour is analysed based only on usage at the current organisation and in niche areas – links, executables, bad files, macros and the like – it greatly limits the ability of an AI Business Email Compromise solution to build an accurate picture of online behaviour.
Consider, too, a new starter who receives a novel phishing link that the enterprise hasn’t seen before. Because there is no historic data on the employee’s behaviour – the IP addresses the individual corresponds with, the preferred browser, email IDs used aside from the business email, email traffic patterns, devices used to sign in, writing style in emails, typical attachment formats and so forth – the AI solution has no way to identify and mitigate the threat.
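A per-user behavioural baseline of the kind described above might look like the following sketch; the fields and scoring are hypothetical illustrations, not a vendor schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    user: str
    known_sender_ips: set[str] = field(default_factory=set)
    known_devices: set[str] = field(default_factory=set)
    typical_attachment_types: set[str] = field(default_factory=set)

    def novelty_score(self, sender_ip: str, device: str, attachment_type: str) -> int:
        """Count how many observed attributes fall outside this user's history."""
        score = 0
        score += sender_ip not in self.known_sender_ips
        score += device not in self.known_devices
        score += attachment_type not in self.typical_attachment_types
        return score

# A brand-new employee has an empty history, so everything scores as novel –
# exactly the blind spot described above.
new_starter = UserBaseline(user="new.starter@example.com")
print(new_starter.novelty_score("198.51.100.7", "laptop-0042", ".iso"))  # 3 of 3 unfamiliar
```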
Therefore, training AI systems on combined data from a wide variety of accurate and credible external sources, alongside input from niche point solutions, is vital. When AI security systems are trained on the right data, thousands of exploits become catchable by behavioural-driven technology approaches. For instance, it becomes possible to detect and analyse the sentiment and intent of emails based on historic profiling of individuals. Is some form of unfamiliar coercion being used that might be a warning sign of malicious purpose?
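For illustration only, that intent analysis can be caricatured as a simple coercion-marker scorer like the one below. A real system would use trained NLP models weighted against each sender’s historic style; the phrase list and threshold here are assumptions:

```python
# Phrases commonly associated with BEC-style pressure tactics (illustrative list).
COERCION_MARKERS = (
    "urgent", "immediately", "wire transfer", "do not tell",
    "keep this confidential", "final warning", "gift cards",
)

def coercion_score(text: str) -> float:
    """Fraction of known coercion markers present in the message."""
    lowered = text.lower()
    hits = sum(marker in lowered for marker in COERCION_MARKERS)
    return hits / len(COERCION_MARKERS)

email_body = "This is urgent - arrange the wire transfer immediately and keep this confidential."
if coercion_score(email_body) > 0.3:  # threshold would be tuned per sender's historic tone
    print("flag for review: unfamiliar coercive tone")
```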
Likewise, Generative AI capability can be used for forensic investigation to determine whether particular types of incident keep showing up – are they net new or has the organisation seen them before, and how are users affected by the activity even when it doesn’t yet qualify as an incident? The time and cost savings this affords security teams are unprecedented.
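Setting the Generative AI layer aside, the underlying triage question – net new or seen before? – can be sketched over a simple incident log. The field names and indicator values below are hypothetical:

```python
from collections import Counter

# Each record carries a stable indicator, e.g. a file hash (values are made up).
incident_log = [
    {"indicator": "9f2a7c", "user": "alice"},
    {"indicator": "9f2a7c", "user": "bob"},
    {"indicator": "c4d1e8", "user": "carol"},
]

seen = Counter(record["indicator"] for record in incident_log)
for indicator, count in seen.items():
    status = "recurring" if count > 1 else "net new"
    print(f"{indicator}: {status}, {count} affected user(s)")
```

A natural-language layer on top of this kind of query is what turns hours of manual log correlation into a single question an analyst can ask.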
All these tasks become achievable within natural language processing and Machine Learning models, making behavioural AI adoption for security game-changing. Alongside the timely uncovering of attacks, both the mean time to recovery and the cost of investigation are significantly reduced.
Security awareness is indispensable
A word of advice, though. Regardless of how advanced the technology is, relying purely on solutions is a high-risk strategy; vigilance on the part of employees is indispensable. Should a breach attempt take place, employees must be equipped with enough knowledge to identify a potential threat – a malicious link is a good example. And if they have inadvertently acted on a malicious link or fallen for a phishing or social engineering attack, they must intuitively know the processes to follow so that the impact on the organisation can be immediately mitigated and remedial activity undertaken. There is simply no substitute for this, and neglecting it could easily turn into an expensive mistake. The view that AI can be ‘switched on’ and the technology will magically take care of security is a grave misconception.