A reality check of the security threat posed by ChatGPT (and Bard)

Whether or not attacks become slicker or more frequent, defenders know what it takes to address the threat, says Scott Hesford, BeyondTrust Director of Solutions Engineering, Asia Pacific.

The race to produce an accurate conversational AI platform is in full swing, as is a parallel effort to find and demonstrate powerful use cases for the technology and to locate flaws or limitations in these platforms’ capabilities.

The end result is noise. Lots of it.

This is particularly the case in the security field, where a large and growing body of work by testers and researchers exploring ChatGPT’s possibilities already exists. But separating the hype from the reality in such a fast-moving and evolving space is challenging.

Conversational AI platforms are widely seen to favour hackers. Security researchers and hacker forums alike have reported that the platforms are useful for scripting malware and for crafting phishing emails that are harder to detect because they lack the typos and misused language that corporate employees have been trained to look out for.

This rings true to an extent, and it would certainly be a mistake to underestimate the boost that ChatGPT-like platforms can afford a would-be attacker.

The platforms will get better over time. Constant retraining of the underlying model with real-time data will improve the performance of the platforms. In addition, just as we see with different cloud ecosystems, each conversational AI platform is likely to evolve to have its own technical strengths. Emerging platforms may prove better enablers of certain attack types, for example, and this will need to be monitored.

Having said all of that, though, ChatGPT is mostly enhancing or streamlining the ability to craft or execute existing, known attack types. The defences and protections against these kinds of attacks are also known: principally, identity-based security, endpoint protection and cyber awareness training.

These remain current and effective risk mitigation and remediation techniques, though there is now added urgency to implement solutions in these areas, or to adapt existing security strategies and environments, for a world where attackers are empowered, even emboldened, by conversational AI platforms.

Scott Hesford, BeyondTrust Director, Solutions Engineering Asia Pacific

In hackers’ hands

It’s worth briefly explaining the reality of ChatGPT and what it does for attackers.

ChatGPT, like all forms of AI, acts as a kind of force multiplier. It can enable less technical attackers to get most of the way to functional malware. The code output may have gaps that need checking, but overall the technical barrier to entry is much lower.

The platforms could also lead to more variants of existing malware strains. Currently, one trunk of malware often has a number of branches or variants, usually accessible via the dark web.

Conversational AI platforms may act as a simpler interface for locating many of these variants, and the platforms may also wind up producing variants themselves. That may lead to an increase in zero-day threats, and a higher frequency of attacks overall, as attackers can produce more malware code, more often.

In addition, we know the platforms are capable of helping ransomware actors craft more convincing phishing emails. It’s unlikely the sophistication of these attacks will increase, however. An attacker who wants to craft an effective spear-phishing campaign today can use social media profiles to pull together a fairly convincing email. An AI platform could assist with that task, but the risk of a successful attack of this nature is ever-present, and it doesn’t require AI to execute.

The same logic applies more broadly. Existing defensive responses are effective at combating the types of attacks that conversational AI platforms may be put to work on. Security teams need to double down on these protections, and on layering them, to defend against any increase in threats.

The security team view

In a world where phishing emails become increasingly difficult to distinguish from legitimate ones, two things need to happen.

First, user training remains a critical frontline defence, and phishing training will need to evolve to deal with the increased threat. Second, and just as importantly, additional layers of defence should be in place in case a well-crafted phishing email gets through that first line.

As we know, phishing is used in multiple threat scenarios, such as to distribute a ransomware payload or to trick users into divulging their login credentials.

If ransomware gets past a user, it needs to be prevented from executing, from spreading to other networked devices, and from being used as a foothold to escalate an attack. Endpoint protection solutions that include malware detection will continue to be important, as will controls that don’t depend on recognising the malware at all, such as the application allow-listing sketched below.
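As a rough illustration of that allow-listing idea (which reappears below as trusted application protection), the following Python sketch permits a binary to run only if its SHA-256 digest appears on an approved list. It is a sketch only: real endpoint protection enforces this inside the operating system rather than in a script, and the TRUSTED_SHA256 set here is a hypothetical placeholder.

```python
"""Rough sketch of application allow-listing: only binaries whose
SHA-256 digest appears on an approved list may run. Illustrative only;
real endpoint protection enforces this inside the operating system."""

import hashlib
import sys
from pathlib import Path

# Hypothetical allow-list of approved binary digests.
# (The digest below is the SHA-256 of an empty file, used as a placeholder.)
TRUSTED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def may_execute(path: Path) -> bool:
    """Allow execution only if the binary's digest is on the trusted list."""
    return sha256_of(path) in TRUSTED_SHA256


if __name__ == "__main__":
    target = Path(sys.argv[1])
    print(f"{target}: {'allowed' if may_execute(target) else 'blocked'}")
```

One appeal of this model against rapidly iterated malware is that any new variant changes the file’s hash and so is blocked by default, with no signature update required.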

However, rapid iteration of malware strains by AI will require additional defensive measures.

Security teams should focus on removing local administrator accounts as a matter of course. Endpoint privilege management can be used to enforce least privilege, ensuring that a compromise is limited to a single endpoint or user workstation. In addition, trusted application protection as part of a privilege management solution can stop untrusted processes, such as malware, from executing. For users who require higher privileges, it’s important to enable multi-factor authentication (MFA) and to protect the accounts and endpoints with a dedicated privileged access management (PAM) solution.
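To make the first of those steps concrete, here is a minimal sketch of an endpoint audit that flags local Administrators group members who sit outside an approved list. It simply wraps Windows’ built-in `net localgroup` command; the APPROVED_MEMBERS set is a hypothetical example, not a reference to any particular product’s policy.

```python
"""Minimal sketch: audit local Administrators group membership on a
Windows endpoint by wrapping the built-in `net localgroup` command."""

import subprocess

# Accounts expected to hold local admin rights (assumption for the demo).
APPROVED_MEMBERS = {"Administrator"}


def local_admin_members() -> list[str]:
    """Return members of the local Administrators group."""
    lines = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # `net localgroup` prints a header, a dashed separator, the member
    # list, then a trailing status line; keep only the member lines.
    start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
    return [line.strip() for line in lines[start:]
            if line.strip() and not line.startswith("The command completed")]


if __name__ == "__main__":
    for member in local_admin_members():
        note = "" if member in APPROVED_MEMBERS else "  <- review and remove"
        print(member + note)
```

An audit like this only reports; enforcing least privilege continuously, and elevating individual applications rather than whole user accounts, is what endpoint privilege management tooling adds on top.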
