Haider Pasha, Chief Security Officer for EMEA & LATAM at Palo Alto Networks, highlights that the future of cybersecurity will hinge on the dynamic interaction between offensive and defensive AI.
It’s no secret that cyberthreats have increased dramatically in recent years, becoming more sophisticated and tougher to identify. Everyday, Palo Alto Networks analyses 750 million new and unique events; detects 2.3 million new and unique attacks; and blocks nearly 8.6 billion attacks. Businesses now find themselves facing more attacks as a result of AI, with the cost of cybercrime rising too.
While AI has been around for some time, the emergence and accessibility of GenAI in mainstream discourse over the last two years has ushered in an era of new and complex cybersecurity challenges. The weaponisation of AI by malicious actors is a growing concern, necessitating a proactive and strategic approach to defence.
Why is AI a cyberthreat?
The tactics of cyber gangs have greatly advanced. Phishing and ransomware attacks have now developed into utilising AI to engineer deepfakes or creating malware capable of bypassing established security measures. AI is being used to automate the process of discovering and exploiting vulnerabilities in systems, thus reducing the time taken by threat actors to identify weaknesses and launch targeted attacks at a large scale.
Another limitation that has gathered attention as a potential threat is GenAI being manipulated to generate malicious output through prompt injection attacks. By getting these tools to answer questions in an unintended way, threat actors use prompt injection to gather sensitive information and even execute malicious code.
Suppliers and IT vendors have become one of the biggest vulnerabilities for attackers to exploit. Cyberthreat actors are always looking for the path of least resistance to carry out their attacks. Even with the most mature third-party management systems and state-of-the-art cybersecurity defences in the world, financial institutions, such as banks or fintechs, have been targeted by rogue nation-states and malicious hackers who use AI and ML algorithms to launch ransomware attacks.
Compliance costs an added burden
This wave of external threats is coupled with the increasing regulatory requirements across the globe, adding increased complexity to security challenges. C-suite are struggling to keep up with the speed of change in requirements under directives such as GDPR, NIS2, and DORA. In fact, NIS2 Directive mandates swift reporting of cybersecurity incidents including a 24-hour early warning alert for affected organisations. This is to be followed by a more detailed incident report to be submitted within 72 hours of becoming aware of the incident, including information about the nature, scope and potential impact of the breach.
Similarly, GDPR guidelines across the EU demand organisations to notify the supervisory authority within 72 hours of becoming aware of the breach in cases where personal data breach can likely result in a high risk to the rights and freedoms of individuals. This is why now is a crucial time to prioritise cyber resilience, especially when regulatory and recovery costs can often be so costly that a business has to shut down.
Counter AI attacks with AI defence
While AI has become one of the most powerful tools at an attacker’s disposal, it also remains a great asset for cybersecurity experts to leverage and stay one step ahead. While it empowers adversaries to launch more targeted, evasive, and high-impact attacks, it also offers unparalleled capabilities for threat detection, incident response and automation.
Organisations should look to incorporate AI as a first line of defence, as its threat detection and analysis can help block the most sophisticated of cyberattacks. Proactively implementing lines of defence in case AI has detected a potential attack should be the first course of action, without the need for conducting manual triage to check if it’s a credible attack or a false positive.
Unit 42 at Palo Alto Networks recently found that the average days from compromise to exfiltration was 44 days in 2021, 30 days in 2022 and just five days in 2023. However, with the rise of AI in 2024, this has now been slashed to a mere handful of hours. Such instances are extremely time sensitive and organisations cannot achieve real-time monitoring without leveraging AI.
Historically, cybersecurity defences relied heavily on manual oversight by cyber sleuths and security analysts. However, the sheer volume and complexity of modern data have rendered these traditional methods insufficient. With its ability to analyse diverse security data sources in real-time to identify emerging threats and anticipate potential attack vectors, AI has made predictive analytics an essential component of a cybersecurity strategy. This includes leveraging adversarial AI techniques to generate and learn about attacks, so that defences can be continuously improved.
Despite the excitement about its explosive growth, it remains crucial we utilise the right kind of AI in our defence mechanisms. Within the umbrella term of AI, there are various different types and levels of sophistication that one needs to assess before implementing it across security operations. Adopting a platform approach such as Prisma or Cortex that incorporates sophisticated methods such as Precision AI, helps security teams trust AI outcomes by using accurate data. In turn, this enables them to safely navigate an evolving and challenging threat landscape.
While AI can automate many aspects of cybersecurity, human expertise remains indispensable. It does not replace the need for cyber experts. They still play a major role in establishing a security-first culture within an organisation that permeates all levels, from the C-suite to entry-level employees. Moreover, cybersecurity professionals can utilise AI tools to gather deeper insights and actionable intelligence, allowing them to make better informed decisions. For example, it can help analysts in examining security logs and highlight anomalies that warrant further investigation, thus enabling them to focus on high-priority threats.
Looking Ahead
The future of cybersecurity is projected to be an interplay between offensive and defensive AI.
Our challenge is to stay ahead of adversaries who are increasingly leveraging AI to enhance their capabilities. This requires a proactive approach, investing in cutting-edge AI technologies, fostering a culture of continuous learning, and building resilient systems that can withstand and adapt to new threats.
At the same time, we must recognise that AI is not a silver bullet. It is a tool- albeit a powerful one – that, when combined with human expertise and strategic thinking, can significantly enhance our cybersecurity posture.
So, while it is true that attacks have become more advanced in the last couple of years, so too has the technology that exists to fight them. We can continue to innovate to help businesses stay ahead of bad actors across the globe.