Vectra AI 2025 predictions warn of AI fatigue, call for strategic rethink to demonstrate value

Sharat Nautiyal, Director of Security Engineering for APJ, Vectra AI, says organizations are adopting AI tools without understanding their intended purpose – leading to confusion about how these solutions address specific issues.

Vectra AI has released its 2025 security predictions for Asia Pacific and Japan (APJ), highlighting the significant role AI will play in cybersecurity and the growing focus on achieving measurable results.

While Gartner projects IT spending to reach US$5.74 trillion in 2025, and IDC forecasts AI-related technology spending to hit US$337 billion, the initial excitement around AI is shifting towards a more pragmatic approach.

As organisations integrate AI capabilities into their core operations, they are increasingly focused on assessing the business value of these investments. These predictions aim to assist CISOs in effectively allocating resources and anticipating potential attack vectors in 2025.

However, Vectra AI warns of a growing fatigue with AI co-pilots as organisations in the APJ region grapple with high costs and a lack of demonstrated value. Sharat Nautiyal, Director of Security Engineering for APJ, Vectra AI, said: “Organizations are adopting AI tools without understanding their intended purpose, leading to confusion about how these solutions address specific issues.”

“Although AI has great potential, it is simply a toolset, not a cure-all for cybersecurity problems. Organisations must evaluate strategies to effectively leverage AI for real challenges.”

Nautiyal notes that AI will increasingly be used by threat actors in the coming year.

“All attacks will likely involve Generative AI (GenAI), facilitating infiltration and identity-based attacks. Methods like deepfakes and remote code exploits will evolve. As AI matures, these tactics will continue to advance.”

Nautiyal emphasizes that while compliance is essential, it does not equate to security. “Compliance offers basic guardrails but compliance alone doesn’t determine how a threat actor will behave. Good security posture is what matters.”

Lastly, Nautiyal says that the marketing hype surrounding AI in cybersecurity has reached a peak – with many companies claiming to leverage AI capabilities without delivering on their promise.

“Educating customers on genuine AI application is crucial as is rigorous testing to separate hype from reality. While AI drives innovation in threat detection and response, not all advancements are equally effective. Organizations must prioritize outcomes and stay focused, to avoid the pitfalls of marketing noise.”

Vectra AI’s 2025 predictions

Prediction 1: Fatigue and confusion around the overuse of the term “AI” will push vendors to focus on demonstrating value

The initial excitement about AI in cybersecurity will fade, leading to disillusionment among security leaders. While 87% plan to adopt more AI tools, there’s cautious optimism due to concerns about increased workload. Organisations within Asia Pacific must move beyond vague promises of “AI-driven security” to deliver tangible results like faster threat detection and improved accuracy. AI is a toolset, not a one-size-fits-all solution. Understanding specific challenges is crucial; cybersecurity is about minimising risks and preparing for threats. Good hygiene and proactive threat response are essential – organisations need to practise identifying and responding to threats, ensuring they have the right protocols in place to catch attackers quickly and effectively.

Prediction 2: Attackers are using AI to exploit vulnerabilities in security tools

As attackers increasingly leverage AI, a divide will emerge between those who use it skilfully for adaptive attacks and those who employ it more simplistically. By 2025, threat actors are expected to exploit AI for initial access through tactics like deepfakes and sophisticated phishing. While AI will evolve, core attacker behaviours – such as establishing a foothold, setting up command-and-control tunnels, abusing identity and moving laterally – will persist. This highlights the need for robust tools like Network Detection and Response (NDR) solutions to effectively counter these evolving threats and enhance organisational security.

Prediction 3: Focus on regulatory compliance overwhelms defenders and gives attackers an advantage

The growing emphasis on regulatory compliance is overwhelming cybersecurity defenders, giving attackers an advantage. Security teams are stretched thin, prioritizing compliance over dynamic threat detection, which undermines proactive security. By 2025, attackers are likely to exploit this imbalance further. While compliance is essential for meeting regulations, it does not equate to security and often diverts focus from effective threat mitigation. Analyzing usage logs is important, but the key is how these logs are used to identify and respond to threats. Compliance and security must collaborate to strengthen defenses; compliance alone cannot replace robust security measures.

Prediction 4: Identity will remain a critical attack vector

Identity-based attacks will be a major concern in 2025, with attackers leveraging the dark web and GenAI to enhance phishing and business email compromise (BEC). Organisations must prioritise continuous testing for identity compromises, using dedicated red teams or third-party services, rather than relying solely on annual assessments. Open-source tools can simulate identity compromises, helping organisations prepare for real threats. As generative AI becomes more prevalent, robust identity management and security practices are essential to prepare for these evolving attacks.

Prediction 5: Enterprise breaches will be traced back to AI agent abuse

Agentic AI will increasingly analyse attack surfaces and existing threats, providing context and detecting natural-language-based threats like phishing, which traditional models struggle with.

As reliance on these sophisticated tools grows, organisations must prioritise the security and responsible use of their AI systems. Implementing robust safeguards and ethical guidelines will be essential to prevent misuse. Ultimately, integrating agentic AI will not only enhance threat detection but also foster a proactive security culture, enabling organisations to stay ahead of evolving cyber threats and better protect their critical assets.

As AI continues to evolve, organizations must stay ahead of emerging threats by adopting strategic, outcomes-focused approaches to cybersecurity. Proactive measures, such as real-time threat detection and actionable insights, are essential for optimizing resources and effectively mitigating risks in an increasingly complex digital landscape.

Intelligent CISO