Report finds without proper data security controls, GenAI turning employees into unintentional insider threats

Report finds without proper data security controls, GenAI turning employees into unintentional insider threats

Netskope, a leader in modern security and networking, has published new research revealing a 30x increase in data sent to Generative AI (genAI) apps by enterprise users in the last year.

This includes sensitive data such as source code, regulated data, passwords and keys, and intellectual property, significantly increasing the risk of costly breaches, compliance violations and intellectual property theft. The report also highlights how shadow AI is now the predominant shadow IT challenge hampering organisations as 72% of enterprise users are on their personal accounts in the genAI apps they are using for work.

The 2025 Generative AI Cloud and Threat Report from Netskope Threat Labs details the ubiquity of genAI usage in the enterprise. As of the writing of this report, Netskope had visibility into 317 genAI apps like ChatGPT, Google Gemini and GitHub Copilot. A broader analysis across the enterprise found that 75% of enterprise users are accessing applications with genAI features, creating a bigger issue security teams must address: the unintentional insider threat.

 “Despite earnest efforts by organisations to implement company-managed genAI tools, our research shows that shadow IT has turned into shadow AI, with nearly three-quarters of users still accessing genAI apps through personal accounts,” said James Robinson, CISO, Netskope. “This ongoing trend, when combined with the data in which it is being shared, underscores the need for advanced data security capabilities so that security and risk management teams can regain governance, visibility and acceptable use over genAI usage within their organisations.”

GenAI risk reduction

Many organisations lack full or even partial visibility into how data is being processed, stored and leveraged within indirect genAI usage. Oftentimes, they’re choosing to apply a ‘block first and ask questions later’ policy by explicitly allowing certain apps and blocking all others. Yet, security leaders must look to pursue a safe enablement strategy as employees seek efficiency and productivity benefits from these tools.

“Our latest data shows genAI is no longer a niche technology; it’s everywhere,” said Ray Canzanese, Director of Netskope Threat Labs. “It is becoming increasingly integrated into everything from dedicated apps to backend integrations. This ubiquity presents a growing cybersecurity challenge, demanding organisations adopt a comprehensive approach to risk management or risk having their sensitive data exposed to third parties who may use it to train new AI models, creating opportunities for even more widespread data exposures.”

Over the past year, Netskope Threat Labs also observed the number of organisations running genAI infrastructure locally has increased dramatically, going from less than 1% to 54% and this trend is expected to continue. Despite reducing risks of unwanted data exposure to third-party apps in the cloud, the shift to local infrastructure introduces new types of data security risks from supply chains, data leakage, and improper data output handling to prompt injection, jailbreaks and meta prompt extraction. As a result, many organisations are adding locally-hosted genAI infrastructure on top of cloud-based genAI apps already in use.

“AI isn’t just reshaping perimeter and platform security – it’s rewriting the rules,” said Ari Giguere, Vice President of Security and Intelligence Operations at Netskope. “As attackers craft threats with generative precision, defences must be equally generative, evolving in real-time to counter the resulting ‘innovation inflation.’ Effective combat of a creative human adversary will always require a creative human defender, but in an AI-driven battlefield, only AI-fueled security can keep pace.”

Browse our latest issue

Intelligent CISO

View Magazine Archive