Cloudflare has announced the development of Firewall for AI, a new layer of protection designed to identify abuse and attacks before they reach and tamper with Large Language Models (LLMs), a type of AI application that interprets human language and other complex data.
Backed by Cloudflare's global network, one of the largest in the world, Firewall for AI will position Cloudflare as one of the few security providers prepared to combat the next wave of attacks in the AI revolution: those targeting the functionality, critical data and trade secrets held within LLMs.

A recent study revealed that only one in four C-suite executives is confident that their organisation is well prepared to address AI risks. When it comes to protecting LLMs, it can be extremely challenging to build adequate security in from the start: it is near impossible to limit user interactions, and unlike traditional applications, these models do not produce predetermined outputs by design. As a result, LLMs are becoming a defenceless path for threat actors, leaving organisations vulnerable to model tampering, attacks and abuse.
“When new types of applications emerge, new types of threats follow quickly. That’s no different for AI-powered applications,” said Matthew Prince, co-founder and CEO of Cloudflare. “With Cloudflare’s Firewall for AI, we are helping build security into the AI landscape from the start.”