Workers are now using more than 3,400 AI apps on the job, most of them outside IT visibility. According to Zscaler, that surge is creating a major shadow AI compliance and security problem, as sensitive company data flows into tools that many IT teams can't fully monitor.
As Jay Chaudhry, CEO and Founder of Zscaler, said on the company's recent earnings call:
“Organizations are rapidly adopting AI to drive productivity and innovation, but doing so is creating new vulnerabilities, significantly expanding the attack surface and increasing cyber threats in scale, sophistication, and speed, recasting AI from a productivity engine into a dangerous security threat.”
The scale behind that warning can't be ignored. Zscaler said AI application usage across its customers has expanded to more than 3,400 apps, a quadrupling over the past 12 months. Meanwhile, data transfers to AI applications exceeded 18,000 terabytes in 2025.
The company also reported that enterprise AI usage rose 91% year over year, while data transfers to AI and machine learning applications climbed 93%.
Securing Uncontrolled AI Usage at Scale
Zscaler is positioning its new AI Shield tools as a response to this shift, arguing that enterprise AI security must now focus as much on employee behavior and governance as on traditional cyber defense.
In its latest financial results, the company highlighted Zscaler AI Shield as a necessity for securing enterprise AI usage at scale. Zscaler is trying to give enterprises something most currently lack: visibility into how AI is actually being used. That means identifying which AI tools employees are using, controlling access, and monitoring how data flows into them.
This allows enterprises to move from reactive policy enforcement to proactive governance.
The AI Shield package is no longer being framed as a niche add-on for experimental AI projects. Instead, it's being positioned as a control layer for AI compliance and broader enterprise AI security.
That is already showing up in customer deals. Zscaler said a Fortune 500 semiconductor manufacturer signed an eight-figure new logo deal that included Zscaler AI Shield and data protection products. Their goal? To block unsanctioned AI applications, prevent data leakage into public large language models, and provide visibility into prompts.
One of the most telling details from the quarter came from an entertainment customer. According to Chaudhry, a major entertainment company activated Zscaler's policy enforcement for AI traffic and discovered that 4 million AI prompts per week were now being secured. That kind of volume suggests companies may be much further into shadow AI usage than leadership teams realize.
Enterprise AI Security Gets Harder as AI Agents Enter the Workflow
Zscaler is also trying to widen the conversation beyond employees using AI tools manually. The company says the next challenge for enterprise AI security will come from AI agents operating autonomously across workflows, applications, and data environments. Chaudhry explained:
“AI agents shift the threat landscape and operate autonomously at speeds far exceeding humans, exponentially increasing agentic traffic while compressing the time to prevent, detect, and respond to threats.”
That warning matters in the employee experience space because AI is increasingly being embedded into collaboration and workflow automation. Once AI agents begin acting across enterprise systems at scale, shadow AI compliance becomes harder to manage. The challenge is no longer just what employees type into AI tools, but what connected AI systems can access, share, and trigger on their own.
Keep up to date on the latest UC security trends by following UC Today on LinkedIn.
Compliance Pressure Is Giving Zscaler Another Opening
The compliance dimension adds even more weight to Zscaler's argument. In its recent expansion of global compliance capabilities, the company emphasized the need for stronger local controls. Misha Kuperman, Chief Reliability Officer at Zscaler, said in the announcement:
“Effective data sovereignty requires customers to have verified authority over their data residency, telemetry, and control plane data.”
For enterprises dealing with shadow AI, this raises a critical issue. It's not just about seeing how employees use AI, but ensuring that any data shared with these tools doesn't violate regional compliance requirements or data residency rules.
What This Signals for IT and Security Leaders
The bigger takeaway from Zscaler's quarter is that shadow AI compliance is no longer a side issue caused by a few curious employees testing new tools. It's becoming a mainstream enterprise governance problem, driven by widespread workplace adoption and the rapid growth of AI-powered workflows.
That's where Zscaler AI Shield is trying to land its message. The company is betting that customers will increasingly need a dedicated policy and visibility layer between employees, AI applications, and sensitive corporate data. If that thesis holds, enterprise AI security will become one of the most important budget conversations in the market over the next year.
For many enterprises, the uncomfortable reality is simple: AI adoption is speeding ahead, leaving governance by the wayside.
Want to enhance your enterprise security? Check out UC Today's Guide to Security & Compliance to kickstart your adoption journey and find all the guidance you'll need.
FAQs
What is Zscaler AI Shield?
Zscaler AI Shield is Zscaler's platform for discovering AI usage, managing access, and inspecting prompts. It also helps prevent sensitive data leakage across AI applications.
What does shadow AI compliance mean?
Shadow AI compliance refers to the challenge of governing employee use of AI tools, particularly in cases where usage may not be approved, monitored, or covered by existing compliance controls.
Why is enterprise AI security becoming more urgent?
Enterprise AI security is becoming more urgent because employees are using more AI tools, sharing sensitive data with them, and beginning to interact with AI agents that can operate autonomously at scale.

