Agentic AI is shifting from a promising productivity tool to a security problem that enterprise leaders can no longer ignore. In the conversation below, Kristian speaks with Irina Tsukerman, President at Scarab Rising; Shlomi Beer, Co-Founder & CEO at ImpersonAlly; and Roey Eliyahu, Co-Founder & CEO at Salt Security, about what agentic AI is, where the risks are emerging, and how organizations can manage systems designed to act on their behalf.
What makes the topic urgent is that agentic AI is not merely a chatbot with a better interface. These systems can make decisions, take actions, access data, and move across enterprise systems with limited human oversight. That creates efficiency, but it also creates exposure, especially when companies are adopting the technology faster than they are building the controls around it.
Kristian frames the discussion around a simple but important question: how do you secure systems that are supposed to behave autonomously when traditional security assumptions no longer hold? The answer, as the guests explain, is that the companies moving fastest on agentic AI are often the ones least prepared for its consequences.
Where The Security Risks Emerge
The first major theme in the conversation is that adoption is outpacing governance. Shlomi Beer says attackers don't always need to break through classic perimeter defenses; instead, they can manipulate external inputs, prompt chains, or other content that agents ingest and trust. In that environment, the attack surface is no longer just a network or an endpoint. It is the workflow itself.
Roey Eliyahu adds a broader operational view. He argues that consumer-facing sectors, where support volume is high and repetitive tasks are common, are adopting agents aggressively because the business case is compelling. But once an agent is expected to act like an employee, it also needs the permissions of an employee. That is where the security problem begins to scale.
Both guests point to the same underlying issue: the more useful the agent becomes, the more access it needs. And the more access it receives, the more dangerous it becomes if it is abused, hijacked, or allowed to make the wrong call. What begins as automation can quickly become a privilege problem, an observability problem, and a governance problem all at once.
A second theme is that organizations often leave the rules unclear because the technology is moving faster than internal policy. Irina Tsukerman says some companies rush into deployment because they want competitive advantage, while others delay formal controls because they do not yet understand the risks well enough.
Why Governance Is Lagging
But the risks are not abstract. Irina points to predictable failure modes: an agent being hijacked, exposing customer records, or falling for a deepfake-style manipulation. Roey widens that lens by explaining that agents also create compliance exposure, especially in regulated sectors such as finance, insurance, and pharma. Even when the agent improves service, it still has access to sensitive data.
The discussion also shows why the current security market can feel fragmented. Vendors often sell point solutions for one layer of the stack, such as identity, the model, or the MCP layer, but the speakers argue that this rarely maps cleanly to the real business risk. The problem is not one isolated component. It is the chain linking agent, prompt, data, API, and downstream action.
The conversation turns to remediation, and here the emphasis is clear: start with visibility, then add guardrails, then add detection. Roey says readiness begins with full discovery and observability across agents, MCP servers, APIs, code, runtime, and configuration. Without that holistic view, security teams are trying to defend something they cannot fully see.
Once organizations understand the full chain, they can apply business-specific restrictions. An airline may want to prevent agents from issuing refunds or changing fares. A retailer may need to block unauthorized customer data access or prevent cross-customer leakage. Irina reinforces that point by arguing that prevention is not enough on its own; companies also need monitoring that can detect misuse before the damage becomes externally visible.
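To make the pattern concrete, here is a minimal sketch of what a business-specific guardrail on agent tool calls might look like. All names here (`Action`, `GuardrailPolicy`, the example rules) are hypothetical illustrations of the idea, not any vendor's actual API: forbidden actions are blocked outright, cross-customer access is denied, and every decision is logged so a detection layer can review it later.

```python
# Hypothetical sketch: a per-call guardrail for an agent's tool actions.
# Rules mirror the examples above: block refund/fare tools entirely,
# prevent cross-customer data access, and audit every decision.
from dataclasses import dataclass, field

@dataclass
class Action:
    tool: str          # e.g. "issue_refund", "lookup_order"
    customer_id: str   # customer the agent is serving
    target_id: str     # customer record the action would touch

@dataclass
class GuardrailPolicy:
    blocked_tools: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)  # feeds the detection layer

    def allow(self, action: Action) -> bool:
        # Rule 1: business-forbidden tools are never allowed.
        if action.tool in self.blocked_tools:
            self.audit_log.append(("blocked_tool", action.tool))
            return False
        # Rule 2: the agent may only touch records belonging to
        # the customer it is acting for (no cross-customer leakage).
        if action.target_id != action.customer_id:
            self.audit_log.append(("cross_customer", action.tool))
            return False
        self.audit_log.append(("allowed", action.tool))
        return True

policy = GuardrailPolicy(blocked_tools={"issue_refund", "change_fare"})
print(policy.allow(Action("issue_refund", "c1", "c1")))  # False: tool blocked
print(policy.allow(Action("lookup_order", "c1", "c2")))  # False: wrong customer
print(policy.allow(Action("lookup_order", "c1", "c1")))  # True: permitted
```

The point of the sketch is the ordering the speakers describe: the policy can only be written once you can see every tool an agent can invoke, and the audit log exists so misuse is detectable even when a rule fails to catch it.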
How Companies Can Respond
The final takeaway is that agentic AI doesn't just expand what employees can do. It also expands what attackers, insiders, and careless users can trigger through the same systems. That makes security both more urgent and harder, because the threat is embedded in the workflow itself rather than sitting outside it.
In the end, the conversation leaves Kristian and his guests with a cautionary but practical message. Agentic AI can deliver real productivity gains, but only if organizations stop treating security as an afterthought. The companies most likely to benefit from the technology are those that pair adoption with observability, limit privilege by design, and recognize that autonomy without control is not innovation. It is exposure.

