Last week, Anthropic gathered twelve of the world's largest technology companies to share an uncomfortable finding. Its most powerful AI model had spent several weeks autonomously identifying security flaws in widely used software, including vulnerabilities that had gone undetected for nearly three decades.
That disclosure came alongside the general release of Claude Opus 4.7. Anthropic is using the newer model to test the security controls it needs before it can responsibly release the more capable one. For enterprise buyers, both developments matter.
Research from Gravitee, published in February 2026, found that 81% of enterprise teams have moved past the planning phase for AI agents. Yet only 14.4% have full security or IT approval for the agents they run. That governance gap looks considerably more serious in light of what Anthropic disclosed this week.
What Opus 4.7 changes for enterprise teams
The core problem with running AI agents at scale has always been reliability. Models that drop context between sessions, stall on complex tasks, or need supervising at every step eat up more time than they save.
Opus 4.7 addresses several of those issues. It checks its own outputs before reporting back, retains context across sessions, and follows instructions more precisely than its predecessor. For teams running multi-day workflows, that context retention matters most. Re-establishing background at the start of each session is a real operational cost that most productivity assessments overlook.
Enterprise testers reported measurable gains. Notion saw a 14% improvement on complex multi-step workflows with a third fewer tool errors. They also said it was the first model to pass their implicit-need tests, where the model works out requirements without explicit instruction. Ramp found it needed far less step-by-step guidance across tasks spanning multiple tools and codebases.
Image resolution has increased to more than three times that of earlier Claude models. That makes document processing and dense interface work more practical. Those running Claude inside Microsoft 365 will see that improvement across Teams, Outlook, and OneDrive workflows. Pricing remains at $5 per million input tokens and $25 per million output tokens.
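To see why session context retention shows up on the invoice as well as the clock, here is a minimal back-of-envelope sketch using the listed per-token rates. The token counts are illustrative assumptions, not benchmark figures from Anthropic or the testers quoted above.

```python
# Back-of-envelope cost sketch at the listed Opus 4.7 rates.
# All token counts below are illustrative assumptions.

INPUT_RATE = 5 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25 / 1_000_000  # dollars per output token

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single model session."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical multi-day workflow: 20 sessions, each needing a
# 40k-token background brief, plus 5k tokens of new input and
# 2k tokens of output per session.
background, new_input, output, sessions = 40_000, 5_000, 2_000, 20

# Re-establishing the background every session vs. sending it once
# and relying on retained context thereafter.
resend_each_time = sessions * session_cost(background + new_input, output)
send_once = session_cost(background, 0) + sessions * session_cost(new_input, output)

print(f"background re-sent every session:  ${resend_each_time:.2f}")
print(f"background retained across sessions: ${send_once:.2f}")
```

Under these assumptions the re-sent background more than triples the input bill, which is the "real operational cost" the testers describe, just measured in dollars rather than hours.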
The security finding every IT leader needs to read
Using Claude Mythos Preview, Anthropic autonomously found thousands of critical zero-day vulnerabilities. These spanned every major operating system and web browser. One was a 27-year-old flaw in OpenBSD that let attackers remotely crash machines. Another was a bug in FFmpeg that automated testing tools had run five million times without flagging. Maintainers have now fixed all of them.
As UC Today covered separately this week, the significance is not the individual bugs. It is that a capable AI model can now find serious vulnerabilities at scale, autonomously, and faster than any existing testing process. The average cost of a data breach stands at $4.4 million. Unified communications environments, built on browsers, shared media libraries, APIs, and virtualised infrastructure, sit squarely in scope.
Project Glasswing, Anthropic's response, brings together AWS, Cisco, CrowdStrike, Google, Microsoft, Palo Alto Networks, and others. The group committed $100M in model credits to scanning and hardening critical software infrastructure. They also directed a further $4M to open-source security organisations. Microsoft, which has been building its own AI security agent infrastructure in parallel, joined as a founding member.
Opus 4.7 is the first Claude model to ship with automated safeguards that block high-risk cybersecurity uses. Anthropic describes it as a test bed for the controls needed before Mythos-class models can reach a wider audience. Security professionals with legitimate requirements can apply through the new Cyber Verification Programme.
Deloitte's 2026 enterprise AI report found that only one in five companies has a mature governance model for autonomous AI agents. For IT and security leads, that figure and this week's news belong in the same conversation.

