In short
In a demo, Comet’s AI assistant adopted embedded prompts and posted personal emails and codes.
Courageous says the vulnerability remained exploitable weeks after Perplexity claimed to have mounted it.
Specialists warn that immediate injection assaults expose deep safety gaps in AI agent programs.
Courageous Software program has uncovered a safety flaw in Perplexity AI’s Comet browser that confirmed how attackers might trick its AI assistant into leaking personal consumer knowledge.
In a proof-of-concept demo revealed August 20, Courageous researchers recognized hidden directions inside a Reddit remark. When Comet’s AI assistant was requested to summarize the web page, it didn’t simply summarize—it adopted the hidden instructions.
Perplexity disputed the severity of the discovering. A spokesperson advised Decrypt the problem “was patched earlier than anybody seen” and mentioned no consumer knowledge was compromised. “We’ve a reasonably sturdy bounty program,” the spokesperson added. “We labored straight with Courageous to determine and restore it.”
Courageous, which is growing its personal agentic browser, maintained that the flaw remained exploitable weeks after the patch and argued Comet’s design leaves it open to additional assaults.
Courageous mentioned the vulnerability comes right down to how agentic browsers like Comet course of internet content material. “When customers ask it to summarize a web page, Comet feeds a part of that web page on to its language mannequin with out distinguishing between the consumer’s directions and untrusted content material,” the report defined. “This permits attackers to embed hidden instructions that the AI will execute as in the event that they had been from the consumer.”
Immediate injection: outdated thought, new goal
Such a exploit is called a immediate injection assault. As an alternative of tricking an individual, it tips an AI system by hiding directions in plain textual content.
“It’s just like conventional injection assaults—SQL injection, LDAP injection, command injection,” Matthew Mullins, lead hacker at Reveal Safety, advised Decrypt. “The idea isn’t new, however the technique is completely different. You’re exploiting pure language as an alternative of structured code.”
Safety researchers have been warning for months that immediate injection might develop into a significant headache as AI programs acquire extra autonomy. In Might, Princeton researchers confirmed how crypto AI brokers could possibly be manipulated with “reminiscence injection” assaults, the place malicious data will get saved in an AI’s reminiscence and later acted on as if it had been actual.
Even Simon Willison, the developer credited with coining the time period immediate injection, mentioned the issue goes far past Comet. “The Courageous safety staff reported critical immediate injection vulnerabilities in it, however Courageous themselves are growing an analogous function that appears doomed to have comparable issues,” he posted on X.
Shivan Sahib, Courageous’s vp of privateness and safety, mentioned its upcoming browser would come with “a set of mitigations that assist scale back the danger of oblique immediate injections.”
“We’re planning on isolating agentic looking into its personal storage space and looking session, so {that a} consumer doesn’t by chance find yourself granting entry to their banking and different delicate knowledge to the agent,” he advised Decrypt. “We’ll be sharing extra particulars quickly.”
The larger danger
The Comet demo highlights a broader drawback: AI brokers are being deployed with highly effective permissions however weak safety controls. As a result of giant language fashions can misread directions—or observe them too actually—they’re particularly susceptible to hidden prompts.
“These fashions can hallucinate,” Mullins warned. “They will go fully off the rails, like asking, ‘What’s your favourite taste of Twizzler?’ and getting directions for making a home made firearm.”
With AI brokers being given direct entry to e mail, information, and dwell consumer classes, the stakes are excessive. “Everybody desires to slap AI into every little thing,” Mullins mentioned. “However nobody’s testing what permissions the mannequin has, or what occurs when it leaks.”
Usually Clever E-newsletter
A weekly AI journey narrated by Gen, a generative AI mannequin.

