One aspect of the “AI revolution” keeping security professionals up at night is the continued prevalence of prompt injection attacks that enable exfiltration of sensitive data — even against dominant vendors like Microsoft and Salesforce.
Capsule Security, a vendor that sells AI agent runtime security, published research today concerning prompt injection vulnerabilities involving Salesforce Agentforce and Microsoft Copilot. Although both vulnerabilities have been addressed, the research is a reminder that the prompt injection remains an unsolved problem for large language models (LLMs).
In the case of the Salesforce flaw, which Capsule refers to as “PipeLeak,” an attacker could insert malicious instructions into an untrusted lead capture form, which a Salesforce agent would interpret as a trusted prompt. The form in question is a public-facing customer relationship management (CRM) form that a prospective client may use on the Salesforce customer’s website.
In an example prompt, Capsule instructed the agent to send all the leads it can find. There was no complex code or exploit as part of the instructions either — just a single line instructing Salesforce to list all the leads it can locate in the form email sent back to the attacker.
“The vulnerability stems from a fundamental architectural flaw: Agent Flows process lead form inputs as trusted instructions rather than untrusted data,” Capsule’s Bar Kaduri wrote. “Because lead forms accept arbitrary text from external, unauthenticated users, an attacker can embed malicious prompts that override the agent’s intended behavior.”
In parallel, Capsule refers to the Microsoft Copilot vulnerability as “ShareLeak.” In this attack (CVE-2026-21520), which was granted a high-severity (7.5 CVSS) designation, an attacker would have been able to insert malicious code into a SharePoint form input, which triggers the connected Copilot data and returns customer data to an attacker-controlled email. Even when safety mechanisms flagged the attack, data was still exfiltrated.
The attack was a bit more involved and required a more complex command in order to override intended AI agent behavior, but like the Agentforce bug, it’s a prompt injection triggered by a customer-facing form.
The Complexities of Human in the Loop
In response to Capsule’s report, Salesforce thanked the security vendor and addressed the prompt injection. However, regarding the data exfiltration component, Salesforce told Capsule (according to a response published in the blog post) that this is a configuration-specific issue and that customers can activate human-in-the-loop requirements as a configuration setting to prevent data leaking.
“We have determined this is a configuration-specific issue rather than a platform-level vulnerability,” the CRM vendor said in the blog post. “To ensure security, our out-of-the-box (OOTB) email actions require human-in-the-loop (HITL) oversight. The same HITL requirement is available as a configuration setting for your custom actions to prevent unintentional data transfers/actions.”
Naor Paz, co-founder and CEO of Capsule Security, found it “very surprising” that Salesforce’s response mostly boiled down to human in the loop configuration recommendations. “The whole thing about agents is they do things for you without you babysitting them, right?” he tells Dark Reading. As Paz explains, it’s not how many organizations use AI tools.
“We’re seeing agents like Claude Code, for example, running for days, writing code, querying production databases, and doing many dangerous things autonomously,” he says. “And I think their answer, like, ‘Do human in the loop,’ is just embarrassing, to be honest with you.”
In a statement, a Salesforce spokesperson tells Dark Reading that it is aware of the issue and has remediated it.
“Prompt injection is an evolving challenge across the AI industry, and our approach includes layered safeguards designed to help mitigate these risks, including controls around instruction isolation, tool-use restrictions, and human oversight,” Salesforce says in its response. “We continue to refine these safeguards and work with the security research community to enhance protections as these threats evolve.”
Capsule recommends organizations running Agentforce treat all lead form inputs as untrusted data, disallow Email Tool usage when processing untrusted inputs, apply input sanitation and prompt boundary techniques, require manual review before sending emails with CRM data, and to log all agent actions involving data access or external communication.
Prompt Injections Keep Doing Their Thing
“As organizations rush to deploy AI agents across their operations, they inherit significant risks that existing security tools weren’t designed to address,” Kaduri wrote in a blog post discussing Microsoft’s Copilot flaw. “The attack we demonstrated required no special access, no exploitation of traditional software vulnerabilities, and no advanced technical skills, just an understanding of how LLMs process instructions.”
Microsoft addressed CVE-2026-21520 following Capsule’s report. Dark Reading has contacted the company for additional comment.
Capsule Security’s research acts as a reminder that the prompt injection isn’t going anywhere for the foreseeable future. It’s also hard to say how the landscape for these attacks might change as exploit-hunting capabilities like those found in Anthropic’s Claude Mythos reach the threat actor masses.
Paz tells Dark Reading that in AI agent security, there’s a concept called the “lethal trifecta,” defined as the intersection of an agent with access to sensitive data, external exposure to untrusted content, and the ability to communicate externally. And when those three things exist together, data can be more easily manipulated.
“It’s not a resource problem,” he says. “I think it’s more of an approach problem, because all these large vendors, including Microsoft, still have to deal with this. They still don’t have the right approaches to match this newer problem.”
Don’t miss the latest Dark Reading Confidential podcast, Security Bosses Are All in on AI: Here’s Why, where Reddit CISO Frederick Lee and Omdia analyst Dave Gruber discuss AI and machine learning in the SOC, how successful deployments have (or haven’t) been, and what the future holds for AI security products. Listen now!

