In a cascading illustration of unintended consequences, threat actors compromised an AI tool vendor, then used that access this past weekend to compromise cloud platform vendor Vercel, and possibly other organizations downstream.
Vercel yesterday disclosed it was breached via a third-party AI tool, Context.ai. While Vercel is not a Context customer, the attacker appears to have used a compromised OAuth token belonging to a Vercel employee who signed up for Context’s AI Office Suite using their Vercel Google Workspace account, granting “Allow All” permissions in the process.
In a security bulletin on its website, Vercel said this “enabled [the attacker] to gain access to some Vercel environments and environment variables that were not marked as ‘sensitive.’”
As Hudson Rock pointed out in a blog post, the Context breach apparently began with an employee downloading cheats for the popular online game Roblox; one of those scripts contained an infostealer.
“No exploit. No zero-day,” David Lindner, chief information security officer (CISO) of Contrast Security, tells Dark Reading. “Just an unsanctioned AI tool, an overpermissioned OAuth grant, and a gaming cheat download. Vercel is now working with Mandiant on a breach that a threat actor [allegedly ShinyHunters] is selling for $2 million. Your employees are doing the same things on their machines right now. The question is whether you know about it.”
“Operational Velocity, Detailed Understanding”
Vercel noted that variables marked “sensitive” are stored in a way that prevents them from being read, and that the company has no evidence such variables were accessed. Vercel is working with Mandiant for its incident response alongside other security firms, peers, Context.ai itself, and law enforcement.
“We assess the attacker as highly sophisticated based on their operational velocity and detailed understanding of Vercel’s systems,” the company said.
Once Context learned of the OAuth token theft, the company said it notified impacted customers and provided next steps. “While we are continuing to assess this incident, the theft of the OAuth tokens occurred prior to the AWS environment being shut down,” Context’s notification read.
Further expanding the downstream impact, Vercel identified a limited subset of customers whose Vercel credentials were compromised; the company contacted them and recommended immediate credential rotation. Only those contacted are believed to have been compromised at this time.
Dark Reading asked Vercel whether accessed variables, even if they weren’t marked “sensitive,” may have contained sensitive data given customers were compromised. The company declined to respond directly but emphasized that it has “contacted customers that we believe could be at risk of being compromised.”
“We continue to investigate whether and what data was exfiltrated, and we will contact customers if we discover further evidence of compromise,” the spokesperson says. “We’ve deployed extensive protection measures and monitoring. Our services remain operational. We will continue to keep the Security Bulletin updated as well.”
Context, meanwhile, shared its own security advisory yesterday concerning an attack against a deprecated legacy consumer product, the Context AI Office Suite. Context said that last month, it “identified and stopped” a breach involving unauthorized access to its AWS environment.
While the company engaged CrowdStrike, conducted an investigation, closed the AWS environment, and took steps to fully deprecate the associated Office Suite product, Context learned through Vercel’s breach and additional investigation that the unidentified actor “also likely compromised OAuth tokens for some of our consumer users.”
Context Bedrock, the company’s current platform product, is unaffected.
Dark Reading has contacted Context for additional information.
Attacks Emphasize Importance of AI Data Security
Although some key details remain unknown (a given since both incidents remain under investigation), the supply-chain incident calls attention to the risks posed by AI products when data security isn’t appropriately locked down. AI tools require a wide range of permissions and privileges to work, meaning that without prioritizing segmentation, zero trust, and least privilege principles, organizations remain at increased risk.
It is unclear whether the Vercel employee’s Context AI Office Suite instance was sanctioned or an example of “shadow AI,” the use of AI tools by employees without IT oversight. Either way, the incident is a reminder to create an AI governance framework and set clear expectations for how AI can and cannot be deployed using company resources.
Vercel’s blog contains indicators of compromise and recommendations. Customers should review their activity logs, review and rotate environment variables, mark environment variables as sensitive going forward, investigate recent deployments for unexpected or suspicious activity, ensure that “Deployment Protection” is set to at least Standard, and rotate Deployment Protection tokens if set.
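The rotation triage above can be sketched programmatically. The snippet below is a minimal, hypothetical example: it assumes you have exported your project’s environment-variable records (for instance from Vercel’s REST API) into dictionaries, and flags anything that was readable during the incident window. The field names (`type`, `created_at`) and the incident date are illustrative assumptions, not Vercel’s actual schema or timeline.

```python
from datetime import datetime, timezone

# Hypothetical incident window start (illustrative only, not the real date).
INCIDENT_START = datetime(2025, 11, 28, tzinfo=timezone.utc)

def needs_rotation(var: dict) -> bool:
    """Flag variables the attacker could have read: anything not stored
    as a 'sensitive' (write-only) variable that already existed before
    the incident window."""
    created = datetime.fromtimestamp(var["created_at"], tz=timezone.utc)
    return var["type"] != "sensitive" and created < INCIDENT_START

# Example records; field names mimic an API export but are assumptions.
env_vars = [
    {"key": "DATABASE_URL", "type": "encrypted", "created_at": 1730000000},
    {"key": "API_SECRET", "type": "sensitive", "created_at": 1730000000},
]

to_rotate = [v["key"] for v in env_vars if needs_rotation(v)]
print(to_rotate)  # only the non-sensitive, pre-incident variable
```

In this sketch, variables stored as “sensitive” are excluded because, per Vercel’s bulletin, they are stored in a way that prevents them from being read back.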
Jaime Blasco, chief technology officer (CTO) at Nudge Security, tells Dark Reading that organizations that want to avoid a similar incident should start with OAuth consent.
“Most Google Workspace and Microsoft 365 environments are still configured to let any employee grant third-party apps access to their enterprise account. Move to admin-managed consent. New apps get reviewed before they can touch corporate data. That one change would have blocked a Vercel employee from granting Context.ai enterprise-wide scopes in the first place,” Blasco says. “That being said, there are hundreds of SaaS platforms that allow OAuth grants to be created, and most of them let administrators block these grants or gate this functionality behind an enterprise license.”
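Alongside locking down consent, teams can audit the grants that already exist. The sketch below is a hypothetical example of triaging an export of third-party app grants (such as an admin token report) by flagging apps holding broad scopes; the scope URIs are genuine Google OAuth scopes, but the record layout, user names, and app names are illustrative assumptions.

```python
# Scopes that grant broad access to corporate data; a real review list
# would be longer. These are genuine Google OAuth scope URIs.
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_risky_grants(grants):
    """Return (user, app) pairs whose consented scopes include any
    broad scope. `grants` mimics an admin export; the field names
    here are illustrative, not a specific vendor's schema."""
    risky = []
    for g in grants:
        if BROAD_SCOPES & set(g["scopes"]):
            risky.append((g["user"], g["app"]))
    return risky

# Example export (hypothetical users and apps).
grants = [
    {"user": "dev@example.com", "app": "Context AI Office Suite",
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"user": "pm@example.com", "app": "Calendar helper",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

print(flag_risky_grants(grants))  # flags only the broadly scoped grant
```

An “Allow All” consent of the kind described in the Vercel disclosure would surface immediately in this kind of review, since it bundles multiple broad scopes into a single grant.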
OAuth: The New Attack Surface
Blasco says OAuth tokens are “the new attack surface,” as played out in the Salesloft Drift attack, Gainsight attack, and others. Attackers compromise a small AI or SaaS vendor, steal the OAuth tokens held on behalf of customers, and conduct additional attacks downstream.
“None of this required a novel AI attack technique,” he says. “Agentic AI makes it worse because these platforms sit at the center of a hub of OAuth grants with expansive scopes, usually at young companies without mature security programs behind them. OAuth is the new lateral movement. Until the industry treats OAuth tokens as high-value credentials, we’re going to keep reading the same breach writeup with the vendor names swapped out.”
Guillaume Valadon, cybersecurity researcher at GitGuardian, says the mechanics of these attacks reflect “the same identity and credential problems we’ve been writing about for 15 years.”
“What AI has really changed is the distribution of trust: teams are wiring dozens of new SaaS integrations into their core identity providers and code hosts faster than they can vet them, and each one becomes a pre-authorized path that an attacker inherits the moment the vendor is popped,” Valadon says. “APIs, tokens, and OAuth scopes are still the softest part of the stack — AI didn’t create that problem, it just massively expanded the surface that depends on it.”

