Attackers used a combination of exposed credentials and artificial intelligence (AI) to gain administrative access to an Amazon Web Services (AWS) environment in less than 10 minutes. The incident demonstrates once again how AI is becoming a force multiplier that lets threat actors move more quickly than ever.
A threat actor gained initial access to the environment via credentials discovered in public Simple Storage Service (S3) buckets and then quickly escalated privileges, moving laterally across 19 unique AWS principals over the course of the attack, the Sysdig Threat Research Team (TRT) revealed in a report published Tuesday.
Throughout the attack, which occurred on Nov. 28, 2025, the threat actor leveraged large language models (LLMs) to automate reconnaissance, generate malicious code, and make real-time decisions, according to researchers. In fact, the use of LLMs appears to have played a pivotal role in both the speed with which the attackers operated and the agility with which they moved laterally, according to Sysdig.
“This attack stands out for its speed, effectiveness, and strong indicators of AI-assisted execution,” Sysdig researchers Alessandro Brucato and Michael Clark wrote in the report.
During the attack, the threat actor collected and exfiltrated data from the cloud environment, provisioned GPU instances on Elastic Compute Cloud (EC2) for potential resource abuse or LLM model development, and abused Amazon Bedrock, an AI app-dev environment, for LLMjacking to gain access to cloud-hosted models.
Major Credential Gaffe Kicks Off Attack
While the speed and apparent use of AI were among the most notable aspects of the attack, the researchers also called out the way that the attacker accessed exposed credentials as a cautionary tale for organizations with cloud environments. Indeed, stolen credentials are often an attacker’s initial access point into a cloud environment.
“Leaving access keys in public buckets is a huge mistake,” the researchers wrote. “Organizations should prefer IAM roles instead, which use temporary credentials. If they really want to leverage IAM users with long-term credentials, they should secure them and implement a periodic rotation.”
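The periodic rotation the researchers recommend boils down to tracking the age of each long-term access key. A minimal sketch of that check, assuming key metadata shaped like the `AccessKeyId`, `Status`, and `CreateDate` fields IAM returns for a user's keys (the 90-day window is an illustrative assumption, not from the report):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rotation window; many organizations pick 90 days, but this
# value is an assumption for illustration.
MAX_KEY_AGE = timedelta(days=90)

def keys_needing_rotation(keys, now=None):
    """Return IDs of active long-term keys older than MAX_KEY_AGE.

    `keys` mimics IAM access-key metadata: dicts with AccessKeyId,
    Status ("Active"/"Inactive"), and a timezone-aware CreateDate.
    """
    now = now or datetime.now(timezone.utc)
    return [
        k["AccessKeyId"]
        for k in keys
        if k["Status"] == "Active" and now - k["CreateDate"] > MAX_KEY_AGE
    ]
```

A key that shows up in this list should be rotated and, ideally, replaced with an IAM role issuing temporary credentials, as the researchers suggest.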
Moreover, the affected S3 buckets were named using common AI tool naming conventions, they noted. The attackers actively searched for these conventions during reconnaissance, enabling them to find the credentials quite easily, they said.
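That style of reconnaissance is easy to reproduce defensively: scan your own bucket inventory for names matching AI-tooling conventions before an attacker does. A sketch, with the caveat that the patterns below are illustrative guesses, the report does not enumerate the exact conventions the attackers searched for:

```python
import re

# Hypothetical naming conventions associated with AI tooling; these patterns
# are assumptions for illustration, not taken from the Sysdig report.
AI_NAME_PATTERNS = [
    r"\bbedrock\b",
    r"\bsagemaker\b",
    r"\bllm\b",
    r"model-artifacts",
    r"training-data",
]

def looks_like_ai_bucket(name):
    """True if a bucket name matches a common AI-tool naming convention."""
    lowered = name.lower()
    return any(re.search(pattern, lowered) for pattern in AI_NAME_PATTERNS)
```

Running a check like this against publicly listable buckets in your inventory highlights which ones deserve the first audit for stray credentials.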
AI Accelerates Attack
The attacker demonstrated use of AI and LLMs across various stages to both develop and accelerate the attack. At the same time, hijacking LLMs and using the victim’s cloud resources to develop models of their own also appeared to be among the attackers’ objectives.
The compromised credentials had only the ReadOnlyAccess policy attached to their user group, so the actor turned to Lambda function code injection, replacing the code of an existing Lambda function named “EC2-init” three times while iterating on its target, to eventually gain access to a user called “frick” that had admin privileges.
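That repeated code-swapping leaves a distinctive trail in CloudTrail. A minimal detector for the pattern, assuming events shaped like CloudTrail records (the 10-minute window and threshold of three are assumptions chosen to match the behavior described here):

```python
from datetime import datetime, timedelta, timezone

def rapid_code_swaps(events, function_name,
                     window=timedelta(minutes=10), threshold=3):
    """True if `function_name` had >= threshold UpdateFunctionCode calls
    within any `window`-long span.

    `events` mimics CloudTrail records: dicts with eventName, a
    timezone-aware eventTime, and requestParameters.functionName.
    """
    times = sorted(
        e["eventTime"]
        for e in events
        if e["eventName"].startswith("UpdateFunctionCode")
        and e["requestParameters"].get("functionName") == function_name
    )
    # Slide over the sorted timestamps looking for a dense burst of updates.
    for i in range(len(times) - threshold + 1):
        if times[i + threshold - 1] - times[i] <= window:
            return True
    return False
```

Three code replacements of one function inside eight minutes, as happened with “EC2-init,” would trip an alert like this.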
During this privilege-escalation part of the attack — which took a mere eight minutes — the actor wrote code in Serbian, suggesting their origin. Moreover, the use of comments, comprehensive exception handling, and the speed at which the script was written “strongly suggests LLM generation,” the researchers wrote.
The attacker also demonstrated LLM-assisted activity in its lateral movement, in which it made several attempts to assume multiple roles — including cross-account roles — by enumerating account IDs and attempting to assume OrganizationAccountAccessRole in all environments, regardless of whether the targets were member accounts.
“Curiously, they included account IDs that did not belong to the organization: two IDs with ascending and descending digits … and one ID that may belong to a real external account,” the researchers wrote. “This behavior is consistent with patterns often attributed to AI hallucinations.”
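Those ascending- and descending-digit account IDs are themselves a detectable artifact. A hypothetical filter for that hallucination signature, illustrative only and not from the report:

```python
def is_sequential_id(account_id):
    """True for 12-digit account IDs whose digits strictly ascend or
    descend by one (wrapping mod 10, e.g. ...8, 9, 0, 1...)."""
    if len(account_id) != 12 or not account_id.isdigit():
        return False
    # Collect the step between each adjacent pair of digits, mod 10.
    diffs = {(int(b) - int(a)) % 10 for a, b in zip(account_id, account_id[1:])}
    return diffs == {1} or diffs == {9}
```

Cross-account AssumeRole attempts against IDs that match a pattern like this, or that simply do not belong to the organization, are a strong signal of automated, possibly LLM-driven, enumeration.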
AI as Objective of the Attack
The threat actor also conducted LLMjacking during the attack by targeting the environment’s implementation of Bedrock. Attackers invoked a broad range of AI models, including multiple versions of Anthropic’s Claude, DeepSeek R1, Meta’s Llama 4 Scout, Amazon’s Nova and Titan models, and Cohere’s embedding service.
To access some Claude models, the attackers programmatically interacted with AWS Marketplace APIs, accepting usage agreements on the victim’s behalf. They also used cross-region inference profiles to distribute model invocations across different AWS regions, a technique that can improve performance while complicating detection, the researchers noted.
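The detection problem with cross-region inference is simple arithmetic: the same total invocation volume, split across regions, looks smaller in each region’s logs. A toy illustration of that spreading effect (the region list is an assumption for illustration):

```python
from itertools import cycle

# Hypothetical set of regions an inference profile might route across.
REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]

def spread_invocations(n, regions=REGIONS):
    """Assign n model invocations round-robin across regions and return
    the per-region counts, modeling how cross-region inference dilutes
    any single region's view of total usage."""
    counts = {r: 0 for r in regions}
    picker = cycle(regions)
    for _ in range(n):
        counts[next(picker)] += 1
    return counts
```

With 300 invocations spread over three regions, each regional log sees only about 100, which may sit below a per-region billing or anomaly threshold. Aggregating Bedrock usage across all regions before alerting closes that gap.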
After finishing with Bedrock, the actor pivoted to hijacking GPUs, likely for model training or resale. Hallucinated elements in the training script further reinforced the notion that the actor relied on LLM generation throughout, they added.
Detection and Mitigation Steps
Of course, the entire attack could have been avoided had the organization not made the “mundane error” of leaving valid credentials exposed in public S3 buckets, Jason Soroko, senior fellow at Sectigo, tells Dark Reading in an email. Avoiding such an error is one way to prevent such intrusions, he says.
“This failure represents a stubborn refusal to master security fundamentals,” Soroko says. “It is impossible to defend a cloud environment when the keys are left visible to anyone who bothers to look.”
Still, the use of AI during the attack is worrisome, experts say, and they expect such attacks to become more commonplace. Indeed, 2026 seems poised to be the year in which AI hits critical mass as both a threat enabler and an attack surface.
“AI … removes hesitation,” Shane Barney, chief information security officer (CISO) at Keeper Security, says. “Tasks that once took hours of manual trial and error are now executed continuously and decisively. Enumeration, privilege testing and lateral movement collapse into a single, rapid sequence, and defenders lose the buffer time they have historically relied on to detect and respond.”
That speed is where AI is fundamentally changing the threat landscape, and the AI hallucinations observed in the attack “will become rarer as offensive agents increase their accuracy and awareness of target environments,” according to the Sysdig researchers.
To defend against this accelerating threat landscape, the researchers said, organizations must prioritize runtime detection and least-privilege enforcement, among other mitigation efforts outlined in the report.
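Least-privilege enforcement can start with something as small as linting policy documents for wildcard grants. A minimal sketch, assuming policies in the standard IAM JSON shape:

```python
def overly_broad_statements(policy):
    """Return Allow statements that grant "*" or a service-wide
    "service:*" action, i.e., grants that violate least privilege.

    `policy` is a dict in standard IAM policy-document form.
    """
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]  # IAM allows a bare string or a list
        if any(a == "*" or a.endswith(":*") for a in actions):
            flagged.append(stmt)
    return flagged
```

Statements this check flags, such as a blanket `s3:*` grant, are exactly the kind of permission that let a read-only foothold escalate the way this attack did.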

