OPINION
Your perimeter is hardened, your SOC is on high alert for zero-days, and your firewalls are pristine. But while you’re watching the fences, the adversary is walking through the front door with a smile and a valid employee ID.
In the modern threat landscape, attackers aren’t always “breaking in” — they’re simply logging in. Nearly one in three cyber intrusions now involves valid employee credentials, making this a leading attack vector. Armed with stolen credentials and supercharged by AI, threat actors are operating as trusted colleagues, turning the very identity of your workforce into your greatest vulnerability.
Credential theft isn’t new. What’s changed is the scale and the degree to which AI has made these attacks faster, cheaper, and easier to execute. Phishing campaigns that once required real technical skills can now be generated at volume in minutes. Stolen credentials can be tested and deployed across platforms automatically. The result is a threat that’s hard to detect because it doesn’t look malicious, and thanks to AI, it’s accelerating.
Security teams often underestimate how professional the credential-theft ecosystem has become. Threat actors have built business models around finding and validating stolen credentials, then selling that access to others. Buyers aren’t just financially motivated cybercriminals anymore. They include nation-state actors buying and using credentials from Dark Web forums to launch intrusion campaigns that look like standard cybercrime to evade attribution.
This professionalization is what makes the supply chain such a dangerous target. In a landscape of interrelated dependencies, a single set of credentials can act as a master key. Attackers understand this “network effect” perfectly. They are collaborating, sharing scripts, and selling access to one another to maximize their profit with the lowest possible risk.
Meanwhile, defenders are falling short because we aren’t sharing information with that same level of transparency. While attackers operate like a professional enterprise, security teams are often siloed by proprietary vendor frameworks and a lingering culture of victim-blaming. This lack of communication makes it easier for attackers to carry out supply chain attacks. Attackers are collaborating to get in, while we are too isolated to notice the patterns.
AI has changed the economics of credential theft by stripping away barriers to entry that used to keep less-sophisticated actors at bay. In the past, launching a credential-based attack at scale required real technical skill; you had to write custom scripts to validate logins, move through a network without being caught, and blend your activity into normal traffic patterns to avoid detection.
Now, that technical hurdle is gone — not just for getting in, but for staying in. AI tools allow an attacker to take a file of stolen credentials and automate their deployment across platforms instantly. Once inside, AI-assisted tooling can generate convincing behavioral patterns, mimic normal user activity, and help attackers navigate a network in ways that look indistinguishable from legitimate operations — tasks that once demanded advanced tradecraft and custom tools. Whether they are performing a mass “spraying” attack or a targeted intrusion, they can now do it at a velocity that traditional defenses weren’t built to stop.
According to research, the volume of information-stealing malware — the primary way these credentials are stolen in the first place — has surged 84% over the last year. With more credentials being stolen and AI making them easier to weaponize, the “blind spot” for security teams is only getting wider.
Shifting the Detection Model
Closing that gap requires a fundamental shift in the detection model itself. If an attacker is authenticated using real credentials and operating during business hours, traditional alarms often stay silent. To regain the advantage, practitioners should prioritize these measures:
Move identity monitoring upstream: Dark Web and underground forum monitoring needs to be integrated into active response workflows — not monthly reports. The moment a match surfaces externally, it should trigger automated credential rotation and mandatory multifactor authentication (MFA) long before that credential is used against your production environment.
Implement “phish-resistant” MFA: Traditional SMS or push-based MFA can no longer stop modern adversary-in-the-middle attacks. Move toward FIDO2-compliant hardware keys or certificate-based authentication. If the “something you have” can be intercepted by a proxy, it’s not a secure second factor anymore.
Treat authentication as a continuous process: Move away from the “binary” login where a user is trusted indefinitely after one successful MFA prompt. Adopt Continuous Adaptive Trust models that re-evaluate risk in real time based on behavioral signals, such as sudden changes in typing cadence, unusual file access, or “impossible travel” logins from different locations.
Harden the help desk against AI social engineering: AI-generated voice cloning is making the “forgot my password” call a massive vulnerability. Establish out-of-band verification processes for help desk tickets, such as requiring a video call with a known supervisor or a physical token, to ensure the person requesting a credential reset isn’t an AI-powered imposter.
Audit for “identity sprawl”: Inventory third-party integrations and service accounts, which often rely on static credentials that bypass MFA and are rarely rotated. Enforce the principle of least privilege and ensure every service account has a defined expiration date and a designated human owner.
Elevate credential compromise as a priority signal: When a compromised credential surfaces, the response should be immediate and holistic. This means not just changing one password, but conducting a lookback to answer: “What did this identity access in the 48 hours prior to the alert?” Security teams must treat a “valid login” alert with the same urgency as a malware detection.
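To make one of the behavioral signals above concrete, here is a minimal sketch of an “impossible travel” check: flag any pair of logins for the same user whose implied travel speed exceeds a plausible maximum. The event structure, function names, and the 900 km/h threshold are illustrative assumptions, not a reference to any specific product’s detection logic.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt


@dataclass
class LoginEvent:
    user: str
    timestamp: datetime
    lat: float
    lon: float


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))


# Assumed policy threshold: roughly commercial-flight speed. Tune per environment.
MAX_PLAUSIBLE_SPEED_KMH = 900


def is_impossible_travel(prev: LoginEvent, cur: LoginEvent) -> bool:
    """Return True when two logins imply travel faster than the threshold."""
    hours = (cur.timestamp - prev.timestamp).total_seconds() / 3600
    if hours <= 0:
        # Simultaneous logins from two locations are always suspicious.
        return True
    return haversine_km(prev.lat, prev.lon, cur.lat, cur.lon) / hours > MAX_PLAUSIBLE_SPEED_KMH
```

In a real Continuous Adaptive Trust pipeline, a hit like this would raise the session’s risk score and force step-up authentication rather than immediately terminating the session.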
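The “identity sprawl” audit above can also be expressed as a simple policy check over a service-account inventory. The account fields, the 90-day rotation window, and the finding messages below are assumptions for illustration; real inventories would be pulled from your identity provider or secrets manager.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Assumed rotation policy; adjust to your own standard.
ROTATION_WINDOW_DAYS = 90


@dataclass
class ServiceAccount:
    name: str
    owner: Optional[str]       # designated human owner, if any
    expires: Optional[date]    # credential expiration date, if set
    last_rotated: date


def audit_findings(accounts: list, today: date) -> list:
    """Return policy violations: ownerless accounts, non-expiring or
    expired credentials, and stale rotations."""
    findings = []
    for acct in accounts:
        if acct.owner is None:
            findings.append(f"{acct.name}: no designated human owner")
        if acct.expires is None:
            findings.append(f"{acct.name}: credential never expires")
        elif acct.expires < today:
            findings.append(f"{acct.name}: credential expired {acct.expires}")
        if (today - acct.last_rotated).days > ROTATION_WINDOW_DAYS:
            findings.append(f"{acct.name}: not rotated in over {ROTATION_WINDOW_DAYS} days")
    return findings
```

Running a check like this on a schedule turns least-privilege and expiration requirements from a written policy into a continuously enforced one.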
The increasing shift to credential-based attacks is a calculated move toward the path of least resistance: low risk, highly automated, and devastatingly effective at bypassing even the most hardened perimeters. If we fail to evolve our verification models, we are essentially leaving the keys in the ignition. We must stop viewing identity as a static gate and start treating it as a continuous, high-priority signal, or we will continue to ignore the warning signs until the high cost of a breach makes them impossible to miss.

