SecurityWeek’s Cyber Insights 2026 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gather their insights. Here we explore offensive security: where it is today, and where it is going.
Cyber red teaming will change more in the next 24 months than it has in the past ten years.
Malicious attacks are increasing in frequency, sophistication and damage. Defenders need to find and harden system weaknesses before attackers can exploit them. That requires red teams to do more, faster.
Offensive security
“Offensive security is simply a branch of security that focuses on attacking systems to identify weakness in order to harden them/defend them better,” says Matt Mullins, head hacker at Reveal Security.
Eyal Benishti, CEO and founder at IRONSCALES, calls it ‘proactive defense’.

“Offensive security is about proactively simulating attacker behavior to prioritize attack surface strengthening. It includes, but extends beyond, traditional penetration testing into red teaming and bug bounty programs, providing continuous, intelligence-led validation of how attackers actually operate. It combines human ingenuity, automation, and adversarial simulation to expose weaknesses before they are exploited,” expands Julian Brownlow Davies, Senior VP of offensive security & strategy at Bugcrowd.
Pentesting and red teaming are the two primary components of offensive security. Their methods of operation overlap, but they serve two separate purposes. Pentesting seeks to find and exploit bugs or weaknesses. Red teaming seeks to test a system’s ability to withstand an actual attack.
“Traditional pentesters tend to offer snapshot views – great for compliance but limited in depth. Red teams operate more like real adversaries: persistent, stealthy, and scenario based. Organizations with higher security maturity are moving toward red team operations because they provide more meaningful insights into gaps across people, processes, and technology,” says Benishti.
Both functions are evolving and will further evolve during 2026 and beyond. “As the threat landscape evolves, so will offensive security – shifting from isolated exercises to continuous, integrated programs,” he continues. “The future is more preemptive: combining offensive insights with threat intelligence, AI, and automation to stay ahead of attackers instead of reacting to them.”
While the role of the independent pentester continues, it is increasingly merging into bug bounty hunting. “The model is shifting toward coordinated offensive operations run through managed or crowdsourced platforms. The crowd provides reach and diversity while the red team provides strategy and narrative realism,” explains Davies.
We will concentrate on red teaming since it is often – not always – performed in-house.
Some organizations employ external red team specialist agencies; others have their own in-house team. “It depends on the size, risk profile, and maturity of the organization. Enterprises with mature security programs are investing in in-house red teams for continuous coverage and institutional knowledge,” suggests Benishti.
That said, he adds, “external red teams still play a vital role – especially for unbiased assessments, specialized expertise, and to avoid internal blind spots. A hybrid model is emerging: in-house teams for ongoing ops, external partners for fresh perspectives.”
Pablo Zurro, senior product manager at Fortra, adds, “Both are valid and complement each other. An internal red team will be able to run more periodical exercises and test the weakest points of the company while external consultants will simulate external attackers better and will be able to leverage their experience and learned lessons in other customers, which is very useful at least once a year.”
Offensive security should also seek out staff most likely to be susceptible to social engineering. “It’s mandatory since humans are probably the weakest points of the defensive chain,” continues Zurro, adding, “It’s not mandatory to be aggressive and hurt people’s feelings. In most cases doing harmless phishing/vishing/smishing simulations is good enough.”
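Zurro’s point about harmless simulations can be illustrated with a minimal sketch. The code below is a hypothetical example, not any vendor’s tooling: it assigns each employee a unique tracking token (which, in a real exercise, would be embedded in a harmless landing-page URL) and computes the campaign’s click-through rate, the basic metric such simulations report.

```python
import random
import string

def make_tracking_token(length: int = 12) -> str:
    """Generate a unique token embedded in each simulated phishing link."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))

def build_campaign(employees):
    """Assign each employee a tracking token; in a real exercise the token
    would sit in a harmless landing-page URL, not a malicious one."""
    return {email: make_tracking_token() for email in employees}

def click_rate(campaign, clicked_tokens):
    """Fraction of recipients whose token was seen on the landing page."""
    if not campaign:
        return 0.0
    hits = len(set(campaign.values()) & set(clicked_tokens))
    return hits / len(campaign)

# Example: three employees receive the simulation, one clicks the link.
campaign = build_campaign(["a@example.com", "b@example.com", "c@example.com"])
rate = click_rate(campaign, [campaign["b@example.com"]])
print(f"click-through rate: {rate:.0%}")  # prints "click-through rate: 33%"
```

In practice the measured rate feeds back into targeted awareness training rather than into blaming individuals, in line with Magalhaes’s culture-first view.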
Goncalo Magalhaes, head of security at Immunefi, says, “Everyone is susceptible to social engineering. Offensive security isn’t about identifying ‘soft targets’ in the workforce; it’s about building a company-wide culture where everyone with access to corporate systems adopts a security mindset.”
With the growing sophistication and scale of AI-enhanced social engineering, this part of offensive security will become increasingly urgent and important.
The primary purpose of red teaming is to discover how well the system can withstand attacks. This means red teams need real-time visibility across the entire ecosystem, which means every asset, pathway, and third-party connection that supports mission systems. “That includes not only hardware and endpoints but also applications, workloads, and APIs that often serve as silent backdoors into critical systems,” says Christian Terlecki, director of federal at Armis.
But the speed and scale of AI-assisted malicious attacks mean that future red teaming must become automated and continuous rather than periodic.
Another current evolution is into fixing rather than simply finding weaknesses. “Rarely do red teams ‘own’ remediation,” says Mullins.
“Traditionally, offensive [red] teams identify issues; defensive [blue] teams fix them. But that wall is crumbling,” suggests Benishti. “More organizations now expect red teams to collaborate with blue teams to prioritize fixes, retest patches, and guide remediation. While offensive security won’t fully ‘own’ the fix, it increasingly plays a hand in making sure issues are resolved – not just reported.”
But collaboration on its own doesn’t solve the traditional problem: red teams can generate huge vulnerability lists that overwhelm engineering teams. “Finding vulnerabilities is table stakes. Fixing them automatically – that’s the future of red teaming,” suggests Alex Polyakov, co-founder and CTO at Adversa AI.
“AI is beginning to bridge the gap between identifying and fixing issues. What used to be separate steps can now happen in the same workflow. AI systems can find vulnerabilities, suggest safe fixes, and validate them,” agrees Wout Debaenst, AI pentest lead at Aikido Security.
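The find / suggest / validate workflow Debaenst describes can be sketched in miniature. The example below is illustrative only: a simple pattern rule stands in for the AI model, flagging use of a weak hash function, proposing a replacement, and re-scanning the patched code to confirm the issue is gone – the same three steps folded into one workflow.

```python
import re

# Illustrative rule: flag use of the weak MD5 hash and suggest SHA-256.
WEAK_HASH = re.compile(r"hashlib\.md5\(")

def find_issues(source: str):
    """'Find' step: return line numbers where the weak pattern appears."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if WEAK_HASH.search(line)]

def suggest_fix(source: str) -> str:
    """'Fix' step: propose a patched version of the source."""
    return source.replace("hashlib.md5(", "hashlib.sha256(")

def validate(source: str) -> bool:
    """'Validate' step: re-scan the patched source to confirm the fix."""
    return not find_issues(source)

code = "import hashlib\ndigest = hashlib.md5(data).hexdigest()\n"
patched = suggest_fix(code)
print(find_issues(code), validate(patched))  # prints "[2] True"
```

A real AI system would reason about context rather than apply a fixed rewrite, but the loop – detect, patch, re-test – is the workflow being described.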
The role of AI in the future of offensive security
Offensive security suffers from the same conundrum afflicting most areas of cybersecurity: there is a growing need for more output at a faster pace, while firms battle an ongoing and worsening skills shortage and tighter budgets with which to employ the few experts available.
Artificial intelligence is the goose expected to provide the golden solution: more, faster, better, 24/7 automation – with fewer humans required.
Would that life were that simple!
Advantages of AI

Jason Soroko, senior fellow at Sectigo, sees four primary advantages offered by AI. First, “AI provides speed and efficiency by processing and analyzing large datasets much faster than humans, quickly identifying potential vulnerabilities.” Second, “It enhances advanced threat detection, as machine learning models can recognize complex patterns and novel attack vectors that traditional methods might miss.”
Third, “AI systems enable continuous monitoring by operating 24/7, providing constant vigilance against emerging threats.” And fourth, he adds, “Resource optimization is achieved by automating routine tasks, allowing human experts to focus on more complex issues that require human intuition and expertise.”
Few people see AI replacing red teams in the short term – but most accept they will assist the red teams. “We’ll see agentic AI applications running red team engagements, but the more sophisticated and novel attacks will probably come from well-funded AI assisted teams that will (mostly) always be capable of beating the machines,” says Zurro.
“I don’t see a replacement in the mid-term, but more a human/machine symbiosis that will raise the bar to a higher level,” he adds.
Polyakov is all in. “AI is exceptionally good at this work. Red teaming requires creativity, pattern-breaking thinking, and the ability to try thousands of unconventional attack paths. Humans get tired. AI doesn’t. Humans think linearly. AI explores in parallel.”
He adds, “Ironically, the same ‘hallucination’ that creates problems in normal LLM usage becomes a feature in offensive security – it fuels novel attack ideas and unexpected exploit chains when harnessed correctly by experts. In red teaming, AI’s hallucinations aren’t bugs – they’re superpowers.”
Concerns
“We still need human experts to conduct complex and sophisticated operations, as gen-AI is quite stupid at these tasks, and will probably remain the same in the near future,” warns Ilia Kolochenko, CEO at ImmuniWeb, and partner in cybersecurity at Platt Law LLP. “While some vendors pompously advertise ‘automated penetration testing’ or claim that their AI has replaced human experts, it is technically inaccurate and incorrect, to put it mildly.”
He also raises regulatory concerns. “In law, the notion of a penetration test remains pretty stable: involvement of independent and qualified human experts.” He warns that providing regulators with a report generated by an AI tool could lead to penalties.
“One of the main concerns is the potential for AI systems to generate false positives or miss certain vulnerabilities that require human intuition and contextual understanding,” says Amit Zimerman, co-founder and CPO at Oasis Security. “Additionally, AI systems must be properly trained, which can be resource-intensive, and may not always account for the nuances of every unique environment or attack vector.”
Ironically, better trained red teaming AI also becomes a potential threat if bad actors get hold of the AI. “This is particularly critical in cybersecurity, where tools intended to protect could be repurposed for malicious attacks. It’s crucial that organizations adopt strict governance and ethical guidelines when deploying AI in these contexts,” he warns.
Soroko adds the dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.”
The use of agentic AI, designed to enhance the performance of the red team, will increase. But agentic AI also introduces a new attack surface that can be exploited by attackers.
For pentesting
AI promises a quick boost to the pentesting side of offensive security. It has the potential to find vulnerabilities in code without needing to understand the business context around the code. It also has the potential – in the future, we’re not there yet – to fix those vulnerabilities. But this means it is equally valuable to any attacker able to see the code.
“However, gen-AI still lacks the contextual reasoning required to uncover unknown vulnerabilities or design bespoke attack paths. As a result, human pentesters will continue to be irreplaceable in the year ahead,” comments Simon Phillips, CTO of engineering at CybaVerse.
AI is also being used in-house to generate new code through vibe-coding. “This new era of building software through AI is taking off today, but it’s also a major security concern as a lot of the code is being created poorly by novice prompt engineers,” he continues.
The growing requirement for rapid checks on in-house code before it reaches production may fold into the red team’s continuous function in the coming years, leaving external pentesting to bug hunters and to periodic engagements that satisfy compliance requirements.
Meantime, “AI-driven SAST tools will redefine code security, detecting logic and architectural flaws that traditional scanners overlook. These tools are rapidly becoming indispensable for pentesters and DevSecOps teams, automating code review and vulnerability discovery,” comments Gianpietro Cutolo, staff threat research engineer at Netskope.
But he adds, “The offensive potential is equally significant, demonstrated by the fact that an AI agent now holds the top rank on HackerOne in the US, signaling a future where both defenders and attackers leverage the same intelligent tooling to outpace each other.”
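What distinguishes the SAST approach Cutolo describes from simple text matching can be shown with a minimal, hypothetical checker. The sketch below walks Python’s syntax tree looking for calls to a small illustrative list of dangerous sinks; real AI-driven tools reason far beyond this, but the structural approach – analyzing code, not strings – is the same.

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # illustrative sink list, not exhaustive

def scan(source: str):
    """Walk the syntax tree and report calls to known-dangerous functions.
    Working on the AST (rather than raw text) is what lets static analyzers
    ignore comments and strings that merely mention 'eval'."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = ("user_input = input()\n"
          "# eval is mentioned here but not called\n"
          "result = eval(user_input)\n")
print(scan(sample))  # prints "[(3, 'eval')]"
```

The logic and architectural flaws Cutolo mentions require understanding how such calls chain across a codebase, which is where AI-assisted analysis extends this basic pattern.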
Aikido’s Debaenst points to research: “Ninety-seven percent of organizations plan to adopt AI for pentesting, and nine out of ten believe it will eventually take over most of the field,” he says. “The shift is already underway.”
The future of AI and red teaming
“In 2026, AI will play a supporting role, helping red teams work faster and cover more ground. However, it won’t replace human researchers. Instead, we’ll see red teamers using AI like a force multiplier that automates the basics so they can focus on advanced tactics and deeper testing,” says Emmanouil Gavriil, VP of labs at Hack The Box.
At the same time, he adds, “Red teamers in 2026 will need to be more adaptable than ever. Traditional exploitation skills are no longer enough. The attack surface now includes cloud systems, IoT devices, and AI-powered tools, each requiring different skills. The job is no longer about mastering one domain, but learning to navigate many, and doing it continuously.”
Subho Halder, co-founder and CEO at Appknox, says, “By 2026, AI will automate many aspects of offensive security testing, running simulations, probing for vulnerabilities, and flagging potential risks at unprecedented speed. Single-agent AI systems, capable of reasoning, learning, and self-correcting, will execute sophisticated, repeatable tests across large codebases and environments.”
Immunefi’s Magalhaes summarizes the way forward. “AI is emerging as an incredibly powerful tool, both for automating tasks and amplifying what small teams can accomplish. In security, that means fewer people may be needed to deliver certain services. On the offensive side, we’re starting to see early signs of AI agents that move faster than human researchers and draw from broader knowledge bases.”
So, yes, he continues, “AI agents will transform offensive security and threat hunting; automation is a game-changer, but only if it’s used in conjunction with humans. The ideal usage is for agentic systems to handle continuous automated testing while humans provide strategic oversight and catch the blind spots that even advanced AI misses.”
The future for offensive security
Much of red teaming is being streamlined, made necessary by the growth and speed of attacks and by the size and complexity of the estate that must be defended.
“The offensive security landscape is about to change more in the next 24 months than in the last 10 years. In 2026, we’ll see the first real convergence: automated offensive testing that understands context, state, and business logic, not just endpoints. Think DAST that behaves like a creative attacker – chaining vulnerabilities, exploiting misconfigurations, and validating impact the way a human red-teamer would,” says Alankrit Chona, CTO and co-founder at Simbian.
“Offensive and defensive security will begin to merge, creating an ecosystem where AI-driven tools probe systems continuously, uncovering weaknesses and hardening them in the same cycle,” suggests Travis Volk, VP global technology solutions and GTM Carrier at Radware.
“The boundary between red teaming, penetration testing, and continuous assurance will blur. The next phase is pre-emptive security, a permanent state of validation,” says Bugcrowd’s Julian Brownlow Davies.
“Red, blue and policy teams working in isolation is no longer tenable; the gaps between them create blind spots that attackers readily exploit,” adds Merlin Gillespie, operations director at Cybanetix. “The idea that red teaming, blue teaming and policy writing can live in their own discrete ivory towers is proving painfully outdated.”
Much of the future for red teaming will depend upon how AI continues to evolve. It holds huge promise but still suffers from issues. The biggest advantages will come from the use of agentic AI – but there’s a conflict of priorities here. A primary feature of agentic AI is the ability to operate autonomously without human intervention.

In most cases of agentic use, the final but logical step of independent autonomous remediation is blocked. People are not ready to relinquish ultimate control. But will this last forever? AI has largely given attackers the advantage. They move faster because a mistake is not damaging. Defenders move more slowly because a mistake could be catastrophic to the business.
“There’s still an imbalance as attackers operate with fewer constraints while defenders are tangled in data silos and compliance overheads,” comments Michael Adjei, director of systems engineering at Illumio.
So, with threats increasing faster than defenders can react and remediate, will there come a time when business is forced to adopt agentic AI autonomous remediation from within a single automated red/blue team? That is, after all, the Shangri-La of AI cybersecurity – a completely self-healing system.
It is ironic that while AI is able to see and analyze what is happening in the present, we remain completely in the dark over where future AI may be taking us.
Related: Zero to Hero – A “Measured” Approach to Building a World-Class Offensive Security Program
Related: FireCompass Raises $20 Million for Offensive Security Platform
Related: Red Teaming AI: The Build Vs Buy Debate
Related: How Do You Know If You’re Ready for a Red Team Partnership?

