SecurityWeek’s Cyber Insights 2026 examines expert opinions on the expected evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts. Here we explore malware and malicious attacks in the age of artificial intelligence (AI).
The big takeaway from 2026 onward is the arrival and increasingly effective use of AI, especially agentic AI, which will revolutionize the attack landscape. The only question is how quickly.
Michael Freeman, head of threat intelligence at Armis, predicts, “By mid-2026, at least one major global enterprise will fall to a breach caused or significantly advanced by a fully autonomous agentic AI system.”
These systems, he continues, “use reinforcement learning and multi-agent coordination to autonomously plan, adapt, and execute an entire attack lifecycle: from reconnaissance and payload generation to lateral movement and exfiltration. They continuously adjust their approach based on real-time feedback. A single operator will now be able to simply point a swarm of agents at a target.”
The UK’s NCSC is slightly more reserved: “The development of fully automated, end-to-end advanced cyberattacks is unlikely [before] 2027. Skilled cyber actors will need to remain in the loop. But skilled cyber actors will almost certainly continue to experiment with automation of elements of the attack chain…”
Both opinions could be accurate. We don’t yet know how the adversarial use of AI will pan out over the next few years. What we do know is that attacks will increase in volume, speed and targeting, assisted by artificial intelligence.
Malware, malicious attacks and AI
Effects
Almost every segment of an attack chain can be automated by AI. One example is the speed with which attackers will reverse engineer a newly released patch, develop an exploit for the vulnerability, and discover which companies are exposed, almost certainly before the average company has even begun patching.
A second example could be the delivery of finely targeted attacks at the scale of traditional spray and pray attacks. “Malware is becoming far more targeted and personal. Attackers are moving away from mass ‘spray and pray’ tactics and are focusing on specific individuals, organizations, or systems,” says Mehran Farimani, CEO at RapidFort.
“By using data gathered from social media, breaches, and online behavior,” he continues, “they can craft attacks that look legitimate and exploit very specific vulnerabilities. Future malware will feel smarter and stealthier, adapting to defenses, learning from user habits, and blending into normal activity.”
“Forget ‘spray and pray’,” adds Shaun Cooney, CPTO at Promon, “this is more akin to mass targeting with a sniper rifle.”
James Wickett, CEO at DryRun Security, adds the low cost of using AI to the advance of precision targeting. “The economics have flipped,” he says. “The cost to go from vulnerability discovery to exploit used to be weeks and thousands of dollars. Now it’s near zero. So instead of mass ‘spray and pray’ campaigns, we’ll get micro-targeted attacks built for a single system, a single company, maybe even a single developer.”
A third example is the media’s headline threat from AI – the automation of the complete attack lifecycle from vulnerability detection, exploit production, to malware payload delivery and data exfiltration. Cory Michal, CSO of AppOmni, calls it the rise of ‘vibe-hacking’. “We’ve observed attackers using AI to automatically generate data extraction code, reconnaissance scripts, and even adversary-in-the-middle toolkits that adapt to defense. They’re essentially ‘vibe-hacking’ using generative AI to better mimic authentic behavior, refine social engineering lures, and accelerate the technical aspects of intrusion and exploitation.”
When these components can be chained together under the orchestration of agentic AI, we will be closer to the one-click fully automated attack.
“LLM-enabled malware has already moved from proof-of-concept to practice,” says Steve Stone, SVP of threat discovery & response at SentinelOne. “Our discovery of MalTerminal (the earliest known GPT4-powered malware capable of generating ransomware or reverse-shell code at runtime), along with ESET’s PromptLock sample and emerging campaigns like LameHug and PromptSteal, show how attackers are experimenting with AI to create polymorphic, self-evolving payloads.”
These tools blur the line between code and conversation, he continued, “allowing malicious logic to be generated dynamically and evade traditional signatures.”
AI agents can already prepare the individual stages, while agentic AI will be the glue that chains them behind a single click. We’re not there yet, but the potential exists, and that future will undoubtedly come.
Ransomware
Extortion will remain a primary purpose of malicious attacks simply because of its success. According to FinCEN, $2.1 billion was paid in ransoms during the three years 2022 to 2024. In 2023 the figure amounted to $1.1 billion (the all-time high) but subsided to $734 million in 2024.
Two years can hardly be considered a trend, but many commentators believe ransomware is slowly becoming less successful due to increased pressure against ransom payments and improved cyber defenses. Counterintuitively, if true, this ‘trend’ may be strengthened rather than reversed by the rise of AI.

Jason Baker, managing security consultant of threat intelligence at GuidePoint Security, explains. “AI-generated ransomware, or other malware used for extortion, presents a problem for the users – namely, they are unlikely to fully understand how it works, or how to troubleshoot or debug issues.”
Now imagine you’re an extortionist, he continues. “Your victim has paid, and your AI-generated decryption tool doesn’t work. How do you fix this? Do you have any incentive to fix it? And how long do people keep paying you ransoms once the word gets out that you can’t undo the damage you’ve done?”
The return of DDoS?
DDoS declined as ransomware flourished, but it may return as ransomware wanes. “Attackers are reverting to one of their oldest and most disruptive tools: the denial-of-service attack. In 2026, we’ll see a record-setting resurgence of DDoS activity: the largest volumetric attack ever recorded, and the highest requests-per-second rate in history,” warns David Holmes, application security CTO at Thales.
He notes that Imperva’s network is already seeing early signs: attacks that are 50% larger than anything we’ve seen before.
“For threat actors, the playbook is simple. If they can’t extort you with encryption, they’ll take you offline instead. Organizations that spent the past few years fortifying against ransomware will now have to look outward again, reinforcing cloud-based DDoS protection and adaptive mitigation to withstand the next wave. The attackers haven’t disappeared; they’ve just changed tactics, and in 2026, they’ll come roaring back.”
AI will play a major part in enabling and improving the efficiency of these DDoS attacks.
The no-malware alternative
The no-malware alternative isn’t completely malware-free; rather, the malware is limited to third-party infostealers.
“The defining shift in malware heading into 2026 is the consolidation of the entire attack chain around infostealers. They’ve become the entry point, the data broker, the reconnaissance layer, and the fuel for everything that comes after,” suggests the Flashpoint Analyst Team, noting that 1.8 billion credentials were stolen by infostealers in the first half of 2025.
The Team continues, “AI-generated malware will get headlines, but threat actors don’t need fully autonomous malware when infostealers already automate the hardest part: initial compromise at scale.” Those same stealers no longer just collect passwords – they also collect session cookies, access tokens, host metadata, browser profiles and more. The attacker can assume the victim’s identity outright.
Once inside the target network, a seasoned attacker can live off the land (LotL), remaining effectively invisible until data exfiltration, without deploying any further malware.
This scenario is supported by Adrian Culley, senior sales engineer at SafeBreach. “The preferred method of intrusion is shifting universally toward Identity-led, malware-free Intrusions,” he says. “The focus on LotL TTPs allows intrusions to blend into normal network activity.”
Infostealers can provide easy access, while LotL provides stealthy collection and exfiltration of data without requiring malware. Extortion may remain the priority motive, but “Think less ‘pay to decrypt’, and more ‘pay to stop leaks’,” suggests Yaz Bekkar, principal consulting architect XDR, at Barracuda Networks.
The new criminal ecosystem
Hacker levels
Only sophisticated organized crime groups and nation state actors will have the immediate technical skill to realize the full potential of artificial intelligence. But AI is removing the entry barrier for new and unskilled hackers. As a result, there will be three distinct classes of bad actor in the future: elite nation state, organized crime, and a rapidly expanding script kiddie level.
“The criminal ecosystem will change,” explains Bekkar. “With AI, you don’t need deep skills, you need ideas. As barriers to entry drop even further, more low-skilled actors will become more dangerous, faster. At the same time, the dominant gangs won’t disappear; instead, they’ll run ‘platforms’ and affiliate programs, renting out AI-driven kits.”
“The barrier to entry has collapsed, giving amateur attackers far more reach,” says Farimani. The short-term effect will be more efficient and more finely targeted attacks from the established cybercrime gangs and nation state actors, and a huge increase in less sophisticated attacks by the script kiddies.
The overall effect of the script kiddie wave is unclear. Baker suggests, “Lower knowledge barriers will increase the volume of attacks but not necessarily the sophistication. Well-defended organizations will still be able to filter out the majority of unsophisticated attacks.”
However, “While these individuals might not match nation-states in resources or intelligence-gathering, they will have unprecedented power to launch high-impact attacks. This democratization of capability means the overall threat volume and diversity will grow substantially,” warns Matt Gorham, leader of PwC’s cyber and risk innovation institute.
“Could script kiddies operate like a nation-state? Not in terms of capability, but with stealer logs delivering turnkey access, the damage they can cause starts to look uncomfortably similar,” adds the Flashpoint Analyst Team.
“Cyberattacks will be just as damaging as nation-state attacks next year,” says Dave Spencer, director of technical product management at Immersive. “Criminals don’t need to be sophisticated to cause harm. Look at Scattered Spider – teenagers calling help desks and resetting passwords. That’s not sophisticated.” But it has certainly been effective.
DryRun’s Wickett: “AI won’t make everyone a hacker overnight, but it will close the gap between the script kiddie and a new, bespoke APT.”
“As technology continues to democratize access to advanced capabilities,” continues Adam Darrah, VP of Intelligence at ZeroFox, “that gap will keep narrowing. The result is a much larger pool of actors, more noise, and more risk across the board.” The script kiddies will become better script kiddies.
The criminal underworld
One question remains: will the shakeup occurring in the active hacking world reshape the criminal underworld marketplace?
“The big money will move from stolen identities to stolen code and trade secrets – things AI systems can directly weaponize or learn from,” suggests Wickett. “Instead of selling raw malware, people will sell tailored toolchains: prebuilt reconnaissance scripts, AI-driven exploit builders, and access kits for specific industries. The next underground marketplace isn’t going to look like a ransomware-as-a-service forum. It’s going to look more like GitHub for bad actors.”
“Automation may disrupt middlemen but will also create new marketplaces for specialized AI malware, zero-day commoditization and tailored exfiltration services. As enterprise IP becomes more lucrative and easier to monetize, markets will likely shift toward high-value corporate IP and trade secrets alongside identity data,” agrees Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University.
Dario Perfettibile, VP and GM of European operations at Kiteworks, suggests that the underworld marketplace will follow the AI-driven shift toward precision targeting. “This transformation will weaken dark web markets for bulk stolen credentials while elevating demand for curated access to specific data exchange platforms. Rather than selling millions of compromised accounts, criminals will broker targeted access to exchanges handling valuable IP, proprietary algorithms, or competitive intelligence.”
GuidePoint Security’s Baker sees a similar relationship between underworld offerings and above-ground operations. Invoking his view that hackers will struggle to troubleshoot AI-generated malware, he suggests, “The need for reliable and fixable malware will likely remain, though its customer base may become more concentrated or limited. Malware-as-a-Service remains a profitable business model and may be perceived as less likely to attract law enforcement scrutiny than directly conducting intrusions.”
The demand for MaaS could even increase with the expected growth of script kiddie hackers who may not have the expertise to develop their own malware.
Also mirroring the hacker migration to AI, Barracuda Networks’ Bekkar suggests, “AI turns commodity malware into something that’s effectively free-of-charge. Brokers pushing basic kits or generic access will become less relevant as the value shifts to what is now truly scarce: high-quality initial access, verified corporate data, bespoke exploits, and, above all, stolen intellectual property.”
Charlie Eriksen, security researcher at Aikido Security, sees a downsizing. “Large data brokers are giving way to smaller groups trading specific types of stolen data or access. We’ve seen this pattern in several major supply-chain compromises that began with stolen publishing credentials. The market is shifting from trading stolen identities to trading stolen trust, and that is where much of the risk now lies.”
The Flashpoint Analyst Team agrees with the ‘trust’ element, but not necessarily any downsizing. “Rather than weakening traditional access brokers, infostealers are transforming them. Instead of selling RDP or VPN access manually, brokers now move bulk identity profiles enriched with metadata: device specs, geolocation, corporate domains, session tokens, and host fingerprints.”
Backed by the enormous and growing success of infostealers, “The marketplace is shifting from stolen credentials to full digital identities that allow high-confidence impersonation. Stolen IP, source code, and proprietary data are becoming more common in stealer logs because attackers are scraping developer tools, browser-stored secrets, and cloud app credentials directly from infected endpoints. Dark-web markets are starting to look more like identity-based supply chains.”
The underworld marketplace will inevitably follow above-ground hacker demand, but both are currently in a state of flux.
Cybersecurity defense in the age of AI attacks
Jim Salter, senior management consultant at CyXcel, points to a comment from the UK’s NCSC: “Cybercriminal attackers target vulnerabilities, not sectors, so every organization with digital assets is a potential target.”
He comments, “As reliance on digital infrastructure in companies of all sizes grows, the opportunity for cyber criminals to exploit vulnerabilities will also grow.” The incidence of potential vulnerabilities is also increasing through the rapid adoption of vibe coding.
Julie Davila, VP of product security at GitLab, expands: “Next year will bring a tidal wave of security risk as adversarial agents lower the barriers to execute increasingly complex attacks. In other words, agents make it much easier to exploit any vulnerability within a system. The exploitation ‘likelihood lever’ for every vulnerability has just gone up.”

She adds, “Organizations that have prioritized foundational security hygiene, including efficient patch management, will be better prepared to defend themselves and minimize existing risk across software environments and their software supply chain.”
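Efficient patch management in practice means ordering work by exploitation likelihood, not raw severity alone. As a toy sketch (the scoring weights and field names are my illustrative assumptions, not GitLab’s methodology), a triage queue might rank known-exploited and internet-facing flaws above a higher-CVSS but unexploited one:

```python
# Hypothetical vulnerability records; fields and weights are illustrative.
vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "exploited_in_wild": False, "internet_facing": False},
    {"cve": "CVE-B", "cvss": 7.5, "exploited_in_wild": True,  "internet_facing": True},
    {"cve": "CVE-C", "cvss": 8.1, "exploited_in_wild": True,  "internet_facing": False},
]

def priority(v: dict) -> float:
    # Known exploitation and exposure outweigh raw severity, reflecting the
    # "likelihood lever" point above: attackers automate what is reachable.
    return (v["exploited_in_wild"] * 10) + (v["internet_facing"] * 5) + v["cvss"]

queue = sorted(vulns, key=priority, reverse=True)
print([v["cve"] for v in queue])  # → ['CVE-B', 'CVE-C', 'CVE-A']
```

The point of the sketch is the ordering principle, not the specific numbers; real programs feed in threat intelligence such as known-exploited-vulnerability catalogs.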
This is the ‘eat your cyber veggies’ exhortation from companies such as Cisco and Splunk. Eating vegetables is boring but essential for health. Cyber veggies are the cyber hygiene basics: patching, phishing-resistant MFA, least privilege, segmentation, backups, etcetera.
Mick Baccio, global security advisor at Cisco Foundation AI, comments, “The building blocks of security, the cyber veggies, have been around for a long time; and if you don’t do them, bad things happen. They’re super applicable to things like AI and software development. There’s no silver bullet, of course, but it will solve a tremendous number of problems for things like account takeover, lateral movement, and the vulnerabilities that shouldn’t exist.”
If you want to survive the malicious side of AI, it is essential that you start with the cyber veggies. But since there really is no silver bullet, you still need to layer additional security on top.
“AI-enabled malware mutates its code, making traditional signature-based detection ineffective. Defenders need behavioral EDR that focuses on what malware does, not what it looks like,” says AppOmni’s Michal. “Detection should key in on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs like Gemini, Hugging Face, or OpenAI.”
He continues, “By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated.”
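As a toy illustration of the kind of correlation Michal describes (the event fields, process names, and domain list are assumptions for the sketch, not AppOmni’s detection logic), a behavioral rule might flag only those processes that both launch a scripting engine and reach out to a known AI API endpoint:

```python
# Hypothetical telemetry; field names are illustrative, not a vendor schema.
AI_API_DOMAINS = {"generativelanguage.googleapis.com",
                  "api.openai.com", "huggingface.co"}
SCRIPT_ENGINES = {"powershell.exe", "wscript.exe", "python.exe"}

def suspicious(events: list[dict]) -> set[int]:
    """Return PIDs that both started a script engine and queried an AI API."""
    spawned, called_out = set(), set()
    for e in events:
        if e["type"] == "process_start" and e["image"].lower() in SCRIPT_ENGINES:
            spawned.add(e["pid"])
        elif e["type"] == "dns_query" and e["domain"] in AI_API_DOMAINS:
            called_out.add(e["pid"])
    # Correlation is the key: either behavior alone is common and benign.
    return spawned & called_out

events = [
    {"type": "process_start", "pid": 1204, "image": "powershell.exe"},
    {"type": "dns_query",     "pid": 1204, "domain": "api.openai.com"},
    {"type": "dns_query",     "pid": 88,   "domain": "example.com"},
]
print(suspicious(events))  # → {1204}
```

Production systems correlate far richer signals (identity, SaaS, and host telemetry) and suppress legitimate AI tooling, but the intersection logic captures the behavioral idea.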
RapidFort’s Farimani stresses: “The focus of security teams must shift to minimizing exposure and reducing time-to-remediation, because the offensive side is already automated.”
In short, “In 2026, cyber resilience will depend on out-learning, not just out-blocking, the adversary,” explains Kirsty Paine, field CTO at Splunk and fellow at WEF.
She continues, “In 2026, we will see the rise of AI-enabled malware that can autonomously adapt in real time to evade detection. We’ve already seen hints of this from research proofs of concept like BlackMamba, but next year we can expect to see AI-enabled malware deployed in increasingly complicated attacks that learn, blend in, and modify their behavior based on environmental signals without a human operator ever touching the keyboard. This shift will reinforce the relevance of David Bianco’s ‘Pyramid of Pain’ where, as adversaries rely less on static artifacts at the bottom of the pyramid, defenders will have to move higher to focus on proactively disrupting attacker tools, behaviors, and TTPs.”
Final thoughts
From 2026 onward, organizations will need to double down on the importance of their cybersecurity. It’s not that artificial intelligence will invent new threats, but it will find and exploit vulnerabilities with greater stealth, considerably faster, and in greater volumes than we have seen before.
We will need to concentrate on the basics. We must eat our cyber veggies; and then we must overlay additional layers of security. We will need to use our own AI to detect and block the attackers’ use of AI; while simultaneously ensuring they cannot turn our systems against us by hijacking our agentic AI’s APIs, which we may not even know about.
It ain’t gonna be easy, but it’s gotta be done if we want to survive and thrive.
Related: The Wild West of Agentic AI – An Attack Surface CISOs Can’t Afford to Ignore
Related: Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025
Related: AI Emerges as the Hope—and Risk—for Overloaded SOCs
Related: 5 Critical Steps to Prepare for AI-Powered Malware in Your Connected Asset Ecosystem