Close Menu
    Facebook X (Twitter) Instagram
    Wifi PortalWifi Portal
    • Blogging
    • SEO & Digital Marketing
    • WiFi / Internet & Networking
    • Cybersecurity
    • Tech Tools & Mobile / Apps
    • Privacy & Online Earning
    Facebook X (Twitter) Instagram
    Wifi PortalWifi Portal
    Home»Cybersecurity»Bad Memories Remain a Threat to Agentic AI Systems
    Cybersecurity

    Bad Memories Remain a Threat to Agentic AI Systems

    adminBy adminApril 23, 2026No Comments5 Mins Read
    Facebook Twitter LinkedIn Telegram Pinterest Tumblr Reddit WhatsApp Email
    Bad Memories Remain a Threat to Agentic AI Systems
    Share
    Facebook Twitter LinkedIn Pinterest Email

    Memory files can help artificial intelligence perform better, but researchers have found they are also a persistent trouble spot. 

    AI memory files and context data help personalize requests and provide additional information that large-language and other foundational AI models can use to deliver the best responses. But a persistent issue is proving to be a fundamental weakness in the security of AI systems.

    In March, Cisco researchers discovered they could compromise the memory files of Anthropic’s Claude Code and maintain persistence, effectively infecting every project and session of the AI coding assistant. Using the technique, the researchers were able to introduce hard-coded secrets into production code, cause Claude Code to select insecure packages and configuration options, and push those changes to another development team member, according to a published post on the research.

    While Anthropic has since mitigated the issue, AI memory files represent a weak point in the security of the systems that need to be better protected, says Amy Chang, head of AI threat intelligence and security research for Cisco’s AI Software & Platform group. Because memory and context data are incorporated into future requests, they can be used to corrupt the output of AI systems and applications.

    Related:New Raptor Framework Uses Agentic Workflows to Create Patches

    “You get the convenience of not having to reload the same files and dependencies and directories, but at the same time, the trade-off is you could potentially be opening yourself up to potential risk,” she says.

    AI memory files and context data have become a focus for attackers looking to compromise AI applications and gain persistence, as they hold the state of a particular user session and, in the long term, the user’s overall preferences. Researchers at Princeton University and Sentient AI found that attackers can insert fake memories into the data used by AI, manipulating its responses and decisions, while Radware threat researchers demonstrated ways to use indirection prompt injection (IPI) to compromise the connectors used by OpenAI’s ChatGPT to link to third-party services. And Cisco found in a previous report that external data sources known as Model Context Protocol (MCP) servers already pose significant risks to AI applications.

    A Claude Code screenshot showing poisoned memory

    Cisco created a poisoned memory file that tells users it’s poisoned. Source: Cisco

    The latest Cisco research also highlights a major problem with securing AI systems. Cybersecurity professionals view any executable file as a potential danger, and code frequently creeps into non-executable files, such as Excel macros and Python opcodes in Pickle files (used to handle weights for machine learning). Now, any text file could contain information that, when included in a memory file, can cause malicious behavior, says Chang.

    Related:An 18-Year-Old Codebase Left Smart Buildings Wide Open

    “Even your markdown files can be vectors,” she says. As a result, cybersecurity professionals need to be aware of text files and their ability to modify the execution of AI systems.

    Privilege and Prompt Injection

    Cisco’s latest attack focused on using the post-install hooks in the Node Package Manager (NPM) as a vector to modify Claude Code’s memory.md file. Because the first 200 lines of the memory.md file were included in Claude Code’s system prompt, the attack persisted across sessions. Other dependency files — such as claude.md (Anthropic’s Claude), agents.md (OpenAI’s Codex), and soul.md (OpenClaw) — are also risks that users of agentic AI will have to analyze and maintain, Cisco’s Chang says.

    “I think [it] illuminates the environment that we’re in, where — depending on your specific setup and everyone sets their environment up differently — there are probably a lot of different other vectors that are overlooked, overseen, and just blanket accepted,” she says.

    Foundational models are essentially stateless systems, because each call is independent and the weights do not change. To incorporate state, any information must be present in the context window, either as part of the prompt, from some data source — such as vector databases — or through additional tuning using a variety of technologies, such as low-rank adapters (LoRA) and retrieval-augmented generation (RAG).

    Related:Google Fixes Critical RCE Flaw in AI-Based ‘Antigravity’ Tool

    While poisoned Node Package Manager (NPM) components are a popular way to attack LLMs, there are many other vectors, says Jay Chen, senior principal security researcher with cybersecurity firm Palo Alto Networks, which published research on memory manipulation last October.

    “The root cause is prompt injection, which remains an open and unsolved problem,” he says. “Any AI agents or GenAI applications that rely on an LLM to manage memory can be susceptible to memory poisoning.”

    Long-Term Memories Always Bad?

    Retaining memory files for a long time may itself be a security weakness. While malicious additions to memory files are hard to detect, various AI security vendors, including Cisco, Palo Alto Networks, Snyk, Meta, and SentinelOne, have developed tools to scan memory files for malicious modifications and to block attacks on AI systems. Because the files will continue to be targeted by attackers, adopting these tools is key to defense, says Cisco’s Chang.

    “Having additional layers of protection on top of [the memory processing] … would probably improve security,” she says. “We released a lot of open-source scanners that can scan those dependency files that would surface something like this, where you have a poison memory file.”

    Companies may also want to regularly delete memory files, especially if there are concerns over whether they have been maliciously modified, says Palo Alto Networks’ Chen.

    “The duration of exposure depends on memory update frequency and retention policies, so it is difficult to determine how long malicious instructions may persist,” he says. “If there is any doubt, the safest approach is to purge the memory.”

    agentic Bad Memories remain Systems threat
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Telegram Email
    Previous ArticleGoogle May Expand Unsupported Robots.txt Rules List
    Next Article Chrome Beta 148.0.7778.60 APK Download by Google LLC
    admin
    • Website

    Related Posts

    Pre-Stuxnet Sabotage Malware ‘Fast16’ Linked to US-Iran Cyber Tensions

    April 24, 2026

    AI Phishing Is No. 1 With a Bullet for Cyberattackers

    April 24, 2026

    Continuous Observability as the Decision Engine

    April 24, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Search Blog
    About
    About

    At WifiPortal.tech, we share simple, easy-to-follow guides on cybersecurity, online privacy, and digital opportunities. Our goal is to help everyday users browse safely, protect personal data, and explore smart ways to earn online. Whether you’re new to the digital world or looking to strengthen your online knowledge, our content is here to keep you informed and secure.

    Trending Blogs

    This show is six episodes of the most unsettling crime drama on Netflix and nobody is talking about it

    April 24, 2026

    Pre-Stuxnet Sabotage Malware ‘Fast16’ Linked to US-Iran Cyber Tensions

    April 24, 2026

    Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In

    April 24, 2026

    Opera: Private Web Browser 97.3.5038.88255 APK Download by Opera

    April 24, 2026
    Categories
    • Blogging (68)
    • Cybersecurity (1,488)
    • Privacy & Online Earning (181)
    • SEO & Digital Marketing (913)
    • Tech Tools & Mobile / Apps (1,774)
    • WiFi / Internet & Networking (243)

    Subscribe to Updates

    Stay updated with the latest tips on cybersecurity, online privacy, and digital opportunities straight to your inbox.

    WifiPortal.tech is a blogging platform focused on cybersecurity, online privacy, and digital opportunities. We share easy-to-follow guides, tips, and resources to help you stay safe online and explore new ways of working in the digital world.

    Our Picks

    This show is six episodes of the most unsettling crime drama on Netflix and nobody is talking about it

    April 24, 2026

    Pre-Stuxnet Sabotage Malware ‘Fast16’ Linked to US-Iran Cyber Tensions

    April 24, 2026

    Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In

    April 24, 2026
    Most Popular
    • This show is six episodes of the most unsettling crime drama on Netflix and nobody is talking about it
    • Pre-Stuxnet Sabotage Malware ‘Fast16’ Linked to US-Iran Cyber Tensions
    • Robots.txt Docs Expand, Deep Links Get Rules, EU Steps In
    • Opera: Private Web Browser 97.3.5038.88255 APK Download by Opera
    • AI Phishing Is No. 1 With a Bullet for Cyberattackers
    • One bankruptcy just wiped out a popular Google TV lineup in Europe
    • Continuous Observability as the Decision Engine
    • Google spam reports with personally identifying information won’t be used and processed
    © 2026 WifiPortal.tech. Designed by WifiPortal.tech.
    • Home
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms and Conditions
    • Disclaimer

    Type above and press Enter to search. Press Esc to cancel.