    Silent Drift: How LLMs Are Quietly Breaking Organizational Access Control

By admin · March 30, 2026 · 5 Mins Read

    Business efficiency demands maximum use of AI assistance, but where policy as code is concerned, AI can introduce serious policy flaws.

The shift to policy as code for organizational security, compliance, and operational rules is being accompanied by the increased use of large language models (LLMs) to help produce the raw code. This makes sense. A primary purpose of AI within business is to improve human efficiency, and writing policy in languages like Rego or Cedar is not easy, so AI is increasingly used to streamline the process.

But there is a problem. These generated policies often look correct, compile successfully, and still grant the wrong access. This shouldn’t be a complete surprise. AI-generated applications are already known to be capable of introducing security issues by choosing the simplest solution over the most secure solution. However, security issues in an organizational policy that is designed to prevent security issues are especially problematic.

Vatsal Gupta, an independent researcher and senior security engineer at Apple, has been examining these issues and discussed them with SecurityWeek. “LLMs are being introduced into engineering workflows. Developers are using them to generate infrastructure code, security rules, and now even access control policies,” he says.

    The appeal is obvious. “Instead of writing policy logic manually, teams can describe intent in plain language and let the model generate the enforcement logic.”

    But it doesn’t always work that way. “LLM-generated policies are often syntactically valid but semantically incorrect,” continues Vatsal. “One missing condition, a misinterpreted attribute, or an incorrect action can completely redefine who gets access to what.”


    These are not obvious failures. They don’t break builds or trigger alerts. But they quietly expand access boundaries. And Vatsal’s research has found various recurring failure patterns.

A common issue, he tells us, is missing contextual constraints. “A policy that is supposed to limit access based on region, department, or ownership may omit that condition entirely. The generated policy still looks clean and valid, but it now applies globally instead of within the intended scope.”
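The pattern is easy to see in miniature. The sketch below uses plain Python rather than Rego or Cedar, purely for illustration; the policy functions, roles, and attribute names are hypothetical, not taken from any real system.

```python
# Hypothetical sketch of a "missing contextual constraint" flaw.
# Real policies would be written in Rego or Cedar; plain Python is
# used here only to make the logic visible.

def intended_policy(user, resource):
    # Intent: engineers may read documents, but only in their own region.
    return (
        user["role"] == "engineer"
        and resource["type"] == "document"
        and user["region"] == resource["region"]  # the contextual constraint
    )

def generated_policy(user, resource):
    # A plausible LLM rendering of the same intent that silently drops
    # the region check: still valid code, but it now applies globally.
    return user["role"] == "engineer" and resource["type"] == "document"

alice = {"role": "engineer", "region": "eu"}
us_doc = {"type": "document", "region": "us"}

print(intended_policy(alice, us_doc))   # False: out-of-region access denied
print(generated_policy(alice, us_doc))  # True: scope silently widened
```

Both functions compile and both look reasonable in review; only the comparison against intent reveals that the second one is global.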

    A second, he continues, is missing deny logic. “Many access control policies rely on a baseline deny posture with specific exceptions. LLMs often capture the exception but fail to encode the underlying restriction. The result is a policy that allows more than intended, even though it appears to implement the requirement.”
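The same gap can be sketched for deny logic, again in plain Python with hypothetical roles and resources rather than a real policy language.

```python
# Hypothetical sketch of "missing deny logic". The requirement: deny
# everyone by default, except that auditors may read logs.

def default_deny_policy(user, action, resource):
    # Correct: the baseline is deny; the exception sits on top of it.
    if user["role"] == "auditor" and action == "read" and resource == "logs":
        return True
    return False  # everything not explicitly allowed is denied

def generated_policy(user, action, resource):
    # A version that captures only the exception ("auditors are allowed")
    # without the underlying restriction: the action and resource checks
    # are gone, so auditors can now do anything at all.
    return user["role"] == "auditor"

auditor = {"role": "auditor"}
print(default_deny_policy(auditor, "delete", "database"))  # False
print(generated_policy(auditor, "delete", "database"))     # True
```

The generated version still "implements the requirement" in the sense that auditors can read logs; what it fails to implement is everything the requirement implicitly forbade.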

    Then there’s the standard recurring problem with LLMs — the potential to hallucinate. “Models sometimes introduce attributes that do not exist in the actual system schema. The policy compiles, but at runtime it behaves unpredictably because it relies on data that is not present or incorrectly mapped.”
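A hallucinated attribute can be sketched the same way. In the hypothetical example below the real schema has a "department" attribute, but the model conditions on an invented "team" attribute; a simple schema check is one way such a flaw could be caught before deployment.

```python
# Hypothetical sketch of a hallucinated attribute.
SCHEMA = {"role", "department"}  # attributes that actually exist

def generated_policy(user):
    # The model conditions on user["team"], which is not in the schema.
    # The missing attribute silently evaluates to None, so the comparison
    # is always False and the policy never behaves as intended.
    return user.get("team") == "platform"

def unknown_attributes(policy_attrs, schema):
    # A minimal pre-deployment check: flag any attribute the policy
    # references that is absent from the system schema.
    return [a for a in policy_attrs if a not in schema]

bob = {"role": "engineer", "department": "platform"}
print(generated_policy(bob))                    # False: silent misfire
print(unknown_attributes({"team"}, SCHEMA))     # ['team']: caught by the check
```

Nothing here fails loudly: the policy runs, returns a boolean, and simply maps to the wrong decision for every user.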

    Temporal and contextual conditions are frequently dropped. “Policies that depend on time windows, approvals, or session context are simplified into static rules. What was meant to be controlled, time-bound access becomes always-on access.”
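The temporal case can be sketched as follows, again with hypothetical roles and field names in plain Python.

```python
# Hypothetical sketch of a dropped temporal condition.
from datetime import datetime, timezone

def intended_policy(user, now):
    # Intent: contractors get access only during an approved time window.
    return (
        user["role"] == "contractor"
        and user["access_start"] <= now < user["access_end"]
    )

def generated_policy(user, now):
    # A simplification that turns time-bound access into a static rule:
    # the window is gone, so access is always-on.
    return user["role"] == "contractor"

carol = {
    "role": "contractor",
    "access_start": datetime(2026, 3, 1, tzinfo=timezone.utc),
    "access_end": datetime(2026, 3, 2, tzinfo=timezone.utc),
}
after_window = datetime(2026, 4, 1, tzinfo=timezone.utc)

print(intended_policy(carol, after_window))   # False: the window has expired
print(generated_policy(carol, after_window))  # True: always-on access
```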

    And the last concern: “Even action misclassification can occur. A policy intended to restrict a sensitive action like deletion may be translated into a broader or different operation. The difference may be small in wording, but large in impact.”
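Action misclassification looks equally innocuous in code. In the hypothetical sketch below, "restrict deletion" has been translated into a restriction on a different action, "write", so the sensitive action slips through while a harmless one is wrongly blocked.

```python
# Hypothetical sketch of action misclassification.
SENSITIVE_ACTIONS = {"delete"}

def intended_policy(user, action):
    # Intent: only admins may perform sensitive actions such as deletion.
    if action in SENSITIVE_ACTIONS and user["role"] != "admin":
        return False
    return True

def generated_policy(user, action):
    # The model translated "restrict deletion" into a restriction on
    # "write": a small difference in wording, a large difference in impact.
    if action == "write" and user["role"] != "admin":
        return False
    return True

dev = {"role": "developer"}
print(intended_policy(dev, "delete"))   # False: deletion correctly blocked
print(generated_policy(dev, "delete"))  # True: deletion slips through
print(generated_policy(dev, "write"))   # False: the wrong action is blocked
```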

All these failings are natural outcomes of an LLM’s tendency to interpret and simplify language. The result can be a policy that looks good, reads well, and reviews cleanly, but simply isn’t good. And detecting that it isn’t good is difficult.

    Over time, these small deviations accumulate. Policies are no longer static artifacts reviewed occasionally – they are generated, updated, and deployed continuously. “As more policies are generated, deployed, and reused, the risk compounds,” continues Vatsal. Organizations may believe they are enforcing least privilege while actually drifting toward over-permissioned environments. 

    “If the generation process is not reliable, the risk becomes systemic,” he adds. “Organizations may end up with thousands of subtly flawed policies. Each flaw may be individually small, but collectively they create a large and difficult-to-understand attack surface.”

    The solution, he says, is not to abandon LLMs but to change our trust model, especially where policy is concerned. “Generated policies should not be treated as correct by default; validation layers between generation and enforcement should be introduced to ensure all required components are present, correct and consistent with expected behavior; policies should be tested, not just compiled; and deny-by-default principles should be enforced explicitly.”
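One way such a validation layer could work is behavioral testing: exercising a generated policy against cases that encode the intended behavior before it reaches enforcement, rather than merely checking that it compiles. The sketch below is a minimal illustration in plain Python; the candidate policy and test cases are hypothetical.

```python
# Hypothetical sketch of a validation layer between policy generation
# and enforcement: generated policies are run against expected-behavior
# test cases before deployment, not merely compiled.

def candidate_policy(user, action, resource):
    # Stand-in for an LLM-generated policy under review. It captures the
    # exception ("auditors are allowed") but drops the deny baseline.
    return user["role"] == "auditor"

# Intended behavior as test cases: (user, action, resource, expected)
EXPECTED = [
    ({"role": "auditor"}, "read", "logs", True),
    ({"role": "auditor"}, "delete", "database", False),  # deny by default
    ({"role": "engineer"}, "read", "logs", False),
]

def validate(policy, cases):
    # Return every case where the policy's decision deviates from intent.
    return [
        case for case in cases
        if policy(case[0], case[1], case[2]) != case[3]
    ]

failures = validate(candidate_policy, EXPECTED)
print(len(failures))  # 1: the candidate lets auditors delete the database
```

A policy that passes compilation but fails even one such case never reaches the enforcement point, which is the trust-model change being argued for.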

    Most importantly, he adds, “Organizations need to treat authorization logic as a high-risk domain.” Just because a model can generate code does not mean that code is safe to deploy without scrutiny.

    “As we move toward AI-assisted security engineering, the goal should not just be automation. It should be correctness, auditability, and trust, because in authorization, ‘almost correct’ isn’t good enough,” Vatsal told SecurityWeek.

    Learn More at the AI Risk Summit

    Related: Vibe Coding Tested: AI Agents Nail SQLi but Fail Miserably on Security Controls

    Related: Vibe Coding: When Everyone’s a Developer, Who Secures the Code?

    Related: Groucho’s Wit, Cloud Complexity, and the Case for Consistent Security Policy

    Related: How to Eliminate the Technical Debt of Insecure AI-Assisted Software Development
