
    CISOs Debate Human Role in AI-Powered Security

By admin | March 24, 2026

RSAC 2026 CONFERENCE – San Francisco – Do AI deployments need a “human in the loop,” or will people merely slow things down?

That was a key question during an RSAC 2026 Conference panel in which security executives from Google Cloud, Vodafone, and PayPal discussed evolving AI use cases and how to deploy the technology safely in one’s environment.

    In the panel titled “From Threat to Strategy: The CISO’s Playbook for the AI Revolution,” The Wall Street Journal’s James Rundle asked Google Cloud chief operating officer (COO) and president of security products Francis deSouza, Vodafone global chief information security officer (CISO) Emma Smith, and PayPal senior VP and CISO Shaun Khalfan how security leaders can best adapt to the new AI landscape. The trio also discussed the role of humans in AI-powered security.

For as many problems as the “AI revolution” hopes to solve, the introduction of LLM-powered security products has introduced or exacerbated other issues in the security landscape.


Thanks to the high security standard needed to secure AI tools (lest a prompt injection cause them to leak sensitive corporate documents and the like), the shared data-security model between AI vendor and customer remains something of a mess. AI advances outside the security organization, such as vibe coding, have also created challenges: an organization may lean too heavily on AI-generated code without the right humans in the loop, making the CISO’s job more complex. It must also be noted that many organizations have yet to find success in their AI security deployments, according to studies.

Google’s AI presence speaks for itself: 50% of its code is AI-generated with developer assistance. Vodafone’s security analysts are using AI to automate various workflows and handle other tasks, like producing board-level executive summaries of technical subject matter, and Khalfan said PayPal is using AI to help detect fraud across the billions of transactions it processes each month.

Smith said Vodafone began implementing AI when the company realized it was moving more slowly than AI would allow, and concluded that it would take a top-down approach from leadership to integrate it correctly. That is, everyone needs to be on the same page about how to implement AI technology in a safe, ethical, and responsible way.

    Vodafone’s solution has been AI Booster, a centralized machine learning platform leveraging Google’s technology that’s designed to help deploy AI and ML models at scale. It includes a central, reusable codebase that allows it to deploy established use cases quickly via pre-trained models and custom tools, and tracks how successful these processes have been, business-wise. 
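AI Booster's internals are not public, so the sketch below is purely illustrative: every class and name here is an assumption, not Vodafone's code. It shows the pattern the article describes, a central registry of reusable, pre-approved use cases that can be deployed quickly, with guardrails checked up front and business value tracked per deployment.

```python
from dataclasses import dataclass

# Hypothetical sketch of a centralized ML deployment platform.
# None of these names come from Vodafone's AI Booster.

@dataclass
class UseCase:
    name: str
    model: str              # e.g. a pre-trained model identifier
    guardrails: list[str]   # required controls, checked before deployment

@dataclass
class Deployment:
    use_case: UseCase
    business_value: float = 0.0  # tracked outcome, in whatever unit fits

class ModelPlatform:
    """Central, reusable codebase: register a use case once, deploy it many times."""

    def __init__(self) -> None:
        self._catalog: dict[str, UseCase] = {}
        self.deployments: list[Deployment] = []

    def register(self, use_case: UseCase) -> None:
        self._catalog[use_case.name] = use_case

    def deploy(self, name: str) -> Deployment:
        uc = self._catalog[name]  # established use cases deploy quickly
        if not uc.guardrails:
            raise ValueError(f"{name}: no guardrails defined, blocking deploy")
        d = Deployment(uc)
        self.deployments.append(d)
        return d

platform = ModelPlatform()
platform.register(UseCase("board-summary", "summarizer-v1", ["pii-filter"]))
d = platform.deploy("board-summary")
d.business_value = 12.5  # e.g. analyst hours saved, tracked per deployment
```

The key design point from the article is the central catalog: because every use case goes through one registry, the platform can both enforce guardrails at deploy time and measure business outcomes consistently.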


Smith said Vodafone did that for business reasons, in part to track the value of different initiatives, but it also gives her privacy engineering team a framework to intervene in each use case and ensure the proper guardrails are in place.

    Humans on vs. in the Loop

One surprising note came in the discussion of placing a “human in the loop” — the concept that AI tools should include humans at some or even every step to ensure the accuracy of an LLM’s output. Although humans are part of the process, deSouza said that human-led defenses are often too slow to stop threats like agent-led cyberattacks, and, as such, Google is moving toward agent-led defense.

    Smith agreed. “I totally agree that a human in the loop is not scalable if we think about our traditional security controls, the ones that rely on human behaviors are the ones that we don’t rely on the most,” she said. “Let’s face it, we rely on the ones that are technical and automated and that we can prove over time. A human in the loop is not the solution for the long term, certainly on scaled operations, and I also worry that it will give a boring job to the human in the loop.” 


    Instead, organizations should think about ways to get a human “on the loop” to get insights from AI, rather than controlling or overseeing the tools, because “it’s just not going to scale,” Smith said.

She added that Vodafone has built a heat map that weighs confidence in an AI’s output against the potential risk of the outcome. For very-high-risk use cases, Vodafone likely wouldn’t pursue such an approach unless there was a big business benefit, “and then it would absolutely have a human in the loop.”
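Vodafone's actual heat map and thresholds aren't public, but the decision logic Smith describes can be sketched as a simple function: the numbers and names below are assumptions chosen only to illustrate the shape of the policy, where low-risk, high-confidence work runs autonomously with a human "on the loop," and very-high-impact use cases are either avoided or run with a human in the loop.

```python
# Illustrative only: thresholds and labels are invented, not Vodafone's.

def oversight_mode(confidence: float, risk: float,
                   business_benefit: float = 0.0) -> str:
    """Return an oversight posture for an AI use case.

    All three inputs are normalized to [0, 1].
    """
    if risk > 0.8:
        # Very high impact: only pursue for a big business benefit,
        # and then with a human in the loop.
        return "human-in-the-loop" if business_benefit > 0.7 else "do-not-deploy"
    if confidence > 0.9 and risk < 0.3:
        return "autonomous"  # human "on the loop," reviewing insights
    return "human-on-the-loop"
```

For example, a high-confidence, low-risk workflow such as summarization would come back `"autonomous"`, while a risky use case with no compelling benefit would come back `"do-not-deploy"`.
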

    The Importance of Data Security and Collaboration

    Khalfan followed Smith by emphasizing the importance of putting everything one does in a data security wrapper. While PayPal is a proponent of the engineering and technological benefits of AI tooling, he added that “it’s just as important to have a risk and compliance wrapper around it.”

    “When we think about our key AI principles, it’s data and security. It’s privacy, it’s transparency, it’s explainability,” he said. “As we wrap everything we’re doing in these principles, it helps us keep this anchor of all of the efforts that we’re making.”

For example, PayPal’s teams rank AI models in tiers based on data sensitivity, establish use cases, and then determine which controls need to be in place to protect any sensitive data stored within. These controls are intended to protect the models against tampering and prompt injection. That also means accounting for the many identities that AI agents will need.
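PayPal's actual tiers and control sets aren't public, so the mapping below is a hypothetical sketch of the pattern described: classify each model by the most sensitive data class it touches, then derive the controls that tier requires. All tier names and control labels are assumptions.

```python
# Hypothetical sensitivity tiers and controls; not PayPal's real scheme.

SENSITIVITY_TIERS = {
    "public":       1,
    "internal":     2,
    "confidential": 3,
    "restricted":   4,
}

TIER_CONTROLS = {
    1: {"output-logging"},
    2: {"output-logging", "access-control"},
    3: {"output-logging", "access-control", "prompt-injection-filter"},
    4: {"output-logging", "access-control", "prompt-injection-filter",
        "model-integrity-checks", "per-agent-identity"},
}

def required_controls(data_classes: list[str]) -> set[str]:
    """Controls for a model are driven by its most sensitive data class."""
    tier = max(SENSITIVITY_TIERS[d] for d in data_classes)
    return TIER_CONTROLS[tier]
```

The design choice worth noting is that controls attach to the highest tier present: a model that touches even one restricted data class inherits the full restricted control set, including per-agent identity, which matches the article's point about accounting for the many identities AI agents will need.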

Part of this too, Khalfan said, involves collaborating with the larger ecosystem, such as the Coalition for Secure AI (CoSAI), an industry-wide initiative that aims to facilitate collaboration between stakeholders and ensure more secure AI deployments. It offers a wide range of white papers and documentation produced across its various workstreams.

    Alexandra Rose, director of government partnerships and the Counter Threat Unit at Sophos, tells Dark Reading that safe AI deployment is about encouraging curiosity and innovation while ensuring security. 

“I think it’s important that security is not the world of no,” she says. “It’s how do we get to yes, and how do we get to a yes in a way that we’re protected?”

