    Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

Ravie Lakshmanan | Feb 24, 2026 | Artificial Intelligence / Anthropic

    Anthropic on Monday said it identified “industrial-scale campaigns” mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude’s capabilities to improve their own models.

The distillation attacks generated over 16 million exchanges with its large language model (LLM) through about 24,000 fraudulent accounts, in violation of its terms of service and regional access restrictions. All three companies are based in China, where use of Anthropic's services is prohibited due to "legal, regulatory, and security risks."

Distillation refers to a technique in which a less capable model is trained on the outputs generated by a stronger AI system. While distillation is a legitimate way for companies to produce smaller, cheaper versions of their own frontier models, using it to siphon capabilities from a rival's model violates the provider's terms of service, and it lets competitors acquire those capabilities at a fraction of the time and cost it would take to develop them on their own.
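The mechanics of distillation can be sketched with a toy example: a weak "student" trained only on a strong "teacher's" outputs ends up imitating the teacher's behavior, without ever seeing its training data. Here both models are simple numeric functions purely for illustration; real LLM distillation works on prompts and text responses at vastly larger scale:

```python
import random

# Toy "teacher": a fixed function standing in for a strong model's behavior
# on queries (illustrative only; real distillation harvests LLM responses).
def teacher(x):
    return 3.0 * x + 1.0

# Harvest the teacher's outputs for a batch of queries.
random.seed(0)
queries = [random.uniform(-1, 1) for _ in range(200)]
labels = [teacher(x) for x in queries]

# "Student": a smaller model (here, a line w*x + b) trained only on the
# teacher's outputs via plain stochastic gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for x, y in zip(queries, labels):
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # student converges toward the teacher's w=3, b=1
```

The student never sees how the teacher was built; the harvested input/output pairs alone are enough to replicate its behavior, which is why query access at scale is the asset being stolen.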

    “Illicitly distilled models lack necessary safeguards, creating significant national security risks,” Anthropic said. “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”

    Foreign AI companies that distill American models can weaponize these unprotected capabilities to facilitate malicious activities, cyber-related or otherwise, thereby serving as a foundation for military, intelligence, and surveillance systems that authoritarian governments can deploy for offensive cyber operations, disinformation campaigns, and mass surveillance.

The campaigns detailed by the AI upstart involved the use of fraudulent accounts and commercial proxy services to access Claude at scale while evading detection. Anthropic said it was able to attribute each campaign to a specific AI lab based on request metadata, IP address correlation, and infrastructure indicators.

    The details of the three distillation attacks are below –

    • DeepSeek, which targeted Claude's reasoning capabilities and rubric-based grading tasks, and sought its help in generating censorship-safe alternatives to politically sensitive queries (e.g., questions about dissidents, party leaders, or authoritarianism), across over 150,000 exchanges.
    • Moonshot AI, which targeted Claude’s agentic reasoning and tool use, coding capabilities, computer-use agent development, and computer vision across over 3.4 million exchanges.
    • MiniMax, which targeted Claude’s agentic coding and tool use capabilities across over 13 million exchanges.

    “The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use,” Anthropic added. “Each campaign targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”

The company also pointed out that the attacks relied on commercial proxy services that resell access to Claude and other frontier AI models at scale. These services are powered by "hydra cluster" architectures: massive networks of fraudulent accounts used to distribute traffic across the target's API.

The access is then used to send large volumes of carefully crafted prompts designed to extract specific capabilities from the model; the high-quality responses are harvested to train the operators' own models.

    “The breadth of these networks means that there are no single points of failure,” Anthropic said. “When one account is banned, a new one takes its place. In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.”
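The "no single point of failure" property Anthropic describes can be sketched as a rotating account pool that replaces any banned account with a fresh one. The class and account names below are hypothetical, illustrating the structure rather than any real tooling:

```python
from collections import deque
from itertools import count

# Toy model of a "hydra cluster": a pool of hypothetical fraudulent accounts
# that round-robins requests and replaces any banned account, so a single
# ban never stops the traffic (structure is illustrative only).
class HydraPool:
    def __init__(self, size):
        self._ids = count(1)
        self.pool = deque(f"acct-{next(self._ids)}" for _ in range(size))

    def next_account(self):
        acct = self.pool[0]
        self.pool.rotate(-1)  # round-robin across the pool
        return acct

    def ban(self, acct):
        self.pool.remove(acct)                        # provider bans one account...
        self.pool.append(f"acct-{next(self._ids)}")   # ...a fresh one takes its place

pool = HydraPool(3)
used = [pool.next_account() for _ in range(3)]
pool.ban("acct-2")
print(len(pool.pool))  # pool size is unchanged after the ban
```

Because the pool replenishes itself, per-account bans only impose a replacement cost on the operator, which is why Anthropic pairs bans with traffic-level fingerprinting.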

To counter the threat, Anthropic said it has built several classifiers and behavioral fingerprinting systems to identify suspicious distillation patterns in API traffic; strengthened verification for educational accounts, security research programs, and startup organizations; and implemented enhanced safeguards to reduce the usefulness of model outputs for illicit distillation.
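One ingredient of such behavioral detection can be sketched as simple volume-based outlier flagging: accounts whose request counts sit far outside the population's distribution get flagged for review. The threshold, field names, and traffic numbers below are hypothetical; production systems combine many more signals than raw volume:

```python
import statistics

# Minimal sketch of volume-based anomaly flagging (one signal among many
# a real detection pipeline would use; thresholds here are illustrative).
def flag_suspicious(daily_counts, z_threshold=3.0):
    """Flag accounts whose daily request volume is a statistical outlier."""
    counts = list(daily_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero
    return {acct for acct, n in daily_counts.items()
            if (n - mean) / stdev > z_threshold}

# 50 ordinary accounts with modest traffic, plus one harvesting at scale.
traffic = {f"user-{i}": 40 + (i % 5) for i in range(50)}
traffic["bulk-account"] = 25_000  # distillation-scale request volume
print(flag_suspicious(traffic))   # only the bulk account is flagged
```

As the article notes, real campaigns blunt exactly this kind of check by spreading traffic across thousands of accounts and mixing in unrelated requests, which is why per-account volume alone is insufficient.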

    The disclosure comes weeks after Google Threat Intelligence Group (GTIG) disclosed it identified and disrupted distillation and model extraction attacks aimed at Gemini’s reasoning capabilities through more than 100,000 prompts.

    “Model extraction and distillation attacks do not typically represent a risk to average users, as they do not threaten the confidentiality, availability, or integrity of AI services,” Google said earlier this month. “Instead, the risk is concentrated among model developers and service providers.”
