
    AI is becoming part of everyday criminal workflows

By admin | February 24, 2026

Underground forums include long threads about chatbots drafting phishing emails, generating code snippets, and coaching social engineering calls. A new study examined conversations captured between January 1, 2025 and July 31, 2025 across dozens of cybercrime forums to map how AI tools are entering day-to-day criminal operations.

    AI in cybercrime

    The dataset includes 163 discussion threads drawn from 21 forums, totaling 2,264 messages posted by 1,661 distinct contributors. Much of the activity clustered on well known platforms such as XSS, BreachForums, Dread, and Exploit.in.

    Four themes dominated the discussions: repurposing mainstream AI services, marketing criminal AI products, adapting models for specific operations, and debating operational risk.

    Mainstream tools drive experimentation

    Commercial chatbots serve as the starting point for many participants. ChatGPT appeared in 52.5 percent of the threads that mentioned legal AI products. DeepSeek followed at 27.9 percent, Claude at 19.7 percent, and Grok at 18.0 percent. Llama, Gemini, Mistral, Hugging Face, Manus AI, and WhiteRabbitNeo also appeared across multiple conversations.

Open-source and locally hosted models drew attention for privacy and fewer built-in content restrictions. Participants described running models offline to draft scripts, refine phishing language, and explore attack concepts. Discussions also covered the hardware and time required to train or fine-tune a model for offensive tasks, with several contributors citing long development cycles even when using high-end consumer GPUs.

Jailbreaking remained common. Users shared prompts designed to bypass safety controls, including role-play scenarios and instructions that attempt to override internal policies. Some threads focused on which models appeared more permissive during testing. Others described ways to obtain premium access through stolen or resold accounts. Listings included accounts with active subscriptions and instructions for abusing student verification flows.

    Criminal AI brands crowd the forums

    A second stream of activity involved tools marketed specifically for fraud, spam, and malware. Fifty threads centered on selling, requesting, or reviewing these products. Mentions clustered around a handful of names. WormGPT accounted for 26 percent of product mentions, FraudGPT for 18 percent, and DarkGPT for 16 percent. ChaosGPT, GhostGPT, and SpamGPT each appeared in 6 percent of mentions.

Many offerings functioned as wrappers that resell access to mainstream models through a bot interface or API gateway paired with a jailbreak prompt. Threads described short-lived services, disputes over quality, and concerns about logging or hidden collection features. Sellers also advertised custom development. Some offered to host large language models for clients lacking infrastructure. Others promoted AI-enabled calling systems designed to automate outbound fraud operations and handle victim interactions.

Monitoring how these products are marketed could offer early warning of broader adoption. Benoît Dupont, PhD, professor of criminology and co-author of the study, told Help Net Security that defenders can track how often AI claims appear in underground sales listings.

    “We could monitor forums, markets and Telegram channels to assess what share of malicious products and services on sale claim to be powered or enabled by AI,” Dupont said. “This claim is often central to secure a competitive advantage, so sellers are unlikely to obfuscate this in their offerings. If we were able to reliably measure the share of AI powered cybercriminal products and services advertised, we could track when certain thresholds are being reached and could certainly state with more confidence that we are leaving the experimental phase for a more industrial phase. Of course, advertising does not mean adoption, but this is an indicator of the direction things are going.”
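The measurement Dupont describes can be sketched as a simple counting exercise over scraped listing text. The keyword patterns, threshold value, and function names below are illustrative assumptions, not part of the study; a real pipeline would need curated patterns and deduplicated listings.

```python
import re

# Hypothetical heuristic for flagging AI claims in marketplace listing
# text; the pattern list and threshold are illustrative only.
AI_CLAIM_PATTERN = re.compile(
    r"\b(ai[- ]powered|ai[- ]enabled|gpt|llm|chatbot|language model)\b",
    re.IGNORECASE,
)

def ai_claim_share(listings):
    """Return the fraction of listing texts that claim AI capability."""
    if not listings:
        return 0.0
    flagged = sum(1 for text in listings if AI_CLAIM_PATTERN.search(text))
    return flagged / len(listings)

def crossed_threshold(listings, threshold=0.25):
    """Signal when AI claims exceed a chosen adoption threshold."""
    return ai_claim_share(listings) >= threshold
```

Tracked over time, a rising share that stays above the chosen threshold would be the kind of indicator Dupont suggests for distinguishing an experimental phase from an industrial one, with the caveat he notes: advertising does not mean adoption.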

    Adaptation centers on scams and automation

Higher-skill discussions focused on adapting AI to specific workflows. Participants described using chatbots to rehearse social engineering scripts tailored to a target organization. Others outlined tools that generate variable spam content to evade filters by altering phrasing and structure. Call center automation featured prominently. Posts detailed virtual assistants that support human operators in real time by suggesting responses, extracting one-time passwords, and forwarding victims to live agents.

    A smaller set of threads addressed malware development. Contributors emphasized the need for technical expertise to turn generated snippets into functioning payloads and delivery chains. Several elite forums added dedicated AI sections to concentrate discussion and attract specialists. Recruitment posts offered hourly assistance with model setup and integration into existing toolchains.

    Dupont expects fraud operations to integrate AI faster than other categories of cybercrime.

    “Social engineering and scamming operations will probably be able to leverage AI capacities more systematically, profitably and sooner than malware writing operations in the near future at least,” he said. “This very uncertain assessment is based on the fact that profit incentives and rewards are more accessible for AI enabled scams, but also because defensive AI systems are more systematically deployed to protect organizations, whereas individuals seem more exposed and benefit from limited AI enabled protection.”

    Skepticism and operational risk

    Skepticism ran through many conversations. Participants questioned the reliability of AI generated code for complex offensive tasks and cited frequent errors and hallucinated functions. Complaints about low quality forum posts generated by chatbots appeared across multiple communities, with members describing an increase in repetitive and derivative content.

    Operational security concerns also surfaced repeatedly. Contributors treated prompts and chat histories as sensitive data that platform operators can monitor and store. Advice circulated about minimizing identifying details in queries and rotating accounts. Similar caution applied to criminal AI services, where buyers expressed concern about logging, hidden backdoors, and potential interception of stolen data.

    Dupont said defenders can watch for measurable signals that point to scaled automation.

    “The defensive signals that could be monitored could include the volume in certain types of scam reported by victims, providing we can access these reports in real time, volume of phishing, vishing and smishing messages intercepted, level of sophistication of these messages and calls, level of coordination of certain calling and messaging campaigns, volume of new account generation among certain digital platforms used to enable online scams, to name a few,” he said. “Any fraud signal that scales up and demonstrates high levels of coordination should be examined carefully to determine whether AI tools are at play.”
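One way to operationalize "a fraud signal that scales up" is a rolling-baseline spike detector over daily report volumes. This is a minimal sketch under assumed parameters (window size, z-score threshold, function name); the study does not prescribe a specific method.

```python
from statistics import mean, stdev

def scaled_up(daily_counts, window=7, z_threshold=3.0):
    """Flag days whose volume sits far above the trailing baseline.

    daily_counts: chronological counts, e.g. phishing reports per day.
    Returns indices of days exceeding the rolling mean of the previous
    `window` days by more than `z_threshold` standard deviations.
    """
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # Flat baseline: treat any increase as a spike.
            if daily_counts[i] > mu:
                flagged.append(i)
        elif (daily_counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged
```

A spike flagged this way is only a prompt for investigation: as Dupont notes, the follow-up question is whether the coordination behind the surge points to AI tooling.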

Across the seven-month window, adoption clustered in fraud, scams, and social engineering workflows. A core group of innovators experimented with automation and new service models, and a wider set of users tested mainstream tools for drafting messages and refining scripts. The broader ecosystem shows an early stage of integration, with experimentation, marketing activity, and debate unfolding across multiple forums.
