    Eight Things You Should Never Share With an AI Chatbot

By admin · April 13, 2026

    It probably goes without saying at this point, but your conversations with AI chatbots aren’t private—everything you type or upload to Gemini, ChatGPT, and other models might be read and used in a variety of ways. If you wouldn’t send a document or repeat information to someone you don’t know, you shouldn’t include it in a chatbot prompt either.

    Researchers at Stanford reviewed the privacy policies of the six U.S. companies behind the most popular AI chatbots, including Claude, Gemini, and ChatGPT, and found that all of them use chat data for training by default. Some retain that data indefinitely, and most merge it with other information collected about consumers, such as search queries and purchase history. In most cases you can opt out of having your data used to train LLMs, but chats can also be read by human reviewers, and long-term retention policies increase the risk of your stored information being exposed in a breach.

    If you’re going to use an AI chatbot, these are the things you should avoid sharing:



    • Login credentials: Obviously, you should never paste usernames or passwords into a chatbot prompt, or upload documents that contain login credentials. AI is also abysmal at generating secure passwords—use the tools in your password manager instead, or better yet, opt for a passkey if available.

    • Financial data: AI chatbots aren’t financial experts, and you shouldn’t upload documents or use data related to your specific finances in prompts. This includes bank statements, credit card numbers, investment information, account numbers and balances, etc. Sharing financial details anywhere that isn’t secure increases the risk of theft, fraud, and targeting by scammers.

    • Medical records: AI chatbots also aren’t medical professionals and shouldn’t be relied upon for medical advice. You probably don’t want your medical records to be used to train LLMs—plus, uploading them exposes them to potential data breaches.

    • Personally identifiable information (PII): AI prompts should never include information like your name, address, email, phone number, birth date, Social Security number, passport number, or any other data that could be used to steal your identity. (Financial information and medical records are also considered sensitive PII.)

    • General health information: In addition to keeping your sensitive medical records private, you should avoid giving chatbots seemingly benign information about your health that could be used to profile you. For example, the Stanford report notes that it’s possible for AI chatbots to infer health status from a request for heart-friendly dinner recipes, which could eventually be accessible to insurance companies. This also includes information related to topics like sexual health, medication use, and gender-affirming care.

    • Mental health concerns: Another thing your chatbot isn’t is a therapist. AI has been unhelpful at best and harmful at worst when it comes to mental health. Even with updates intended to protect users in crisis, chatbots aren’t a replacement for real, human support.

    • Photos: AI image editing is popular, but that doesn’t mean it’s without risk. You may not want your personal photos used for training purposes, and image metadata contains information like your GPS location. At the very least, avoid uploading images of people (especially minors), and consider stripping EXIF data before sharing.

    • Company documents: AI may be useful for summarizing documents, creating presentations, drafting emails, and completing other work-related tasks more quickly, but you should use caution when uploading files containing sensitive company information to a chatbot. Your employer may even have a policy prohibiting it.
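    To see what the EXIF-stripping advice above actually involves, here is a minimal sketch in plain Python (standard library only). It handles baseline JPEGs by dropping the APP1 segment, which is where EXIF metadata—including GPS coordinates—lives; a dedicated tool such as exiftool, or an image library like Pillow, is more robust for real-world files and other formats.

    ```python
    def strip_jpeg_exif(data: bytes) -> bytes:
        """Return JPEG bytes with APP1 (EXIF/XMP) segments removed.

        A JPEG starts with the SOI marker (FF D8), followed by segments of
        the form FF <marker> <2-byte big-endian length> <payload>. EXIF data
        is stored in APP1 segments (marker FF E1); everything from the SOS
        marker (FF DA) onward is entropy-coded image data, copied verbatim.
        """
        if data[:2] != b"\xff\xd8":
            raise ValueError("not a JPEG file")
        out = bytearray(b"\xff\xd8")
        i = 2
        while i < len(data) - 1:
            if data[i] != 0xFF:
                break  # malformed stream; stop rather than guess
            marker = data[i + 1]
            if marker == 0xDA:  # start of scan: image data follows
                out += data[i:]
                break
            seg_len = int.from_bytes(data[i + 2 : i + 4], "big")
            if marker != 0xE1:  # keep every segment except APP1 (EXIF)
                out += data[i : i + 2 + seg_len]
            i += 2 + seg_len
        return bytes(out)
    ```

    Note this only removes the metadata block, not anything visible in the image itself.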

    The bottom line is that you should be cautious about what you share with AI chatbots—assume everything in your prompts is stored and could be read by someone else. Avoid anything personal or identifiable, and enable every available privacy setting, such as opting out of data sharing and model training.
