    I turned my old gaming PC into an AI assistant that’s actually useful

By admin | February 13, 2026

    A lot of projects that revive old hardware feel like an excuse to tinker rather than something you’ll actually use. Turning old PCs into NAS boxes or DNS resolvers can be genuinely practical, but many other home lab projects are often abandoned by their creators, or were only spun up as an experiment in the first place.

I wouldn’t blame you for thinking that local AI in the form of an LLM could have the same fate written all over it, and I thought the same thing until I set one up for myself. Even with my relatively modest hardware, spinning up a local AI assistant was easier than I expected, and it turned out to be genuinely useful.

    Why set up a local LLM

    Control, privacy, and customizability


    The obvious appeal of a local LLM is privacy. When you run everything on your own machine, nothing you paste in leaves your network. That means logs, configuration files, error output, half-finished drafts, and essentially all other data you put in stays local. For anyone even slightly concerned about their footprint, that alone is compelling.

There’s also the cost angle. When using an old gaming rig you already have lying around, the setup cost is essentially zero. It might add a bit to your power bill, but there are ways to mitigate that, and all in all, it’s not as expensive as you might think, especially compared with a monthly subscription.

    To me, the most interesting and compelling reason to set up a local LLM is the customizability. You dictate every part of the experience, down to the responses you get. You get to choose the model and all the parameters it uses. This can be quite overwhelming, but you don’t actually need to get into the nitty-gritty if you don’t want to.

    Getting started

    There are “soft” hardware and software requirements


    In terms of a GPU, anything with more than 8 GB of VRAM is a good place to start. Older Pascal-era Nvidia cards are perfectly usable in this application. In my situation, I have a dusty ASUS STRIX GTX 1070 in my repurposed gaming rig. CPU performance matters far less for raw inference (actually generating the words you see, or “tokens” as they’re known), but it does matter for larger prompts and context windows, especially if multiple people are using the system at the same time. I have an i7-6700K in my system.

In regard to RAM, the same rule as VRAM applies: more is better. I have 16 GB in my setup, which is workable for most configurations, especially when running one model at a time. For storage, I have a 500 GB NVMe drive that’s been diced up among my Proxmox VMs, of which I’ve allocated 200 GB here. That’s overkill if you’re not running many models, but I plan to try several different ones and want the space to do so. I recommend allocating a similar amount of storage if you don’t already have a model in mind.

At the core, you need a local LLM runtime, some kind of model manager, and optionally a web interface so you can interact with the LLM from other devices. You’re best off running a server OS in a VM inside a hypervisor like Proxmox and hosting everything from there. This makes the most sense for an old gaming PC you plan to turn into a home lab.


    Initial setup and choosing a model

    Incredibly simple setup

To begin, I installed Ubuntu Server and configured GPU passthrough with Proxmox. This required installing the Nvidia driver manually, adding the GPU as a PCI device in the VM settings, and making sure to enable IOMMU in the BIOS, which is needed to pass PCI devices through a hypervisor. In my case, the setting was well hidden and labeled VT-d. On AMD systems, it may be labeled SVM and IOMMU.
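Alongside the BIOS toggle, Proxmox also expects IOMMU to be enabled on the host's kernel command line. On a GRUB-booted Intel host, a rough sketch of the usual steps from the Proxmox passthrough documentation looks like this (swap `intel_iommu=on` for `amd_iommu=on` on AMD):

```shell
# On the Proxmox host: add the IOMMU flag to the kernel command line
# by editing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
nano /etc/default/grub

# Apply the change and reboot the host
update-grub
reboot

# After the reboot, confirm IOMMU is active
dmesg | grep -e DMAR -e IOMMU
```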

After using the nvidia-smi command to confirm my GTX 1070 was being passed through correctly, I installed Ollama, which acts as my LLM engine.
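On Linux, Ollama ships an official install script, so setup inside the VM is essentially a one-liner:

```shell
# Install Ollama inside the Ubuntu Server VM (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the service is running and the GPU is visible
systemctl status ollama --no-pager
nvidia-smi
```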


For a model, I decided on the latest Mistral model, which is relatively lightweight and performs well on older GPUs like mine. There are so many to try that I recommend giving the Ollama site a browse. Do note that any models marked as “cloud” send prompts off your device to a remote server and are not truly local.

To load my model, I simply ran the Ollama run command followed by the model name.

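Assuming the default `mistral` tag from the Ollama registry, pulling and launching the model looks like this:

```shell
# Pull the model (if not already downloaded) and start an interactive session
ollama run mistral

# List the models currently on disk
ollama list
```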

    I now have a working, local LLM inside of a Proxmox VM.

    Setting up the Web UI

A Docker container is all you need

    To access this from a web UI, I recommend setting up Open WebUI in a Docker container. To do so, I used the following configuration:


    version: "3.9"
    
    services:
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        container_name: open-webui
        ports:
          - "3000:8080"
        environment:
          - OLLAMA_BASE_URL=http://host.docker.internal:11434
        extra_hosts:
          - "host.docker.internal:host-gateway"
        volumes:
          - open-webui-data:/app/backend/data
        restart: unless-stopped
    
    volumes:
      open-webui-data:
    

You’ll also need to create an override file that binds Ollama to all interfaces so that it’s reachable from your Docker container as well as other local devices. This is fine if you don’t plan to expose it to the internet, but it does make it reachable by anyone connected to your network; if you want to shield it from specific devices or services, you’ll have to set up those restrictions yourself.

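On a systemd-based install, the usual way to do this (per the Ollama FAQ) is a drop-in override that sets the `OLLAMA_HOST` environment variable; a minimal sketch:

```shell
# Open an override editor for the Ollama service
sudo systemctl edit ollama

# In the editor, add the following drop-in so Ollama listens on all interfaces:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"

# Reload units and restart the service to apply the binding
sudo systemctl daemon-reload
sudo systemctl restart ollama
```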

Once that’s done and the Docker container is up, you should be able to access your web UI at VM_IP:3000 in your web browser.


    A safe place for your documents, notes, and data

It won’t be a full cloud-model replacement, but a local LLM is great for parsing through collections of data, simple one-off queries, or monotonous text-based tasks. In my case, I use it for a preliminary scan of error logs; it points me in the right direction for troubleshooting and is often far faster and more useful than scanning forums or combing through the log files myself.
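Because Ollama exposes an HTTP API on port 11434, this kind of log triage can also be scripted instead of going through the web UI. A rough sketch using the documented /api/generate endpoint (the IP address is a placeholder for your VM):

```shell
# One-off query against the local model over the HTTP API
# (replace 192.168.1.50 with your VM's address)
curl -s http://192.168.1.50:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "What does a repeated DHCPDISCOVER failure in syslog usually indicate?",
  "stream": false
}'
```

With `"stream": false`, the response comes back as a single JSON object whose `response` field holds the model's answer, which is easy to pipe into other tooling.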
