Quick post about a change I made that’s worked out well.

I was using the OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. I was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint: I changed the URL from api.openai.com to localhost:11434 and updated the request format.
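The swap above can be sketched roughly like this, assuming Ollama's standard `/api/generate` endpoint (the model name and prompt here are placeholders, not from my actual workflows):

```python
# Minimal sketch of calling a local Ollama server instead of the OpenAI API.
# Assumes Ollama is running on the default port 11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> dict:
    # stream=False makes Ollama return one JSON object instead of
    # a stream of chunks — closer to a single OpenAI-style completion.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama puts the generated text in the "response" field.
        return json.loads(resp.read())["response"]
```

In n8n itself you don't write this code — you just point the HTTP Request node at the same URL with the same JSON body.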

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

  • Shady_Shiroe@lemmy.world · 12 hours ago

    I only ever use my local AI for the Home Assistant voice assistant on my phone, but it's more of a gimmick/party trick since I only have temperature sensors currently (only got into HA recently), and it can't access WiFi, so it's just quietly sitting unloaded on my TrueNAS server

    • blargh513@sh.itjust.works · 5 hours ago

      Running any LLM on TrueNAS is not awesome. I've tried it with GPU passthrough and it's just too much overhead. I may just burn all my stuff down, restart with Proxmox, and run TrueNAS Core inside just for NAS duties. The idea of a converged NAS + virtualization box is wonderful, but it's just not there.

      The host networking model alone is such a pain, and then you get into the performance issues. I still like TrueNAS a lot, but I think Proxmox is probably still the better platform.