Quick post about a change I made that’s worked out well.

I was using the OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. I was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
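The swap is roughly a one-line URL change plus a tweaked request body. A minimal sketch, assuming Ollama's documented `/api/chat` endpoint and a `llama3` model tag (the helper function and prompt are mine, not from the workflow):

```python
# Sketch of pointing an n8n-style HTTP request at a local Ollama server
# instead of the OpenAI API. No API key is needed for localhost.
import json
import urllib.request

# Was: https://api.openai.com/v1/chat/completions
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a chat request in Ollama's native format (stream disabled)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # get one JSON response instead of a stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


# With Ollama running locally:
# resp = urllib.request.urlopen(build_request("Summarize this email: ..."))
# print(json.load(resp)["message"]["content"])
```

In n8n itself the same change is just editing the URL and JSON body fields of the HTTP Request node; no credential node is required for a local server.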

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

  • Ludicrous0251@piefed.zip · 5 hours ago

    No, not free — OP's power bill just climbed behind the scenes to match. Probably a discount, but definitely not free.

    • Katherine 🪴@piefed.social · 3 hours ago

      Unless OP is running a data center, then there’s not really much of a power increase to run a local Ollama.

      • doodledup@lemmy.world · 2 hours ago

        Running a thousand watts versus not running a thousand watts can be quite a difference depending on where you live. And then consider buying all of the hardware. In many cases it's probably cheaper to just pay $40 a month.

        • StripedMonkey@lemmy.zip · 2 hours ago

          That would be true worst case, but you’re never running inference 24/7. It’s no crazier than gaming in that regard.