Quick post about a change I made that’s worked out well.

I was using the OpenAI API for automations in n8n (email summaries, content drafts, that kind of thing) and was spending about $40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
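To make the "updated the request format" step concrete, here's a minimal sketch of the payload translation against Ollama's native `/api/chat` endpoint. The helper name and the `llama3:8b` model tag are my own choices, not from any n8n node; note that Ollama also exposes an OpenAI-compatible path under `/v1`, so depending on your setup the URL swap alone may be enough.

```python
# Sketch: map an OpenAI chat-completions body to Ollama's native /api/chat
# body. Helper name and model tag are illustrative assumptions.

def openai_to_ollama(payload: dict) -> dict:
    """Translate an OpenAI-style request body into Ollama's /api/chat shape."""
    out = {
        "model": payload.get("model", "llama3:8b"),
        "messages": payload["messages"],  # same role/content message list
        "stream": False,  # automation workflows usually want one blocking reply
        "options": {},
    }
    if "temperature" in payload:
        out["options"]["temperature"] = payload["temperature"]
    if "max_tokens" in payload:
        # Ollama's equivalent knob is called num_predict
        out["options"]["num_predict"] = payload["max_tokens"]
    return out

openai_body = {
    "model": "llama3:8b",
    "messages": [{"role": "user", "content": "Summarize this email: ..."}],
    "temperature": 0.3,
    "max_tokens": 200,
}
ollama_body = openai_to_ollama(openai_body)
# POST ollama_body to http://localhost:11434/api/chat instead of
# https://api.openai.com/v1/chat/completions
```

The response shape differs too: Ollama returns the reply under `message.content` rather than `choices[0].message.content`, so any downstream n8n expression reading the output needs the same kind of one-line adjustment.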

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

  • Barbecue Cowboy@lemmy.dbzer0.com

    The trash talking on AI is half people with legitimate concerns about the societal and ecological impact, and half people who just want to be in on the party and aren't interested in understanding it. It's useful the way googling things is useful: the results aren't always correct, but if you have a basic level of knowledge it'll help you get where you want to be much faster.

    Nothing quite compares to Claude Opus as a cohesive package that I'd recommend for an average self-hoster, but I personally really like running Nemotron from Nvidia. It's not the best model, but in my experience it's consistently good enough, along with being fast and stable. If you're focused more on coding, I hear the Qwen series has some good models.