Been running n8n with Ollama for a few months now for work automation. Wanted to share what I’ve learned since it’s not super well-documented.

The setup is just Docker Compose with n8n + Ollama + Postgres. n8n’s HTTP Request node talks directly to Ollama’s REST API — no custom nodes needed.
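The Compose file looks roughly like this (a minimal sketch, not my exact file — image tags, credentials, and the port mapping are placeholders):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me   # placeholder
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  ollama:
    image: ollama/ollama
    volumes:
      - ollama_models:/root/.ollama  # persists pulled models across restarts

  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me  # placeholder
    depends_on:
      - postgres
      - ollama

volumes:
  pg_data:
  ollama_models:
```

Because all three services share the default Compose network, n8n can reach Ollama at http://ollama:11434 by service name.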

What I’m running:

  • Email digest every morning (IMAP → Ollama → Slack)
  • Document summarization (PDF watcher → Ollama → notes)
  • Lead scoring from form webhooks
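For all three workflows the pattern is the same: the HTTP Request node POSTs a JSON body to Ollama's /api/generate endpoint. A sketch of that body (the model tag and prompt are placeholders, not my exact config; `{{ $json.body }}` is n8n's expression syntax for the incoming item):

```python
import json

# Request body the n8n HTTP Request node sends to
# http://ollama:11434/api/generate (Ollama's standard generate endpoint).
payload = {
    "model": "llama3.2:3b",  # placeholder model tag
    "prompt": "Summarize this email thread in three bullet points:\n{{ $json.body }}",
    "stream": False,  # return one JSON object instead of a token stream
}

print(json.dumps(payload, indent=2))
```

Setting `"stream": false` matters: n8n expects a single JSON response, not Ollama's default newline-delimited token stream.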

Zero API costs, everything stays on my server. If anyone wants the workflow templates I have a pack: https://workflows.neatbites.com/

Happy to answer questions about the setup.

  • mental_block@lemmy.wtf · 5 hours ago

    Piggybacking too as I am considering the same. Please OP and thank you.

    And what model class are you using? Lightweight (~2B), reasonable (~10B), or above 32B?

    Do they load fast?

    I had a look at NetworkChuck's setup and don't think I can afford an overpowered rig in this economy. Depending on the rig, I may have to wait >20s for a prompt answer.

    Thank you again!

    • frongt@lemmy.zip · 4 hours ago

      I was playing with ministral-3 3b on a 3060. The model loads quickly and starts responding nearly instantly once loaded, but generation itself is a bit slow: a long response (~5 paragraphs) may take 15-20 seconds to finish.