Something to handle code, text and math.

  • thingsiplay@lemmy.ml · 4 hours ago

    I use a local LLM with 8 GB VRAM and 32 GB system RAM, thanks to Vulkan support. My GPU is an RX 7600. I can run qwen/qwen3.6-35B-A3B-Q4_K_M.gguf and gemma-4-26B-A4B-it-Q4_K_M.gguf, for example. It fills the VRAM first and the rest goes into system RAM instead, which is slower, but at least bigger models fit and run. I just need to lower the context length, which has a big impact (my current custom value is 64k, for anyone who wants to know).
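
    If anyone wants to see roughly what that split looks like outside of a GUI client, here is a minimal sketch with the llama-cpp-python bindings; the model path and layer count are placeholder assumptions, not my exact settings:

    ```python
    # Minimal sketch: partial GPU offload with a reduced context window.
    # Placeholder values for an 8 GB VRAM / 32 GB system RAM machine.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen3.6-35B-A3B-Q4_K_M.gguf",  # any local GGUF quant
        n_gpu_layers=24,  # as many layers as fit in 8 GB VRAM; the rest stay in system RAM
        n_ctx=65536,      # lowering the context frees memory for more offloaded layers
    )

    print(llm("Summarize why partial GPU offload is slower than full offload.",
              max_tokens=128)["choices"][0]["text"])
    ```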

    But this is still highly limited and not competitive at all. I mostly play around with it and occasionally ask a question here or there, and that’s it. So if you are serious about your system, you need something faster and with more than just 8 GB VRAM.

    • Domi@lemmy.secnd.me · 3 hours ago

      As a side note, Qwen3.6-27B is much more capable than Qwen3.6-35B, even though it is much slower.

      https://huggingface.co/unsloth/Qwen3.6-27B-GGUF

      For coding tasks where you don’t mind waiting, you should be able to just barely squeeze the 8-bit quantized version into 32 GB RAM + 8 GB VRAM and have a pretty competent local model. 4-bit quants work, but they have issues with complex tool calls.
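
      (Rough math, as an estimate: 27B parameters at 8 bits is roughly 27–29 GB of weights, plus a few GB for the KV cache and buffers, so it only just fits into the combined ~40 GB of RAM and VRAM.)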

      If you use the MTP branch of llama.cpp (and a suitable model) you can even double or triple your token generation speed: https://github.com/ggml-org/llama.cpp/pull/22673

      For easier tasks, disable reasoning for instant responses.
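
      As a rough sketch of that last point with llama-cpp-python, assuming the model honors a Qwen3-style /no_think soft switch (the path and settings below are placeholders):

      ```python
      # Sketch: skip the reasoning phase for easy questions.
      # Assumes the loaded model supports the /no_think soft switch.
      from llama_cpp import Llama

      llm = Llama(model_path="Qwen3.6-27B-Q8_0.gguf", n_gpu_layers=20, n_ctx=16384)

      reply = llm.create_chat_completion(
          messages=[{"role": "user", "content": "What is 2 + 2? /no_think"}],
          max_tokens=64,
      )
      print(reply["choices"][0]["message"]["content"])
      ```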

      • thingsiplay@lemmy.ml · 2 hours ago

        I probably have to wait for my client (made for noobs) to support MTP, so until then I’ll play around with what I have. I’m not even that deep into AI anyway; I mostly play around and only use it occasionally for help. But thanks for the suggestion.

        I’m still experimenting and have just started using some custom settings. What makes these “bigger” models more usable is lowering the context to free up VRAM a bit and, in exchange, loading more of the core model into VRAM. For example, I’m trying this with a 31B Unsloth Gemma 4 model at Q3_K_M and get 4 tok/sec. It’s slow and doesn’t have a huge context, but for the occasional question this is tolerable, given the hardware I have.

        My main models are the previously mentioned 35B-A3B and 26B-A4B anyway (where only a few billion parameters are active from a bigger pool), as they are pretty fast at 17 to 50 tok/sec, and the quality is acceptable and not really much different from the “bigger” models I can run.