Not sure if this is the right place for this post, but I want to host AI locally (LLMs and the newer ComfyUI models), and I'm not sure what kind of setup or parts would work best on a slim budget. I'm also not sure whether now is the right time to buy, with inflation and such.
I don't have a price in mind yet, but I'm wondering how much it would cost and what parts I might need.
If you have any questions or concerns, please leave a comment.


It would not be worth it as a replacement for Claude.
80% of my issue is that it's AMD and their drivers are still awful. The other 20% is that token generation speed is very slow, especially compared to commercial models running on dedicated hardware. MoE models are fine, but dense models are too slow for meaningful workflows. ComfyUI is decent, but I'm not seriously into image gen.
I have a lot of fun with it, but I haven't been able to use it for any actual AI dev work.