In the next ~6 months I’m going to entirely overhaul my setup. Today I have a NUC6i3 running Home Assistant OS, and a NUC8i7 running OpenMediaVault with all the usual suspects via Docker.
I want to upgrade hardware significantly, partially because I’d like to bring in some local LLM. Nothing crazy, 1-8B models hitting 50tps would make me happy. But even that is going to mean a beefy machine compared to today, which will be nice for everything else too of course.
I’m still all over the place on hardware, part of what I’m trying to decide is whether to go with a single machine for everything or keep them separate.
Idea 1 is a beefy machine and Proxmox with HA in a VM, OMV or TrueNAS in another, and maybe a 3rd straight Debian to separate all the Docker stuff. But I don’t know if I want to add the complexity.
Idea 2 would be a beefy machine for straight OMV/TrueNAS and run most stuff there, and then just move HA over to the existing i7 for more breathing room (mostly for Frigate, which I guess could also be split off to another machine).
I hear a lot of great things about Proxmox, but I’m not sold that it’s worth the new complexity for me. And keeping HA (which is “critical” compared to everything else) separated feels like a smart choice. But keeping it on aging hardware diminishes that anyway, so I don’t know.
Just wanting to hear various opinions I guess.
Proxmox is a convenient GUI wrapper around libvirt, but you can do everything without it.
It’s got more than just VM management, but yeah, it’s a frontend for a bunch of other services, that you don’t need Proxmox for.
but you can do everything without it.
yes but why would you? There’s a reason we use GUIs, especially when new to a field (like virtualization).
libvirt comes with some GUI tool of its own, though I haven’t used it. I generally prefer to understand what I’m doing, so I use command line tools or APIs at first. GUIs are a convenience to use later, once it’s clear how they work.
yes but why would you?
Mainly because you’re required to use their distribution, or to build on Debian, which is not to everyone’s liking.
Of course that’s an argument against Proxmox, and not virt-manager and the like. Once you get to know the GUI well enough and start scripting, the GUI becomes less relevant.
This is untrue, Proxmox is not a wrapper around libvirt. It has its own API and its own methods of running VMs.
I did it purely so I could fully back up my server VM and move it to new hardware when I wanted to upgrade. I just have to install Proxmox, attach the NAS, and pull the VM backup. And just like that everything is back to running just as it was before the upgrade! Now just faster and more energy efficient!
I have recently moved non-vm truenas to a new hardware and actually it was a breeze. I just created the backup, disconnected the drives, physically put them into the new server, install the truenas, restored the backup, and it was done. I understand that everyone has different preferences. I’m just saying that it’s easy to move truenas without it being the VM as well.
I like Proxmox too, I’m quite happy that I dove in with it. Just one word of warning - if you mount a drive volume in a container, destroy the container, and restore it from a backup, it wipes out the mounted drive. I, uh, lost a bunch of data that way. Not super important data, but still.
I’m still glad I went with Proxmox though. It makes spinning something up a breeze, and I also went with HA in a VM, another Debian VM for Docker, and a bunch of random LXCs.
Is this separate from a bind mount? Cause that doesn’t happen with bind mounts.
Yeah, not a bind mount. There was a warning, but I was restoring a ton of LXCs and clicked through the warning too fast. My fault, I’m not super sore about it, just warning others as a service to prevent what happened to me!
Fair enough!
If you can replicate it, you should really file a bug report so that the next guy doesn’t lose data.
It tells you it will happen when you use the restore backup feature.
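For anyone wondering what the distinction looks like in practice, here’s a sketch of the two kinds of mount points on an LXC (the container ID, storage name, and paths are made up for illustration):

```shell
# Storage-backed mount point: allocates an 8 GB volume on a Proxmox
# storage. A destroy + restore of the container recreates the volume,
# which is the "wipes out the mounted drive" case above.
pct set 101 -mp0 local-lvm:8,mp=/data

# Bind mount: just maps an existing host directory into the container.
# The data lives on the host and isn't part of the container's backup
# or restore at all.
pct set 101 -mp0 /tank/media,mp=/data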
Proxmox adds a lot of complexity and a nice GUI. If you are fine with using the terminal, there is really not much benefit from Proxmox, and the potential issues from the added complexity are IMHO not worth it. I am not a Proxmox expert though, so take this advice with a grain of salt 😅
Is it decently easy to create and manage VMs and containers with the terminal? I use Proxmox at the moment. Should I switch to Ubuntu Server?
Should I switch to Ubuntu server?
That’s a hard no IMO.
Even if you want to do something other than Proxmox (just use Debian, Fedora, or openSUSE).
It’s not bad from the CLI, you just need to know your commands.
virt-install --name=deb13-vm --vcpus=1 --memory=1024 --cdrom=/tmp/debian-13.0.0-amd64-netinst.iso --disk size=8 --os-variant=debian13
will get you 1 vCPU, 1 GB of RAM, and an 8 GB drive’s worth of Debian. If you don’t specify a path, it will go in home under .local/share/libvirt/images!
You can also then
virsh edit deb13-vm
and you’ll get the XML, where you can edit away.
Personally, I’d rather use the web GUI for most things, but yeah, it’s perfectly doable from the CLI.
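To round that out, a few more virsh subcommands cover the day-to-day lifecycle (a sketch using the same example VM name as above):

```shell
virsh start deb13-vm        # boot the VM
virsh list --all            # see running and stopped domains
virsh console deb13-vm      # attach to the serial console
virsh shutdown deb13-vm     # request a clean ACPI shutdown
virsh undefine deb13-vm --remove-all-storage   # delete it entirely
```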
I would have thought Debian is better than Ubuntu, but I couldn’t find a server version of Debian. Where do I find Debian server or CLI-only Debian?
Debian is suited for server use by default. Just skip the desktop environment step in the installer.
Oh OK, I’ve never installed Debian before, so that’s good to know.
With libvirt it is fairly easy, yes. And you can also install a standalone web GUI like Cockpit, or use the desktop app virt-manager over SSH to do it.
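Both of those options are a one-liner to wire up (hostnames and user here are made up):

```shell
# virt-manager on your desktop, managing a remote libvirt host over SSH:
virt-manager --connect qemu+ssh://user@server.example/system

# Or install Cockpit plus its VM plugin on the server (Debian/Ubuntu),
# then browse to https://server.example:9090
apt install cockpit cockpit-machines
```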
In my opinion, Proxmox is worth it for two reasons:
- Easy high-availability setup and control
- Proxmox Backup Server
Those two are what drove me to switch from KVM, and I don’t regret it at all. PBS truly is a fantastic piece of software.
Upvoted for PBS alone. Incremental backups that are rock solid mean you can completely brick your server and have it back to normal in minutes
Don’t use Proxmox, use Incus. It’s way easier to run and doesn’t give a care about your storage.
No backup utility like PBS though, that’s why I haven’t switched.
Like I said, Incus don’t care about your storage.
I’ve never used PBS, I’ve always just rolled my own. I currently keep 7 daily, 4 weekly, and 4 monthly. My data mounts are all NFSv4.
Edit: isn’t it possible to use PBS with non-Proxmox systems?
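A roll-your-own retention sweep of that shape can be as small as one find call per tier. A minimal sketch, assuming backups are tarballs sorted into per-tier directories (the paths, naming, and day counts are made up, not the commenter’s actual script):

```shell
# prune_tier: delete *.tar.gz files older than $2 days in directory $1.
prune_tier() {
  dir=$1
  days=$2
  find "$dir" -maxdepth 1 -name '*.tar.gz' -mtime "+$days" -delete
}

# Run once per tier, e.g. nightly from cron:
# prune_tier /srv/backups/daily 7
# prune_tier /srv/backups/weekly 28
# prune_tier /srv/backups/monthly 120
```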
Yeah it sounds nice but too much time investment for me.
I can install PBS client on any system but it requires manual setup and scheduling which I don’t want to do. When used with Proxmox that’s all handled for me.
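For the curious, that manual setup is roughly this shape (the repository string, user, and datastore name here are invented for illustration):

```shell
# One-off backup of / from any Linux box to a PBS datastore:
proxmox-backup-client backup root.pxar:/ \
  --repository backup@pbs@pbs.example.org:store1

# The scheduling that Proxmox would otherwise handle for you is on you,
# e.g. a nightly cron entry:
# 0 3 * * * proxmox-backup-client backup root.pxar:/ --repository backup@pbs@pbs.example.org:store1
```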
Also I don’t think Proxmox cares about storage either, I just use ZFS which is completely standard under the hood.
Also I don’t think Proxmox cares about storage either,
Proxmox forces you to add a “storage area”, which is fine, except you must use their mount path of /mnt/pve/ and you must add NFS tuning switches via pve or they don’t work.
Proxmox is great, I used it for 8 years. But it is also opinionated and doesn’t like non-standard configs.
Oh I see what you mean yeah, I’ve never used NFS before with it.
I like Incus a lot, but it’s not as easy to create complex virtual networks as it is with Proxmox, which is frustrating in educational/learning environments.
No.
The one factor that no one seems to have mentioned yet that is key for many of us is LEARNING …
It’s a great way to learn virtualization and containerization
I use it exclusively to run Linux containers, it makes it very convenient to backup and restore as well as replicate environments.
We are now migrating our lab at work away from VMW
It’s great if you need what it offers. Otherwise, it’s simpler to set up something like Ubuntu Server.
I use Proxmox to run my email service, https://port87.com/, because I can have high-availability services that can move around the different Proxmox hosts. It’s great for production stuff.
I also use it to run my seedbox, because graphics in the browser through Proxmox is really easy.
For everything else (my Jellyfin, Nextcloud, etc), I have a server that runs Ubuntu Server and use a docker compose stack for each service.
I had never heard of Port87 before, how do you like it? And I assume you pay no monthly fee by hosting your own domain?
I meant that I made it. :) It’s my own email service, and I run it on Proxmox. So, take this with a grain of salt knowing that I wrote and run it, but I think it’s the best email service by far. I wrote an article about how it works really well for me here:
https://sciactive.com/2023/07/17/the-best-email-for-those-who-struggle-with-organization/
Feel free to sign up for free and try it out. :D
Interesting.
I have an old free email provider that’s just passed the email service to another provider
I’m looking to move because I used to be able to use <anything-at-all>@my-email.domain and I’m not sure I’ll be able to do that anymore
I basically do what you’re doing - using email prefixes for the site I’m registering with… I even caught a company out once when I suddenly started getting spam from that email address. They’d sold my details…
You should check out Port87. :) You wouldn’t need to change any of your addresses if you bring your domain on. Custom domains are $10/month though, so it would cost you more. Hopefully the features would be worth it for you, and if not, you can always migrate again to a different provider. That’s something I love about email. If you have your own domain, you can completely avoid vendor lock-in.
Best thing to do is give it a go and see what shakes out OP. I absolutely love both my Proxmox boxes. In my humble opinion, Proxmox was an easier set up, and the possibilities are endless really. It’s a solid freemium product. Couple it with the extensive Helper Scripts, and Jack’s a doughnut, Bob’s your uncle.
Do you need clusters that can fail over from one machine to another? If yes, Proxmox is good. If no, there are less complex options.
Why rule out proxmox as “complex” just because there is no need for HA??
Because it moves further from a vanilla setup without solving a problem.
I use PVE professionally. I could spend some time bitching about how it handles ssh keys and the fragile corosync cluster management. I could complain about the sloppy release cycle and the way they move fast and break shit. Or all the janky shit they’ve slapped together in PBS. I could go on.
But I actually pay for a license for my homelab. And ya, it is THE thing at work now.
I’ve often heard it said that Proxmox isn’t a great option. But it’s the best one.
If you do try it, don’t bother asking questions here.
Go to the source: https://forum.proxmox.com/
Please elaborate. How does it handle ssh keys? And what is fragile regarding corosync?
SSH key management in PVE is handled in a set of secondary files, while the original debian files are replaced with symlinks. Well, that’s still debian. And in some circumstances the symlinks get b0rked or replaced with the original SSH files, the keys get out of sync, and one machine in the cluster can’t talk to another. The really irritating thing about this is that the tools meant to fix it (pvecm updatecerts) don’t work. I’ve got an elaborate set of procedures to gather the certs from the hosts and fix the files when it breaks, but it sux bad enough that I’ve got two clusters I’m putting off fixing.
Corosync is the cluster. It’s a shared file system that immediately replicates any changes to all members. That’s essentially anything under /etc/pve/. Corosync is very sensitive. I believe they ask for 10ms lag or less between hosts, so it can’t work over a WAN connection. Shit like VM restores or vmotion between hosts can flood it out. Looks fukin awful when it goes down. Your whole cluster goes kaput.
All corosync does is push around this set of config files, so a dedicated NIC is overkill, but in busy environments, you might wind up resorting to that. You can put corosync on its own network, but you obviously need a network for that. And you can establish throttles on various types of host file transfer activities, but that’s a balancing act that I’ve only gotten right in our colos where we only have 1gb networks. I have my systems provisioned on a dedicated corosync vlan and also use a secondary IP on a different physical interface, but corosync is too dumb to fall back to the secondary if the primary is still “up”, regardless of whether it’s actually communicating, so I get calls on my day off about “the cluster is down!!!1” when people restore backups.
Thanks for your answer.
I’ve been using Proxmox since version 2.1 in my home lab and since 2020 in production at work. We have not had issues with the ssh files yet. Also corosync is working fine, although it shares its 10g network with Ceph.
In all that time I was not aware of how the certs are handled, despite the fact I had two official proxmox trainings. Ouch.
Cool.
Here. SSH key issues. There was a huge forum war.
https://forum.proxmox.com/threads/ssh-keys-in-a-proxmox-cluster-resolving-replication-host-key-verification-failed-errors.138102/
But it’s still a thing. That still needs to be fixed by a human. Today that’s me.
Regarding Ceph and corosync on the same network … well, I’m just getting started with that now. I do have them on different vlans, but it’s the same 10gb set of nics. I’m hoping if it gets really lousy, my netadmin can prioritize the corosync vlan. I’ll burn that bridge when I come to it.
EDIT … The linked forum post above leads to the SSH key answer, but it’s convoluted.
Here’s what I put in my own wiki. Get the right key from each server:
cat ~/.ssh/id_rsa.pub
Make sure they match in here. Fix ’em if they don’t:
/etc/pve/priv/authorized_keys
There’s a couple of symlinks to fix too, but this should get it.
From an earlier post I made much like yours, I decided to go with Incus. I’d be fully migrated if real life hadn’t kicked me in the taint for a few weeks.
I shy away from VMs because I prefer having a pool of resources on a machine that can be used as needed instead of being pre-allocated. Pre-allocating CPU and RAM, and doing PCI passthrough for GPUs, wastes already limited resources and is extra effort. Yes, the best practice for production k8s is setting resource requests and limits, but it’s not something I want to bother with when I only have one server.
Just to address the resourcing point…
VM resources can be over-allocated, meaning the hypervisor will try its best to meet their requirements, so you’re not wasting anything and could run more VMs than you have resources for.
Yes, VMs can also be configured to require a fixed amount of resources, and the hypervisor will have to stop them if it can’t provide it, but I just wanted you to know it’s not fixed.
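With libvirt/KVM, for instance, the balloon driver lets you resize a running guest’s memory under a ceiling rather than pinning it (VM name here is just an example):

```shell
# Raise the memory ceiling (takes effect on next boot):
virsh setmaxmem deb13-vm 4G --config

# Resize the live allocation up or down within it via the balloon:
virsh setmem deb13-vm 2G --live

# Check what the guest actually holds:
virsh dommemstat deb13-vm
```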
I need to update my hardware and thought about switching to Proxmox because of all the good things I hear about it. I’m currently on Unraid, but this thing still runs and it’s the same installation from 7 years ago. It has had zero downtime. Multiple drives, VMs, and Docker containers. Easy to use and rock solid.