

Trying to move my fully populated 42U rack to Europe.
So far I’ve been doing it piecemeal, but that will have to accelerate at some point.
Worst part about docker: insane volume management.
This thread:
Jails make docker look like windows 11 with copilot.
That’s worse.
Fail2ban isn’t an application like Jellyfin, it’s a security framework that should be built into the gateway router.
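For anyone setting it up anyway, the whole thing is a few lines of jail config; a minimal sketch, assuming the stock sshd jail and default log paths:

    # /etc/fail2ban/jail.local -- minimal sshd jail, values are just examples
    [sshd]
    enabled  = true
    port     = ssh
    maxretry = 5
    findtime = 600
    bantime  = 3600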
Not really: you need a license, though you can host OpenVPN on TCP 443, but chances are they’ll try to track you down and make your life unpleasant.
When I was there I bounced through a VPS in HK; that’s probably harder now.
There’s a mass rename button somewhere.
Been using nginx; probably should change, just because my mail uses Letsencrypt while my http uses bought certs.
Letsencrypt has gone far enough that we can just rely on it now apparently.
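If you do consolidate on Let’s Encrypt, the nginx plugin handles issuance and renewal; a sketch, assuming certbot is installed from the distro and example.com is a placeholder domain:

    # get a cert and wire it into the nginx vhost (example.com is a placeholder)
    certbot --nginx -d example.com -d www.example.com
    # renewals run from the packaged timer/cron; verify with a dry run
    certbot renew --dry-run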
In my job? Yes.
At home? God no.
I make sure I can recover data when things go wrong, but otherwise my recovery path is redeploying quickly.
I’m gonna have to donate then.
It pushes stuff out when it’s really, really cold, so for instance init services and libs that have basically never been touched since boot but still technically need to be in memory.
They might have been pushed out because the page cache thought it had something more interesting, or if you have VMs, because the system wanted to make some huge pages.
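If you want to see what actually got pushed out, the per-process numbers are in /proc; a quick sketch, assuming a Linux box with a shell and awk:

    # which processes have pages sitting in swap (values are kB from /proc/<pid>/status)
    for f in /proc/[0-9]*/status; do
        awk '/^Name|^VmSwap/ {printf "%s ", $2} END {print ""}' "$f"
    done | awk '$2+0 > 0' | sort -k2,2nr | head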
It’s good, but be aware you want to stick to LTS kernels or at least don’t upgrade casually.
Arch is the worst for this, ubuntu and debian are better but still get hit.
https://forums.opensuse.org/t/zfs-on-tumbleweed-how-to-keep-a-working-kernel-version/151323
https://github.com/openzfs/zfs/issues/15759
https://www.reddit.com/r/archlinux/comments/137pucy/zfs_not_compatible_with_kernel_63/
Hit this recently on an Arch build; switched to linux-lts and it worked, but basically once every year or so the ABI breaks and ZFS is dead for 3-6 months on github.com/torvalds/linux@master. Just FYI.
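If you go that route, it also helps to keep the package manager from dragging a too-new kernel back in; a sketch, assuming Arch with zfs-dkms from the archzfs repo/AUR, and Debian’s amd64 metapackage on the other side:

    # Arch: run the LTS kernel and build ZFS against it
    pacman -S linux-lts linux-lts-headers zfs-dkms
    # and keep a casual -Syu from outrunning OpenZFS, in /etc/pacman.conf:
    #   IgnorePkg = linux linux-headers

    # Debian/Ubuntu: hold the kernel metapackage until OpenZFS catches up
    apt-mark hold linux-image-amd64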
FYI, zfs is pretty fucking fragile, it breaks a lot, especially if you like to keep your kernel up to date. The kernel abi is just unstable and it takes months to catch up.
Which is part of why I don’t trust zfs on root.
Worst case you can sometimes recover with zfs-fuse.
NFS, it’s good enough, and is how everyone accesses it. I’m toying with Ceph or some kind of object storage, but that’s a big leap and I’m not comfortable with it yet.
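For reference, the export side is only a couple of lines; a sketch with a placeholder dataset and services subnet:

    # /etc/exports -- dataset path and subnet are placeholders
    /tank/media  10.0.20.0/24(rw,sync,no_subtree_check)

    exportfs -ra    # reload exports without restarting nfsd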
ZFS snapshots to another machine with much less horsepower but a similar storage array.
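Mechanically that’s just send/receive over ssh; a sketch with placeholder pool, snapshot and host names:

    # initial replication of a recursive snapshot to the backup box
    zfs snapshot -r tank@2024-06-01
    zfs send -R tank@2024-06-01 | ssh backup-host zfs receive -Fdu backup/tank

    # later runs only ship the delta between the last two snapshots
    zfs snapshot -r tank@2024-07-01
    zfs send -R -I tank@2024-06-01 tank@2024-07-01 | ssh backup-host zfs receive -Fdu backup/tank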
Debian boots off like a 128GB SATA SSD or something, just something mindless that makes it more stable; I don’t want to f with ZFS root.
My pool isn’t encrypted, I don’t consider it necessary, though I’ve toyed with it in the past. Anything sensitive I keep on separate USB keys and duplicate them, and I use LUKS.
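The USB-key part is a handful of cryptsetup commands; a sketch, assuming the stick shows up as /dev/sdX (placeholder) and you’re fine wiping it:

    # WARNING: destroys everything on /dev/sdX (placeholder device name)
    cryptsetup luksFormat /dev/sdX
    cryptsetup open /dev/sdX sensitive
    mkfs.ext4 /dev/mapper/sensitive
    mount /dev/mapper/sensitive /mnt
    # ...copy the sensitive files, then:
    umount /mnt && cryptsetup close sensitive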
I considered virtiofs, it’s not ready for what I need, it’s not meant for this use case and it causes both security and other issues. Mostly it breaks the demarcation so I can’t migrate or retarget to a different storage server cleanly.
These are good ideas, and would work. I use zvols for most of this; in fact I think I pass through an NVMe drive to FreeBSD for its jails.
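A zvol per guest disk is one command; a sketch with placeholder names and sizes:

    # carve a 100G block device out of the pool for a VM
    zfs create -V 100G -o volblocksize=16k tank/vm-freebsd0
    # then point QEMU/libvirt at /dev/zvol/tank/vm-freebsd0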
Docker fucks me here, the volume system is horrible. I made an LXC-based system with Python automation to bypass this, but it doesn’t help when everyone releases as Docker.
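One way to keep the damage contained is to skip named volumes entirely and pin everything to explicit bind mounts on a dataset you control; a compose sketch with placeholder image and host paths:

    # docker-compose.yml -- image and host paths are placeholders
    services:
      app:
        image: jellyfin/jellyfin
        volumes:
          - /tank/appdata/jellyfin:/config   # explicit host path on ZFS, snapshot-able
          - /tank/media:/media:ro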
I have a simple boot drive for one reason: I want nothing to go wrong with booting, ever. Everything after that is negotiable, but the machine absolutely has to show up.
It has a decent UPS, but as I mentioned earlier, I live in San Jose and have fucking PG&E, so weeks without power aren’t fucking unheard of. I’m away from home, so it has to come back after the fairly regular outages. I have some leeway, but my entire infrastructure is on it, so not much.
ZFS on Debian on bare metal with an NFS server. Edit: and it hosts the worker VMs
VLAN for services with a routed subnet (interface sketch after this list)
SR-IOV ConnectX-4 with 1 primary VM running FreeBSD and basically all my major services in their own jails. Won’t go into details, but it has like 20 jails and runs almost everything. (Had full VNET jails for a while, which was really cool, but performance wasn’t great.)
1 VM for external nginx and BIND on Debian, on an isolated subnet/VLAN and DMZ for exposed services
1 VM for Mail-in-a-Box on the DMZ subnet/VLAN
1 Debian VM on the services VLAN/net for apps that don’t play well with FreeBSD, mostly Docker stuff. I do not like this VM; it’s basically unclean and mostly isolated.
A few other VMs for stuff.
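The services VLAN from above is just a tagged subinterface on the Debian host; a sketch assuming ifupdown with the vlan package, with eno1 as the NIC and VLAN 20 / 10.0.20.0/24 as placeholders:

    # /etc/network/interfaces.d/vlan20 -- NIC, VLAN id and subnet are placeholders
    auto eno1.20
    iface eno1.20 inet static
        address 10.0.20.1
        netmask 255.255.255.0
        vlan-raw-device eno1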
It’s a Dell R730 with 2x 2697s (or 2698s? 20c/40t each) and 512GB.
12x 16TB HGST H530s with 2 NVMe drives and 2 SATA SSDs; somewhere in there is a SLOG and an L2ARC.
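For reference, the SLOG and L2ARC attach with one zpool command each; device names are placeholders:

    # dedicate one NVMe partition to the SLOG and another device to L2ARC
    zpool add tank log /dev/nvme0n1p1
    zpool add tank cache /dev/nvme1n1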
Can’t figure out how to fit a decent GPU in that chassis, so currently it’s living on my dual-Rome workstation. This system is due for an upgrade; thinking about swapping the workstation for a much lighter one and pushing the work to the server, while moving the storage to a dedicated system, but not there yet.
Love FreeBSD though. I don’t use it as my daily driver; tried it a bit and it worked, but there was just enough friction that it didn’t stick. FreeBSD has moved on and so have I, so it’s worth a shot again.
Decent I/O, but nothing to write home about; I think it saturates the 10G, but only just. I have gear for full 100G (I do a LOT of chip startups, and worked at a major networking chip firm for a while), but it takes a lot more power, and I have PG&E, so I can’t justify it till I can seriously saturate it.
Also I’m in the process of moving to Europe; built a weak network here and linked it via WireGuard, but shit is expensive here and I’m not sure how to finish the move just yet, so I’m basically 50/50, including time at work in the Valley.
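The site link itself is a short config on each end; a sketch with placeholder keys, tunnel addresses and the remote LAN subnet:

    # /etc/wireguard/wg0.conf on one side -- keys, endpoint and subnets are placeholders
    [Interface]
    Address = 10.99.0.1/24
    ListenPort = 51820
    PrivateKey = <this-side-private-key>

    [Peer]
    PublicKey = <other-side-public-key>
    AllowedIPs = 10.99.0.2/32, 10.0.50.0/24   # peer tunnel IP plus its LAN
    Endpoint = eu.example.net:51820
    PersistentKeepalive = 25

Bring it up with wg-quick up wg0 (or the wg-quick@wg0 systemd unit).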
ZFS, hands down. It doesn’t even begin to hurt the SSDs and it’s basically the best choice; just try not to fill the volumes all the way or it starts thrashing like crazy.
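One way to enforce that headroom is to park a reservation on an empty dataset so the pool can never be filled to the brim; a sketch with placeholder names and sizes:

    # keep roughly 10% of the pool permanently free (names/sizes are placeholders)
    zfs create -o mountpoint=none tank/reserved
    zfs set reservation=1.5T tank/reserved
    zfs list -o name,used,avail tank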
ZFS has encryption, but LUKS is fine too.
I’ve run RAIDZ2 for well over a decade, never had data loss that wasn’t extremely my fault, and I recovered from that almost immediately from a backed-up snapshot.
Yeah, there were other countries to ban, but those 2 cut my attacks down 90%.
Also consider a honeypot that triggers when anyone tries to ssh it at all.
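A cheap version of that is a decoy port nothing legitimate should ever touch, where any connection attempt drops the source straight into a block set; a sketch assuming ipset/iptables, with the port number as a placeholder (real sshd listens elsewhere):

    # anything probing the decoy port gets its source IP banned for a day
    ipset create honeypot hash:ip timeout 86400
    iptables -A INPUT -p tcp --dport 2222 -j SET --add-set honeypot src
    iptables -I INPUT -m set --match-set honeypot src -j DROP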
3090 has 24gb and rolls the 38b models beautifully.
There are ip lists that let you iptables drop all traffic from China and Russia.
Strongly recommend.
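Mechanically it’s an ipset fed from one of the published country CIDR lists plus a single iptables rule; a sketch with a placeholder list file:

    # load a downloaded CIDR list (one prefix per line, filename is a placeholder)
    ipset create geoblock hash:net
    while read -r net; do ipset add geoblock "$net"; done < cn-ru-aggregated.netset
    iptables -I INPUT -m set --match-set geoblock src -j DROP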
I mean, they added a ton of features, especially minor or niche ones, but a lot of amazing ones like KDE Connect too.
But what makes KDE the best is that the features don’t get in the way of core functionality anymore, the basic DE is always safe and they generally layer stuff on such that it doesn’t break anything.
So basically the opposite of most of modern software nowadays.
Meh, it’s going.
I might do the rest at once, it’s about setting up a link between the two first (wireguard etc), then bringing servers over and up.
What terrifies me is trying to move my monitors, those are massive and beautiful with 0 room for error.
Used metal carriers with foam inserts for storage, but the balance is still back home.
Fiber in Europe is glorious, so that helps.