That’s a great tip. I’d completely forgotten you can use telnet for that. Thanks!
Thanks for the response. I really should just dive in, but I’ve got this nagging fear that I’m going to forget about some DNS record that will bork my entire mail service. It’s good to hear about some working instances that people are happy with.
Tainted in that the kernel and ZFS have different licenses. Not a functional impairment. I have no way to check a system not using ZFS. For my use case, Debian plus ZFS are PVE’s principal features.
I have a Synapse server running in Docker on a VPS and it’s been pretty reliable. At my office I use it as sort of a self-hosted Slack replacement. For our use case I don’t have federation enabled, so no experience on that front. It’s a small office and everyone here uses either Element or FluffyChat on desktop and mobile. It runs behind an nginx reverse proxy, and I’ve got SSO set up with Authentik, which has worked very well. Happy to share some configs if that would be useful.
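The reverse proxy piece is the simplest part to sketch out. Roughly, the relevant nginx server block looks like this, assuming Synapse listens on localhost:8008 (the hostname is a placeholder and the TLS cert lines are omitted):

```nginx
server {
    listen 443 ssl;
    server_name matrix.example.com;            # example hostname
    # ssl_certificate / ssl_certificate_key lines omitted here

    location ~ ^(/_matrix|/_synapse/client) {
        proxy_pass http://127.0.0.1:8008;      # Synapse's default HTTP listener
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        client_max_body_size 50M;              # room for media uploads
    }
}
```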
Have you by any chance documented your PMG setup? I’m also a very happy Mailcow user, and spinning up PMG is something I’ve been meaning to tackle for years so I can implement archiving with mailpiler, but I’ve never really wrapped my head around how everything fits together.
Ceph isn’t installed by default (at least it hasn’t been any time I’ve set up PVE) and there’s no need to use ZFS if you don’t want to. It’s available, but you can go right ahead and install the system on LVM instead.
Unbound can query the root DNS servers directly, but it’s also commonly used as a forwarding resolver, where it just passes queries to an upstream server, similar to `systemd-resolved`. I use Unbound network-wide, but I have it forwarding to 9.9.9.9 to take advantage of their filtering.
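The forwarding setup is only a few lines of Unbound config. Something like this (the file path and network addresses are just examples):

```
# e.g. /etc/unbound/unbound.conf.d/forward.conf
server:
    interface: 127.0.0.1            # or a LAN address if serving the whole network
    access-control: 192.168.1.0/24 allow

forward-zone:
    name: "."                       # forward everything
    forward-addr: 9.9.9.9           # Quad9
    forward-addr: 149.112.112.112   # Quad9 secondary
```

Leave the forward-zone out entirely and Unbound will do full recursion against the roots instead.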
You may already have a local DNS caching mechanism on your computer. I think Ubuntu uses `systemd-resolved` by default (it does on my desktops anyway). If you run `dig`, it’ll show lookups coming from 127.0.0.53. With that in place, your local machine is caching lookup results, and anything it doesn’t know it forwards to the network’s resolver (which it gets via DHCP, usually).
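Quick way to check (the SERVER line shown in the comment is what it typically looks like on an Ubuntu desktop):

```sh
# which server is actually answering your lookups?
dig example.com | grep SERVER
# ;; SERVER: 127.0.0.53#53(127.0.0.53)   <- systemd-resolved's local stub

# and what systemd-resolved forwards to upstream
resolvectl status
```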
I set up an old thin client with Debian and lxqt to connect to a VM on Proxmox. Got the idea from an Apalrd’s Adventures video about VDI. It worked pretty well on a decent network, but it really suffered on high latency networks.
I think you can do the same with LUKS (https://www.cyberciti.biz/hardware/cryptsetup-add-enable-luks-disk-encryption-keyfile-linux/) if that’s your preferred route.
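The keyfile part of that article boils down to something like this (device path and keyfile location are just examples):

```sh
# generate a keyfile and add it as an additional LUKS key
dd if=/dev/urandom of=/root/luks.key bs=512 count=8
chmod 600 /root/luks.key
cryptsetup luksAddKey /dev/sdb1 /root/luks.key
# then point /etc/crypttab at the keyfile so the volume unlocks at boot
```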
Another idea for you: if you use ZFS for the install, check the Debian how-tos for OpenZFS or zfsbootmenu and you’ll find directions for an encrypted installation. You’ll be able to specify the path to a key file, which you can keep on a thumb drive. When the machine boots up, it’ll see the thumb drive and decrypt the zpool automatically; yank the thumb drive and it won’t (back up the key, of course).
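Very rough sketch of the keyfile idea (the guides handle the bootloader details; pool name, device, and paths here are just examples):

```sh
# raw 32-byte key stored on the thumb drive
dd if=/dev/urandom of=/media/usbkey/zfs.key bs=32 count=1

# create the pool with native encryption keyed off that file
zpool create -O encryption=aes-256-gcm -O keyformat=raw \
    -O keylocation=file:///media/usbkey/zfs.key rpool /dev/sdX

# at boot, with the thumb drive mounted, the key loads and the datasets unlock
zfs load-key -a && zfs mount -a
```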
The answers for this will vary widely, but the thing I think many people overlook when planning out expenses is a plan to back up the data. Having the file server is great, but start planning now for what to do when it breaks. Where will backup copies of your data live and how will you restore it?
As to the server itself, the hardware completely depends on your desires. Some like second-hand enterprise gear; others prefer purpose-made home NAS devices or a DIY rig. On the software side, my thought is to keep it simple if you’re starting out. There are good ready-made options (TrueNAS, XigmaNAS, openmediavault, unraid, etc.). They’re all great and they help you get up and running quickly. They also have a lot of tempting knobs to turn that can cause unexpected problems if you don’t fully understand them.
To my mind file servers have to be reliable above all else, so I’d avoid running anything besides file sharing on your server until it’s running like a top and then only add more layers one at a time.
Sorry for all the philosophy, but I really do think this is a common stumbling block for people getting started.
You ever see those Wired videos where they talk about a concept on five different levels ranging from beginner to expert?
The first-level answer is likely that, yes, you’re reasonably secure in your current setup. That’s true, but it’s also really simplified, and it skips a lot of important considerations. (For example, “secure against what?”) One of the first big realizations that hit me after I’d been running servers for a little while and trying to chase security was the idea of a threat model. What protects me from a script kiddie trying to break into one of my web servers won’t do much for me against a phishing attack.
The more you do this, though, the more I think you’ll realize that security is more of a process than an actual state you can attain.
I think it sounds like you’re doing a good job moving cautiously and picking up things at each step. If the next step is remote access, you’re in a pretty good position for a mesh VPN like Tailscale or Netbird or ZeroTier. They’ll help you deal with the CGNAT, and each one gives you a decent growth path: start out with a free tier and, if you need it in the future, either buy into the product or self-host it.
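Getting started is pretty painless too. With Tailscale, for example, it’s roughly this on each machine (no port forwarding needed, which is what makes CGNAT a non-issue):

```sh
curl -fsSL https://tailscale.com/install.sh | sh   # their official install script
sudo tailscale up                                  # prints a login URL to join your tailnet
```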
Keep an eye on lowendbox.com’s hosting offers. There’s some junk to wade through, but it sounds like exactly what you’re after.
It sure will handle a remote VPS; it’s just not as automatic to set up as it is with PVE.
I put this off for a long time, but I finally did it this weekend.
Basically, you install the `proxmox-backup-client` utility and then run it via `cron` or a systemd timer to do the backup however often you want. You’re responsible for getting the VPS to communicate with your backup server (like pretty much any self-hosted service), so some sort of VPN between them would be good. I used NetBird for that part and I have a policy that allows access from the client to PBS only on TCP port 8007.
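The script itself ends up being tiny. Something along these lines, where the datastore name, user, and VPN address are all examples from my head, not anything canonical:

```sh
#!/bin/sh
# minimal sketch of a nightly backup script on the VPS
export PBS_REPOSITORY="backup@pbs@100.64.0.5:store1"   # PBS over the VPN, port 8007
export PBS_PASSWORD="use-an-api-token-here"            # better: read from a root-only file
proxmox-backup-client backup root.pxar:/               # archive the root filesystem

# then schedule it, e.g. in /etc/cron.d:
# 0 2 * * * root /usr/local/sbin/pbs-backup.sh
```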
I’ve been quite happy with Proxmox Backup Server. I’ve had it running for years and it’s been pretty solid for all my VMs/containers. There’s also a bare metal client, which I’m adding to a couple cloud VPS machines this weekend. We’ll see how that goes.
Also, since it’s just Debian under the hood, I also use the PBS host as a replication target for my ZFS datasets via sanoid/syncoid.
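The syncoid side of that is basically a one-liner; sanoid on the source takes the snapshots, and syncoid ships them over SSH (hostnames and dataset names here are just examples):

```sh
syncoid tank/vms root@pbs.example.com:backup/vms
```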
I’m a huge Debian fan, but I’d say everyone should give openSUSE a shot. It’s a well thought out distro that doesn’t get enough love.
I’m not familiar with Zurg, but the WebDAV connection reminds me: doesn’t LXC require FUSE to be available to the container in order to mount WebDAV shares?
I’ve also seen it recommended that WebDAV be set up on the host and the mount points bind-mounted into the container. Not sure if any of that helps, but maybe it’ll lead you somewhere.
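If it helps, both approaches are one-liners on the Proxmox host (container ID and paths are examples):

```sh
# option 1: let the container use FUSE itself (e.g. for davfs2)
pct set 101 --features fuse=1

# option 2: mount the WebDAV share on the host, then bind it into the container
pct set 101 -mp0 /mnt/webdav,mp=/mnt/webdav
```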