

That’s certainly true in terms of TrueNAS Core, but FreeBSD itself is quite active (15.0-RELEASE dropped this month), as are the other BSDs.


I’m not sure what it is, but Scale has never thrilled me. I’ve tested it a couple of times and I just didn’t get along well with it. I know Jim Salter (practicalzfs.com) has frequently recommended XigmaNAS as a strong (albeit less pretty) alternative to TrueNAS. I did some tests with that as well and it seemed perfectly fine. In the end I decided that when I migrate off of Core this winter, it’ll be to a bare-metal FreeBSD system. I’m using it as an excuse to better learn that ecosystem and to bone up on Ansible, which I’m using to define all of my settings.
There’s lots of good stuff on YouTube, including from David Bombal and Jeremy Cioara. If you’re more of a listening-while-driving person, years ago the Security Now podcast did a “how the internet works” series that gives a terrific overview of the TCP/IP stack (it’s from 2006, but it’s still very applicable). And if you like to read, Michael Lucas’s “Networking for Systems Administrators” book is excellent.


To my thinking the most important difference would be portability. Using the Synology app would probably make setup somewhat easier, but if you ever decided to leave the Synology ecosystem, migration would likely be more complicated. That by itself isn’t a recommendation one way or another, but it should definitely factor into your planning.


Sure thing—autofs is a pretty cool utility and it works with SMB as well.
If the storage isn’t present for PBS, the backup would fail. There are files inside the directory that PBS will notice are missing.
Mounting the NFS export in the PVE host is the simplest way to get shared storage into an LXC container. You have to fight apparmor to mount NFS or SMB inside the container directly.


No, I used an unprivileged container and I set the permissions on the NFS server to accommodate that.


I use it like I might use unbound or dnsmasq, but I’d think of it more like BIND. It can be used as a recursive or authoritative resolver. It supports all kinds of protocols (DoT, DoH, DNSSEC, etc). Handles zone transfers easily. It’s pretty slick. Definitely worth a look.


If you’d like some separation, one option is to create a VM on TrueNAS for PBS that connects to an NFS export where all the data would be stored.
What I did in this scenario is an LXC container running PBS, which uses a bind mount for storage. That bind mount is populated via an NFS export from my NAS, mounted on the PVE host using autofs so that if it disconnects, it will reconnect as soon as it can.
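In case it’s useful, here’s roughly what that looks like as config. All the paths, the NAS address, and the container ID (100) are placeholders for my setup—adjust to yours:

```
# /etc/auto.master on the PVE host: mount NFS maps under /mnt/nas,
# auto-unmount after 60s idle (autofs remounts on next access)
/mnt/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas: one entry per export; 192.168.1.10 is a stand-in for the NAS
backups  -fstype=nfs4,rw  192.168.1.10:/mnt/tank/backups

# /etc/pve/lxc/100.conf: bind mount the autofs path into the PBS container,
# where it appears as /mnt/datastore
mp0: /mnt/nas/backups,mp=/mnt/datastore
```

The nice part of doing it this way is that the container never touches NFS directly, so there’s no apparmor fight, and autofs quietly re-establishes the mount if the NAS goes away and comes back.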


Technitium is a recursive DNS resolver with a nice web UI. If you’re familiar with Pi-hole or AdGuard Home, you can think of it in that genre, but much more full-featured.


If you haven’t already, check out the Awesome Open Source page’s Booking and Scheduling section.
That metadata is written into the photo by the camera, so Immich may not be able to accommodate this easily. Not sure about Canon specifically, but my Nikon cameras have a memory bank for manual focus lenses. Might be worth checking through your menus.


The two pieces of software have very different topologies.
In very broad strokes: something like FunkWhale uses a server-client model. To get to it, you connect remotely, and you need some way to get there. By contrast, Syncthing behaves as a mesh of nodes. Each node connects directly to the other nodes, and the Syncthing project hosts relays that help introduce the nodes to one another and traverse NAT.
No, you shouldn’t need a paid domain to use your self-hosted FunkWhale server (though I haven’t dabbled with that service in particular). There are a few options.
These all assume that you have a public IP address on your router and not one that’s being NAT-ed by your ISP.
Again, these are very broad strokes, but hopefully it helps point you in a direction for some research.


There’s definitely nothing magic about ports 443 and 80. The risk is always that the underlying service will provide a vulnerability through which attackers could find a way in. Any port presents an opportunity for attack; the security of the service behind it is what makes it safe or not.
I’d argue that long-tested services like ssh, absent misconfiguration, are at least as safe as most reverse proxies. That isn’t to say people won’t try to break in via port 22. They sure will; they try on web ports too.
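To be concrete about “absent misconfiguration”: for an internet-facing sshd, the hardening most people mean comes down to a few lines. A sketch (the username is a placeholder):

```
# /etc/ssh/sshd_config: common hardening for an exposed sshd
PasswordAuthentication no    # key-based auth only, kills credential stuffing
PermitRootLogin no           # log in as a regular user, escalate locally
AllowUsers alice             # 'alice' is illustrative; allowlist real users
```

With keys-only auth, the endless port-22 scanning noise is mostly just that: noise.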


I’m not sure if this is what you’re after, but it sounded to me like you were describing monitoring. Might be worth checking out LibreNMS, Zabbix, or Checkmk. Those would give you a good overview of the health of your stuff and keep track of what’s where.


It’s not exactly a single tool, but torsocks enables pretty much what you’re describing. The syntax would be something like `torsocks curl $url`.


Could you share your /etc/network/interfaces file and the config file for the VMs (from /etc/pve/qemu-server)? Hope some of that helps.
I’m not familiar with Zurg, but the WebDAV connection makes me recall: doesn’t LXC require that the FUSE kernel module be loaded in order to use WebDAV?
I’ve also seen it recommended that WebDAV be set up on the host and then the mount points bind mounted into the container. Not sure if any of that helps, but maybe it’ll lead you somewhere.
That’s a great tip. I’d completely forgotten you can use telnet for that. Thanks!
Thanks for the response. I really should just dive in, but I’ve got this nagging fear that I’m going to forget about some DNS record that will bork my entire mail service. It’s good to hear about some working instances that people are happy with.
If I remember correctly, that was largely in consideration of the large corpus of docker-packaged projects that could be used as a pre-built app ecosystem. That makes a lot of sense for anyone who really wants an appliance-like all-in-one system with minimal setup.