Elvith Ma'for

Former Reddfugee, found a new home on feddit.de. Server errors made me switch to discuss.tchncs.de. Now finally @ home on feddit.org.

Likes music, tech, programming, board games and video games. Oh… and coffee, lots of coffee!

I ❤️ Unicode!

  • 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 21st, 2024

  • For me it was usually that the config I need to serve a site with TLS is quite short, there are sensible defaults, and many things (e.g. websockets) just work without further declaration. That’s especially important if you want to host a container whose documentation on reverse proxy usage is lacking, as most things “just work fine” for me.

    And using a simple import directive, you can even replicate ‘sites-available’ and ‘sites-enabled’ behaviour. My standard Caddyfile just sets up the log file format and location and basic Let’s Encrypt values, then it imports /foo/bar/sites-available/*. Every deployment/container now has its own Caddyfile that just gets linked there.
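    A minimal sketch of that layout (paths, domain and upstream port here are placeholders, not my actual values):

    ```caddyfile
    # /etc/caddy/Caddyfile - global options, then one file per deployment
    {
        email admin@example.com        # Let's Encrypt account mail
        log {
            output file /var/log/caddy/access.log
            format json
        }
    }

    # pull in every linked per-site Caddyfile, 'sites-enabled' style
    import /foo/bar/sites-available/*

    # /foo/bar/sites-available/jellyfin - an example per-site file
    jellyfin.example.com {
        reverse_proxy localhost:8096   # TLS is handled automatically
    }
    ```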






  • I only host Nextcloud, in an old setup (read: pure PHP, MariaDB, Apache - no Docker, etc.).

    That server is set up to be snapshotted daily. There’s also a script running about 30 min before each snapshot that dumps the database to disk (as otherwise the snapshot might contain a random in-flight state of the database). It’s not perfect, but it works - also because all of this happens at night, when I’m not using the system, so chances are really low that the disk snapshot and the database dump in it are desynchronized too much. (A rough sketch of that dump step follows below.)

    I do not know what the best practice for a modern Nextcloud setup with Docker is, or how to handle the other two…
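    For what it’s worth, that dump step is not much more than this (paths, database name and credentials file are placeholders):

    ```sh
    #!/bin/sh
    # Runs nightly ~30 min before the snapshot, e.g. from cron:
    #   30 2 * * * /usr/local/bin/nextcloud-db-dump.sh
    set -eu

    BACKUP_DIR=/var/backups/nextcloud
    mkdir -p "$BACKUP_DIR"

    # --single-transaction gives a consistent InnoDB dump without
    # locking the live database for the whole run
    mysqldump --defaults-extra-file=/root/.my.cnf \
        --single-transaction --quick nextcloud \
        > "$BACKUP_DIR/nextcloud-$(date +%F).sql"
    ```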







  • Came to suggest this. I ran into the same problem when I tried to host Jellyfin at home, and I was also fed up with all those certificate warnings that popped up depending on which device I used. Since I was already using pihole in my home network, I just went and looked at all the DNS plugins for certbot to learn which providers allow for easy DNS challenges. Then I researched a bit and stumbled upon a provider that was running a sale - so I got a domain for less than 5 bucks/year.

    I set the public A record to 127.0.0.1 and configured certbot to use their API. This domain is now used exclusively inside my network; I just added DNS entries for several subdomains in pihole, so that it works on every device at home (e.g. jellyfin.example.com / dockerhost.example.com / proxmox.example.com / …). The result is roughly the setup sketched below.

    When I’m away, I shouldn’t be able to resolve the domain, and even if DNS were hijacked, the TLS certificate would protect me from connecting to $randomServices. Also, my router is less restricted, which means I can just use its VPN server to connect directly to my home network if I need to access my server or troubleshoot things while away.
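    Roughly, the moving parts look like this - the Cloudflare plugin below only stands in for whichever provider’s certbot DNS plugin you end up using, and all names and addresses are examples:

    ```sh
    # wildcard cert via the DNS-01 challenge, no open ports needed
    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/dns-credentials.ini \
      -d example.com -d '*.example.com'

    # Pi-hole side: local records (Local DNS settings, or /etc/pihole/custom.list)
    # 192.168.1.20  jellyfin.example.com
    # 192.168.1.20  dockerhost.example.com
    # 192.168.1.30  proxmox.example.com
    ```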



  • If done correctly, those may only be open from the internet, but not from the local network, while SSH may only be available from your local network - or maybe only from the fixed IP of your PC. Other services may only be reachable when coming from the correct VLAN (assuming you did segment your home network). Maybe your server can only access the internet, but not the home network, so that an attacker has a harder time spreading into your home network (note: that’s only really meaningful if it’s not a software firewall on that same server…)
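    A rough nftables sketch of that idea (interface names, addresses and ports are made up, and the forward chain belongs on the router/firewall rather than on the server itself):

    ```nft
    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept

            # SSH only from one fixed LAN address
            ip saddr 192.168.10.50 tcp dport 22 accept

            # the exposed service is reachable from the WAN side only
            iifname "wan0" tcp dport 443 accept
        }

        chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept

            # the server VLAN may reach the internet, but not the home VLAN
            iifname "vlan-servers" oifname "wan0" accept
        }
    }
    ```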


  • Instead of thinking in layers, you should think of Swiss cheese. Each slice of cheese has some holes - think of weaknesses in the defense (or intentional holes, as you need a way to connect to the target legitimately). Putting several slices back to back (in random order and orientation) means that the way through all of them is not a simple straight path, but that you need to work around each slice.