• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: November 5th, 2023

  • Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

    Say you deployed two different docker compose apps each with their own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).

    This also makes cleanup easier. The app’s documentation can just say “docker compose down -v” and you are done, instead of listing a bunch of directories that need to be cleaned up.

    Those lingering directories can also cause problems for users who want a clean start when their app is broken: with a bind mount, the broken database won’t have been deleted for them when they bring the services back up.

    All that said, I very much agree that when you go to deploy a Docker service you should consider changing the named volumes to standard bind mounts, for a few reasons.

    • When running production applications I don’t want the volumes to be so easy to clean up. A little extra protection from accidental deletion is handy.

    • The default location for named volumes doesn’t work well with more advanced partitioning strategies, e.g. if you want your database volume on a different partition than your static web content.

    • This one is older and maybe more personal preference at this point, but back before the Docker overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem. So I just personally want to keep anything persistent out of that directory.

    So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.

    Systems administrators running those applications should know and understand the docker compose file well enough to change those settings and make them production ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.
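
    A minimal compose sketch of the difference (the service name, image tag, and host path here are just placeholders):

    ```yaml
    services:
      db:
        image: mariadb:11
        volumes:
          # Named volume (what upstream docs usually ship): Docker manages the data
          # under /var/lib/docker/volumes and "docker compose down -v" removes it.
          - db-data:/var/lib/mysql
          # Bind mount alternative for production: data lives on a path you choose
          # and survives "docker compose down -v".
          # - /srv/myapp/mariadb:/var/lib/mysql

    volumes:
      db-data:
    ```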


  • For shared media like cable and wireless, service is often asymmetrical so that everyone gets better speeds, not so the provider can hold you back.

    Take a wireless ISP, for instance, with 20 customers on a single access point. Like a walkie-talkie, a radio can’t transmit and receive at the same time, and no two customers can be transmitting at the same time either.

    So to get around this problem TDMA (time division multiple access) is used. Basically time is split into slices and each user is given a certain percentage of those slices.

    Since the AP is transmitting to everyone it usually gets the bulk of the slices like 60+%. This is the shared download speed for everyone in the network.

    Most users don’t upload much, so giving the user radios slices equal to the AP’s would be a massive waste of airtime. And since there are 20 customers on this theoretical AP, every 1 Mbit cut from each user’s upload speed is 20 Mbit added to the total download capacity for anyone downloading on that AP.

    So let’s say we have an AP and clients capable of 1000 Mbit. With 20 users and 1 AP, if we wanted symmetrical speeds we would need 40 equal slots: 20 slots on the AP (one for each user’s downloads) and 1 slot for each user to upload back. Every user gets 25 Mbit download and 25 Mbit upload.

    Contrast that with asymmetrical. Let’s say we do an 80/20 AP/client airtime split. We end up with 800 Mbit of shared download for everyone and 10 Mbit of upload per user.

    In the worst-case scenario every user is downloading at the same time, meaning you get about 40 Mbit of that 800, still quite an improvement over 25 Mbit. And if some of those people aren’t home or aren’t active at the time, that’s that much more for those who are.

    I think the slice sizes are more dynamic on modern systems, where the AP adjusts the user radios’ slices on the fly so that idle clients don’t leave a bunch of dead air, but they still need a little time allocated to them for when data does start to flow.

    A quick Google search suggests that DOCSIS cable modems use TDMA too, so this all likely applies to cable users as well.
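
    Written out, the airtime math above (using the assumed 1000 Mbit radio capacity and 20 users) looks like this:

    ```latex
    % Symmetric: 40 equal slots (20 for the AP's downloads, 1 upload slot per user)
    \[ \frac{1000~\text{Mbit}}{40~\text{slots}} = 25~\text{Mbit down and 25 Mbit up per user} \]

    % Asymmetric 80/20 AP/client airtime split
    \[ 0.8 \times 1000 = 800~\text{Mbit shared download}, \qquad
       \frac{0.2 \times 1000}{20} = 10~\text{Mbit upload per user} \]

    % Worst case: all 20 users downloading at once
    \[ \frac{800~\text{Mbit}}{20~\text{users}} = 40~\text{Mbit each} \]
    ```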


  • The problem is that on top of the pins occasionally not making good contact on these new connectors, Nvidia has been cheaping out on how power is delivered to the card.

    They used to have three shunt resistors that the card could use to measure voltage drop. That meant the six power pins were split into pairs, and if any pair wasn’t making contact the card could detect it and refuse to power up.

    A single pin in each of those pairs could still fail to make contact, forcing the remaining pin in each pair to carry double its rated current. Losing one pin in every pair is an unlikely worst case, but a single pin failing in a single pair could be fairly common.

    But on the 40 series they dropped to two shunt resistors. So instead of three pairs, the card can only monitor two bundles of three wires each, meaning it can only detect that the plug isn’t seated correctly if all three wires in the same bundle are disconnected.

    You could theoretically have only two of the six power pins plugged in and the card would think everything is fine, with each of the two remaining pins forced to carry three times its normal current.

    And on the 5090 FE they dropped down to one shunt resistor… So five of the six pins can be disconnected and the card thinks everything is fine, forcing six times the current down a single wire.

    https://www.youtube.com/watch?v=kb5YzMoVQyw

    So the point of these fused cables is to work around a lack of power monitoring on the card itself with cables that destroy themselves instead of melting the connector on your $2000 GPU.
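
    As a rough illustration of the numbers (assuming a 600 W draw over the connector’s 12 V pins, which is in the ballpark for these cards):

    ```latex
    % Total current for a 600 W load on the 12 V rail
    \[ I_{\text{total}} = \frac{600~\text{W}}{12~\text{V}} = 50~\text{A} \]

    % Spread across all six 12 V pins, as intended
    \[ \frac{50~\text{A}}{6~\text{pins}} \approx 8.3~\text{A per pin} \]

    % Worst case with a single shunt resistor: one pin carrying everything
    \[ 50~\text{A through one pin} = 6 \times \text{the normal per-pin load} \]
    ```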


  • Depending on how you set up your reverse proxy, it can reduce random scanning/login attempts to basically zero. The point of a reverse proxy is to act as a proxy, a sort of web router, and to validate that the HTTP requests are correctly formatted.

    For the routing, depending on what DNS name/path the request comes in with, it can route to different backends. So you can say that app1.yourdomain.com is routed to the internal IP address of app1, and app2.yourdomain.com goes to app2. You can also do this with paths if the applications can handle it, like yourdomain.com/app1.

    When your client makes a request the reverse proxy uses the “Host” header or the SNI string that is part of the TLS connection to determine what certificate to use and what application to route to.

    There is usually a “default” backend for any request that doesn’t match any of the names of your backend services (like a scanner blindly trying to access your IP). If you disable the default backend, or redirect default requests to something you know is secure, any attacker scanning your IP for vulnerabilities gets their requests rejected. The only way they can even try to hit your service is to know its correct DNS name.

    Some reverse proxies (Traefik, HAProxy) have options to reject requests before the TLS negotiation has even completed. If the SNI string doesn’t match, the connection is simply dropped; it doesn’t even bother to send a 404/5xx error. This can prevent an attacker from gathering information about the reverse proxy itself that might be helpful in attacking it.

    This is security by obscurity which isn’t really security, but it does reduce your risk because it significantly reduces the chances of an attacker being able to find your applications.

    Reverse proxies also have a much narrower scope than most applications. Your services are running a web server alongside your application, but is Jellyfin’s built-in web server secure? Could an attacker send invalid data in headers/requests to trigger a buffer overflow? A reverse proxy often does a much better job of preventing those kinds of attacks, rejecting invalid requests before they ever reach your application.
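
    As a sketch of what that name-based routing looks like, here is roughly what a Traefik file-provider config could contain (the hostname and backend address are placeholders; check the syntax against the Traefik docs for your version):

    ```yaml
    http:
      routers:
        jellyfin:
          rule: "Host(`app1.yourdomain.com`)"  # matched against the SNI/Host header
          service: jellyfin-svc
          tls: {}                              # serve the certificate for this name
      services:
        jellyfin-svc:
          loadBalancer:
            servers:
              - url: "http://192.168.1.50:8096"  # internal backend (Jellyfin's default port)
    ```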


  • Agreed. The nonstandard port helps too. Most script kiddies aren’t going to know your service even exists.

    Take it another step further and remove the default backend on your reverse proxy, so that requests to anything but the correct DNS name are dropped (bots are just probing IPs), and you basically don’t have to worry at all. Just make sure to keep your reverse proxy up to date.

    The reverse proxy ends up enabling security through obscurity, which shouldn’t be your only line of defence, but it is an effective first line of defence, especially for anyone who isn’t a target of government-level attackers.

    Adding basic auth to your reverse proxy endpoints extends that a whole lot further. Form-based logins on your apps might be a lot prettier, but it’s a lot harder to probe for what’s running behind your proxy when every single URI just returns 401. I trust my reverse proxy doing basic auth a lot more than I trust some PHP login form.

    I always see posters on Lemmy describing elaborate VPN setups as the only way to access internal services, but that seems like awful overkill to me.

    A VPN is still needed for some things that are inherently insecure or that should just never be exposed to the outside, but if it’s a web service that requires authentication, a reverse proxy is plenty of security for a home lab.
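
    To illustrate, in Traefik that basic auth layer is just a middleware attached to a router (a sketch only; the router/service names, address, and the htpasswd entry are placeholders you would generate yourself):

    ```yaml
    http:
      middlewares:
        lab-auth:
          basicAuth:
            users:
              # htpasswd-style entry; any request without valid credentials gets a 401
              - "user:$apr1$REPLACE_WITH_HTPASSWD_HASH"
      routers:
        app1:
          rule: "Host(`app1.yourdomain.com`)"
          middlewares:
            - lab-auth
          service: app1-svc
      services:
        app1-svc:
          loadBalancer:
            servers:
              - url: "http://192.168.1.50:8080"  # placeholder internal backend
    ```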


  • You are paying for reasonably well-polished software, which for non-technical people makes them a very good choice.

    They have one-click module installs for a lot of the things that self-hosted people would want to run. If you want Plex, a OneDrive clone, photo sync on your phone, etc., just click a button and they handle installing and most of the maintenance of running that software for you. Obviously these are available on other open source NAS appliances now too, so this isn’t much of a differentiator for them anymore, but they were one of the first to do this.

    I use them for their NVR; there are open source alternatives, but they aren’t nearly as polished, user-friendly, or feature-rich.

    Their backup solution is also reasonably good for some home lab and small business use cases. If you have a VMware lab at home, for instance, it can connect to your vCenter and do incremental backups of your VMs. There is an agent for Windows machines as well, so you can keep laptops/desktops backed up.

    For businesses there are backup options for Office 365/Google Workspace, where it can keep backups of your email/calendar/OneDrive/SharePoint/etc. So there are a lot of capabilities there that aren’t really well covered by open source tools right now.

    I run my own home-built NAS for mass storage because anything over two drives is way too expensive from Synology and I specifically wanted ZFS, but the two-drive units were priced low enough to buy just for the software. If you want a set-and-forget NAS, they were a pretty good solution.

    If their drives are reasonably priced maybe they will still be an okay choice for some people, but we all know the point of this is for them to make more money, so that is unlikely. There are alternatives like QNAP, but unless you specifically need one of their software components, either build it yourself or grab one of the open source NAS distros.


  • The biggest question is, are you looking for Dolby Vision support?

    There is no open source implementation of Dolby Vision or HDR10+, so if you want to use those formats you are limited to Android/Apple/Amazon streaming boxes.

    If you want to avoid the ads from those devices, then apart from sideloading APKs to replace the home screen or something, the only way to get Dolby Vision with Kodi/standard Linux is to buy a CoreELEC-supported streaming device and flash it with CoreELEC.

    List of supported devices here

    CoreELEC is Kodi-based, so it limits your player choice, but there are plugins for Plex/Jellyfin if you want to pull from those as back ends.

    Personally it is a lot easier to just grab the latest-gen Onn 4K Pro from Walmart for $50 and deal with the Google TV ads (I never leave my streaming app anyways). The only downside with the Onn is the lack of Dolby TrueHD/DTS-HD Master Audio output, but it handles AV1 and more Dolby Vision profiles than the Shield does, at a much cheaper price. It also handles HDR10+, which the Shield doesn’t, but that format isn’t nearly as common and many of the big TV brands don’t support it anyways.