Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb

  • 0 Posts
  • 144 Comments
Joined 3 years ago
Cake day: June 14th, 2023



  • Oops, I didn’t know about the SX line. Thanks!! I’m not familiar with all of Hetzner’s products.

    For pure file storage (i.e. you’re only using SFTP, BorgBackup, restic, NFS, Samba, etc.) I still think the storage boxes are a good deal, as you don’t have to worry about server maintenance (since it’s a shared environment). I’m not sure if it supports encryption at rest though, which is probably where a dedicated server would be useful.
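
    For what it’s worth, Borg and restic can encrypt data client-side before upload, so server-side encryption matters less with those tools. As a rough illustration of the “pure file storage over SFTP” pattern, here’s a minimal Python sketch (hostname, credentials, and file names are placeholders) that encrypts a file locally before pushing it to a storage box with paramiko:

    ```python
    import paramiko
    from cryptography.fernet import Fernet

    # Encrypt client-side first, so the storage box only ever sees ciphertext.
    key = Fernet.generate_key()  # store this safely; losing it loses the data
    with open("backup.tar", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open("backup.tar.enc", "wb") as f:
        f.write(ciphertext)

    # Push over SFTP (hostname/username/password are placeholders for your box).
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("uXXXXXX.your-storagebox.de", username="uXXXXXX", password="...")
    sftp = ssh.open_sftp()
    sftp.put("backup.tar.enc", "backup.tar.enc")
    sftp.close()
    ssh.close()
    ```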



  • SQLite is underrated. I’ve used it for high-traffic systems with no issues. If your system has a large number of readers and a small number of writers, it performs very well. It’s not as good for high-concurrency, write-heavy use cases, but those aren’t common (most apps read far more than they write).

    My use case was a DB that was created during the build process, then read on every page load.
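
    For the read-heavy case, SQLite’s WAL mode lets readers keep going while a writer commits. A minimal sketch (file name and schema invented for illustration):

    ```python
    import sqlite3

    # Build time: create the DB and enable WAL so readers never block each
    # other, and a single writer doesn't block readers.
    conn = sqlite3.connect("site.db")
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS pages (slug TEXT PRIMARY KEY, body TEXT)")
    conn.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", ("home", "<h1>Hi</h1>"))
    conn.commit()
    conn.close()

    # Request time: every page load just opens the file read-only and queries it.
    ro = sqlite3.connect("file:site.db?mode=ro", uri=True)
    print(ro.execute("SELECT body FROM pages WHERE slug = ?", ("home",)).fetchone()[0])
    ro.close()
    ```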


  • MariaDB is not always a drop-in replacement. There are several features that MySQL has and MariaDB doesn’t, especially around the optimizer: for some types of queries, MySQL will give you a more efficient execution plan than MariaDB. MariaDB is also missing MySQL’s native JSON data type; in MariaDB, JSON is just an alias for LONGTEXT, whereas MySQL stores JSON in an optimized binary format and can index individual fields within JSON documents (via generated columns or multi-valued indexes) to make filtering on them more efficient (see the sketch below).

    MariaDB and MySQL are both fine. Even though MySQL doesn’t receive as much development any more, it doesn’t really need it. It works fine. If you want a better database system, switch to PostgreSQL, not MariaDB.
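
    To make the JSON difference concrete, a sketch assuming MySQL 8 and the mysql-connector-python package (table, column, and credentials are invented): a native JSON column plus a multi-valued index lets MySQL filter on a field inside the document without scanning every row.

    ```python
    import mysql.connector  # assumes mysql-connector-python is installed

    conn = mysql.connector.connect(user="app", password="...", database="demo")
    cur = conn.cursor()

    # Native JSON column: MySQL stores this in a binary format,
    # while MariaDB treats JSON as an alias for LONGTEXT.
    cur.execute("CREATE TABLE IF NOT EXISTS events (id INT PRIMARY KEY, data JSON)")

    # MySQL 8 multi-valued index over an array inside the JSON document.
    cur.execute("CREATE INDEX idx_tags ON events ((CAST(data->'$.tags' AS CHAR(32) ARRAY)))")

    # This predicate can use idx_tags instead of a full table scan.
    cur.execute("SELECT id FROM events WHERE 'billing' MEMBER OF (data->'$.tags')")
    print(cur.fetchall())
    ```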


  • AWS Glacier would be about $200/mo, PLUS bandwidth transfer charges, which would be something like $500. R2 would be about $750/mo.

    50TB on a Hetzner storage box would be $116/month, with unlimited traffic. It’d have to be split across three storage boxes though, since 20TB is the max per box. 10TB is $24/month and 20TB is $46/month.

    They’re only available in Germany and Finland, but data transfer from elsewhere in the world would still be faster than AWS Glacier.

    Another option with Hetzner is a dedicated server. Unfortunately the max storage they let you add is 2 × 22TB SATA HDDs, which would only let you store 22TB of stuff (assuming RAID1), for over double the cost of a 20TB storage box.
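
    The back-of-the-envelope math, as a sketch (the per-GB rates are assumptions based on published pricing and will drift over time):

    ```python
    data_gb = 50 * 1000  # 50TB, using decimal GB as providers bill

    # Assumed monthly USD per GB stored; check current pricing.
    glacier_rate = 0.0036  # S3 Glacier Flexible Retrieval
    r2_rate = 0.015        # Cloudflare R2

    print(f"Glacier: ${data_gb * glacier_rate:,.0f}/mo + retrieval/egress fees")
    print(f"R2:      ${data_gb * r2_rate:,.0f}/mo")

    # Hetzner storage boxes are flat-rate; 50TB needs three boxes
    # (20TB max each): 20TB + 20TB + 10TB.
    print(f"Hetzner: ${46 + 46 + 24}/mo with unlimited traffic")
    ```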


  • dan@upvote.auto to Selfhosted@lemmy.world · Where are you running your wireguard endpoint?

    Both of those documents agree with me? Red Hat are just using the terms “client” and “server” to make it easier for people to understand, but they explicitly say that all hosts are “peers”:

    Note that all hosts that participate in a WireGuard VPN are peers. This documentation uses the terms client to describe hosts that establish a connection and server to describe the host with the fixed hostname or IP address that the clients connect to and, optionally, route all traffic through this server.

    Everything else is a client of that server because they can’t independently do much else in this configuration.

    All you need to do is add an extra peer to the WireGuard config on any one of the “clients”, and it’s no longer just a client: it can connect directly to that peer without going through the “server”.
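
    As a sketch (keys, IPs, and hostnames are placeholders), that’s one extra [Peer] block in the “client’s” config:

    ```ini
    # /etc/wireguard/wg0.conf on a "client" node
    [Interface]
    PrivateKey = <this node's private key>
    Address = 10.0.0.2/24

    # The "server": the peer everyone knows a fixed endpoint for
    [Peer]
    PublicKey = <server's public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.0.0.1/32

    # The extra peer: this node now reaches 10.0.0.3 directly,
    # with no traffic passing through the "server"
    [Peer]
    PublicKey = <other node's public key>
    Endpoint = other-node.example.com:51820
    AllowedIPs = 10.0.0.3/32
    ```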


  • dan@upvote.auto to Selfhosted@lemmy.world · Where are you running your wireguard endpoint?

    There’s no such thing as a client or server with WireGuard. All systems with WireGuard installed are “nodes”. WireGuard is peer-to-peer, not client-server.

    You can configure nftables rules to route through a particular node, but that doesn’t really make it a server. You could configure all nodes to allow routing traffic through them if you wanted to (see the sketch below).

    If you run WireGuard on every device, you can configure a mesh VPN, where every device can reach any other device directly, without routing through an intermediary node. This is essentially what Tailscale does.
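
    For example, making any one node relay traffic for the rest is just kernel forwarding plus a firewall rule (a sketch; the interface name is a placeholder):

    ```
    # On whichever peer should act as the relay
    sysctl -w net.ipv4.ip_forward=1

    # nftables: permit traffic to be forwarded between WireGuard peers
    nft add table inet wg_relay
    nft add chain inet wg_relay forward '{ type filter hook forward priority 0; }'
    nft add rule inet wg_relay forward iifname "wg0" oifname "wg0" accept
    ```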





  • dan@upvote.auto to Selfhosted@lemmy.world · Docker security

    you can override this by setting an IP on the port exposed so that a local-only server is only accessible on 127.0.0.1

    Also, if the Docker container only has to be accessed from another Docker container, you don’t need to publish a port at all. Containers in the same compose stack can reach each other by service name, via Docker’s internal DNS.
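
    A minimal compose sketch of both points (service names, images, and ports are made up):

    ```yaml
    services:
      app:
        image: myapp:latest
        ports:
          - "127.0.0.1:8080:8080"  # bound to loopback; unreachable from the LAN
      db:
        image: postgres:16
        # No ports at all: "app" still reaches this at db:5432, because
        # compose puts both services on the same network and resolves
        # service names via Docker's internal DNS.
    ```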



  • If you don’t mind a web UI, Netdata is great. It collects a bunch of metrics once per second and can retain them for a long period of time. The web UI is pretty good. Their GitHub README links to some example servers so you can try it out first; just click the link to use it without an account (accounts are optional).

    It’s mainly designed for servers, but there’s no reason you couldn’t run it on a client system. They’re focusing a lot on AI/ML-based anomaly detection as well as their cloud offering at the moment, but you don’t have to use either and can just stick to the open-source agent.
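
    If you want to spin it up quickly, a rough compose sketch (the official docs recommend more mounts and options than this bare minimum):

    ```yaml
    services:
      netdata:
        image: netdata/netdata
        ports:
          - "19999:19999"  # the web UI
        cap_add:
          - SYS_PTRACE     # lets it collect per-process metrics
        volumes:
          - /proc:/host/proc:ro
          - /sys:/host/sys:ro
    ```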


  • dan@upvote.auto to Selfhosted@lemmy.world · Curated list of selfhosted apps for your homelab

    why is a tower defense game listed under Automation?

    and two of the most popular automation programs are missing (n8n and Node-RED).

    who on earth needs customer live chat and a lot of business-scale website analytics, webshop systems and CRM and ERP in their homelab??

    Maybe not in a homelab, but plenty of people self-host these. I’m setting up customer live chat (Chatwoot) and invoicing and accounting (Bigcapital) for my wife, for example. I self-host website analytics (Plausible) and bug tracking (used to be Sentry but it got too complex to host, so now I’m trying Bugsink and GlitchTip) for my personal sites/projects, too.





  • dan@upvote.auto to Selfhosted@lemmy.world · Decreasing Certificate Lifetimes to 45 Days

    This is one of the reasons they’re reducing the validity period: to try to convince people to automate the renewal process.

    That, and there are issues with the current revocation process (for incorrectly issued certificates, or certificates whose private key was leaked or stored insecurely), and the most effective way to reduce the risk is to reduce how long any one certificate can be valid for.

    A leaked key is far less useful if it’s only valid for 47 days from issuance, compared to three years (note that the maximum validity had already been reduced from three years to 398 days).

    From https://www.digicert.com/blog/tls-certificate-lifetimes-will-officially-reduce-to-47-days:

    In the ballot, Apple makes many arguments in favor of the moves, one of which is most worth calling out. They state that the CA/B Forum has been telling the world for years, by steadily shortening maximum lifetimes, that automation is essentially mandatory for effective certificate lifecycle management.

    The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.

    The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
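
    Short lifetimes also make expiry monitoring part of the automation story. A small Python sketch (hostname is a placeholder) that reports how many days a site’s certificate has left:

    ```python
    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> int:
        """Whole days remaining on a host's TLS certificate."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' is a string like 'Jun 14 12:00:00 2025 GMT'
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    print(days_until_expiry("example.com"))  # alert well before this hits 0
    ```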