• 0 Posts
  • 4 Comments
Joined 6 months ago
Cake day: December 6th, 2024


  • Stories from the “good” old days of running Linux on a 386 machine with 4 MB or less of memory aside, in the present day it’s still perfectly normal to run Linux as a server on a much weaker machine - you can just rent the cheapest VPS you can find (which nowadays will have 128 MB, maybe 256 MB, and definitely only give you a single core) and install it there.

    Of course, it won’t be something with X-Windows or Wayland, much less stuff like LibreOffice.

    I think the server distribution of Ubuntu might fit such a VPS, though there are server-specific Linux distros that will fit for sure, and if all else fails Tiny Core Linux will fit in a potato.

    I currently have a server like that, running AlmaLinux on a VPS with less than 1 GB of memory, used only as a Git repository - and even that machine is overkill for the job (it’s the lowest-end VPS with enough storage space for a Git repository big enough for the projects I’m working on, and judging by the server management interface and Linux’s /proc/meminfo, its CPU power and memory are in practice far more than needed - there’s a quick way to check this sketched at the end of this comment).

    If you’re willing to live with a command line interface, you can run Linux on $50 worth of hardware.
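
    A minimal Python sketch of the kind of check I mean, reading the same /proc/meminfo mentioned above (nothing in it is specific to my VPS):

    ```python
    # Eyeball the memory headroom on a small VPS straight from /proc/meminfo.
    def meminfo_mb():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0]) // 1024  # values are in kB
        return info

    m = meminfo_mb()
    # MemAvailable needs a reasonably modern kernel (3.14+).
    print(f"total: {m['MemTotal']} MB, available: {m['MemAvailable']} MB")
    ```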


  • Similar story, but I just installed Slackware on one of the university PCs (they had just a handful of PCs in the general computer room for the students and nobody actually watched over us) since I did not have a PC yet (I only had a ZX Spectrum back then).

    Trying to get X-Windows to work in Slackware was interesting, to say the least: back then you had to manually create your own video timings configuration file to get graphics working - which meant defining the video mode at a very low level, such as the number of video clock cycles between end-of-line-drawing and horizontal retrace (the arithmetic involved is sketched at the end of this comment) - and fortunately I didn’t actually blow up any monitor (which was possible if you got the configuration wrong).

    At least we had some access to the Internet (most things were blocked, but we had Usenet and e-mail, and one could use FTPmail gateways to download stuff from remote servers) via Ethernet, so that part was easy.

    Anyways, my first reaction on seeing the OP’s post was: yeah, if they’re running X it’s probably too powerful a machine.
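
    To give an idea of the level this sat at, here’s a rough sketch (in Python, just for readability) of the arithmetic those timing files boiled down to - the numbers are the well-known VESA 1024x768@60 timings, not ones from my long-gone config:

    ```python
    # The fields an XFree86 "Modeline" made you specify by hand, shown with the
    # standard VESA 1024x768@60 timings: pixel clock (MHz), then horizontal and
    # vertical visible / sync-start / sync-end / total counts.
    pixel_clock_mhz = 65.0
    h_visible, h_sync_start, h_sync_end, h_total = 1024, 1048, 1184, 1344
    v_visible, v_sync_start, v_sync_end, v_total = 768, 771, 777, 806

    # "Clock cycles between end-of-line-drawing and horizontal retrace" is the
    # front porch: h_sync_start - h_visible (24 pixel clocks here).
    front_porch = h_sync_start - h_visible

    # The monitor only sees the resulting sync frequencies; driving these
    # outside a fixed-frequency CRT's supported range is what could damage it.
    h_freq_khz = pixel_clock_mhz * 1000.0 / h_total            # line rate
    refresh_hz = pixel_clock_mhz * 1e6 / (h_total * v_total)   # frame rate

    print(f"front porch: {front_porch} clocks, "
          f"hsync: {h_freq_khz:.2f} kHz, refresh: {refresh_hz:.2f} Hz")
    # -> front porch: 24 clocks, hsync: 48.36 kHz, refresh: 60.00 Hz
    ```

    (These days tools like cvt compute all of this for you.)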


  • The more services you have depending on a 3rd party which can do whatever the fuck they want, either directly or by changing the rules whenever they feel like it (i.e. not being bound by rules they cannot change, the way root DNS providers are) and then doing it, the less your system is actually self-hosted, IMHO.

    For me the whole point of self-hosting is exactly being as independent as possible of 3rd parties that can just fuck you up, be it on purpose (generally for $$$) or because they go bankrupt and close their services.

    This is why I’ve chosen to run Kodi on my home server that doubles as a TV box, rather than something like Plex, even though I can’t easily use it from anywhere else (it’s possible, but it involves a standalone database that is then shared, which can only be safely done through custom-set-up ssh pipes - sketched at the end of this comment).

    It’s kinda funny to see people into self-hosting still making the kind of mistake I made almost 3 decades ago (fortunately in a professional environment) of trusting a 3rd party to the point of becoming dependent on them and later getting burned when they abused that trust - an experience that has led me to avoid such situations like the plague ever since.

    Mind you, I can understand if people for whom self-hosting is not driven by a desire to reduce vulnerability to the whims of 3rd parties (which includes reducing the risk of enshittification) and is instead driven by “waste not” (for example, bringing new life to old hardware rather than throwing it out) or by it being a fun challenge, don’t really care to be as independent as possible from such 3rd parties.
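
    For the curious, the “ssh pipes” part is nothing exotic - here’s a minimal sketch of the idea in Python (the hostname, user, and ports are placeholders, not my actual setup; the remote Kodi instance would then be pointed at 127.0.0.1):

    ```python
    import subprocess

    # Forward a local port to the shared database's port on the home server,
    # so a remote client pointed at 127.0.0.1:3306 reaches the database over
    # an encrypted SSH channel instead of exposing it to the internet.
    # "user@homeserver.example" and the ports are placeholders.
    tunnel = subprocess.Popen([
        "ssh",
        "-N",                         # no remote command, just forward ports
        "-L", "3306:127.0.0.1:3306",  # local port : host:port as seen from the server
        "user@homeserver.example",
    ])

    # ... use the database via 127.0.0.1:3306 while the tunnel is up ...
    tunnel.terminate()
    ```

    (The equivalent plain ssh -N -L command run from a shell or a systemd unit works just as well, of course.)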