

Awesome.
You not only solved your problem but learned a heap along the way.
I love searxng too btw. It’s the best way to search for the time being.
I’ve never used portainer sorry.
If the published port only appears for a very short time, then the container might be crashing when it tries to start.
docker logs searxng
from the CLI might be revealing
edit: I do have a searxng container and my compose.yml is very similar to yours. I guess we both copied the example. The only difference I can see is that you still have the env variables for UWSGI_WORKERS and UWSGI_THREADS. I just set both of those to 4 instead of using the SEARXNG_ env vars
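For reference, the section I'm describing looks roughly like this. It's just a sketch based on the stock example compose.yml; the image tag, port, volume path, and base URL are placeholders to adjust to your setup:

```yaml
# sketch based on the stock searxng example - values here are placeholders
services:
  searxng:
    image: searxng/searxng:latest
    ports:
      - "8080:8080"
    volumes:
      - ./searxng:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=https://searx.example.com/
      - UWSGI_WORKERS=4
      - UWSGI_THREADS=4
    restart: unless-stopped
```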
I’d just install debian because that’s what I use so that’s what I can most easily provide support for.
I know you're joking but with flatpaks and AppImages you don't even notice the oldness any more.
I don’t have any credible feedback for you.
I dislike php. That’s my feeling. The vibe.
When I’m browsing self hostable things I avoid php unless it’s really the only option.
My completely unsupported “feeling” is that apps written in php are awkward and clunky and just less pleasant to host.
I’m sorry if that offends you, and I don’t care at all what you think of my opinion.
I only mentioned “php apps feel old” because I could see you were some kind of php devotee and that it would trigger you. Maybe look into that.
Pretty much me.
I’ve been daily driving debian for many years. I’m very comfortable here.
In 2025 with docker containers and flatpaks the benefits of an atomic OS don’t feel very compelling.
Yeah I daily drive debian stable.
With flatpaks and docker I never run into problems with my applications being too old or whatever.
Nextcloud really does feel old.
Guy, everyone operates off their own limited experience.
LibreOffice has user-defined functions that work just fine. You're just illustrating my point really.
It sounds like you're talking about the ability of MS Office products to open documents authored by LibreOffice.
I have no way to evaluate whether these claims are true. That's pretty much verbatim what people say about LibreOffice.
Can I ask your perspective on the comments here saying that Krita and Inkscape just aren’t comparable to their commercial alternatives?
The reason I ask is… I'm not a professional graphic designer; I have a small consultancy with several staff and work with documents and spreadsheets all day.
Occasionally I encounter similar threads discussing the difference between LibreOffice and Microsoft Office, and the comments are all the same. So many people saying LibreOffice just “isn’t there yet”, or that it might be ok for casual use but not for power users.
But as someone who uses LibreOffice extensively with a broad feature set I’ve just never encountered something we couldn’t do. Sure we might work around some rough edges occasionally, but the feature set is clearly comparable.
My strongly held suspicion is that it's a form of the Dunning-Kruger effect. People have so much experience with software A that they tend to overlook just how much skill and knowledge they have accumulated with that specific software. Then when they try software B they misconstrue their lack of knowledge of that specific software as complexity.
Well it makes people feel like they’ve done something.
I wish there was something that just did file sync.
I know there’s syncthing but that’s not ideal for large repositories with many users and many files.
PHP apps always feel old.
Could this be a snaps thing?
I despise snaps and left Ubuntu for that reason. I don't remember the specifics but I think even after installing Firefox with apt it somehow gets magically switched to a snap.
I daily drive debian on a T490s and it's rock solid. There's just no way anyone could consider this setup unstable.
In recent years I've found most of my problems come from the fancy new packages. In order of reliability I find that it goes apt > .deb > AppImage > flatpak > snap
This is what I do. Changing the port to a high, non-standard number will keep almost all bots away.
I understand that obscurity is not security but not getting probed is nice.
Also ssh keys are a must.
I do log in as root though.
However, I block all IPs other than mine from connecting to this port in my host's firewall. I only need to log in from home or my office, and in a crisis I can just log in to OVH and whitelist my IP.
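If it helps, here's a rough sketch of that setup, assuming ufw on the host; the port number and source IP below are just placeholders for your own values:

```
# sketch only: 52022 and 203.0.113.10 are placeholders for your port and home/office IP

# /etc/ssh/sshd_config - move sshd off port 22
Port 52022

# with ufw's default-deny inbound policy, only your own address can reach sshd:
sudo ufw allow from 203.0.113.10 to any port 52022 proto tcp
sudo systemctl restart ssh
```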
I feel like most commenters here haven’t understood what you’re proposing.
I’ve thought about doing this, I’ve seen other commenters say they’re doing it. It’s not a terrible idea. I haven’t done it myself because … it’s just not a priority and I’m not sure it ever will be. Anyway …
If you're willing to set up and self host your own email stack like mail-in-a-box or whatever, then configuring a separate outbound SMTP server is fairly trivial in comparison.
If you already had your own stack set up and self hosted, you would ordinarily be using the SMTP server that comes with it to send emails.
First, configure your client to use whatever other SMTP server you have access to. I think it's possible to use Mailgun or one of those transactional API senders. You could get a cheap plan with mxroute or any other email host and just use their SMTP server.
Suppose your client is Thunderbird and you set up your account with smtp.mxroute.com for outbound and imap.myserver.com for email storage. When you send an email, Thunderbird transmits it through mxroute and then stores a copy in your Sent folder on the IMAP server at myserver.com.
The potentially complex part is configuring SPF and DKIM records on your domain.
I'm not sure if I'll be able to explain this clearly but… suppose a recipient's spam service receives an email purportedly from marauding_giberish@myserver.com but transmitted by smtp.mxroute.com. That spam service will look up the DNS records for myserver.com and find the SPF record. This record essentially lists which servers are authorised to transmit email from addresses ending in myserver.com. So with a more typical setup an SPF record might be:
“v=spf1 include:myserver.com -all”
This would indicate that only the servers authorised by myserver.com's own SPF record (i.e. your own mail server) can transmit email from your domain.
You would edit that to include the mxroute smtp server like this:
“v=spf1 include:mxroute.com include:myserver.com -all”
This way, recipients can confirm that the owner of the myserver.com domain has formally designated mxroute as an authorised sender.
Your SMTP server will have a public & private key pair which it uses to sign outbound messages. Recipients can use the public key to confirm the signature and thereby confirm that the message has not been altered in flight.
Whatever SMTP server you use will tell you the public key and instruct you to add that to the DNS records of your custom domain.
That’s the one that looks like this:
“v=DKIM1; k=rsa; p=MIIBIj [ … it’s a long key … ] op3Nbzgv35kzrPQme+uhtVcJP”
Once this is in place recipients of your emails can query the DNS for myserver.com and find this public key, and use it to confirm that the signature on the email they received is authentic.
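If you want to sanity check that the records are actually published, a couple of dig queries will show them. Note that "default" below is a placeholder selector; your SMTP provider will tell you the real selector name to use:

```
# SPF record published on your domain
dig +short TXT myserver.com

# DKIM public key; "default" is a placeholder selector,
# your provider will tell you the actual one
dig +short TXT default._domainkey.myserver.com
```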
Well, you don’t need containers for wireguard the same way you don’t need containers for anything.
I personally prefer docker containers for everything that can be containerised because it provides a consistent abstraction layer. As in, I always know how to find configurations and paths and manage network infrastructure for anything that resides in a container.
In the case I outlined above with the wireguard containers, I’m more confident I’m not going to upset any other services on my server, and I understand the configuration.
Maybe it's a bit like using ufw to manage iptables rules: unnecessary but helpful.
Of course, I freely admit that my way is not necessarily the best way and if someone wants to run wireguard on the host then great.
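For what it's worth, a containerised wireguard service looks roughly like this. This is just a sketch assuming the linuxserver image, and every value in it is a placeholder to adjust, not necessarily how my own setup is laid out:

```yaml
# minimal sketch of a containerised wireguard service (linuxserver image assumed)
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./wireguard:/config
    ports:
      - "51820:51820/udp"
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```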
My SO watches free tier youtube.