

Automatic updates for bug fixes (e.g. 1.0.0 to 1.0.1) are usually fine - it’s major and minor updates that are scarier. I’ve never used Watchtower so I’m not sure if it has an option to only allow bugfixes.
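I’m not sure Watchtower has such an option either, but you can approximate bugfix-only updates at the image-tag level: pin each image to a major.minor tag, so re-pulling the tag only ever picks up patch releases. A sketch (the image and tag are just examples; tag availability varies by registry):

```yaml
services:
  web:
    # The "1.27" tag tracks patch releases (1.27.0 -> 1.27.1 -> ...) but
    # never jumps to 1.28 or 2.0, so a tag-tracking updater like Watchtower
    # will only ever apply bugfix updates to this container.
    image: nginx:1.27
```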


Where is the website template from? I’ve seen the exact same one before.


You can run your own AI locally if you have powerful enough hardware, so that you’re not dependent on paying a monthly fee to a provider. Smaller quantized models work fine on consumer-grade GPUs with 16GB of VRAM.
The major issue with AI providers like Anthropic and OpenAI at the moment is that they’re all subsidizing the price. Once they start charging what it actually costs, I think some of the hype will die off.
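As a rough illustration of the local route, here’s what it looks like with Ollama (one of several local runners; the model name is just an example, pick one that fits your VRAM):

```shell
# Download a quantized model and run it entirely on local hardware.
ollama pull llama3.1:8b
# One-off prompt from the CLI; nothing is sent to a cloud provider.
ollama run llama3.1:8b "Summarize the trade-offs of quantized models."
```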


I definitely agree with you!
I’m using AI a little bit myself, but I’m an experienced developer and fully understand the code it’s writing (and review all of it manually). I use it for tedious things, where I could do it myself but it’d take much longer. I don’t let AI write commit messages or PR descriptions for me.
At work, I reject AI slop PRs, but it’s becoming harder since AI can submit so much more code than humans can, and there are people who are less stringent about code quality than I am. A lot of the issues affecting open-source projects are affecting proprietary code too. Amazon recently had to slow down with AI and have senior devs review AI-written code because it was causing stability issues.


Not just California. Several other US states are considering (or will be rolling out) similar laws, and Brazil’s version has already rolled out this month.


… did you read the same article as everyone else? I can’t tell if you’re joking or not.


I think the blurb was posted by the submitter (@vegetaaaaaaa@lemmy.world) rather than being a part of the link.


If your AI is making PRs without you, that’s even worse.
This is happening a lot more these days, with OpenClaw and its copycats. I’m seeing it at work too - bots submitting merge requests overnight based on items in their owners’ todo lists.


They were hours apart, though.


I couldn’t not.
You could not not?


I don’t think WINE would work, because it likely relies on a custom driver.
If you don’t have a Windows installation, booting into a WinPE LiveCD (like Sergei Strelec’s WinPE: https://m.majorgeeks.com/files/details/sergei_strelecs_winpe.html) and installing it in the live environment should work. Running Windows in a VM should work too, if you pass the USB drive through to the VM.
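If you go the VM route with libvirt/QEMU, USB passthrough is roughly this (the vendor/product IDs and VM name below are placeholders; get the real IDs from lsusb):

```shell
# Identify the USB drive's vendor:product ID.
lsusb

# Describe the device for libvirt (the IDs here are placeholders).
cat > usb-drive.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0781'/>
    <product id='0x5581'/>
  </source>
</hostdev>
EOF

# Attach it to the running Windows VM.
virsh attach-device windows-vm usb-drive.xml --live
```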


Did anyone suggest using black electrical tape yet?


for example maybe AV1 takes even more off,
I know this was just an example, but Intel 11th gen and newer has hardware acceleration for AV1.


GPUs have their place, but they significantly increase power consumption, which is an issue in areas with high power prices.
If you want to self-host email or websites, I’d use a VPS for those use cases. For websites, a $30/year VPS would be more than sufficient. You can try hosting at home, but hosting those things from a residential IP doesn’t always work well.
QuickSync is more than sufficient for most users. It can handle several concurrent 4K transcodes. It’s also not that common to have to transcode, unless you often stream your media when away from home and have poor upload speed.
If going Intel, note that there are different models of Intel iGPU, so I’d go for the lowest-end CPU that has the higher-end iGPU. My home server is a few years old and has an Intel Core i5-13500. The difference between the 13400 and 13500 looks small on paper, but the 13400 only has UHD Graphics 730, while the 13500 has UHD Graphics 770, which can handle double the number of concurrent transcodes.
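For a sense of what this looks like in practice, here’s a QuickSync transcode with ffmpeg (assuming an ffmpeg build with QSV support; the file names are examples):

```shell
# Decode H.264 and re-encode to HEVC, both on the iGPU via QuickSync.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
       -c:v hevc_qsv -global_quality 25 output.mkv
```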
Intel iGPUs also support SR-IOV which lets you share one iGPU across multiple VMs. For example, if you have a Plex server on the host Linux system, and Blue Iris in a Windows Server VM, and both need to use hardware transcoding.
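Enabling the virtual functions is a one-liner once the driver supports it (note: on many kernels this still requires an out-of-tree i915 SR-IOV module; the PCI address below is typical for Intel iGPUs, but verify it with lspci):

```shell
# Create two virtual functions on the iGPU so two VMs can each get one.
echo 2 | sudo tee /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs
# The new VFs show up as extra display controllers.
lspci | grep -i -e vga -e display
```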
I’ve heard AMD’s onboard graphics are pretty good these days, but I haven’t tried AMD CPUs on a server.


You can share the node with them, and use an ACL to control which ports they have access to.
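For example, a minimal Tailscale ACL that only lets the shared-in user reach two ports on the node (the user, tag, and ports below are placeholders):

```json
{
  "acls": [
    {
      "action": "accept",
      "src": ["friend@example.com"],
      "dst": ["tag:homeserver:443,8096"]
    }
  ]
}
```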


I used to use Dogpile a lot in the late 1990s. Coincidentally, it was a similar idea to this and SearxNG: a meta search engine that combined Yahoo, Lycos, Excite, AltaVista, and a few others into one interface (no Google, since it wasn’t in widespread use yet).


It’s not like gtk3 is suddenly out of use.
That’s true, but the GNOME maintainers will drop support for it at some point. I guess Cinnamon or Xfce could maintain their own forks, but the majority of apps target whatever GNOME is currently using, given it’s the most popular desktop environment.


I’m not familiar with this app, but what do you mean by “gnomed”? Do you mean the UI started using Gtk4 and Adwaita components?
Gtk3 is considered legacy now, so most apps that use Gtk will be transitioning to Gtk4 (and Adwaita) at some point. Gtk3 is starting to look a bit outdated in modern DEs.


Copying my comment from the homelab community:
I haven’t tried it yet, but here’s some initial thoughts:
Does it support multiple separate docker-compose.yml files? It would be useful if it could pull the list of containers directly from Docker rather than having to paste in the docker-compose.yml contents.
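On the second point: Compose labels every container it creates, so the tool could discover the stacks straight from Docker instead of needing the YAML pasted in. Roughly:

```shell
# List the compose projects Docker already knows about.
docker compose ls
# Or enumerate containers with their compose project labels.
docker ps --format '{{.Names}}\t{{.Label "com.docker.compose.project"}}'
```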
Does it pull changelogs so that the user can tell if a change is a breaking change that’ll require extra work?
It would be useful to support WebAuthn/FIDO2 2FA instead of just TOTP, since TOTP is slowly being phased out due to its weaknesses (it’s phishable). Similarly, it’d be useful to support single sign-on via OIDC (OpenID Connect), as a lot of self-hosters use Authentik, Authelia, or Keycloak to have one login for all their self-hosted services.