That metadata is written into the photo by the camera, so Immich may not be able to accommodate this easily. Not sure about Canon specifically, but my Nikon cameras have a memory bank for manual focus lenses. Might be worth checking through your menus.
tvcvt@lemmy.ml to Selfhosted@lemmy.world • Why do I need a domain to access my Funkwhale library but not SyncThing?
13 · 12 days ago
The two pieces of software have very different topologies.
In very broad strokes: something like Funkwhale uses a client-server model. To get to it, you connect to it remotely, and you need some way to get there. By contrast, Syncthing behaves as a mesh of nodes: each node connects directly to the other nodes, and the Syncthing project folks host relays that help introduce the nodes to one another and punch through NAT.
No, you don't necessarily need a paid domain to use your self-hosted Funkwhale server (I haven't dabbled with that service in particular). There are a few options:
- You could probably use the direct public IP address or alternatively
- Use a dynamic DNS provider (like afraid.org) to resolve your IP address
- Use a VPN on all of your clients and use local DNS to resolve your Funkwhale server's local IP address.
These all assume that you have a public IP address on your router and not one that’s being NAT-ed by your ISP.
Again, these are very broad strokes, but hopefully it helps point you in a direction for some research.
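If you go the VPN-plus-local-DNS route, the local half can be a couple of lines in whatever resolver you already run. A hedged sketch using Unbound (the hostname funkwhale.home.example and the address 192.168.1.50 are placeholders; dnsmasq, Pi-hole, or your router's DNS could do the same job):

```
# unbound.conf – answer a local name with the Funkwhale box's LAN address
server:
    local-zone: "home.example." static
    local-data: "funkwhale.home.example. IN A 192.168.1.50"
```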
There’s definitely nothing magic about ports 443 and 80. The risk is always that the underlying service has a vulnerability through which attackers could find a way in. Any port presents an opportunity for attack; the security of the service behind it is what makes it safe or not.
I’d argue that long-tested services like ssh, absent misconfiguration, are at least as safe as most reverse proxies. That’s not to say people won’t try to break in via port 22. They sure will; they try on web ports too.
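The "absent misconfiguration" bit is where most of the work is. As a sketch, a few sshd_config settings commonly recommended when port 22 is exposed (not a complete hardening guide):

```
# /etc/ssh/sshd_config – common options for an internet-facing SSH daemon
PermitRootLogin no           # no direct root logins
PasswordAuthentication no    # key-based auth only; password brute-forcing becomes moot
PubkeyAuthentication yes
MaxAuthTries 3               # limit auth attempts per connection
```

Pairing that with something like fail2ban mostly just keeps the scanner noise out of the logs.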
tvcvt@lemmy.ml to Selfhosted@lemmy.world • [Question] Visual feedback of my Linux homelab setup/system?
4 · 1 month ago
I’m not sure if this is what you’re after, but it sounded to me like you were describing monitoring. Might be worth checking out LibreNMS, Zabbix, or Checkmk. Those would give you a good overview of the health of your stuff and keep track of what’s where.
tvcvt@lemmy.ml to Linux@lemmy.ml • Is there a CLI Tor HTTP client, like curl/wget but routed over the Tor network?
10 · 1 month ago
It’s not exactly a single tool, but torsocks kind of enables doing what you’re describing. The syntax would be something like
torsocks curl $url
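For example, to confirm the request really goes out over Tor (this assumes the tor daemon is running locally; check.torproject.org is just a convenient test target):

```
# Route a single curl request through the local Tor SOCKS proxy
torsocks curl https://check.torproject.org/api/ip

# curl can also talk to Tor's SOCKS port directly, no torsocks needed
curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/api/ip
```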
tvcvt@lemmy.ml to Linux@lemmy.ml • New to Proxmox, Facing Issues with Homelab Setup - Need Advice
6 · 2 months ago
- This sounds like a weird one. It would be helpful to have some more info about your network. Would you share your PVE host’s /etc/network/interfaces file and the config files for the VMs (from /etc/pve/qemu-server)?
- Excellent.
- I think ZFS would likely help, since it can make use of block-level snapshots. I think the way to move things over would be to create a ZFS datastore in Proxmox and then just migrate each VM’s disk (see the sketch below this list).
- Personally I think this is a bit simpler in an LXC container, and there are a bunch of tutorials to help. These two are similar to my own setup:
  - https://blog.bekh.fr/jellyfin-lxc-with-hardware-transcoding-igpu-passtrough/
  - https://www.wundertech.net/installing-jellyfin-on-proxmox/
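A rough sketch of that disk migration from the shell (the storage name, pool, and VM ID are placeholders, it’s all doable from the GUI as well, and newer PVE releases spell the command `qm disk move`):

```
# Register an existing ZFS dataset as a Proxmox storage (placeholder names)
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir

# Move VM 100's scsi0 disk onto it; --delete removes the old copy once the move succeeds
qm move_disk 100 scsi0 local-zfs --delete 1
```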
Hope some of that helps
I’m not familiar with Zurg, but the WebDAV connection makes me recall: doesn’t LXC require that the FUSE kernel module be loaded in order to use WebDAV?
I’ve also seen it recommended that WebDAV be set up on the host and then the mount points bind-mounted into the container. Not sure if any of that helps, but maybe it’ll lead you somewhere.
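If it does turn out to be FUSE, Proxmox can enable it per container, and the bind mount is a one-liner too. A hedged sketch (container ID 101 and the paths are placeholders):

```
# Allow FUSE filesystems inside LXC container 101 (takes effect on restart)
pct set 101 --features fuse=1

# Or mount the WebDAV share on the host and bind-mount it into the container
pct set 101 --mp0 /mnt/webdav,mp=/mnt/media
```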
That’s a great tip. I’d completely forgotten you can use telnet for that. Thanks!
Thanks for the response. I really should just dive in, but I’ve got this nagging fear that I’m going to forget about some DNS record that will bork my entire mail service. It’s good to hear about some working instances that people are happy with.
Tainted in that the kernel and ZFS have different licenses. Not a functional impairment. I have no way to check a system not using ZFS. For my use case, Debian plus ZFS are PVE’s principal features.
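For anyone curious, a running system will show it (a nonzero value just reflects flags like an out-of-tree or non-GPL module, not a fault):

```
# Show the kernel taint bitmask; 0 means untainted
cat /proc/sys/kernel/tainted

# dmesg usually names the culprit, e.g. a "module license 'CDDL' taints kernel" line from ZFS
dmesg | grep -i taint
```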
I have a Synapse server running in Docker on a VPS and it’s been pretty reliable. At my office I use it as a sort of self-hosted Slack replacement. For our use case I don’t have federation enabled, so no experience on that front. It’s a small office and everyone here uses either Element or FluffyChat on desktop and mobile. It runs behind an nginx reverse proxy, and I’ve got SSO set up with Authentik, which has worked very well. Happy to share some configs if that would be useful.
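For reference, the reverse-proxy piece is small. A sketch along the lines of Synapse’s documented nginx example (the hostname and certificate paths are placeholders, 8008 is Synapse’s default listener, and the SSO/Authentik wiring lives in homeserver.yaml rather than here):

```
server {
    listen 443 ssl;
    server_name matrix.example.com;

    ssl_certificate     /etc/ssl/certs/matrix.example.com.pem;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/matrix.example.com.key;

    # Proxy the Matrix client API paths to Synapse's default listener
    location ~ ^(/_matrix|/_synapse/client) {
        proxy_pass http://127.0.0.1:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        client_max_body_size 50M;   # allow reasonably large media uploads
    }
}
```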
Have you by any chance documented your PMG setup? I’m also a very happy Mailcow user, and spinning up PMG is something I’ve been meaning to tackle for years so I can implement archiving with mailpiler, but I’ve never really wrapped my head around how everything fits together.
Ceph isn’t installed by default (at least it hasn’t been any time I’ve set up PVE) and there’s no need to use ZFS if you don’t want to. It’s available, but you can go right ahead and install the system on LVM instead.
tvcvt@lemmy.ml to Linux@lemmy.ml • Unbound as DNS resolver on a Linux laptop: tips/experiences?
4 · 2 months ago
Unbound can do full recursion by querying the root DNS servers itself, but it’s also commonly run as a forwarding resolver, where it just hands queries to an upstream server, similar to systemd-resolved. I use Unbound network-wide, but I have it forwarding to 9.9.9.9 to take advantage of their filtering.
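That forwarding arrangement is only a few lines of unbound.conf (Quad9’s two anycast addresses shown; swap in whatever upstream you prefer):

```
# Forward all queries upstream instead of recursing from the root servers
forward-zone:
    name: "."
    forward-addr: 9.9.9.9
    forward-addr: 149.112.112.112
```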
tvcvt@lemmy.ml to Linux@lemmy.ml • Unbound as DNS resolver on a Linux laptop: tips/experiences?
5 · 2 months ago
You may already have a local DNS caching mechanism on your computer. I think by default Ubuntu uses systemd-resolved (it does on my desktops anyway). If you check with dig, it’ll show lookups coming from 127.0.0.53. With that in place, your local machine is caching lookup results, and anything it doesn’t know it forwards to the network’s resolver (which it gets via DHCP, usually).
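Two quick ways to see it on such a system:

```
# dig's SERVER line shows 127.0.0.53 when systemd-resolved is answering locally
dig example.com

# resolvectl lists the upstream servers systemd-resolved forwards to (per interface)
resolvectl status
```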
I set up an old thin client with Debian and lxqt to connect to a VM on Proxmox. Got the idea from an Apalrd’s Adventures video about VDI. It worked pretty well on a decent network, but it really suffered on high latency networks.
tvcvt@lemmy.ml to Selfhosted@lemmy.world • Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
2 · 5 months ago
I think you can do the same with LUKS (https://www.cyberciti.biz/hardware/cryptsetup-add-enable-luks-disk-encryption-keyfile-linux/) if that’s your preferred route.
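The gist of that approach, as a sketch (device names and paths are placeholders; the linked article walks through it properly):

```
# Generate a random keyfile and enroll it as an additional LUKS key
dd if=/dev/urandom of=/root/luks.key bs=512 count=4
chmod 0400 /root/luks.key
cryptsetup luksAddKey /dev/sdb1 /root/luks.key

# Then reference the keyfile in /etc/crypttab so the volume unlocks at boot, e.g.:
# data_crypt  /dev/sdb1  /root/luks.key  luks
```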
tvcvt@lemmy.ml to Selfhosted@lemmy.world • Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
10 · 5 months ago
Another idea for you: if you use ZFS for the install, check the Debian directions from OpenZFS or zfsbootmenu and you’ll get directions for an encrypted installation. You’ll be able to specify the path to a key file, which you can keep on a thumb drive. When the machine boots up, it’ll see the thumb drive and decrypt the zpool automatically; yank the thumb drive and it won’t (back up the key, of course).
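The relevant ZFS properties look roughly like this (pool/dataset names and the thumb-drive mount point are placeholders; the OpenZFS and zfsbootmenu guides cover where this fits into the install):

```
# Create an encrypted dataset whose key lives on a mounted thumb drive
# (keyformat=raw expects a 32-byte key, e.g. generated from /dev/urandom)
zfs create -o encryption=aes-256-gcm \
           -o keyformat=raw \
           -o keylocation=file:///mnt/usbkey/zfs.key \
           rpool/encrypted

# At boot, load the key and mount; this fails harmlessly if the drive isn't plugged in
zfs load-key rpool/encrypted && zfs mount rpool/encrypted
```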
The answers for this will vary widely, but the thing I think many people overlook when planning out expenses is a plan to back up the data. Having the file server is great, but start planning now for what to do when it breaks. Where will backup copies of your data live and how will you restore it?
As to the server itself, the hardware completely depends on your desires. Some like second-hand enterprise gear; others prefer purpose-made home NAS devices or a DIY rig. On the software side, my thought is to keep it simple if you’re starting out. There are good ready-made options (TrueNAS, XigmaNAS, openmediavault, Unraid, etc.). They’re all great and they help you get up and running quickly. They also have a lot of tempting knobs to turn that can cause unexpected problems if you don’t fully understand them.
To my mind file servers have to be reliable above all else, so I’d avoid running anything besides file sharing on your server until it’s running like a top and then only add more layers one at a time.
Sorry for all the philosophy, but I really do think this is a common stumbling block for people getting started.
If you haven’t already, check out the Awesome Open Source page’s Booking and Scheduling section.