• 0 Posts
  • 47 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • The full-blown solution would be to run your own recursive DNS server on your local network, block or redirect any other DNS server to your own, and possibly block all known DoH servers.

    This would solve the DNS leakage issue, since your recursive server would learn the authoritative NS for your domain, and so would contact that NS directly when processing any queries for any of your subdomains. This cuts out the possibility of any espionage by your ISP/Google/Quad9’s DNS servers, because they’re now uninvolved. That said, your ISP could still spy on the raw traffic to the authoritative NS, but from your experiment, they don’t seem to be doing that.

    Is a recursive DNS server at home a tad extreme? I used to think so, but we now have people running Pi-hole and similar software, which can run in recursive mode when paired with Unbound (the recursive DNS server software).
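    As a sketch, a minimal unbound.conf for a LAN recursive resolver might look like this; the listen address and LAN subnet here are placeholders for your own network:

```
server:
    # Listen on the LAN side; addresses are placeholders for your network.
    interface: 192.168.1.2
    access-control: 192.168.1.0/24 allow
    # No forward-zone is configured, so Unbound resolves recursively,
    # walking from the root servers to the authoritative NS itself.
    hide-identity: yes
    hide-version: yes
```

    Because there is no forward-zone, no upstream resolver (ISP, Google, Quad9) ever sees your queries.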

    <minor nitpick>

    “It was DNS” typically means that name resolution failed or did not propagate per its specification. Whereas I’m of the opinion that if DNS is working as expected, then it’s hard to pin the blame on DNS. For example, forgetting to renew a domain is not a DNS problem. And setting a bad TTL or a bad record is not a DNS problem (but may be a problem with your DNS software). And so too do I think that DNS leakage is not a DNS problem, because the protocol itself is functioning as documented.

    It’s just that the operators of the upstream servers see dollar signs in selling their users’ data. Not DNS, but rather a capitalism problem, IMO.

    </minor nitpick>


  • “I loaded TrueNAS onto the internal SSD and swapped out the HDD that came with it for a 10 TB drive.”

    Do I understand that you currently have a SATA SSD and a 10TB SATA HDD plugged into this machine?

    If so, it seems like a SATA power splitter on the SSD’s power lead would suffice, in spite of the computer store’s admonition. The reason to split from the SSD’s lead is that an SSD draws much less power than spinning rust.

    Can it still go wrong? Yes, but that’s the inherent risk of pushing beyond what this machine was originally designed for. That said, “going wrong” typically means “won’t turn on”, not “halt and catch fire”.
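    To put rough numbers on that, here’s a back-of-the-envelope power check; the wattages are typical datasheet figures I’m assuming, not measurements of this particular machine:

```python
# Rough power-budget check for splitting one SATA power lead between
# an SSD and a 3.5" HDD. Figures below are typical datasheet values
# (assumptions), not measurements from this specific machine.
SSD_ACTIVE_W = 3.0   # 2.5" SATA SSD, active
HDD_ACTIVE_W = 7.0   # 3.5" HDD, read/write
HDD_SPINUP_W = 20.0  # worst-case spin-up surge on the 12 V rail

def total_draw_watts(ssd_w: float, hdd_w: float) -> float:
    """Combined draw on the shared SATA power lead."""
    return ssd_w + hdd_w

steady = total_draw_watts(SSD_ACTIVE_W, HDD_ACTIVE_W)
worst = total_draw_watts(SSD_ACTIVE_W, HDD_SPINUP_W)
print(f"steady-state: {steady:.1f} W, spin-up worst case: {worst:.1f} W")
```

    Even the spin-up worst case is well within what a single SATA power lead is rated for, which is why the SSD side is the sensible place to split.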


  • If I understand the Encryption Markdown page, it appears the public/private keypair is primarily there to protect the data at rest? But then both keys are stored on the server, albeit protected by the keys’ passphrase.

    So if the protection boils down to the passphrase, what is the point of having the user upload their own keypair? Are the notes ever exported from the instance while still being encrypted by the user’s keypair?

    Also, why PGP? PGP may be readily available, but it’s definitely not an example of user-friendliness, as exemplified by its lack of broad acceptance by non-tech users or non-government users.

    And then, why RSA? Or are other key algorithms supported as well, like ed25519?



  • litchralee@sh.itjust.works to Selfhosted@lemmy.world · Password managers... · edited 1 month ago

    For a single password, it is indeed illogical to distribute it to others as a way of keeping it from being stolen and misused.

    That said, the concept of distributing authority amongst others is quite sound. Instead of each owner having the whole secret, they only have a portion of it, and a majority of owners need to agree in order to combine their parts and use the secret. Rather than passwords, it’s typically used for cryptographically signing off on something’s authenticity (eg software updates), where it’s known as threshold signatures:

    Imagine for a moment, instead of having 1 secret key, you have 7 secret keys, of which 4 are required to cooperate in the FROST protocol to produce a signature for a given message. You can replace these numbers with some integer t (instead of 4) out of n (instead of 7).

    This signature is valid for a single public key.

    If fewer than t participants are dishonest, the entire protocol is secure.
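    To make the t-of-n idea concrete, here’s a toy sketch of Shamir secret sharing, the split-and-reconstruct building block behind threshold schemes. (FROST itself does threshold *signing* and is considerably more involved; this only illustrates how t of n shares recover a secret.)

```python
import random

# Toy Shamir secret sharing over a prime field: split a secret into n
# shares such that any t of them reconstruct it, and fewer reveal nothing.
PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, t=4, n=7)
assert combine(shares[:4]) == 123456789  # any 4 of the 7 shares suffice
```

    Any 3 or fewer shares are statistically independent of the secret, which is what makes "a majority of owners must agree" enforceable by math rather than policy.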




  • https://ipv6now.com.au/primers/IPv6Reasons.php

    Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it would have run out of addresses in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, and so NAT was introduced to stave off address exhaustion. That has caused huge problems that persist to this day, like mismanaged firewalls and the need for port-forwarding. It also broke end-to-end connectivity, which requires additional workarounds like STUN/TURN that continue to plague gamers and video-conferencing software.

    And because of that scarcity, it’s become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).

    The case for v6 is technical, moral, and even economic: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, to bolster their domestic technological familiarity and to escape the broad problems with Business As Usual.

    Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Surprisingly, v6-only is very common now on mobile networks in countries that never had many v4 addresses. And Apple requires all App Store apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack IP networks, so they can reach services that might be v4-only or v6-only.

    In due course, the unstoppable march of time will leave v4-only users in the past.


  • You might also try asking on !ipv6@lemmy.world .

    Be advised that even if a VPN offers IPv6, they may not necessarily offer it sensibly. For example, some might only give you a single address (aka a routed /128). That might work for basic web fetching, but it’s wholly inadequate if you want the VPN to also give addresses to any VMs, or if you want each outbound connection to use a unique IP. And that’s a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can’t.

    Some VPNs will offer you a /64 subnet, but their software might not check whether your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy extensions enabled to prevent this, but good VPN software should explicitly check for that. Not all software does.
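    On Linux, one quick way to check whether privacy extensions are active; the interface name eth0 here is a placeholder for your actual interface:

```shell
# 0 = disabled, 1 = enabled, 2 = enabled and preferred for outbound traffic.
sysctl net.ipv6.conf.eth0.use_tempaddr

# Enable temporary addresses and prefer them for new connections.
sysctl -w net.ipv6.conf.eth0.use_tempaddr=2
```

    With temporary addresses preferred, outbound connections won’t carry an EUI-64 address derived from your MAC.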


  • Connection tracking might not be totally necessary for a reverse proxy mode, but it’s worth discussing what happens if connection tracking is disabled or if the known-connections table runs out of room. For a well-behaved protocol like HTTP(S) that has a fixed inbound port (eg 80 or 443) and uses TCP, tracking a connection means being aware of the TCP connection state, which the destination OS already has to do. But since a reverse proxy terminates the TCP connection, the effort for connection tracking is minimal.

    For a poorly-behaved protocol like FTP – which receives initial packets on a fixed inbound port but then spawns a separate port for outbound packets – the effort of connection tracking means setting up the firewall to allow ongoing (ie established) traffic to pass in.

    But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a mid-way router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, then the only way in would be if conntrack looks up the connection details from its table and allows the ICMP packet in, as “related” traffic. This is not dissimilar to the FTP case above, but rather than a different port number, it’s an entirely different protocol.

    And then there’s UDP tracking, which is relevant to QUIC. For hosting a service, UDP is connectionless, and so for any inbound packet received on port XYZ, conntrack will permit an outbound packet on port XYZ. But that’s redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service. But in the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit an inbound packet on port ABC. If you are doing lots of DNS lookups (typically using UDP), then that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183

    It may behoove you to first look at what’s filling conntrack’s table, before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is due to numerous DNS resolution requests from trying to look up spam source IPs, then perhaps your logging should not do synchronous DNS lookups, or you can also skip connection tracking for DNS.
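    As a sketch of that selective approach in nftables syntax; the table and chain names are placeholders, so adapt them to your actual ruleset:

```
# Skip connection tracking for traffic that is already explicitly allowed.
# Rules in the "raw" priority hooks run before conntrack sees the packet.
table inet raw {
    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
        tcp dport { 80, 443 } notrack   # inbound HTTP(S) to the reverse proxy
    }
    chain output {
        type filter hook output priority raw; policy accept;
        udp dport 53 notrack            # our own outbound DNS lookups
    }
}
```

    One caveat: untracked flows will never match established/related rules, so their reply traffic must be accepted explicitly elsewhere in the ruleset.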


  • https://github.com/Overv/vramfs

    Oh, it’s a userspace (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE will: 1) always pass back to userspace, which costs performance, and 2) destroy any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it’s leaving quite a lot on the table, IMO.

    If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it’s just all mapped into PCIe). And if it was basically offering VRAM as PCIe memory, then this potentially means the VRAM can be used for certain niche RAM cases, like hugepages: some applications need large quantities of memory, plus a guarantee that it won’t be evicted from RAM, and whose physical addresses can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages which are backed by VRAM, then those applications could benefit.

    And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), then it’s entirely plausible to use VRAM as if it were hot-insertable memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access what parts of PCIe-mapped-VRAM.

    Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I’ve always liked the thought of DDR on PCIe cards. “All technologies are doomed to reinvent PCIe,” as someone from Level1Techs once put it, I think.



  • For my own networks, I’ve been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.

    Between your two options, I’m more inclined to recommend the second solution, because although it requires renumbering existing containers to the new subnet, you would still have one subnet for all your containers, but it’s bigger now. Whereas the first solution would either: A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) you would have to enable some sort of awful NAT44 translation to make the two work together.

    So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I’d still go with the second solution, which is a bigger-but-still-singular subnet.
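    If the containers happen to be managed by Docker (an assumption on my part), a ULA-addressed bridge could be created like so; the fd12:… prefix is only an example, as anything under fd00::/8 is free to self-assign:

```shell
# Create a bridge network with a ULA /64. Requires IPv6 support to be
# enabled in the Docker daemon. The prefix below is an example ULA.
docker network create --ipv6 \
  --subnet fd12:3456:789a:1::/64 \
  my-ipv6-bridge
```

    A single /64 holds more addresses than you could ever exhaust with containers, which is exactly why the renumbering problem goes away.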


  • For a link of 5.5 km and with clear LoS, I would reach for 802.11 WiFi, since the range of 802.11ah HaLow wouldn’t necessarily be needed. For reference, many WISPs use Ubiquiti 5 GHz point-to-point APs for their backhaul links for much further distances.

    The question would be what your RF link conditions look like, whether 5 GHz is clear in your environment, and what sort of worst-case bandwidth you can accept. With a clear Fresnel zone, you could probably be pushing something like 50 Mbps symmetrical, if properly aimed and configured.

    Ubiquiti’s website has a neat tool for roughly calculating terrain and RF losses.
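    To put rough numbers on a 5.5 km link, here’s a quick sketch of the first Fresnel zone radius and the free-space path loss, assuming a 5.8 GHz channel. This is pure geometry, not a substitute for a proper planning tool:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fresnel_radius_m(distance_m: float, freq_hz: float) -> float:
    """Maximum radius of the first Fresnel zone (at the link midpoint)."""
    wavelength = C / freq_hz
    return math.sqrt(wavelength * distance_m / 4)

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 5_500.0  # 5.5 km link from the question
f = 5.8e9    # assumed channel frequency
print(f"first Fresnel zone radius: {fresnel_radius_m(d, f):.1f} m")
print(f"free-space path loss: {fspl_db(d, f):.1f} dB")
```

    The first Fresnel zone works out to roughly 8–9 m of clearance needed at the midpoint, which is why mounting height matters as much as raw distance.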





  • “Tbf, can’t the other party mess it up with Signal too?”

    Yes, but this is where threat modeling comes into play. Grossly simplified, developing a threat model means to assess what sort of attackers you reasonably expect to make an attempt on you. For some people, their greatest concern is their conservative parents finding out that they’re on birth control. For others, they might be a journalist trying to maintain confidentiality of an informant from a rogue sheriff’s department in rural America. Yet others face the risk of a nation-state’s intelligence service trying to find their location while in exile.

    For each of these users, they have different potential attackers. And Signal is well suited for the first two, and only alright against the third. After all, if the CIA or Mossad is following someone around IRL, there are other ways to crack their communications.

    What Signal specifically offers is confidentiality in transit, meaning that all ISPs, WiFi networks, CDNs, VPNs, script kiddies with Wireshark, and network admins in the path of a Signal convo cannot see the contents of those messages.

    Can the messages be captured at the endpoints? Yes! Someone could be standing right behind you, taking photos of your screen. Can the size or metadata of each message reveal the type of message (eg text, photo, video)? Yes, but that’s akin to feeling the shape of an envelope. Only through additional context can the contents be known (eg a parcel in the shape of a guitar case).

    Signal also benefits from the network effect, because someone trying to get away from an abusive SO has plausible deniability if they download Signal on their phone (“all my friends are on Signal” or “the doctor said it’s more secure than email”). Or a whistleblower can send a message to a journalist that included their Signal username in a printed newspaper. The best place to hide a tree is in a forest. We protect us.

    “My main issue for Signal is (mostly iPhone users) download it ‘just for protests’ (ffs) and then delete it, but don’t relinquish their acct, so when I text them using Signal it dies in limbo as they either deleted the app or never check it and don’t allow notifs”

    Alas, this is an issue with all messaging apps, if people delete the app without closing their account. I’m not sure if there’s anything Signal can do about this, but the base guarantees still hold: either the message is securely delivered to their app, or it never gets seen. But the confidentiality should always be maintained.

    I’m glossing over a lot of cryptographic guarantees, but for one-to-one or small-group private messaging, Signal is the best mainstream app at the moment. For secure group messaging, like organizing hundreds of people for a protest, that is still up for grabs, because even if an app was 100% secure, any one of those persons can leak the message to an attacker. More participants means more potential for leaks.



  • Let me make sure I understand everything correctly. You have an OpenWRT router which terminates a Wireguard tunnel, which your phone will connect to from somewhere on the Internet. When the Wireguard tunnel lands within the router in the new subnet 192.168.2.0/24, you have iptables rules that will:

    • Reject all packets on the INPUT chain (from subnet to OpenWRT)
    • Reject all packets on the OUTPUT chain (from OpenWRT to subnet)
    • Route packets from phone to service on TCP port 8080, on the FORWARD chain
    • Allow established connections, on the FORWARD chain
    • Reject all other packets on the FORWARD chain

    So far, this seems alright. But where does the service run? Is it on your LAN subnet or the isolated 192.168.2.0/24 subnet? The diagram you included suggests that the service runs on an existing machine on your LAN, so that would imply that the router must also do address translation from the isolated subnet to your LAN subnet.

    That’s doable, but ideally the service would be homed onto the isolated subnet. But perhaps I misunderstood part of the configuration.
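    For reference, the rule set as I understand it might look roughly like this in iptables syntax; the interface names wg0 and br-lan are placeholders for your actual interfaces:

```shell
# wg0 = Wireguard interface (192.168.2.0/24), br-lan = LAN bridge.
# Interface names and ports are placeholders for your setup.
iptables -A INPUT  -i wg0 -j REJECT                       # subnet -> router: blocked
iptables -A OUTPUT -o wg0 -j REJECT                       # router -> subnet: blocked
iptables -A FORWARD -i wg0 -p tcp --dport 8080 -j ACCEPT  # phone -> service
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i wg0 -j REJECT                      # everything else: blocked
```

    If the service lives on the LAN subnet rather than 192.168.2.0/24, the FORWARD accept is where the cross-subnet routing (and any translation) would have to happen.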