• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: August 28th, 2023




  • So you’re not remapping the source ports to be unique? There’s no mechanism to avoid collisions when multiple clients use the same source port? Full Cone NAT implies that you have to remember the mapping (potentially indefinitely—if you ever reassign a given external IP:port combination to a different internal IP or port after it’s been used you’re not implementing Full Cone NAT), but not that the internal and external ports need to be identical. It would generally only be used when you have a large enough pool of external IP addresses available to assign a unique external IP:port for every internal IP:port. Which usually implies a unique external IP for each internal IP, as you can’t restrict the number of unique ports used by each client. This is why most routers only implement Symmetric NAT.

    (If you do have sufficient external IPs the Linux kernel can do Full Cone NAT by translating only the IP addresses and not the ports, via SNAT/DNAT prefix mapping. The part it lacks, for very practical reasons, is support for attempting to create permanent unique mappings from a larger number of unconstrained internal IP:port combinations to a smaller number of external ones.)
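
    One way to express that prefix mapping with iptables is the NETMAP target, which statically maps one subnet onto another 1:1 while leaving ports alone. A minimal sketch, assuming hypothetical placeholder subnets and interface name:

        # Outbound: rewrite internal 10.0.0.x source addresses to public 203.0.113.x (ports untouched)
        iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/24 -j NETMAP --to 203.0.113.0/24
        # Inbound: traffic arriving for the public prefix goes back to the matching internal host
        iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.0/24 -j NETMAP --to 10.0.0.0/24

    Because only the address is rewritten, each internal IP:port keeps its port on its own dedicated public address, which is what gives you the full-cone behaviour.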


  • What “increased risks as far as csam”? You’re not hosting any yourself, encrypted or otherwise. You have no access to any data being routed through your node, as it’s encrypted end-to-end and your node is not one of the endpoints. If someone did use I2P or Tor to access CSAM and your node was randomly selected as one of the intermediate onion routers there is no reason for you to have any greater liability for it than any of the ISPs who are also carrying the same traffic without being able to inspect the contents. (Which would be equally true for CSAM shared over HTTPS—I2P & Tor grant anonymity but any standard password-protected web server with TLS would obscure the content itself from prying eyes.)



  • No, that’s not how I2P works.

    First, let’s start with the basics. An exit node is a node which interfaces between the encrypted network (I2P or Tor) and the regular Internet. A user attempting to access a regular Internet site over I2P or Tor would route their traffic through the encrypted network to an exit node, which then sends the request over the Internet without the I2P/Tor encryption. Responses follow the reverse path back to the user. Nodes which only establish encrypted connections to other I2P or Tor nodes, including ones used for internal (onion) routing, are not exit nodes.

    Both I2P and Tor support the creation of services hosted directly through the encrypted network. In Tor these are referred to as onion services and are accessed through *.onion hostnames. In I2P these internal services (*.i2p or *.b32) are the only kind of service the protocol directly supports—though you can configure a specific I2P service linked to an HTTP/HTTPS proxy to handle non-I2P URLs in the client configuration. There are only a few such proxy services as this is not how I2P is primarily intended to be used.

    Tor, by contrast, has built-in support for exit nodes. Routing traffic anonymously from Tor users to the Internet is the original model for the Tor network; onion services were added later. There is no need to choose an exit node in Tor—the system maintains a list and picks one automatically. Becoming a Tor exit node is a simple matter of enabling an option in the settings (there’s a torrc sketch at the end of this comment), whereas in I2P you would need to manually configure a proxy server, inform others about it, and have them adjust their proxy configuration to use it.

    If you set up an I2P node and do not go out of your way to expose an HTTP/HTTPS proxy as an I2P service then no traffic from the I2P network can be routed to non-I2P destinations via your node. This is equivalent to running a Tor internal, non-exit node, possibly hosting one or more onion services.
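
    (For what it’s worth, the “option in the settings” on the Tor side amounts to a few lines in torrc. A rough sketch with placeholder values, not a recommended policy:

        # Placeholder relay name
        Nickname MyRelayNickname
        # Listen for relay connections on the usual port
        ORPort 9001
        # Opt in to acting as an exit node
        ExitRelay 1
        # Limit which destination ports exit traffic may use
        ExitPolicy accept *:80, accept *:443, reject *:*

    There is no equivalent switch in I2P; you would be building and advertising your own outproxy.)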



  • It is not true that every node is an exit node in I2P. The I2P protocol does not officially have exit nodes—all I2P communication terminates at some node within the I2P network, encrypted end-to-end. It is possible to run a local proxy server and make it accessible to other users as an I2P service, creating an “exit node” of sorts, but this is something that must be set up deliberately; it’s not the default or recommended configuration. Users would need to select a specific I2P proxy service (exit node) to forward non-I2P traffic through and configure their browser (or other network-based programs) to use it.


  • “Off topic” is a legitimate reason to downvote a post or comment, even one made respectfully and in good faith.

    I do sometimes wish more sites had adopted something like the system Slashdot used, with multiple categories of up or down votes (insightful, informative, off-topic, flamebait, etc.) which users could weight according to their own preferences. The simplistic “either up, down, or neutral” model is a rather blunt instrument.


  • nybble41@programming.dev to Mastodon@lemmy.ml · Got to love Mastodon · 1 year ago

    The more users spread out into smaller, more easily censored instances, the more the remaining fragmented bits of the Lemmy ecosystem still talking to each other will turn into echo chambers full of groupthink. This low threshold for defederation is the Fediverse’s greatest weakness. Sure, it’s possible to work around it—but how many separate Lemmy accounts are users expected to create? Even if you have accounts on every instance of note you’d need to manually cross-post messages to each balkanized server and their comment sections wouldn’t be shared—exactly the sort of thing federation was meant to avoid.

    Email, another federated system, has this same weakness. It’s why it’s increasingly difficult to run your own (outgoing) email server which other systems will accept messages from without going through a well-known third party like Google. Especially when trying to push content to a large audience (e.g. mailing lists), which happens to be Lemmy’s core function.


  • Examples of local commands I might run in tmux could include anything long-running which is started from the command line. A virtual machine (qemu), perhaps, or a video encode (ffmpeg). Then if I need to log out or restart my GUI session for any reason—or something goes wrong with the session manager—it won’t take the long-running process with it. While the same could be done with nohup or systemd-run, using tmux allows me to interact with the process after it’s started. (There’s a short example at the end of this comment.)

    I also have systems which are accessed both locally and remotely, so sometimes (not often) I’ll start a program on a local terminal through tmux so I can later interact with it through SSH without resorting to x11vnc.
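
    A quick sketch of what that looks like in practice, with a placeholder session name, input file, and encoder settings:

        # Start a long encode in a detached tmux session so it survives logging out of the GUI
        tmux new-session -d -s encode 'ffmpeg -i input.mkv -c:v libx265 -crf 22 output.mkv'
        # Later, locally or over SSH, reattach to check progress or interact with it
        tmux attach-session -t encode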


  • Not the GP but I also use tmux (or screen in a pinch) for almost any SSH session, if only as insurance against dropped connections. I occasionally use it for local terminals if there is a chance I might want a command to outlive the current graphical session or migrate to SSH later.

    Occasionally it’s nice to be able to control the session from the command line, e.g. splitting a window from a script. I’ve also noticed that wrapping a program in tmux can avoid slowdowns when a command generates a lot of output, depending on the terminal emulator. Some emulators will try to render every update, even if that means blocking the program’s output while the GUI catches up, rather than just updating the terminal’s state in memory and rendering the latest version.
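
    For the scripting case, something along these lines (window name and commands are arbitrary placeholders):

        # From a shell script: open a new window running a build...
        tmux new-window -n build 'make -j4'
        # ...and split it so a resource monitor runs alongside
        tmux split-window -h -t build top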


  • Historically speaking, people have gone to the trouble of manually digitizing hard copy books to distribute freely. There were digital copies of print books available online (if you knew where to look) before e-books were officially available for sale in any form. That includes mass-market novels as well as items of interest to historians. Ergo, your scepticism seems entirely unjustified.

    OCR is far from perfect (though editing OCR output is generally faster than retyping), but even without it we have the storage and bandwidth these days to distribute full books as stacks of images if needed, without converting them to text. The same way people distribute scans of comics/manga.


  • The average person would just download it. Only one person needs the equipment to digitize it. And that equipment isn’t as specialized as you seem to think. For printed (mass-produced) books you can just cut the pages from the spine and feed them in batches through an automated document feeder, which comes standard with many consumer-grade scanners. Automated page-turning on an e-reader can be done with a software plugin in some cases, or externally with something like a SwitchBot. Capturing copy-restricted video is frankly much more involved, and that hasn’t stopped anyone so far.


  • “with books there’s basically no reasonable way to create an ebook from a hardcopy”

    On the contrary, tons of books have been digitized from hard copies through a combination of OCR and manual editing. (E.g.: Project Gutenberg.) The same basic process works for both printed books and pages displayed on an e-reader. It’s quite tedious but not exactly difficult. Anyone with a smartphone can submit usable scans, though some simple DIY equipment speeds up the process and improves the quality, and OCR is getting better all the time. (There’s a minimal example of the OCR step at the end of this comment.)

    In the worst case the book can simply be retyped. People used to copy books by hand after all, using nothing more sophisticated than pen/quill and paper/parchment/papyrus. Unlike in those days the manual effort is only needed once per title, not per copy.
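
    To give a sense of the OCR step, a minimal sketch using the tesseract command-line tool (filenames are placeholders, and the raw output still needs manual proofreading):

        # Run OCR over a directory of page scans and collect the text into one file
        for page in scans/page-*.png; do
            tesseract "$page" stdout >> book.txt
        done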


  • Allegories aside, the Bible definitely has a few LGBTQ characters, even if they’re not portrayed in a very positive light. I suppose that means they’ll be banning the Bible from school libraries? Not to mention a fair amount of historical literature… including anything featuring Leonardo da Vinci, Florence Nightingale, King James (yes, that King James), William Shakespeare, King Richard I, or Julius Caesar.

    It will be interesting to see whether this makes the history classes easier, for lack of material to cover, or harder, for lack of references.


  • The most valuable thing is an experienced team who thoroughly understand both the specifications and the implementation as well as the reasoning behind both. Written specifications are great as onboarding and reference material but there will always be gaps between the specifications and the code. (“The map is not the territory.”) Even with solid specifications you can’t just turn over maintenance of a codebase to a new team and expect them to immediately be productive with it.


  • “Who is enforcing this and how?”

    Liability would be decided by the courts or some other form of binding dispute resolution, such as arbitration. Obviously. Harming someone through action or negligence is a tort, and torts are addressed by the judicial branch. Both sides would present their arguments, including any scientific evidence in their favor—the FDA or similar organizations could weigh in here as expert witnesses, if they have something to offer—and the court will decide whether the vendor acted reasonably or is liable to the plaintiff.

    “If you knowingly sell me a car with an engine about to fail, you are in no way accountable.”

    If you knew that the engine was about to fail and didn’t disclose that fact, or specifically indicate that the vehicle was being sold “as-is” with no guarantees, then you certainly should be accountable for that. Your contract with the buyer was based on the premise that they were getting a vehicle in a certain condition. An unknown fault would be one thing, but if you knew about the issue and the buyer did not then there was no “meeting of the minds”, which means that the contract is void and you are a thief for taking their payment under false pretenses.

    Anyway, you continue to miss the point. I’m not saying that everyone should become an expert in every domain. I’m saying that people should be able to choose their own experts (reputation sources) rather than have one particular organization like the FDA (instance/community moderators) pre-filtering the options for everyone. I wasn’t even the one who brought up the FDA—this thread was originally about online content moderation. If you insist on continuing the thread please try to limit yourself to relevant points.