It’s a bit of a weird shower thought, but basically I was wondering, hypothetically, whether it would be possible to take data from a social media site like Reddit, map the most commonly used words to numbers starting at 1, and use a separate application to translate the text back and forth.

So if the word “because” were number 100, it would be stored as three characters instead of seven.

There could also be additions for suffixes, so “gardening” could be 5000+1, or a word like “hoped” could be 2000-2 because the “e” is already present.

Would this result in any kind of space savings if you were using larger amounts of text like a book series?
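
Something like this rough Python sketch is what I’m imagining (toy corpus made up for illustration; the exact numbers depend on how ties in frequency get broken):

    # Toy sketch: rank words by how often they appear, then store each word
    # as its rank instead of its letters.
    from collections import Counter

    def build_dictionary(corpus_text):
        # Most common word gets number 1, the next gets 2, and so on.
        counts = Counter(corpus_text.lower().split())
        ranked = [word for word, _ in counts.most_common()]
        word_to_num = {word: i + 1 for i, word in enumerate(ranked)}
        num_to_word = {i + 1: word for i, word in enumerate(ranked)}
        return word_to_num, num_to_word

    def encode(text, word_to_num):
        return [word_to_num[w] for w in text.lower().split()]

    def decode(numbers, num_to_word):
        return " ".join(num_to_word[n] for n in numbers)

    corpus = "the cat sat on the mat because the mat was warm"
    w2n, n2w = build_dictionary(corpus)
    codes = encode("because the cat sat", w2n)
    print(codes)               # e.g. [6, 1, 3, 4]
    print(decode(codes, n2w))  # "because the cat sat"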

    • Corroded@leminal.space (OP) · 1 year ago

      Huh, that was an interesting read. I’m not sure I understood the entirety of it, but it sounds like it would be a lot more efficient than what I was thinking.

      • morhp@lemmy.wtf · 1 year ago

        Yes, it’s very efficient and the core of what compression formats like .zip do.

        The main difference from your idea is that computers count in binary (0, 1, 10, 11, 100, 101 and so on), and that you don’t want to assign words to these very low codes directly. Say you assigned 1 to the most common word. That word would be encoded very short, but you’d sort of take the 1 away from all the other codes, because you wouldn’t know whether 11 means the most common word twice or the 11th (3rd in decimal) most common word once.
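
        A tiny made-up example of that ambiguity, and how a prefix-free table (where no code is the start of another code) avoids it:

            # Naive assignment: "the" -> 1, "cat" -> 10, "sat" -> 11.
            # Then the bit stream "11" could mean "the the" or "sat", so it
            # can't be decoded reliably. A prefix-free table fixes that:
            prefix_free = {"the": "0", "cat": "10", "sat": "11"}

            def decode_bits(bits, code_table):
                # Greedy decoding only works because no code is a prefix of another.
                reverse = {code: word for word, code in code_table.items()}
                out, current = [], ""
                for bit in bits:
                    current += bit
                    if current in reverse:
                        out.append(reverse[current])
                        current = ""
                return out

            print(decode_bits("0100011", prefix_free))  # ['the', 'cat', 'the', 'the', 'sat']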

        Huffman coding essentially computes the optimal assignment of codes to words mathematically.
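
        Here’s a rough word-level Huffman sketch (just to show the idea, not how .zip actually stores things):

            import heapq
            from collections import Counter

            def huffman_codes(words):
                counts = Counter(words)
                # Heap entries: (frequency, tie-breaker, {word: code so far}).
                heap = [(freq, i, {word: ""}) for i, (word, freq) in enumerate(counts.items())]
                heapq.heapify(heap)
                tie = len(heap)
                while len(heap) > 1:
                    f1, _, left = heapq.heappop(heap)
                    f2, _, right = heapq.heappop(heap)
                    # Words on the left branch get a 0 in front, right branch a 1.
                    merged = {w: "0" + c for w, c in left.items()}
                    merged.update({w: "1" + c for w, c in right.items()})
                    heapq.heappush(heap, (f1 + f2, tie, merged))
                    tie += 1
                return heap[0][2]

            text = "the cat sat on the mat because the mat was warm".split()
            for word, code in sorted(huffman_codes(text).items(), key=lambda kv: len(kv[1])):
                print(word, code)  # the most frequent word ("the") gets the shortest code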

        The other main difference between your suggestion and most compression algorithms is that you wouldn’t want to use a huge dictionary in practice, since loading it and looking things up in it would be very slow. Most compression algorithms have a rather small dictionary built in and/or they build one on the fly while looking at the data they want to compress.
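
        Roughly in the spirit of LZW, a sketch of building the dictionary on the fly (real formats like the DEFLATE used in .zip are more involved):

            def lzw_compress(data):
                # Start with single characters; longer phrases get added as they appear.
                dictionary = {chr(i): i for i in range(256)}
                next_code = 256
                current = ""
                output = []
                for ch in data:
                    candidate = current + ch
                    if candidate in dictionary:
                        current = candidate
                    else:
                        output.append(dictionary[current])
                        dictionary[candidate] = next_code  # learn a new phrase
                        next_code += 1
                        current = ch
                if current:
                    output.append(dictionary[current])
                return output

            print(lzw_compress("the mat the mat the mat"))  # repeats compress into fewer codes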