I’m talking about things like “python3-blabla” and “libsomething”. How bad would it be if nobody used these library packages and everyone just shipped all kinds of libraries as part of each application’s package?

  • Skull giver@popplesburger.hilciferous.nl

    The norm for most Rust packages is to statically compile everything. That’s why every Rust application needs a rebuild and an update every time a vulnerability is found in a dependency like rustls (and many projects don’t do this). This allows a Rust program to run basically anywhere a modern enough version of the C library is installed, without any worries about dependencies, at the cost of disk space.

    You end up with binaries of tens or hundreds of megabytes where the equivalent C code would be kilobytes. GUI and networked applications are especially fat, because they bundle entire frameworks and other projects.

    I would estimate the size difference as “10x the size of your /usr/bin, but without 99% of /usr/lib”. My /usr/bin is about 2.3GB in size, my /usr/lib is about 17GB. I wouldn’t be surprised if it increased the footprint by a few gigabytes. In theory, filesystem-level de-duplication may help, but only if the offsets happen to line up.

    In theory you could test this on a distro like Gentoo: if you edit the build scripts right, you can get the entire OS built with static linking.
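
    On a smaller scale you can see the effect with a single hello-world. A minimal sketch, assuming gcc and glibc (exact sizes vary by system):

    ```c
    /* hello.c: compare static and dynamic linking on disk.
     *
     * Build both variants (assuming gcc):
     *   gcc -o hello-dynamic hello.c          (links libc dynamically)
     *   gcc -static -o hello-static hello.c   (copies libc into the binary)
     *
     * On a typical glibc system the dynamic binary is tens of
     * kilobytes, while the static one is closer to a megabyte because
     * the used parts of libc are embedded. `ldd hello-static` should
     * report "not a dynamic executable".
     */
    #include <stdio.h>

    int main(void) {
        puts("hello, static vs dynamic linking");
        return 0;
    }
    ```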

  • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social

    You may find out. This is basically what Snap and Flatpak do. And containers, for that matter. They’re a sort of half-assed version of static linking where you get few of the benefits of either, and most of the downsides of both.

    • folkrav@lemmy.world

      They’re a sort of half-assed version of static linking where you get few of the benefits of either, and most of the downsides of both.

      I’m… very curious as to what you mean by this take.

      • OK, it’s not entirely fair, but I think Snap is designed with an ulterior motive, masquerading as a solution to a problem of questionable severity.

        Dynamic linking is a reuse solution, right? The benefits are that you reduce code duplication and size impact, and you also get the wider positive impact of bug and security fixes. But it was mainly developed as a way of reducing space requirements when disk and memory were far more constrained than today. Not only did it reduce the at-rest size of programs; dynamic linking also allows the OS to load code into memory once and re-use it across multiple programs. The downside is dependency hell, where OSes have to manage multiple versions, and poor vendor versioning practices mean that minor upgrades needed by some applications unexpectedly break others. This last point is one of the problems Snap and Flatpak are trying to solve.
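
        To make that reuse mechanism concrete, here’s a minimal sketch of a shared library, assuming gcc on Linux (the library and function names are made up for illustration):

        ```c
        /* libgreet.c: build as a shared library with
         *   gcc -shared -fPIC -o libgreet.so libgreet.c
         *
         * Every program linked against libgreet.so maps the same file;
         * the OS loads its code pages into memory once and shares them
         * across all running programs that use it. Fix a bug here and
         * reinstall the .so, and every dependent program picks up the
         * fix on its next launch with no rebuild; the same property is
         * what lets a "minor" upgrade silently break a dependent app.
         */
        void greet(void) {
            /* stand-in for a real, widely shared routine */
        }
        ```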

        Statically linked programs minimize dependency hell. A statically linked binary can be copied to nearly any machine of the same architecture and will run, and it will keep running through system upgrades almost indefinitely. There are almost no dependencies, not even e.g. on a running Snap service on the machine. The downsides are greater binary sizes, being impractical for GUIs, the loss of automatic distribution of bug and security fixes in dependencies, and only being available to statically compiled languages (outside of bundling software for interpreted VMs, which have always been hit-or-miss second-class citizens).

        So what does Snap get you?

        • A dependency on what is almost a system-wide VM: Snap itself
        • Another service that needs to be running on the computer, or else you can’t run Snap packages
        • Dynamic linking to libraries, but without the benefit of broad application of bug and security fixes rolled out in dependencies - the downside of statically linked binaries
        • The benefit of statically linked stability, but without the portability and independence of a statically linked binary
        • The loss of much of the dynamically-linked space savings, as library version isolation ends up reducing the ability of the OS to share code between programs.

        Fundamentally, automatic distribution of bug fixes and stability through upgrades are conflicting goals; it is impossible to satisfy both. This is due to the essential truth that one tool’s bug fix is another’s breaking change.

        I made the comment that I believe Snap has an ulterior motive, and I feel as if this needs an explanation. If you compare Snap’s design to, say, AppImage or Flatpak, it’s pretty clear that - by intention or not - Snap creates a walled garden where the Store is controlled by one entity which has the ultimate say in what shows up there. It isolates software: if you look at what’s happening in the OSS space, projects that are packaged for Snap tend to be far more difficult to install outside of Snap - the authors often don’t even bother to provide basic build instructions that don’t require Snap. This has a chilling effect on other distributions. I am encountering more and more software that is only available through the Snap store, and without build instructions outside of Snap, it is harder for volunteers to create packages for other distros. It’s not a lot yet, but it’s a disturbing trend.

    • inZayda@lemmy.blahaj.zone

      I mean, aren’t they more like distro-independent package managers? It’s not like it’s going to install a different library for every program unless they require specific versions, which I guess could be true, but then you’d have both installed anyway, so ¯\_(ツ)_/¯

      • I think it’s more accurate to say they will install different libraries for every package unless there’s a matching library already installed. I don’t think this is just semantics because, as you say, the former is what traditional distro package managers do, whereas the latter is what Snap does. Over time, the Snap-based system will end up taking more space, with more different versions of the same library installed, than a traditional system. For that, you trade (arguably) fewer dependency-related issues and more ability to concurrently run software with conflicting library versions. Regardless of the benefits, per the question OP posed, a Snap (or Flatpak) based distro will end up taking up more space.

        I haven’t run Gentoo in (almost) decades, but IIRC you could configure it to build mostly statically-linked packages, which would pretty much net you OP’s hypothetical system. That would be even larger than a Snap-based system.

    • Sethayy@sh.itjust.works

      But you do get the simplicity, and following the real ‘free’ part of FOSS, you get the choice to do the fancy nerd magic if you want.

      But for most of us, Flatpaks are readily updated and easily installed; idfc about GTK magicians’ spells.

      I think it’s a major step towards general Linux acceptance.

        • folkrav@lemmy.world

          End-users just see “huh, X distro doesn’t have Y app”. Application availability is a major component of adoption of a platform.

        • Sethayy@sh.itjust.works

          As a mid-power user, I almost exclusively use them just because I want my software up to date. For example, using Discover I can update all my Godot installs on all my devices to the latest features I’m seeing everyone rave about. (This is mainly down to the maintainers, but it trickles down.)

          Looking back to when I was first dipping my toes into Linux, though, things like missing libraries and other scary-looking apt errors essentially meant I didn’t go any further and accepted not installing the package. I could see this frustrating lots of users early on and causing them to return to Windows, as I did for a bit.

          Edit to add: I still consider it very important to have the option to install through a .deb, say, but more like how one can compile from source with a thousand flags if they choose - just as long as it’s a choice.

            • carly™@lemm.ee

              In my experience Arch is pretty unstable, though. I’ve never had an Arch installation that didn’t break by the end of the month. Flatpaks allow me to use a stable base like Debian while having certain programs more up to date.

              • Sethayy@sh.itjust.works

                Exactly, this is why I haven’t made the switch yet.

                It’s like letting software be managed per package instead of per distro, giving its devs more fine-grained control over stability vs. update speed.

  • jasondj@ttrpg.network

    How far you wanna go with this?

    Your shell itself is actually a Docker container that just runs bash and mounts the root filesystem, and everything in /bin is just an alias to a dedicated minimal container?

  • amigan@lemmy.dynatron.me

    Well, back in the days of static everything, some OSes had every system command be a hard link to the same binary (just switching on argv[0], like busybox), so they could still share library pages. This was a big deal when your workstation only had 8MB of core. But to answer your question, I’m sure we will find out in the next few years with all of the garbage containerization and packaging systems popping up.
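
    The argv[0] trick is small enough to sketch. Here’s a toy version in C, assuming gcc (busybox does the same thing at scale):

    ```c
    /* multicall.c: one binary, hard-linked under several names,
     * choosing its behaviour from argv[0].
     *
     * Build and link (assuming gcc):
     *   gcc -static -o multicall multicall.c
     *   ln multicall true && ln multicall false
     *
     * All the links share one inode on disk and one set of code
     * pages in memory, so even fully static binaries stay cheap.
     */
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        (void)argc;
        const char *name = basename(argv[0]);   /* which link ran us? */

        if (strcmp(name, "true") == 0)
            return 0;                           /* act as /bin/true */
        if (strcmp(name, "false") == 0)
            return 1;                           /* act as /bin/false */

        fprintf(stderr, "%s: unknown applet name\n", name);
        return 2;
    }
    ```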

  • Zeth0s@lemmy.world

    I guess nobody knows. It also really depends on the programs installed. If you need all of Qt, GDK, and the JDK/Java for each program with a UI… you’d run out of space soon.

  • CondorWonder@lemmy.ca

    I’m not sure how consistent it is, but the static binaries I have for btrfs-progs are about 2x larger than their dynamic counterparts. When you statically compile, only the functions actually used are included in the binary, so it really depends on how much of the library is used.
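
    You can watch the linker prune unused code yourself. A small sketch, assuming gcc and binutils (the dead function is made up for illustration):

    ```c
    /* pruning.c: the linker only keeps what is reachable.
     *
     * Static libraries are archives of object files; the linker copies
     * in only the members that are referenced, and with function
     * sections it can discard individual unused functions too:
     *   gcc -O2 -ffunction-sections -Wl,--gc-sections -o pruned pruning.c
     *   nm pruned | grep never_called    (should print nothing)
     */
    #include <stdio.h>

    void never_called(void) {           /* unreferenced: can be dropped */
        puts("this function is never referenced");
    }

    int main(void) {
        puts("only reachable code ends up in the binary");
        return 0;
    }
    ```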

    • This is a good way to look at it. Anything using a GUI would have a far worse ratio. Imagine statically linking a KDE application. Even for us tiling-WM users, statically linking even the barest-bones X program would be huge.

      A reasonably statically linked system would probably be in the range you suggest: about 2x. All the GUI stuff would stay dynamically linked, and even so that accounts for half of an install’s size (more if you install a DE like Gnome or KDE); statically link the rest of the headless stuff and double its size, and it’s probably close.

      What would kill you would be all of the scripted stuff. If you bundled the entire Python VM with every Python tool, you’d be more than doubling the size.

    • Tb0n3@sh.itjust.works

      Windows has DLLs. I think OP is asking how big it would be if everything were statically linked or didn’t use linked libraries - like if every single thing were a Flatpak.

      • echo64@lemmy.world

        Yeah, this is what Windows and macOS do. Two games/apps don’t share the same DLLs. The OS can do some things to reduce the memory footprint if they do, but that requires them to be the same version - effectively the same DLL.

        Generally, apps on Windows and macOS do not share libraries between apps; they ship the libraries they need and run them.

        • Tb0n3@sh.itjust.works

          Some do, but there are absolutely system-wide libraries installed by things like .NET, and included in the system. Just like with Linux, though, some software was built against an old, incompatible library and has to include it, or has its own proprietary ones.

          • echo64@lemmy.world

            You’re presenting it like most libraries an application uses are system libraries, and in a few exceptional cases they will bundle a DLL. This is absolutely not the case; the vast majority of libraries are bundled.

        • Skull giver@popplesburger.hilciferous.nl

          The biggest difference between Windows/macOS and Linux is that you don’t need extra libraries for most programs. There’s the occasional C/C++ runtime (that does get shared whenever possible) and a few proprietary DLLs for things like games and Qt, but the OS itself already contains everything. That’s why Linux/BSD has a wide range of package managers to update system components, while Windows just updates itself as if it’s one big piece of software.

          You don’t need to manually specify a version of GnuTLS/BoringSSL/OpenSSL, because every target your application is deployed onto is guaranteed to have a TLS library. You don’t need to bother with picking a GUI library, because the SDK provides you with all the GUI controls you need. Sometimes the existing libraries are too old (e.g. when Chrome wants to do modern TLS on old systems), or they’re products you paid for (e.g. highly efficient video codecs or compression engines), so you ship them along.

          The packaged APIs aren’t always better (dealing with COM can be a real pain) but you probably don’t need anything that isn’t already installed on the most barebones version of the OS.

    • BareMetalSkirt@lemmy.kya.moe

      Not at all; each package has a regular dependency tree, like on any other distro. The difference is that each package can see a different subset of dependencies, so you can have multiple versions of a library. What would be a version conflict on most distros here just means you have two versions of libsomething side by side. The /nix/store on my desktop is roughly the same size as my Gentoo /usr.

    • Chobbes@lemmy.world

      Packages can share dependencies in NixOS, and often do for packages in the same version of nixpkgs. It’s not quite the same as statically linking everything, but it basically gives you the perks of that with fewer downsides and other advantages.