Right now, I have around 20TB of data in redundant ZFS mirrors, so I am somewhat protected against any single drive failing. Critical data is backed up at various cloud providers, but that covers only a few gigs of the total.

Looking at S3 pricing, it seems rather infeasible to back up my data there or at the other “big” cloud providers, as it would cost me around $180 with AWS, or about half of that with Backblaze.
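
(Back-of-envelope, using just the numbers above: 20TB is roughly 20,000GB, so that $180 figure works out to about $0.009 per GB, and the bill scales linearly with capacity from there.)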

How and where do you guys back up your data?

  • Moosemouse@lemmy.ml · 1 year ago

    It would be marginally risky, but considering how many people have large storage arrays, a “mutual backup compact” between two folks, where each can run backups to the other’s array, would get you an affordable offsite backup for catastrophes.

    I see a bunch of people with 10TB of data and 30TB arrays, and if two of them got together they would both be reasonably safe from a total array failure.

    • ddnomad@infosec.pub · 1 year ago

      This does sound interesting! Would need some tooling to lay my paranoia to rest though, and some trust towards the other person.

      • Moosemouse@lemmy.ml · 1 year ago

        I can imagine a containerized service that only runs, say, SSH with a forced command, like borg serve: https://borgbackup.readthedocs.io/en/stable/usage/serve.html
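
        Something like this in the receiving side’s ~/.ssh/authorized_keys is roughly what that borg serve page describes (the repo path and key here are just placeholders):

            # pin this key to "borg serve" and to a single repo path on the receiving box
            command="borg serve --restrict-to-path /srv/backups/friend",restrict ssh-ed25519 AAAA... friend@their-box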

        And set up the container with the --storage-opt option to limit space usage. That would make it harder to misuse the space or CPU, or to break out into the host server.
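
        Something like this is what I have in mind (image name is made up, values are arbitrary, and --storage-opt size= only works on storage drivers that support it, e.g. overlay2 on xfs with project quotas):

            # cap disk, CPU and RAM for the backup sink container
            docker run -d --name backup-sink \
              --storage-opt size=500G \
              --cpus 1 \
              --memory 1g \
              -p 2222:22 \
              some-borg-sshd-image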

        You could go one step further and set up something like a tailscale/headscale network, only allow access over that, and limit the ACLs on the tailnet to just the SSH port. That should shield it from the Internet at large and also allow an absolute minimum of access to the other side.
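
        In the tailnet’s ACL policy that could look something like this (the tags are hypothetical; you’d tag the two machines accordingly):

            // only let the partner's node reach the backup box, and only on the ssh port
            {
              "acls": [
                { "action": "accept", "src": ["tag:backup-partner"], "dst": ["tag:backup-sink:22"] }
              ]
            }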

        I wonder if you could run the tailscale client within the container? Having it all together would make it actually usable.
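
        I think you can, at least with the official tailscale/tailscale image in userspace-networking mode; one way to glue it together would be a sidecar whose network namespace the ssh container joins (auth key and names are placeholders):

            # sidecar that holds the tailnet connection
            docker run -d --name backup-ts \
              -e TS_AUTHKEY=tskey-auth-XXXX \
              -e TS_STATE_DIR=/var/lib/tailscale \
              -v ts-state:/var/lib/tailscale \
              tailscale/tailscale

            # the ssh/borg container from before, but joined to the sidecar's network
            # instead of publishing a port, so it's only reachable over the tailnet
            docker run -d --name backup-sink \
              --network container:backup-ts \
              some-borg-sshd-image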

        I’m also looking at some of the distributed file systems out there; if one supports “m of n” connections to get the data, you could possibly use it to have the encrypted backups stored on multiple machines at once, with more resilience.

        • ddnomad@infosec.pub · 1 year ago

          Tbh the idea does sound interesting, especially if there’s a way to do Shamir’s secret sharing on top of the encrypted snapshot or something. Cause I’m not too worried about exposing my stuff to the internet, as I at least partially do that for a living, but rather about making sure I don’t essentially send all my family’s documents in plaintext to some stranger on the internet.
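
          One hedged way I could imagine doing the Shamir part: keep the repo passphrase out of band and split just that, e.g. with the common ssss CLI (threshold and share count here are arbitrary, and this is nothing borg-specific):

              # split the repo passphrase into 3 shares, any 2 of which can reconstruct it
              echo -n "correct horse battery staple" | ssss-split -t 2 -n 3
              # later, paste any 2 shares back in to recover the passphrase
              ssss-combine -t 2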
