I recently implemented a backup workflow for myself. I heavily use restic for desktop backups and for a full system backup of my local server. It works amazingly well: I always have a versioned backup without a lot of redundant data, and it is fast, encrypted, and compressed.
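
Roughly, the whole workflow boils down to a handful of restic commands like these (paths are just placeholders; the repo password comes from RESTIC_PASSWORD or a prompt):

    # one-time setup: create an encrypted repository on the backup disk
    restic -r /mnt/backup/restic-repo init

    # take a snapshot of home; only new/changed data is stored
    restic -r /mnt/backup/restic-repo backup ~/ --exclude ~/.cache

    # list versions, then thin out old snapshots and drop unreferenced data
    restic -r /mnt/backup/restic-repo snapshots
    restic -r /mnt/backup/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune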

But I wondered: how do you guys do your backups? What software do you use? How often do you run them, and what does your workflow look like?

  • beeng@discuss.tchncs.de · 2 hours ago

    Borg to a NAS.

    500GB of that NAS is “special”, so I then rsync that to a 500GB old laptop HDD, which is then duplicated again to another 500GB old laptop HDD.

    The same 500GB is rsync’d to a cloud server.
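
    In practice that part is just a few rsync mirror runs, roughly like this (paths and host are made up):

        # mirror the "special" share to the first old laptop HDD, then duplicate it
        rsync -a --delete /mnt/nas/special/ /mnt/laptop-hdd-1/special/
        rsync -a --delete /mnt/laptop-hdd-1/special/ /mnt/laptop-hdd-2/special/

        # same set pushed to the cloud server over ssh
        rsync -a --delete /mnt/nas/special/ user@cloud.example.com:/backup/special/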

  • Random Dent@lemmy.ml · 2 hours ago

    I use BorgBackup with Vorta as a GUI, and I follow the 3-2-1 backup rule for important stuff (i.e. 3 copies, 2 on different media, 1 off-site).

  • lattrommi@lemmy.ml · 2 hours ago

    I want to say I’m glad you asked this and thank you for asking. In this day and age there are a lot of valid concerns for privacy and anonymity and the result is that people do not share how their system(s) work, not openly or very often. I’m still fairly new to Linux (3.5 years) and at times, I feel like I am doing everything wrong and that there is probably a better way. Posts like these help me learn about possible improvements or mistakes I might have made.

    I previously used Vorta with Borgbackup locally, automatically backing up my Home (sans things like .cache and .mozilla) to a secondary internal drive every other day. I also manually backed up a smaller set of important documents (memes and porn #joke) to a USB flash drive that I keep on my person, and that set was also copied across several cloud storage providers (Dropbox, Mega, Proton), depending on how much space their free tiers provided, with items removed according to how much I trusted each provider.
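
    Under the hood, what Vorta ran for me was essentially a borg command along these lines (repo path is just an example):

        # every other day: archive home, skipping caches and the browser profile
        borg create --stats --exclude ~/.cache --exclude ~/.mozilla \
            /mnt/second-drive/borg-repo::home-{now} ~/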

    Then I built a new system. In the process of setting it all up, I had a few hiccups. It took longer than I expected to have a stable system. That was over a year ago (stat / …Birth: 2024-02-05 04:20:53…) and I still haven’t gotten around to setting up any backup system on it. I want to rethink my old solution, and this post is useful for learning about the options available. It’s also a reminder to get it done before it is too late. Where I live, tornado season is starting. I lost a lot in 2019 after my city had 4 tornadoes in one day.

  • hallettj@leminal.space · 4 hours ago

    My conclusion after researching this a while ago is that the good options are Borg and Restic. Both give you incremental backups with cheap point-in-time snapshots. They are quite similar to each other, and I don’t know of a compelling reason to pick one over the other.

    • Zenlix@lemm.ee (OP) · 3 hours ago

      As far as I know, restic isn’t strictly incremental by definition; it’s more of a mix of full and incremental backups.
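
      The way I understand it, every snapshot restores like a full backup, but physically only new blobs get added to the repository. You can see that with something like this (repo path is just an example):

          restic -r /mnt/backup backup ~/projects     # first run stores everything
          restic -r /mnt/backup backup ~/projects     # second run only adds changed blobs
          restic -r /mnt/backup snapshots             # both appear as complete snapshots
          restic -r /mnt/backup stats --mode raw-data # actual deduplicated size in the repo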

  • zeca@lemmy.eco.br · 8 hours ago

    I do backups of my home folder with Vorta, which uses borg on the backend. I never tried restic, but borg is the first incremental backup utility I’ve tried that doesn’t increase the backup size when I move or rename a file. I was using backintime before to back up 500GB onto a 750GB drive, and if I moved 300GB to a different folder, it would try to copy those 300GB again onto the backup drive and fail for lack of storage, while borg handles it beautifully.
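
    Roughly, that behaviour looks like this (repo and paths are made up):

        borg create /mnt/backup/repo::before-{now} ~/data
        mv ~/data/videos ~/data/archive/
        borg create --stats /mnt/backup/repo::after-{now} ~/data
        # the second archive's "Deduplicated size" stays tiny, because
        # the moved chunks are already in the repository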

    As an offsite solution, I use syncthing to mirror my files to a PC at my father’s house that is turned on just once in a while, to save power and extend disk longevity.

  • Gieselbrecht@feddit.org · 8 hours ago

    I’m curious, is there a reason why no one uses deja-dup? I use it with an external SSD on Ubuntu and (recently) Mint, where it comes pre-installed, and did not encounter problems.

  • Pika@sh.itjust.works · 7 hours ago

    For my server, I use Proxmox Backup Server to an external HDD for my containers, and I back up media monthly to an encrypted cold drive.

    For my desktop? I use a mix of syncthing (which goes to the server) and Windows File History (if I’ve logged into the Windows partition). I want to get Timeshift working, but I have so much data that it’s hard to manage, so currently I’ll just shed some tears if my Linux system fails.

  • blade_barrier@lemmy.ml · 9 hours ago

    Since most of the machines I need to back up are VMs, I do it by means of the hypervisor. I’d use borg scheduled in crontab for the physical ones.
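
    Something like this nightly entry would do it (repo location is made up):

        # m  h  dom mon dow  command
        0  3  *   *   *    borg create --stats ssh://backup@host.example.com/./borg-repo::{hostname}-{now} /etc /home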

  • MentalEdge@sopuli.xyz · 8 hours ago

    I recently switched to Kopia for my offsite backup solution.

    It’s apparently one of the faster options, and it can be set up so that the files of the differential backups are handled by a repository server on the offsite end, meaning file management doesn’t need to happen over the network at a snail’s pace.

    The result is a way to maintain frequent full backups of my nextcloud instance, with almost no downtime.

    Nextcloud only goes into maintenance mode for the duration of a postgres database dump, after which the actual file system backup occurs using a temporary btrfs snapshot, containing a frozen filesystem at the time of the database dump.
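
    Concretely, the run boils down to something like this (paths, database name and snapshot location are made up; it assumes the data directory is its own btrfs subvolume and that kopia is already connected to the offsite repository server):

        # freeze writes only for the duration of the database dump
        sudo -u www-data php /srv/nextcloud/occ maintenance:mode --on
        sudo -u postgres pg_dump nextcloud > /srv/nextcloud/nextcloud-db.sql   # dump lands inside the data dir, so it ends up in the snapshot too

        # frozen view of the data directory, then let Nextcloud run again
        btrfs subvolume snapshot -r /srv/nextcloud /srv/.snapshots/nextcloud-backup
        sudo -u www-data php /srv/nextcloud/occ maintenance:mode --off

        # back up from the snapshot; the kopia repository server sits at the offsite end
        kopia snapshot create /srv/.snapshots/nextcloud-backup
        btrfs subvolume delete /srv/.snapshots/nextcloud-backup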

  • rutrum@programming.dev · 7 hours ago

    I use borg the same way you describe. Part of my NixOS config builds a systemd unit that starts a backup of various directories on my machine at midnight every day. I have 2 repos: one stored locally and on a cloud backup provider (BorgBase), and another that’s just stored locally, that is, on another computer in my house. The local-only one is for all my home media; I haven’t yet put the large dataset of photos and videos on the cloud or offsite.
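
    In effect, each midnight run is just a couple of borg create calls, roughly like this (the repo locations are made up; the real values live in my NixOS config):

        # repo 1: documents etc., on BorgBase (with a local copy as well)
        borg create --stats ssh://xxxx@xxxx.repo.borgbase.com/./repo::{hostname}-{now} ~/documents
        # repo 2: local-only, on another machine in the house, for the big media set
        borg create --stats ssh://me@homeserver/./media-repo::{hostname}-{now} ~/media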

  • melfie@lemmings.world · 8 hours ago

    I currently use rclone with encryption to iDrive e2. I’m considering switching to Backrest, though.

    I originally tried Backblaze B2, but I exceeded the API quotas of their free tier; iDrive has “free” API calls, so I recently bought a year’s worth. I still have a 2-year Proton subscription and tried rclone with Proton Drive, but it was too slow.
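
    The backup itself is just an rclone sync through a crypt remote layered over the e2 bucket, roughly like this (remote and paths are made up):

        # "idrive-crypt" wraps the e2 remote, so everything lands encrypted
        # --fast-list trades memory for fewer listing calls, which helps with API quotas
        rclone sync ~/important idrive-crypt:backups/important --progress --fast-list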

  • poinck@lemm.ee · 10 hours ago

    This looks a bit like borgbackup. It is also versioned, stores everything deduplicated, supports encryption, and can be mounted using FUSE.

    • Zenlix@lemm.ee (OP) · 10 hours ago

      Thanks for your hint towards borgbackup.

      After reading the Borg Backup quick start, they look very similar. But as far as I can tell, with borg, encryption and compression are optional, while restic always encrypts (and compresses). You can mount your backups in restic too. It also seems that restic supports more repository locations, such as several cloud storage services and a special HTTP server.
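
      For example, both can expose a backup as a FUSE filesystem for browsing old versions (mount points and archive name are made up):

          restic -r /mnt/backup/restic-repo mount /mnt/restic
          borg mount /mnt/backup/borg-repo::home-2024-01-01 /mnt/borg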

      I also noticed that borg is mainly written in Python while restic is written in Go. Based on that, I’d assume restic is a bit faster, though that is just a guess from the language (I have not tested it).

      • drspod@lemmy.ml · 8 hours ago

        It was a while ago that I compared them, so this may have changed, but one of the main differences I saw was that borg had to back up over ssh, while restic had storage backends for many different storage methods and APIs.
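
        That difference shows up in how you address the repository (hosts and bucket names are made up):

            # borg: local path or ssh
            borg create ssh://user@host.example.com/./repo::{now} ~/data
            # restic: ssh/sftp too, but also object storage or its own REST server
            restic -r sftp:user@host.example.com:/srv/restic-repo backup ~/data
            restic -r s3:s3.amazonaws.com/example-bucket backup ~/data
            restic -r rest:https://backup.example.com:8000/ backup ~/data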

      • ferric_carcinization@lemmy.ml · 8 hours ago

        I haven’t used either, much less benchmarked them, but the performance differences should be negligible due to the IO-bound nature of the work. Even with compression & encryption, it’s likely that either the language is fast enough or that it’s implemented in a fast language.

        Still, I wouldn’t call the choice of language insignificant. IIRC, Go is statically typed while Python isn’t. Even if type errors are rare, I would rather trust software written to be immune to them. (Same with memory safety, but both languages use garbage collection, so it’s not really relevant in this case.)

        Of course, I could be wrong. Maybe one of the tools cannot fully utilize the network or disk. Perhaps one of them uses multithreaded compression while the other doesn’t. Architectural decisions made early on could also cause performance problems. I’d just rather not assume any noticeable performance difference caused by the programming language in this case.

        Sorry for the rant, this ended up being a little longer than I expected.

        Also, Rust rewrite when? :P