I’m just getting started on my first setup. I’ve got Radarr, Sonarr, Prowlarr, Jellyfin, etc. running in Docker, reading and writing their configs to a 4 TB external drive.

I followed a guide to ensure that hardlinks would be used to save disk space.

But what happens when the current drive fills up? What is the process to scale and add more storage?

My current thought process is:

  1. Mount a new drive
  2. Recreate the data folder structure on the new drive
  3. Add the path to the new drive to the jellyfin container (rough example below)
  4. Update existing collections to look at the new location too
  5. Switch (not add) the volume for the *arrs data folder to the new drive
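
Rough example of what I mean by step 3, with made-up paths (in a compose file it would be the equivalent volumes entries):

```
# Step 3, roughly: expose both drives to the Jellyfin container as separate volumes.
# /mnt/disk1 and /mnt/disk2 are placeholder mount points, not my actual paths.
# (A real run would also mount the config volume, omitted here.)
docker run -d --name jellyfin \
  -v /mnt/disk1/data/media:/media \
  -v /mnt/disk2/data/media:/media2 \
  jellyfin/jellyfin
```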

Would that work? It would mean the *arrs no longer have access to the actual downloaded files. But does that matter?

Is there an easier, better way? Some way to abstract away the fact that there will eventually be multiple drives, so I could just add a new drive and have the setup recognize there is more space for storage without messing with volumes or app configs?

  • Karius@lemmy.ml

    I’m going to be adding more drives to my current basic setup soon, and I think LVM is how I’m going to go. Then I can just extend the filesystem across multiple drives in the future as I need to.
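
    If I understand it right, the expansion would go roughly like this (assuming an existing volume group called media-vg with a logical volume media on ext4, and a new disk at /dev/sdb; all names made up):

    ```
    # Prepare the new disk as an LVM physical volume (device name is a placeholder)
    sudo pvcreate /dev/sdb
    # Add it to the existing volume group
    sudo vgextend media-vg /dev/sdb
    # Grow the logical volume into all the new free space
    sudo lvextend -l +100%FREE /dev/media-vg/media
    # Grow the filesystem to match (ext4 shown; use the matching tool for your fs)
    sudo resize2fs /dev/media-vg/media
    ```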

      • MentalEdge@sopuli.xyz

        LVM can be convenient, but in my experience, it’s a lot more of a headache than it’s worth. But by all means give it a try. I think I just had bad luck.

        But there is also nothing that stops the arrs from working across multiple drives. I use six, with content across all of them, all detected and manageable in the arrs.

        • Billygoat@catata.fish

          This is what I do. As soon as a drive is full I create a new default root path in the arrs. Tbf, I’ve only had to do this twice so far.

    • Ajen@sh.itjust.works

      You can’t remove drives from a zpool though. So if you start with a small drive and keep adding drives as you fill them up, you’ll eventually run out of SATA ports and want to replace the smallest drive. The only way to do that is to create a new zpool and copy all of your data to it, which means you need a second set of drives that’s at least as big as the first.

      Or you could add a PCIe SATA card, if you have a spare PCIe slot. Used cards like the Dell PERC H310 are cheap and reliable and support 8 drives on their own, or >256 with cheap expander cards that can be daisy-chained (and only need power, so they don’t use up PCIe slots).

      Edit: looks like they added support for removing drives about 5 years ago.
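
      If I’m reading the docs right, removal is roughly this (pool and vdev names are made up, and it only works for single-disk and mirror vdevs, not raidz):

      ```
      # See which top-level vdevs make up the pool
      zpool status tank
      # Evacuate the data off one vdev and remove it from the pool
      sudo zpool remove tank sdb
      # Watch the evacuation progress
      zpool status -v tank
      ```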

      • ryannathans@aussie.zone

        I prefer M.2 PCIe cards, but same deal, expansion go brrr.

        You can also increase the size of a redundant vdev (e.g. raidz2) by replacing the drives one by one with larger ones. I recently used this approach to grow my vdev from 4 TB to 72 TB.
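
        Roughly, for each disk in the vdev (pool and device names are placeholders):

        ```
        # Let the pool grow automatically once every member is larger
        sudo zpool set autoexpand=on tank
        # Swap one disk and wait for the resilver to finish before doing the next
        sudo zpool replace tank sda sdg
        zpool status tank
        # The extra capacity only shows up after all drives in the vdev are replaced
        ```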

  • pory@lemmy.world

    I’m using MergerFS, which makes this really easy. I set up a temp MergerFS array with all my disks except the one I want to replace, add the new drive to my main array, then run a command to move all data from the drive being replaced to the temp array. The original array’s mount point doesn’t notice the difference. Once it’s done, I remove the old disk from my main MergerFS array, add the new one, and delete the “temp” array. Then I can remove the old disk from my SnapRAID config and also physically remove it from the server.
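
    Roughly, the shuffle looks like this (mount points are made up for illustration, and OMV does most of it through its UI anyway):

    ```
    # Temporary pool made of every disk except the one being retired
    sudo mkdir -p /mnt/temp-pool
    sudo mergerfs -o defaults,allow_other /mnt/disk2:/mnt/disk3:/mnt/disk4 /mnt/temp-pool
    # Drain the old disk into the temp pool, keeping attributes and deleting sources
    sudo rsync -avP --remove-source-files /mnt/disk1/ /mnt/temp-pool/
    # Then: update the main pool's branch list, unmount the temp pool,
    # and drop the old disk from the SnapRAID config before pulling it
    ```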

    If you’ve got an old PC lying around, you should look into setting up OpenMediaVault on it.

  • MentalEdge@sopuli.xyz

    What’s the problem?

    …the arrs can handle more than one storage folder/drive just fine? You don’t need to use hardlinks unless you want to continue to seed forever.

    If you don’t use hardlinks in the arrs, they won’t duplicate the files; they will move them out of the download folder into the library folder. All the hardlink option does is allow you to continue to seed even after the media has been downloaded.
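
    To illustrate what a hardlink actually is (paths made up, and it only works when both folders are on the same filesystem):

    ```
    # Link the downloaded file into the library without copying it
    ln /data/torrents/movie.mkv /data/media/movies/movie.mkv
    # Both paths now point at the same inode, so no extra space is used
    stat -c '%i %n' /data/torrents/movie.mkv /data/media/movies/movie.mkv
    ```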

    The data folder is separate; it only contains library details and metadata, no media files. It should never get big enough to fill up a drive.

    My setup downloads media to a temp folder on an SSD, then moves the files onto one of my six drives depending on where I told it to put a series/movie when I added it.

  • Akareth@lemmy.world

    I use btrfs. This allows me to add additional hard drives (of different sizes, too) over time very easily without having to touch any other part of the system.
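
    Adding a disk is basically just (device and mount point are placeholders):

    ```
    # Add the new disk to the existing btrfs filesystem
    sudo btrfs device add /dev/sdc /mnt/media
    # Optionally rebalance so existing data spreads across the new disk
    sudo btrfs balance start /mnt/media
    ```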

  • SmoothLiquidation@lemmy.world

    When you get a new drive, you could move some of your library to it, like just the movies or tv or whatever. Then you only need to update one library.

    Are you using Linux? You could set it up to mount the new drive into the existing file structure. That way you would not have to change any configurations.
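
    Something like this, with placeholder paths and filesystem (the idea is the new drive just appears as a subfolder of the library the containers already see):

    ```
    # Mount the new drive inside the existing library tree
    sudo mkdir -p /data/media/movies2
    sudo mount /dev/sdc1 /data/media/movies2
    # Make it permanent with an fstab entry, e.g.:
    # UUID=<new-drive-uuid>  /data/media/movies2  ext4  defaults,nofail  0  2
    ```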

    It might also be handy to configure Jellyfin to save the NFO metadata alongside the media files so it doesn’t have to re-match everything if the path changes.

    I definitely had data indexed at /Mount/driveA/movies and when I moved it to driveB it was a bit of a pain.

    In the long run you might want to invest in a NAS or something. That way you can just add more drives as needed.

    • MentalEdge@sopuli.xyz

      NASes don’t do anything you can’t set up yourself, and their price-to-benefit ratio is absolute trash. The only reason you should ever buy one is if you are completely tech illiterate.

      Otherwise, build one. If that’s what you meant, agreed. Having one is absolutely worth it.

      • SmoothLiquidation@lemmy.world

        I mean, all I said was that they should think about “investing in a NAS”. Whether you buy a Synology or build your own TrueNAS box, it will take more hardware than plugging in more USB drives.

      • ChaoticNeutralCzech@feddit.de

        Begging my ISP to give me root access to the router they gave me, so that I can set up a NAS with just a USB-SATA adapter and no additional equipment. (I already use SMB shared folders, but they’re a mess.)

        • MentalEdge@sopuli.xyz

          Routers make for terrible NASes.

          But you could do what my dad does: he chains his own router after the ISP-provided one, so he has full control of the second one in the chain.

          My solution was to buy a router-modem that was compatible with the internet type my ISP provides, and ditch their piece of crap entirely.

    • SeaMauFive@lemm.eeOP

      Googling off of this response, I think you’re right that a NAS is the best long-term solution. And in terms of a fully scalable system, I saw that I can create a distributed file system across multiple NAS units to scale even further. So thank you.

      • SmoothLiquidation@lemmy.world

        Awesome. I upgraded my Jellyfin box from a Mac Mini with a bunch of USB drives attached to a Synology 920+, and I have been really happy with it. I upgraded the RAM and it runs Jellyfin along with the *arr containers just fine.

        As someone else said, you can also build your own if you want. Both options will allow for easy scaling in the future as you need it.

  • redxef@feddit.de

    I recently expanded my storage. I started with one RAID 5 array with four drives, then just added another drive and grew the array, the LUKS container, and the filesystem.
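
    The rough sequence, with made-up device and mapping names:

    ```
    # Add the new disk to the md array and reshape from 4 to 5 members
    sudo mdadm --add /dev/md0 /dev/sde
    sudo mdadm --grow /dev/md0 --raid-devices=5
    # Once the reshape finishes, grow the LUKS mapping, then the filesystem
    sudo cryptsetup resize media-crypt
    sudo resize2fs /dev/mapper/media-crypt
    ```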

  • notfromhere@lemmy.one

    I built a 5-bay NAS from old computer parts and put ZFS on it for storing media and LLM models etc.
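
    For anyone curious, a pool like that can be created along these lines (disk names and layout here are just an example, not necessarily mine):

    ```
    # One raidz1 vdev across five disks (prefer /dev/disk/by-id paths for real use)
    sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # A dataset for media with compression enabled
    sudo zfs create -o compression=lz4 tank/media
    ```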