Hi guys! I’m taking my first stab at Docker…and I’m doing it in Proxmox. I created an LXC container, in which I installed Docker and Portainer. Portainer seems happy, and serves its admin page on port 9443 correctly. Next I tried running the Immich image, following the steps detailed in their own guide. This…doesn’t seem to open the admin website on port 2283. But then again, it seems to run in its own Docker internal network (172.16.0.x). How should I reach the Immich admin page from another computer on the same network? I’m new to Docker, so I’m not sure how containers are supposed to communicate with the normal computer network…Thanks!

  • PunkiBas@lemmy.world · 9 months ago

    I have Immich working fine inside an LXC with Docker. You just have to make sure that keyctl and Nesting are enabled in the LXC container’s options in Proxmox, and make sure to use Immich’s recommended docker-compose file.

    If you still have problems, take a look at the container’s logs with the “docker logs” command to see if there’s an error message somewhere.
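
    A minimal sketch of the commands involved (the container name immich_server is an assumption; check yours with docker ps):

```shell
# List running containers whose name matches "immich" (name is an assumption).
docker ps --filter "name=immich"

# Show the last 50 log lines of the server container.
docker logs --tail 50 immich_server

# Follow the log live while you reproduce the problem (Ctrl-C to stop).
docker logs -f immich_server
```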

    • iturnedintoanewt@lemm.ee (OP) · 9 months ago

      Thanks! When I type my LXC’s IP:2283, I get “unable to connect”. I checked the docker-compose.yml and the port mapping seems to be 2283:3001, but no luck either way. Is there anything that needs to be done on Docker’s network in order to…“publish” a container to the local network so it can be seen? Or can any container with a mapped port be reached via the host’s IP with no further config? Checking Portainer’s networks section, I can see an ‘immich-default’ network using bridge on 172.18.0.0/16, while the system’s bridge seems to be running at 172.17.0.0/16. Are these the correct defaults? Should I change anything?

      Thanks!

      • PunkiBas@lemmy.world · 9 months ago

        That all seems correct. The way to expose services with docker-compose is the ports section:

        ports:
          - 2283:3001
        

        That means that you expose whatever is at port 3001 in the container (in this case the Immich server inside the Docker container, which listens on 3001 by default) on port 2283 of the host machine (in this case, your LXC container). So it should work if everything else is set up correctly.

        The 172.x.x.x networks are normal internal networks for Docker to use; normally you needn’t care about them, because you just expose whichever port you need via the ports section above.
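
        If it helps, a quick way to sanity-check the mapping from inside the LXC (container name is an assumption; adjust to whatever docker ps shows):

```shell
# Confirm which host ports the container actually publishes.
docker port immich_server

# The 172.17/172.18 bridges you saw live here; their existence is normal.
docker network ls

# From the LXC itself: does anything answer on the published port?
curl -I http://localhost:2283
```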

        Are you following this step by step to set it all up? Is your .env file properly set up? Did you check the container’s logs?

        • iturnedintoanewt@lemm.ee (OP) · edited · 9 months ago

          Thanks…I did follow their guide, step by step. The only thing I customized was the Immich uploads folder, which I want to go to my NAS. I have it set up as an NFS mount handled by Proxmox, and then it’s just a transparent bind mount in the LXC. The user in the LXC container has read/write access to this location, and Docker runs as this same user. But I reckon I’m addressing this in Docker in a horribly messed-up way, as I’ve never used it before. Checking “docker logs immich_server”, I’m getting this:

          [Nest] 7  - 04/08/2024, 9:53:08 AM     LOG [SystemConfigService] LogLevel=log (set via system config)
          node:fs:1380
            const result = binding.mkdir(
                                   ^
          
          Error: EACCES: permission denied, mkdir 'upload/library'
              at mkdirSync (node:fs:1380:26)
              at StorageRepository.mkdirSync (/usr/src/app/dist/repositories/storage.repository.js:112:37)
              at StorageService.init (/usr/src/app/dist/services/storage.service.js:30:32)
              at ApiService.init (/usr/src/app/dist/services/api.service.js:72:29)
              at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
              at async ApiModule.onModuleInit (/usr/src/app/dist/app.module.js:58:9)
              at async callModuleInitHook (/usr/src/app/node_modules/@nestjs/core/hooks/on-module-init.hook.js:51:9)
              at async NestApplication.callInitHook (/usr/src/app/node_modules/@nestjs/core/nest-application-context.js:223:13)
              at async NestApplication.init (/usr/src/app/node_modules/@nestjs/core/nest-application.js:100:9)
              at async NestApplication.listen (/usr/src/app/node_modules/@nestjs/core/nest-application.js:169:33) {
            errno: -13,
            code: 'EACCES',
            syscall: 'mkdir',
            path: 'upload/library'
          }

          Let’s see… So let’s say my LXC container has a /mnt/NAS-immich-folder path, already mounted and with rw permissions. Then I edited my docker-compose.yml volumes lines as follows:

              volumes:
                - /mnt/NAS-immich-folder:/mnt/immich
                - ${UPLOAD_LOCATION}:/mnt/immich
                - /etc/localtime:/etc/localtime:ro
          

          And my .env path looks like:

          # The location where your uploaded files are stored
          UPLOAD_LOCATION=/media/immich
          
          

          …I’m sure I’m doing something horribly wrong besides the no-no of Docker over LXC…Is there anything messed up in these paths? What am I doing wrong? Thanks so much!

          • PunkiBas@lemmy.world · 9 months ago

            Ah! Now I see the problem:

            permission denied, mkdir ‘upload/library’

            It’s clearly having permission problems with the image library directory.

            Also:

            volumes:
             - /mnt/NAS-immich-folder:/mnt/immich
             - ${UPLOAD_LOCATION}:/mnt/immich
            

            With this configuration you are trying to mount this directory from your LXC machine:

            /mnt/NAS-immich-folder

            into this directory inside the immich container:

            /mnt/immich

            And then you also try to mount a second directory there in the next line. But Immich doesn’t use /mnt/immich for its library; it uses this:

            /usr/src/app/upload

            You should NOT edit the default docker-compose.yml file. Instead, you should only edit the .env file, like so:

            UPLOAD_LOCATION=/mnt/NAS-immich-folder
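
            For reference, with that .env value the stock compose file (left unedited) already mounts the upload location onto the path the EACCES error pointed at. Roughly, the relevant lines look like this (paraphrased from recent Immich releases, so double-check against your copy):

```yaml
# docker-compose.yml — immich-server service, as shipped (do not edit):
services:
  immich-server:
    volumes:
      # ${UPLOAD_LOCATION} comes from .env (e.g. /mnt/NAS-immich-folder)
      # and lands on the path Immich actually writes to:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
```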

            I can also see that there’s a specific tutorial on how to set it up with Portainer. In that case you might have to edit the docker-compose file to replace .env with stack.env, and place the contents of the env file under Advanced → Environment variables in Portainer.
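
            As a sketch, the Portainer-specific tweak amounts to something like this (service name taken from Immich’s compose file; double-check against the Portainer tutorial):

```yaml
# docker-compose.yml, edited only when deploying as a Portainer stack:
services:
  immich-server:
    env_file:
      - stack.env   # was .env; Portainer supplies the env vars you paste into its UI under this name
```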

            Try these things and ask here again if you can’t get it running.

            • iturnedintoanewt@lemm.ee (OP) · edited · 9 months ago

              Wow, thanks! Let me take a look; I missed the Portainer part! Sigh…I followed the instructions through. I deleted the previous stack and created a new one, this time all the way from Portainer. This time I ONLY modified the .env file (well, and per the instructions, the .yaml now refers to the .env as stack.env). Made it deploy…and nothing. Still getting the same mkdir error :(

              • PunkiBas@lemmy.world · 9 months ago

                Might be an NFS permissions problem. Can you try some other temp directory with, say, 777 permissions to see if it’s that?
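
                Something like this, for instance (the path is just an example; a local, non-NFS directory rules out both ownership and NFS squash issues at once):

```shell
# Create a throwaway upload dir on local disk with wide-open permissions.
mkdir -p /tmp/immich-test-upload
chmod 777 /tmp/immich-test-upload

# Verify the mode actually took (prints 777).
stat -c '%a' /tmp/immich-test-upload

# Then set UPLOAD_LOCATION=/tmp/immich-test-upload in the .env / stack env
# and redeploy; if the mkdir error disappears, it's an NFS permissions issue.
```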

                • iturnedintoanewt@lemm.ee (OP) · 9 months ago

                  Thanks! It seems to be more about how to properly map a local host path/mount into Docker, which I’m a complete noob at…I think this is where I’m failing right now.

  • Avid Amoeba@lemmy.ca · edited · 9 months ago

    Did you say you’re running Docker in LXC? So a container in a container? If yes, that’s generally an anti-pattern.

  • catloaf@lemm.ee · 9 months ago

    Wait, you’re running Docker inside LXC? I would not do that. I would create a full VM and run Docker in there. Or, if that’s all you’re running, skip Proxmox and install Debian or whatever on bare metal, and Docker on that.

    • iturnedintoanewt@lemm.ee (OP) · 9 months ago

      Sure…But Proxmox is already there. It’s installed and it runs 5 VMs and about 10 containers. …I’m not going to dump all that just because I need Docker…and I’m not getting another machine if I can just use that one. So…sure, there might be overhead, but I saw some other people doing it, and the other alternative I saw was running Docker in a VM…which is even more overhead. And I fear running it bare-metal on the Proxmox server might conflict with how it manages the LXC containers.

      • chiisana@lemmy.chiisana.net · 9 months ago

        Docker inside LXC adds not only the overhead each would individually add (probably not significant enough to matter in a homelab setting) but also an extra layer of complexity that you’re going to hit when it comes to debugging anything. You’re much better off dropping Docker in a full-fledged VM instead of running it inside LXC. With a full VM, if nothing else, you can allow the virtual NIC to be treated as its own separate device on your network, which should remove a layer of complexity from the problem you’re trying to solve.

        As for your original problem…it sounds like you’re not exposing the Docker container layer’s network to your host. Without knowing exactly how you’re launching them (beyond the quirky Docker-inside-LXC setup), it is hard to say where the issue may be. If you’re using compose, try setting the network to external, or bridge, and see if you can expose the service’s port that way. Once you’ve got the port exposure figured out, you’re probably better off unexposing the service, setting up a proper reverse proxy, and wiring the service to go through the reverse proxy instead.
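
        As a sketch of the publishing idea (a made-up service over the default bridge network, not Immich’s actual compose file):

```yaml
# Hypothetical compose service publishing container port 3001 on host port
# 2283; reachable afterwards at <host-IP>:2283 from the rest of the LAN.
services:
  app:
    image: example/app:latest
    ports:
      - "2283:3001"   # host:container

networks:
  default:
    driver: bridge    # the stock bridge driver; no extra config needed
```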

        • iturnedintoanewt@lemm.ee (OP) · 9 months ago

          Thanks! When I type my LXC’s IP:2283, I get “unable to connect”. I checked the docker-compose.yml and the port mapping seems to be 2283:3001, but no luck either way. Is there anything that needs to be done on Docker’s network in order to…“publish” a container to the local network so it can be seen? Or can any container with a mapped port be reached via the host’s IP with no further config? Checking Portainer’s networks section, I can see an ‘immich-default’ network using bridge on 172.18.0.0/16, while the system’s bridge seems to be running at 172.17.0.0/16. Are these the correct defaults? Should I change anything?

          Thanks!

      • grehund@lemmy.world · 9 months ago

        Jim’s Garage on YT recently did a video about running Docker in an LXC; I think you’ll find the info you need there. It can be done, but if you’re new to Docker and/or LXCs, it adds an additional layer of complexity you will have to deal with for every container/stack you deploy.

      • vzq@lemmy.blahaj.zone · edited · 9 months ago

        You are absolutely free to fuck yourself over by using a niche option plagued by weird problems.

        Or you could, like, not do that.

      • earmuff@lemmy.dbzer0.com · 9 months ago

        Add a new VM, install docker-ce on it, and slowly migrate all the other containers/VMs to Docker. The end result is way less overhead, way less complexity, and way better sleep.

        • iturnedintoanewt@lemm.ee (OP) · 9 months ago

          Thanks…So you think a full VM will result in less overhead than a container? How so? I mean, the VM will take a bunch of extra RAM, plus the overhead of running a full kernel by itself…

          • earmuff@lemmy.dbzer0.com · 9 months ago

            I was assuming you would be able to get rid of the other 5 VMs by doing so. If not, obviously you would not have less overhead.

            • iturnedintoanewt@lemm.ee (OP) · edited · 9 months ago

              Yeah, the ones that are VMs cannot be transferred easily to containers…I would have moved them over to LXC, as it’s been my preferred choice until now. But Home Assistant was deployed from a VM template provided by HA, and the Windows VMs…well, they’re Windows. I also have an ancient nginx/Seafile install that I’m a bit afraid to move to LXC, but at some point I’ll get to it. Having Immich for pictures would reduce the size of some of the Seafile libraries a bit :)

              • earmuff@lemmy.dbzer0.com · 9 months ago

                My HA is running in Docker. It is easier than you might think. Forget about LXC. Just take your time migrating the stuff, and only shut off a VM once its service works in Docker. Believe me, managing Docker is way easier than 5 VMs with different OSes. Docker Compose is beautiful and easy.

                If you need help, just message me; I might be able to give you a kickstart.

      • Scrubbles@poptalk.scrubbles.tech · edited · 9 months ago

        I did it that way for years. It’s not worth the hassle, my man. I did the same, told people that it’d be fine, that it was more performant and so it was worth it. But then the problems, oh lord, the problems. Every Proxmox update brought hours or days of work trying to figure out how it broke this time. Docker updates would make it completely bork. Random freezes, permission errors galore. I threw in the towel on it, figuring I was hacking it to make it work anyway.

        Now I do VMs on Proxmox. Specifically, I swapped to k3s, which is a whole other thing, but Docker in VMs runs fine given how much less annoyance there is. Self-hosting became a lot less stressful.

        Learn from our mistakes, OP.

  • Kaavi@lemmy.world · 9 months ago

    I’ve been using Proxmox mainly with LXC containers for years. I have an LXC running Docker and Portainer for a few services I have running in Docker.

    I wouldn’t do it with anything critical, or anything that needs much performance or resources. But honestly, most things don’t need that.

    So if you, like me, just need a few Docker containers and you already have everything else running, this can be a fine way to do it. Go for it :)

  • thirdBreakfast@lemmy.world · 9 months ago

    I routinely run my homelab services as a single Docker container inside an LXC; they are quicker, and it makes backups and moving them around trivial. However, while you’re learning, a VM (with something conventional like Debian or Ubuntu) is probably advisable; it’s a more common setup, so you’ll get more helpful advice when you ask a question like this.

  • Decronym@lemmy.decronym.xyz (bot) · edited · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    HA             Home Assistant automation software
                   High Availability
    HTTP           Hypertext Transfer Protocol, the Web
    IP             Internet Protocol
    LXC            Linux Containers
    NAS            Network-Attached Storage
    NFS            Network File System, a Unix-based file-sharing protocol known for performance and efficiency
    nginx          Popular HTTP server

    6 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

    [Thread #664 for this sub, first seen 8th Apr 2024, 10:55] [FAQ] [Full list] [Contact] [Source code]

  • N0x0n@lemmy.ml · edited · 9 months ago

    Docker networking is fun :) (IMO).

    Without having a look at your container and how you configured it: if you have correctly mapped your ports, didn’t change anything fancy, and don’t use a reverse proxy, then your container should be accessible at your host’s IP on your mapped Immich port:

    HostIP:2283

    Edit: Also, don’t run a Docker container in… another container (LXC).

    Containerinception

    • iturnedintoanewt@lemm.ee (OP) · 9 months ago

      Thanks! When I type my LXC’s IP:2283, I get “unable to connect”. I checked the docker-compose.yml and the port mapping seems to be 2283:3001, but no luck either way. Is there anything that needs to be done on Docker’s network in order to…“publish” a container to the local network so it can be seen? Or can any container with a mapped port be reached via the host’s IP with no further config? Checking Portainer’s networks section, I can see an ‘immich-default’ network using bridge on 172.18.0.0/16, while the system’s bridge seems to be running at 172.17.0.0/16. Are these the correct defaults? Should I change anything?

      Thanks!

      • WhyAUsername_1@lemmy.world · 9 months ago

        Can you reach other services on that VM? If you don’t know, test that first. (Maybe run a Python HTTP server to test this?)
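
        For instance, a self-contained version of that reachability test (in practice you’d run python3 -m http.server on the host, bind 0.0.0.0, and curl it from another machine; this collapses both ends into one script):

```python
# Stand up a throwaway HTTP server and fetch it, mirroring what
# `python3 -m http.server` plus curl from another machine would show.
import http.server
import threading
import urllib.request

# Bind to an OS-assigned free port so nothing collides with real services.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
print(status)  # 200 means the port answered
server.shutdown()
```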