  • I’m not going to watch the video, but what’s the procedure for switching between Linux and Windows? Usually you dedicate a GPU entirely to VFIO, with a 2nd GPU for the host OS (or run headless).

    Anyway, will it work? Yes, minus some anti-cheat software. Will it be a simple solution? Well, once you get things stable, yes. The tech behind this is mature, but it can be a rabbit hole.

    I would look into a non-Nvidia GPU for your 2nd PCIe x16 slot (x4, shared with the 2nd M.2 slot FYI). Good idea to check IOMMU groups before buying anything, but modern AMD motherboards are usually fine. Blacklist the Nvidia drivers and dedicate the 3070 to VFIO to make your life easier, and run Linux off the secondary GPU. Intel A380 might be a good choice. Do gaming stuff on Windows and stream via Parsec/Looking Glass/Moonlight+Sunshine; everything else on Linux.
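
    If you go that route, the VFIO side boils down to a couple of config files. A minimal sketch (the PCI IDs below are placeholders - pull yours from lspci, and grab both the video and audio functions):

    ```
    # Find the GPU's vendor:device IDs (video + audio functions)
    lspci -nn | grep -i nvidia

    # /etc/modprobe.d/vfio.conf - claim the 3070 for vfio-pci at boot
    options vfio-pci ids=10de:2484,10de:228b

    # /etc/modprobe.d/blacklist-nvidia.conf
    blacklist nouveau
    blacklist nvidia

    # Enable the IOMMU on the kernel command line (AMD example):
    #   amd_iommu=on iommu=pt
    # then rebuild the initramfs (Debian/Ubuntu):
    update-initramfs -u
    ```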



  • Mostly just as a wrapper for Docker. The main issue I’ve run into is that Docker’s union filesystem (overlay2) doesn’t work when backed by ZFS, so it falls back to the vfs driver and disk usage can balloon out of control. I wouldn’t use this in production, but don’t tell me how to live my life, mom.
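
    For reference, you can confirm which driver Docker actually picked, and one common workaround from the Proxmox side is to give /var/lib/docker a non-ZFS volume. A sketch, with pool name, container ID, and size all hypothetical:

    ```
    # Inside the container: 'vfs' is the disk-ballooning fallback
    docker info --format '{{.Driver}}'

    # On the Proxmox host: back /var/lib/docker with an ext4-formatted
    # zvol so overlay2 works normally
    zfs create -V 64G rpool/docker-101
    mkfs.ext4 /dev/zvol/rpool/docker-101
    mount /dev/zvol/rpool/docker-101 /mnt/docker-101
    pct set 101 -mp0 /mnt/docker-101,mp=/var/lib/docker
    ```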

    Beyond various Docker stacks I also have a Certbot container that uses Snap (sigh), and a HashiCorp Vault container which runs as a vanilla SystemD service. I run Wireguard as part of my OPNSense VM - that’s something I’d run in a VM regardless, since it’s exposed to the internet. I have an older MinIO and Concourse CI Docker Compose config that I’d love to run in LXC, but I suspect that isn’t realistic.

    A note on Vault: I haven’t been able to get mlock to work (it’s used to prevent sensitive memory from being swapped). By all accounts it should just work in LXC, but since it doesn’t and there’s no swap on the host anyway, I just turned it off. I may migrate Vault to a VM at some point.
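
    For reference, the off switch is a single line in the server config (path shown is the packaged default):

    ```
    # /etc/vault.d/vault.hcl (fragment)
    # mlock needs CAP_IPC_LOCK to reach the container; with no swap on
    # the host, disabling it is the documented fallback
    disable_mlock = true
    ```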

    I’m personally just interested in lightweight environments with good-enough isolation that don’t break all the time over nothing. Docker mostly accomplishes that for me. LXC + Docker also mostly accomplishes that.

    (My heart yearns for FreeBSD Jails but with decent tooling)


  • I was originally excited by Podman, but ultimately migrated away from it. Friendship ended with Ubuntu and Docker -> CentOS and Podman -> Proxmox + Debian LXC (which has its own irritations, but anyway). Off the top of my head:

    • Can’t attach a container to multiple networks. Most of my Docker Compose stacks have an Nginx reverse proxy and a network for each service.
    • But you can use pods. However, containers in a pod share one network namespace, so if you have multiple legacy services that each insist on, say, port 80, they can’t be in the same pod (see the sketch after this list). Pods also don’t isolate services from each other, nor can you assert that a specific pod is the one listening on a forwarded port.
    • Pods also have DNS issues with Nginx. Mine kept crashing because it couldn’t resolve the hostnames of the other containers in the pod, even when they were already running. If you launch a shell inside the Nginx container, the other container hostnames resolve fine. I suspect the container is launched before Podman’s behind-the-scenes DNS infrastructure is ready.
    • Podman lets you use secrets on normal containers (yay) but if the secret changes you have to recreate the container. Amazing synergy with rotating TLS certificates.
    • Endless issues with SELinux and bind mounts. My Nginx container kept crashing because SELinux didn’t like the TLS certificate bind mount. This is where I reflected on the endless parade of random issues that I had no interest in solving and finally threw in the towel.
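
    A minimal illustration of the port-80 problem (image and pod names are just examples):

    ```
    # Published ports belong to the pod, not to a container
    podman pod create --name web -p 8080:80

    # Everything in the pod shares one network namespace...
    podman run -d --pod web --name proxy docker.io/library/nginx
    # ...so a second service that also binds :80 internally fails to start
    podman run -d --pod web --name legacy docker.io/library/httpd
    ```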

    I brought all this up in another community and was told the problem was [paraphrased] “people keep trying to use Podman like they use Docker” - whatever that means. I do like a number of its design choices, like storing the command used to create a container in its metadata, and how easy it is to integrate with SystemD for things like scheduled updates.
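
    The SystemD integration really is about one command (names here are placeholders):

    ```
    # Emit unit files for an existing pod and its containers
    podman generate systemd --new --files --name web
    # -> pod-web.service, container-proxy.service, ...
    # Drop them in ~/.config/systemd/user/ (rootless) and enable:
    systemctl --user enable --now pod-web.service
    ```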

    Cockpit is pretty slick, though; I need to install it on my bare-metal Debian host.





  • It’s easy* to set up HashiCorp Vault with your own CA and do automated cert generation and rotation, if you’re willing to integrate everything into Vault and install your root CA everywhere. (*not really harder than any other Vault setup, but yaknow). I may go down this route eventually, since I don’t think a device I don’t control has ever accessed anything I selfhost, or ever will.
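
    The broad strokes with Vault’s PKI engine look like this (domain, role name, and TTLs are illustrative):

    ```
    vault secrets enable pki
    vault write pki/root/generate/internal \
        common_name="home.example CA" ttl=87600h

    vault write pki/roles/internal \
        allowed_domains=home.example allow_subdomains=true max_ttl=720h

    # Issue (and later re-issue) a short-lived cert per service
    vault write pki/issue/internal common_name=jellyfin.home.example
    ```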

    I have a wildcard subdomain pointing to my public IP, and forward port 80 to an LXC container with certbot. Port 80 appears closed outside the brief window when certbot is renewing certs. Inside my network I have my PiHole configured to return the local IP for each service.
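
    In practice that’s just certbot’s standalone mode (hostname/domain are placeholders):

    ```
    # Certbot binds :80 itself for the HTTP-01 challenge, then releases it
    certbot certonly --standalone -d apollo-idrac.example.com
    # Renewal reuses the same mechanism, so :80 only answers mid-challenge
    certbot renew
    ```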

    Nothing is exposed to the internet at all. There’s a record of my hostnames in the Certificate Transparency logs, but I’m not concerned that someone will, say, deduce that apollo-idrac is the iDRAC service for a Dell rackmount server called apollo and that the other Greek/Roman gods are VMs on it. It seemed like a house of cards that would never work reliably, but three-odd years later I only have issues when a DNS resolver insists on bypassing my PiHole. And that DNS resolver is SystemD-ResolveD, which should crawl back into whatever hellhole it came out of.


  • They could hijack your site at any time, but with a copy of your live private keys they (or more likely whichever third party invariably breaches your domain provider) can also decrypt your otherwise secure traffic.

    I don’t think there’s significant tangible risk - who cares about your private selfhosted services? I’d be more worried about the domain itself being hijacked, and really, anyone breaching your network is probably after delicious credit card numbers, passwords, and crypto private keys to munch on. If someone got into my network, spying on my Jellyfin streaming isn’t what I’d be worried about.

    But it is why CSRs are used: you hand the CA a signing request containing only the public key, and the private key never leaves your server.
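
    For example (file names and CN are placeholders):

    ```
    # Generate a key and a CSR; only server.csr goes to the CA
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server.key -out server.csr \
        -subj "/CN=jellyfin.example.com"
    ```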



  • The layoff includes Mary Kirby, who’s been a core writer on the Dragon Age franchise since the first game. I’ve seen takes that the layoffs are just eliminating multiplayer positions, but that’s not true.

    I’ve long suspected that Dreadwolf will make or break BioWare. Since it’s following the same script as Andromeda and Anthem - endless delays, no public progress, just lots of b-roll and concept art - I don’t think development is going well. ME: Legendary Edition might have bought BioWare some breathing room, but I can’t interpret this as anything other than death throes for the studio.

    BioWare is dead, long live Larian and Spiders?


  • I’ve found the idea of LXC containers to be better than they are in practice. I’ve migrated all of my servers to Proxmox and have been trying to move various services from VMs to LXC containers, and it’s been such a hassle. You should be able to forward disk block devices directly, but I just could not get them to mount for a MinIO array - I ended up chowning their entire contents to 100000:100000, mounting them on the host, and forwarding the mount points instead (sketch below). I never managed to get CAP_IPC_LOCK to work correctly for a HashiCorp Vault install. Docker in LXC has some serious pain points and feels very fragile.
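
    The workaround, roughly (container ID and paths are hypothetical):

    ```
    # 100000 is the start of the unprivileged container's mapped UID range
    chown -R 100000:100000 /mnt/minio-disk1
    # Bind the host mount point into the container
    pct set 101 -mp0 /mnt/minio-disk1,mp=/data/disk1
    ```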

    It’s damning that every time I have a problem with LXC the first search result will be a Proxmox forum topic with a Proxmox employee replying to the effect of “we recommend VMs over LXC for this use case” - Proxmox doesn’t seem to recommend LXC for anything. Proxmox + LXC is definitely better than CentOS + Podman, but my heart longs for the sheer competence of FreeBSD Jails.





  • Yep, unfortunately. I would start with an Nvidia Shield or similar on the Samsung if that’s definitely where you want the 4k content. That’s more or less what I did, though I’ve since evolved to wanting streaming to work perfectly out of the box on whatever screen I feel like using.

    The good news is you don’t need a lot of server (that’s good)

    But you want very specific functionality (that’s bad)

    Optimal hardware is not expensive, a reasonably modern Dell/HP/whatever desktop is probably fine (that’s good)

    But it’s more complicated than “find something with H264/H265 support” because there are like eighty flavors of everything, and you might want things like AV1 so you don’t have to swap hardware in the future, and driver or library issues can just randomly break certain codec combinations, and there’s no Just Buy This answer. (…)

    … that’s bad. (can I go now)


  • Your options are something with more horsepower on the server side, or something that can direct play on the TV side. I’ve discovered a Ryzen 3300X is just not quite powerful enough to transcode 4k content, so I’m in the process of migrating media off my NAS and onto a dedicated server with an Intel i7-8700. The easiest path on the server side is to get an older Dell/HP/whatever desktop with as new an Intel CPU as you can find - newer iGPUs support more codecs, but the 8th generation (like my i7-8700) seems to be a decent sweet spot. The Intel A380 also supports everything under the sun and is pretty cheap. Keep in mind you’ll need Plex Pass (or a switch to Jellyfin) if you want GPU transcoding.
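
    Worth sanity-checking the iGPU before committing to it. A VAAPI sketch (file names are placeholders, and you’ll need the Intel media driver installed):

    ```
    # List the decode/encode profiles the driver actually exposes
    vainfo

    # Try a full-hardware 4k -> 1080p transcode
    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
        -hwaccel_output_format vaapi -i input-4k.mkv \
        -vf 'scale_vaapi=w=1920:h=1080' -c:v h264_vaapi -b:v 8M \
        -c:a copy test-1080p.mp4
    ```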

    On the playback side, I think the Nvidia Shield is your best bet. I’ve found my Roku stick works well with most video content, but it generally needs transcoding if I turn on subtitles :/ I used an M1 Mac Mini as an HTPC, which handled everything extremely well, but it’s not as TV-friendly nor as cheap - if it supported HDMI 2.1 and 4K 120 Hz output I’d probably still be using it.