You can do rollbacks if you're using something like home-manager on a foreign distribution. Admittedly it's a bit more janky.
In this context it actually means that you can take the source code, build it, and get the exact same binary artifact as another build. It means that you can verify (or have someone else verify) that the released binary is actually built from the source code it says it is, by comparing their hashes. You can "reproduce" a bit-for-bit copy of the released binaries.
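A rough sketch of the verification step in Python (the path and the published hash are just placeholders):

```python
import hashlib

def sha256(path: str) -> str:
    """Hash a file in chunks so large artifacts don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare your own from-source build against the hash published with the release.
published_hash = "<hash from the release page>"   # placeholder
local_hash = sha256("./my-rebuilt-binary")        # placeholder path
print("reproducible" if local_hash == published_hash else "mismatch - investigate")
```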
Yeah. Their docs explain the reasoning, but iirc it's that Kanidm is a security-critical resource, and it aims to not allow any kind of insecure configuration, even on the local network. All traffic to and from Kanidm should be encrypted with TLS. I think they let you use self-signed certs though?
Kanidm doesn't require a CA, it just requires a cert for serving HTTPS (and it enforces HTTPS - it refuses to even serve over plain HTTP). I think that was just the OP not quite understanding the conceptual ideas at play.
Kanidm wants direct access to the letsencrypt cert. It refuses to even serve over HTTP or put any traffic over it, since that could allow potentially bad configurations. It's really opinionated about security and has a stringent policy around it.
The last bit isn't strictly true - there are ways to trace such tasks by generating IDs and associating them with each task / request / whatever, letting you correlate messages even in a concurrent environment. You can't just blindly print, but there are libraries and the like to help you do it.
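For example, a minimal sketch of the per-task ID idea in Python with contextvars (names are made up; real tracing libraries do a lot more):

```python
import asyncio
import contextvars
import logging
import uuid

# The context variable holds the current request ID; asyncio gives each task
# its own copy of the context, so concurrent tasks don't clobber each other.
request_id = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record):
        record.request_id = request_id.get()
        return True

handler = logging.StreamHandler()
handler.addFilter(RequestIdFilter())
handler.setFormatter(logging.Formatter("[%(request_id)s] %(message)s"))
logging.basicConfig(level=logging.INFO, handlers=[handler])
log = logging.getLogger(__name__)

async def handle(name: str) -> None:
    request_id.set(uuid.uuid4().hex[:8])  # tag everything this task logs
    log.info("started %s", name)
    await asyncio.sleep(0)                # let the other task interleave
    log.info("finished %s", name)

async def main() -> None:
    await asyncio.gather(handle("req-a"), handle("req-b"))

asyncio.run(main())
```

Even with the two tasks interleaving, every log line carries the ID of the request it belongs to, so you can group them back together afterwards.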
Right, but when there are third parties involved which you may not trust (which is almost always going to be the case when talking to users not on your server), e2e's benefits start to become a lot more enticing. And while you have a point about out-of-band key sharing being annoying, it makes sense as a default - especially when content is going across servers. Content should be secure with an opt-out rather than insecure with an opt-in. The latter is just more error-prone.
Also: while it's not friction-free, apps like Signal have shown that verified e2e can be made usable for the general population.
I don’t think the SATA acronym is right…
The point of federation is that your content doesn't only stay on your server. The person you're talking to can be on a different one, and their admin can see it too. Also, I wouldn't want to be able to access content from any user - it's a "no trust needed" thing.
Idk about everyone else but I was fine with the specs. A basic Linux machine that can hook up to the network and run simple python scripts was plenty for a ton of use cases. They didn’t need to be desktop competitors. The market didn’t need to be small form factor high performance machines, and I’d argue it wasn’t.
Yeah, and Linux still doesn't have a good answer to AD for managing fleets of end-user machines. Linux has a lot going for it - but Windows isn't strictly inferior or anything.
Honestly, the entire AD suite with auth and everything else built in is genuinely a good product. And if what you want is supported by Microsoft, their other services are decent as well.
Instances aren't banning other instances for federating with communities they dislike. Instances ban other instances for hosting content they dislike. The benefit of starting an instance is that you choose who to federate with.
Problem is, this assumes everyone has to build their own captcha solver. It's definitely a bare-minimum barrier to entry, but it's really not a sustainable solution to begin with.
Web 3 is different things depending on who you ask: blockchain, decentralization, or whatever else. We dunno, we aren't there yet. I personally believe federated services have a chance of being web 3 (and blockchain is not relevant).
Web 2 is basically big tech on the internet, everything becoming centralized. Everything became easy to use for the end user, all point and click.
Web 1 was the stuff prior to that, when the internet was the wild west.
Because CDNs lighten load and work as a global cache to improve load times? Game servers and plenty of other types of servers have exposed their IPs since the dawn of time.
I have an auto deployed server with only a root user and service accounts… I think that’s valid. :)
Because I associate an OS with more than just an environment. It often has several running apps, often a GUI or shell (which many containers don't have), is concerned with some form of hardware (virtual or physical), and just… does more.
Containers by contrast are just a view into your filesystem, plus some isolation from the rest of the environment through kernel features like namespaces and cgroups. All the integrations with the container host are a lot simpler (and more accurate) to think of as removing layers of isolation, rather than thinking of the container like its own VM or OS. Capabilities just fit that model a lot better.
I agree the line is iffy since many OSes leave out a few of the things above, like RTOSes for MCUs, but I just don't think it's worth thinking of a container as its own OS considering how different it is from a "normal" Linux-based OS or VM.
I think the more intuitive model (to me), instead of a lightweight virtual machine or a neatly packaged-up OS, is a process shipped with an environment. That environment includes things like files and other executables (like apt), but in and of itself doesn't constitute an OS. It doesn't have its own filesystems, drivers, or anything like that. By default it doesn't run an init system like systemd either, nor does it run any applications other than the process you execute in the environment.
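A rough way to see that for yourself on Linux (just reading what /proc already exposes): compare a process's namespace IDs inside and outside a container - entries like mnt/pid/net differ, but it's the same kernel underneath.

```python
import os

def namespace_ids(pid: str = "self") -> dict[str, str]:
    """Return namespace identifiers for a process, e.g. {'pid': 'pid:[4026531836]'}."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(f"{ns_dir}/{name}") for name in sorted(os.listdir(ns_dir))}

# Run this on the host and again inside a container: entries like mnt, pid and
# net will differ (those isolation layers were added), while the kernel stays shared.
for name, ident in namespace_ids().items():
    print(f"{name:8} {ident}")
```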
For context for other readers: this is referring to NAT64. NAT64 maps the entire IPv4 address space into an IPv6 prefix (typically the well-known 64:ff9b::/96). The router (which has an IPv4 address) strips the IPv6 prefix and does a normal IPv4 NAT from there, then forwards the response back over v6.
This lets IPv6 hosts reach the IPv4 internet, and lets you run v6-only internally (unlike dual stack, which requires every host to have both v6 and v4).
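To make the mapping concrete, here's a small sketch using the RFC 6052 well-known prefix (deployments can use a network-specific prefix instead):

```python
import ipaddress

# Well-known NAT64 prefix (RFC 6052): the IPv4 address is embedded in the low
# 32 bits, so the entire v4 internet fits inside this single /96.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def to_nat64(v4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(to_nat64("192.0.2.1"))  # 64:ff9b::c000:201
```

A v6-only host sends its traffic for 192.0.2.1 to 64:ff9b::c000:201, and the NAT64 gateway strips the prefix and NATs it out over v4.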