Docker is dead

Just for the record, rpm and dnf are two different things. rpm is how a *single* package is handled. Then came yum, which handled dependencies and the like. That has now been replaced by dnf, which in horse racing stands for "did not finish", so it is rather regrettably named. It does a pretty decent job.
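For anyone following along, the practical division of labour looks roughly like this (the package name is a placeholder, and these commands need a RHEL-family system with root):

```sh
# rpm operates on a single package file; it does no dependency resolution.
rpm -ivh foo-1.0-1.x86_64.rpm   # fails if foo's dependencies are missing

# dnf (the successor to yum) resolves dependencies and fetches from repos.
dnf install foo                 # pulls in whatever foo needs automatically

# Querying the installed-package database is still rpm's job underneath:
rpm -q foo                      # is foo installed?
dnf info foo                    # repo metadata, version, dependencies, etc.
```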
Dang, I dunno how these things turn into fights so often. Yeah, there are Linux haters here, but as the FreeBSD FAQ says, when one says FreeBSD is better or worse than another operating system, that's user opinion only.
 
You're the one being uncivil. RTFM. Your seniority does not trump your abject and reproducible lack of knowledge in this moment in time. Cry more.
Oh well. Your choice.

(I'll note here to people casually reading this thread: we do, in fact, have full 3d acceleration under Linuxulator with Nvidia/Intel/AMD and there is a good chance we are going to have some CUDA support there as well.)
 
*Requiring Hyper-V under the hood, which 85% of the people reading this thread will not know the significance of.

That does not change the fact that WSL DOES NOT (I implemented 20% of it and know this for a fact) use Hyper-V anymore for anything but GPU acceleration.

If you pick it up and use it for "standard" workloads, you will never pay the price of hardware virtualization on WSL anymore, assuming you've updated any time after roughly March of last year.
 
You implemented what?
WSL's bootstrap is partially (almost half, 48.4% by line count) mine.

I remapped half of the Linux system calls to NT's and resolved the applicable transactions. That comes out to roughly 19.4% of WSL's codebase now that Hyper-V is gone for everything BUT GPU acceleration, which is detected by reflection and is essentially something no one pays for.
 
Let me ask you one last time: where exactly are those lightweight skip-the-Linux-kernel syscalls documented? Surely it must be good for marketing to make a few blog posts about them? Especially if that makes WSL so much faster? (FYI, I never made this argument; I don't care about the Hyper-V performance overhead in the slightest.)
 
Not my fault the Customer Relations team at Microsoft sucks. It's still in the code. Or is disassembly and decompilation too hard for the almighty BSD dev...?
 
So, you were repeatedly telling me to RTFM machine code? Makes sense, thank you. I'm assuming you can't confirm you are working for Microsoft as well? NDA and stuff, right?
 
That contract ended last year in June.

And no, I told you to read the decompilable code that isn't remotely obfuscated.
 
Making it drastically easier to port a lot of software to FreeBSD and improve cloud hosting capability is a pretty sweet benefit.
Ok, I'll bite. How does this make software 'drastically easier to port'?
Or do you think Joyent doing EXACTLY that with Triton and being wildly successful with it is just a fluke?

So what's the issue?

And once again, they are not "Linux Containers." They are application containers. You can implement them with jails or however you like.

And once again, what's the big benefit? Oh wait, isn't it jails? Me, I like ClonOS.

Being able to say "give me an app environment with my provided application files in a specified directory path, some support applications (like nginx or Apache) from your native implementation repo, a set of port mappings I provide, a set of DNS resolutions to external resources I provide, and you secure it/lock it down however you like", all with one YAML file and an environment variables list, is enormously beneficial to a lot of developers. Gone are the Ansible playbooks and galaxies, gone is the VirtualBox and VMware cruft, gone are all of the network admin upkeep tasks like renewing load balancer and Apache SSL certs (new deployment = new cert), gone are the differences between local and remote software deployment...
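The "one YAML file" workflow described above looks roughly like this in Compose syntax; the service names, image tags, ports, and paths here are invented for illustration:

```yaml
# Hypothetical docker-compose.yml; all names and values are made up.
version: "3.8"
services:
  myapp:
    build: ./app                        # my provided application files
    ports:
      - "8080:80"                       # port mappings I provide
    environment:
      - DB_HOST=db.example.internal     # DNS resolution to an external resource
    volumes:
      - ./config:/etc/myapp:ro          # config mounted from a specified path
  web:
    image: nginx:1.25                   # support application from the native repo
    ports:
      - "443:443"
    depends_on:
      - myapp
```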
Yes, yes. We know what you're selling, we've said we're not buying, your foot's jammed in the door and we're about to loose the dog on you... 😄
As I have tried to make crystal clear, it's supporting OCI, NOT Docker and NOT Linux. Make it native. And don't use straw man args about Linux to shoot down an idea that isn't even tied to Linux.
OCI is Linux Foundation sponsored; it is, as I said, free to fund OCI containers in FreeBSD. Perhaps your efforts would be best served canvassing them rather than USERS of an OS?
Unless things have changed overnight, that foundation is about increasing Linux market share, not FreeBSD's. Perhaps FreeBSD should adopt the LSB as well?
 
For 99% of WSL use cases, zero reliance on Hyper-V or any hardware virtualization.
Most of the documentation contradicts what you are saying. Even the Docker docs are very clear on the role of Hyper-V in Docker's Linux emulation: https://docs.docker.com/desktop/faqs/#why-is-windows-10-required

As for the GPU passthrough, they are again very clear about this:

Starting with Docker Desktop 3.1.0, Docker Desktop supports WSL 2 GPU Paravirtualization (GPU-PV) on NVIDIA GPUs. To enable WSL 2 GPU Paravirtualization, you need
In particular, prior to Docker 3.1.0, you still needed WSL 2 which still needed Hyper-V.

But all this aside, some version of Docker supports VirtualBox as a driver: http://docs.docker.oeynet.com/machine/drivers/virtualbox/

So use it and be happy. However I still recommend you will be better off running Linux software on Linux. But better yet, write portable / cross-platform software and run it natively in a FreeBSD Jail. No fuss, no muss and no Docker bullsh*t :D
 
patrickjp93, Have you looked at the recently announced support for FreeBSD jails in containerd? Isn't that more or less what you want?
Is it?

From what I've discerned, apparently anyone who uses FreeBSD is bogged down in a quagmire of mismatched Python versions, or FreeBSD just needs it: "I do, however, think BSD would benefit immensely from having an Open Container Initiative-compliant/compatible "container" runtime.".

Apparently, this lack of containers is "getting in the way of developer onboarding (OMG!) 😵 and productivity into a software product built to run on a completely different OS from their company-provided machine." (My comments in bold).

I freely admit, I couldn't care either way if FreeBSD spends time, money and effort incorporating this container-thingy into the OS. I also freely admit I've never used docker or the like. I have no professional interest in them (they seem to suit a web presence more than anything) and find them just about as interesting as dog poo. I am not a javascript programmer; perhaps this is more their domain? :rolleyes:

I just don't see how this improves FreeBSD any. Maybe as you, bakul, pointed out, it's up to a third-party to build it. If people want it, they will help and rush to improve it and implement it in FreeBSD. A win/win for thems container nerds. 🤓
 
I read the quoted part differently than you: By "BSD would benefit having an OCI compliant runtime" I believe patrickjp93 simply meant such a thing would attract new corporate users to FreeBSD as they would have potentially a much more stable alternative to Linux containers. At least to me that quoted sentence doesn't mean that the FreeBSD project has to provide it. In fact most of us can happily ignore such a thing and continue using FreeBSD as we always have or want.

I do believe FreeBSD already has the necessary pieces for someone to build an OCI compliant runtime as shown by containerd support for jails (not having looked at it I don't know how good it is but at least we have some sort of an existence proof).

I believe the reason for the popularity of containers is that people have figured out that services especially require a lot more than running a server program as a daemon. And orchestration layers such as Kubernetes have sprung up to centrally manage, monitor & scale up a set of such services. [Though I do think Kubernetes is rather over-engineered]

I'd love it if I could stand up mail, dns, dhcp, web, etc. service jails with minimal work. The existence of tools to do this does not mean everyone has to use them or even that they will have any effect on their work (unless they turn out to be genuinely useful).
 
These two sentences contradict each other. Abominations like Kubernetes exist precisely because running a service requires more than what the containerization technologies provide. Why layer "orchestration" hacks on top of the broken architecture instead of stepping back and rethinking the approach? Because it's dogma that containers are the only proper way of deploying your stack. It's religion, not engineering.
 
Can't you just slap something like puppet on top of jails and have a Docker/Kubernetes-like system? I never really understood the hype about such broken software.
 
Just thinking it over, even if we slap something on top of a jail, it still wouldn't work. For the most part, it's like jails in that you have to have a base distro to run everything off of. Considering all of the Docker packages anymore are based off of alpine/ubuntu/fedora or so, you'd end up having a Linux distro inside a jail (so you still end up having the same issues as you do with the Linux compatibility layer). Even after that, think about all the setup commands the YAML files have... First it downloads and sets up a distro, then it usually updates and downloads the dependencies (commonly using apt-get), then copies custom configurations and the intended package... The key issue is going to be the apt-get (which is a Linux thing, not FreeBSD).
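The sequence of steps described above (pick a base distro, update and install dependencies with apt-get, copy configs and the payload) is exactly what a typical Dockerfile encodes. A minimal sketch, with made-up package and file names:

```dockerfile
# Illustrative only; package names and paths are invented.
FROM ubuntu:22.04                        # the base Linux distro everything runs off of

RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*          # apt-get: a Linux-ism with no FreeBSD analogue

COPY nginx.conf /etc/nginx/nginx.conf    # custom configuration
COPY ./site /usr/share/nginx/html        # the intended package/payload

CMD ["nginx", "-g", "daemon off;"]
```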

In the end, even if you get Docker working, you end up reconfiguring everything all over again anyway. At that point, is it even worth it, since you can't just distribute it between Linux and BSD without enclosing everything in a complicated if/else? You have to remember, the FreeBSD kernel is not directly compatible with the Linux kernel, and the same goes for the compiled libraries. After everything is done, it is like comparing Greek to French. They may share some common words, but beyond that they are two completely different languages.
 
At that point, is it even worth it, since you can't just distribute it between Linux and BSD without enclosing everything in a complicated if/else? You have to remember, the FreeBSD kernel is not directly compatible with the Linux kernel, and the same goes for the compiled libraries.
Yup. Where it is going to get fun is when the 2029 Linux kernel breaks compatibility with 2021 Linux distro userland. It will be just as impossible to support a "neatly packaged" Docker container as on FreeBSD or on some ancient SPARC64 processor.

Docker guys either don't understand the technology behind it (a set of ratty scripts around Linux chroot/cgroups) or they are simply kidding themselves.

I would actually recommend they try to get a 1st generation Docker container running on a current machine. Then they can really see the failure of the entire solution.
 
I read the quoted part differently than you: By "BSD would benefit having an OCI compliant runtime" I believe patrickjp93 simply meant such a thing would attract new corporate users to FreeBSD as they would have potentially a much more stable alternative to Linux containers. At least to me that quoted sentence doesn't mean that the FreeBSD project has to provide it. In fact most of us can happily ignore such a thing and continue using FreeBSD as we always have or want.

I do believe FreeBSD already has the necessary pieces for someone to build an OCI compliant runtime as shown by containerd support for jails (not having looked at it I don't know how good it is but at least we have some sort of an existence proof).

I believe the reason for the popularity of containers is that people have figured out that services especially require a lot more than running a server program as a daemon. And orchestration layers such as Kubernetes have sprung up to centrally manage, monitor & scale up a set of such services. [Though I do think Kubernetes is rather over-engineered]

I'd love it if I could stand up mail, dns, dhcp, web, etc. service jails with minimal work. The existence of tools to do this does not mean everyone has to use them or even that they will have any effect on their work (unless they turn out to be genuinely useful).
THANK YOU!!!

Finally a pragmatist ignoring the NIH bias.

These two sentences contradict each other. Abominations like Kubernetes exist precisely because running a service requires more than what the containerization technologies provide. Why layer "orchestration" hacks on top of the broken architecture instead of stepping back and rethinking the approach? Because it's dogma that containers are the only proper way of deploying your stack. It's religion, not engineering.
It's not poor engineering at all. It's separation of concerns.

Kubernetes provides you the tools for rapid provisioning of storage volumes (ephemeral, persistent, w/e), claims-based assignment & security on those volumes, network isolation at the cluster level, a "Factory" pattern to deploy Services (Application Images aka Docker Containers that you built separately), secure patterns for injecting secrets, and an administrative tool suite to manage the cluster scaling and health monitoring. And all of this is automated in an Open Source codebase not reliant on the cleverest esoterica of Bash scripts.
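To make that list of concerns concrete, here is a hedged, minimal sketch of what declaring one such service looks like as a Kubernetes manifest; the names, image, and ports are invented:

```yaml
# Minimal illustrative Kubernetes manifest; all names and values are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                    # scaling declared, not scripted
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0   # image built separately
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: myapp-secrets               # secrets injected, not baked in
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```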

The architecture isn't broken in the least. And no one is saying containers are the only way, but they provide numerous benefits to smaller operations (or those who ended up with a poorly responsive cloud provider that can't keep up provisioning more servers) that don't have in-house datacenter resources to be able to grow rapidly and predictably on very stable infrastructure. And once you have that assurance and capability, there are very few reasons to go back to in-house big iron management, especially when you know it's a never-ending battle of upgrading hardware and middleware in a brittle in-house datacenter that will NEVER have the redundancy and safety of dedicated providers.

You'll always be paying some tiny performance and per-unit scaling cost for that container abstraction rather than running on bare metal, but that's where things like Focker can come in and even erase that cost, albeit only about half as feature-rich as K8s right now (which is not disparaging the author, because he's one of a literal handful of people working on it).

It's the Infrastructure as Code and Environment Setup Automation being highly portable which is useful/valuable. Focker is about halfway there.

And as for kpedersen , I just did exactly that: running a V1 Docker Container on my machine. Trivial. I'm not sure what problems you expected, but the external resource integrations, volume mounting, and virtual networking all snapped together about as simply as one could ask for starting from a clean slate.

The value is not in the underlying implementation. Yes, Docker and Kubernetes are deeply flawed. Shall we put a final nail in their coffin by providing the better and (perhaps) final word on automated deployment and orchestration of services? That would seem the more productive thing to do than get mired in NIH-ism.
 
NIH is the process of developing your own solution rather than using an existing or standard approach.

You see, we didn't do that. We already had Jails *long* before Docker (and LXC/cgroups) was being spec'ed up.
Once Docker matures, it might wrap around Jails. However it will probably be gone long before that. So it is more correct to say that Docker itself is the result of NIH.

Just because Docker (Inc.) made some noddy "Open Container Initiative" years after Jails / Zones / LPARs were in wide use, as a means to sell their cloud services, means very little. So many people new to the game don't see that ;).

But FreeBSD is flexible. We do support Docker. Install VirtualBox and away you go. That is exactly what they did for first gen containers on non-Linux operating systems.

(I even remember when Docker was called dotCloud. They did consider FreeBSD Jails but Linux was an easier platform to monetize because it was more widespread. The actual technical merits were irrelevant. Linux used OpenVZ at the time and that was much more basic than LXC).

And as for kpedersen , I just did exactly that: running a V1 Docker Container on my machine. Trivial. I'm not sure what problems you expected, but the external resource integrations, volume mounting, and virtual networking all snapped together about as simply as one could ask for starting from a clean slate.
It errors saying that the running kernel is too recent for the current libc.

Something similar to this but going the other way. Unlike FreeBSD, the Linux kernel is rarely compiled with backward compatibility in mind:
https://sourceware.org/legacy-ml/libc-help/2017-01/msg00011.html

Also, the very, very first containers will be 32-bit binaries and Docker doesn't really handle multi-lib. In particular many Linux platforms don't.
 