The Kubelist Podcast
55 MIN

Ep. #38, Exploring K0s with Jussi Nummelin of Mirantis

about the episode

In episode 38 of The Kubelist Podcast, Marc and Benjie speak with Jussi Nummelin of Mirantis. This talk explores the accessibility and consistency of utilizing a Kubernetes distribution with zero dependencies. Other topics explored include the K0s project from Mirantis, the evolution of container management, and the advantages of control plane isolation.

Jussi Nummelin is Senior Principal Engineer at Mirantis. He is a tech and team lead for Mirantis’s new Kubernetes distribution, K0s. Jussi embraced containers early, deploying Docker 0.6 to production. Since 2017, he's been working in and around Kubernetes.

transcript

Marc Campbell: Welcome back to another episode of The Kubelist Podcast. Today we're here with Jussi Nummelin from Mirantis. Jussi is a senior principal engineer at Mirantis and a tech and team lead for the K0s project, and we're excited to talk about that project and dig in. Jussi, welcome.

Jussi Nummelin: Thanks for having me.

Marc: Let's get started. Can you tell us a little bit about your background, what you did before Mirantis and what led you up to your current role working in the Kubernetes ecosystem?

Jussi: Well, that's actually a bit of a longer story so I hope I'll have the time, but I actually started working with the, kind of, "cloud native world" well before it was even called cloud native. I've been working in IT for 20-plus years already, and I started with containers very, very early on. We actually started to build stuff on top of Docker in 2013, I think, and Docker was at the 0.4 version back then, I think.

Actually, we went to production with the 0.4 version and, oh god, that brings back some bad memories. I don't want to go there anymore. Basically what we did back then, I was working at a Finnish, kind of like a consulting shop in a way, and what we built with a very small team was an in-house Heroku cloud. That was fun, and that got me into the world of containers and what is nowadays the whole cloud native ecosystem.

Marc: That's cool. Those early days of Docker are fun to look back on now, but they were also a lot of fun at the time. I remember when we started our company, it wasn't that early, you said 0.4. I remember putting 0.6 into production and thinking, "This is cool. This makes a big difference in how we're actually deploying code." And now look where we are, but you could definitely see the promise and the platform shift that was coming back in those early days.

Jussi: Yeah, absolutely, absolutely. I think what you probably witnessed is that Docker could break down in so many ways back then. At some point we were joking that there must be some Finnish guy working at Docker because it broke down on basically all the public holidays in Finland. Like, "Oh god, how is this even possible?"

Benjie DeGroot: I think it broke down on all the public holidays in the United States as well at that time. So don't feel too special. I think it just broke down a lot. So you started off super early with Docker, you were building this internal Heroku. What was the experience like trying to build a PaaS back then with it? And how did that start to shape your thoughts on the container ecosystem?

Jussi: Well, at least for me, of course, the whole container stuff and basically the cloud was still quite young and everything, so at least for me personally there was a learning curve, of course. But getting into it, like Marc said, you could already see that early the benefits of the whole cloud native way of doing things and thinking about things.

For us what actually happened is that a couple of the guys in the team that I worked with on the Heroku clone thingy actually had a kind of side hustle going on around containers back then, and that later got spun up as a real startup. That was called Kontena. What we then created at Kontena was basically container orchestration, kind of like a Swarm thing, because Swarm didn't actually exist back then yet. So we ended up building that at the startup with the same team.

Marc: Yeah, those were the early missing pieces, right? Docker showed application portability, containerization, the file system being incorporated, and really built that good developer experience. But in those 0.4, 0.6 days it was service discovery, it was, "How do you make an application that consists of more than one container function?" That was pre-Kubernetes, pre-Swarm, so that actually takes us to now. You're at Mirantis and you're building a Kubernetes distribution, K0s. Can you tell us a little bit about that distribution? What is it? Why did you feel like another Kubernetes distribution was what the world needed right now?

Jussi: Like many great projects, K0s was born out of a spike test: "Okay, could we actually build something that's a bit different from any other Kubernetes distro out there?" And what led us to that thinking is, well, at Mirantis we have other Kubernetes products, so we have this MKE product which is more like... how to say it?

An enterprisey Kubernetes version. From the people in the field, working with different customers, we started to hear things popping up here and there about edge-type use cases where people actually want to run Kubernetes in very, very resource constrained environments. To be honest, I really hate the word edge. It has so many different meanings for different people, but I guess you get the point that I meant.

We started to think about those different use cases, and a couple of things became quite clear to us that a Kubernetes in those kinds of environments would need. One of them is the capability to fully separate the control plane from the worker plane, because you just cannot run a control plane on an ARMv7 device that has 500 megs of RAM. That's just not possible, unfortunately.

Marc: So it was about running Kubernetes in resource constrained environments. We actually use a little bit of K0s also and, disclaimer here, I actually really do love the project. It's really cool. I think another compelling thing that y'all built was you made it statically compiled, so there's no external dependencies. I'd love to hear more about why you thought that that was valuable enough to put a bunch of engineering effort into taking plain, vanilla, upstream Kubernetes and making it statically compiled.

Benjie: I'm going to ask you, I'm going to pile onto that one. Explain to me, I think I know, but explain to me anyways, what does statically compiled mean in a Kubernetes landscape? Talk us through that decision, but also what you get out of that because that is... I know some people are like, "Wait, what are you talking about? It does what now?" So explain to us what that gets us, and then why you did it, please.

Jussi: All right. So let me start with the why part first. In many of those resource constrained environments what we're actually talking about is maybe a factory floor where you have industrial automation controllers, and many of those controllers actually have something like a PC onboard. In those kinds of environments the operating systems, yeah, they are running Linux, but they are very, very custom, based on different vendors and different environments, and there's a lot of custom stuff in them.

How do we create something that we actually build once and it runs anywhere? Well, by statically compiling things.

What it really means for us and how we do it, is we build everything on Alpine. So let's take the Kubernetes API server, for example. We build it, it's Go stuff so it's easy to statically compile. We compile it against the musl libc implementation and, voila, we have a binary that works on any Linux distro.
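
For readers who want to see the general technique Jussi is describing, here is a minimal sketch of statically compiling a Go program against musl on Alpine. This is illustrative only, not K0s's actual build pipeline, and the exact flags are from memory:

```sh
# Pure-Go code: disabling cgo already yields a statically linked binary.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o myserver .

# If cgo is required, link statically against musl on an Alpine builder:
apk add --no-cache build-base
CGO_ENABLED=1 go build -ldflags '-linkmode external -extldflags "-static"' -o myserver .

# Sanity check: the result should be reported as statically linked.
file myserver
```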

Benjie: So you've got this binary that is Kubernetes, and that can just install anywhere and, like you said, it's compiled so I can target ARM, I can target x86, I can target whatever I want or maybe not.

Jussi: We don't cross compile it, so we have different binaries for ARM64, AMD64 and ARMv7.

Benjie: Okay. But the tool chain can handle all of these things, which is crazy. So I, as a person running a factory line that has some weird PC from 1997 but it actually does run Linux, I can install this binary and basically the only requirements are that I have some flavor of Linux, is basically it?

Jussi: Yeah, that's basically it.

I mean, of course we're working with Kubernetes, we know a lot of things are not always exactly that simple. Of course we have indirect dependencies on kernel features and whatnot, so that we can even run containers, and we need the proper kernel networking modules and whatnot.

Marc: So statically compiled, meaning I don't need to bring apt or yum dependencies in, or if I'm running on a fork of CentOS or some weird thing like this, it's going to likely work. But you still need cgroups and modern kernels and things like this in order to run the idea of containerization. You still need that?

Jussi: Yeah, yeah. You still need the basic containerization and networking features from the kernel, yes. But that's basically the only dependency that we have.
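
As a quick, unofficial sanity check (not an official K0s prerequisite list), you can inspect those kernel-side dependencies on a target box like this:

```sh
uname -r                                  # kernel version
grep cgroup /proc/filesystems             # cgroup support compiled in?
lsmod | grep -E 'overlay|br_netfilter'    # modules containerd and CNI plugins commonly rely on
```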

Benjie: For people that are drooling over this, just off the top of your head, do you know roughly what version of the kernel you need, like where you know it will work?

Jussi: 3 point something, 3.1, 3.2. but of course that all-

Benjie: I know, there's a lot. We'll put a disclaimer at the beginning of this. Everything we have said, there's always an edge case. Don't find Jussi's email and get mad that it doesn't work.

Jussi: Yeah, exactly. Just as a practical example, some enterprisey Linux vendors, they have very different versioning schemes for their kernel and it's impossible to say what's in the kernel just looking at the version number.

Benjie: Absolutely. And Marc was talking about the old days, and I was with Marc in the super old days and I remember being on support and having to figure out how to get RHEL 5... we had to recompile. I don't remember what we had to do, but we had to do something crazy to get containers to work on RHEL 5.

Marc: The interesting part for me is that one of the draws of K0s is honestly less about those really old versions of operating systems, and more that I don't have to think about the newer releases of RHEL that come out. K0s, the statically compiled binary, is going to work. I don't need to make a package, I don't need to think about what dependencies I need to bring in and where they're going to be located, or what moves RHEL might make in their packaging so that I can't just yum install different things anymore.

Benjie: Yeah, that's a great point. I had a quick question, we've had some pretty cool folks on Kubelist in the past, and they've told us about really, really weird environments that they've run Kubernetes in. You don't have to be specific if it's confidentiality, but can you tell us your top two most ridiculous places that you've ever heard of or seen Kubernetes running? Especially thanks to K0s.

Jussi: Industrial factory, I mean in the actual factory.

Marc: On the floor, like running the factory?

Jussi: Yeah. On the floor.

Benjie: Like in some Siemens controller somehow it's running? Wow.

Jussi: Yeah. Well, not exactly Siemens, but Siemens-like things. That's an actual use case and why I mentioned ARMv7. On those devices there is one ARMv7 CPU, a single-core CPU, and there's 500 megs of RAM and that's it. That's the resources you have to work with.

Benjie: And that worked?

Jussi: Yes.

Marc: So you're saying you don't want to use all of that just to run the Kubernetes control plane and leave nothing for the actual workload that they're actually trying to run?

Jussi: Well, I'd like to see a control plane actually running on that box, because it's probably not going to run. In that case, with the K0s feature, the control plane separation, control plane isolation, in that exact case the control plane is actually running in kind of an on-premise cloud thingy. Then it's a pure worker node running on that device.

Benjie: Okay. So a big benefit is you can get this really thin node layer, basically. You can move things even further out onto the edge than you've ever dreamed of?

Jussi: Yeah.

Benjie: I mean, when you started talking about ARMv7 I was like, "Okay, sure." But I believe you. That's insane, that you guys are supporting that. That's super cool. Another quick question, Jussi, because this to me is the important... I'm a hard hitting journalist here. How do you prefer to pronounce K0s? What do you think is the right way to say that? This is the big question that everyone wants to know the answer to, I think.

Jussi: It's K0s.

Marc: Just K0s plural? K0s.

Jussi: K0.

Benjie: And the zero obviously is because of how lightweight it is? Or what's the origin story there?

Jussi: It's actually our motto of having zero friction, which means for example, we talk about zero dependencies and whatnot.

Benjie: Disclaimer, slight dependencies on cgroups and some networking stuff.

Jussi: Yeah, yeah. And a couple of others that we might get rid of some day, but here's actually a good example of that zero friction, zero dependencies, and how seriously we actually take it. So back a few versions, in Kubernetes 1.20, 1.21 or 1.22, the kubelet actually had an external dependency on the find and du utilities. The kubelet was actually just exec'ing du and find. For any Linux admin, those are the basic tools that you pretty much always expect to have installed on your system, right?

But we've actually stumbled on a couple of cases where those were not preinstalled by default, and in an air-gapped environment, well, installing stuff is not that easy. So what did we do? We actually created the upstream PRs in Kubernetes to get rid of find and du in the kubelet code.

Benjie: And you got it merged? You got it in there?

Jussi: Yes.

Marc: So it wasn't just K0s doesn't have that dependency, you actually contributed back into upstream Kubernetes to simplify?

Jussi: Yeah.

Benjie: It's cool. I can't believe there's a distro that doesn't have a du. That's crazy, but that's the type of environment that we're talking about. I guess when you're on an industrial controller running ARMv7, that makes sense. For all of us who think Raspberry Pi Kubernetes clusters are really crazy, look up ARMv7. That's insane.

Jussi: We're actually investigating currently, because there's one annoying dependency in the kubelet still for exec'ing the mount command, for obvious reasons, to mount secret volumes and whatnot. We're currently investigating whether we can get rid of that in upstream also.

Marc: That feels tricky, to get rid of that dependency. You're going to be digging into some C implementation to understand how the mount command works, is that how you'd do it?

Benjie: You might be doing some Assembly to figure out how to get around that one.

Jussi: Yeah. Like I said, we're investigating so no commitments here.

Benjie: So you take on all this insane friction so we don't have to, is basically what you're saying. You figure out how to get around du which, honestly, is the most ridiculous example I've ever heard. I didn't even think of that as a dependency until you just said it, but of course it is. I am now going to check every distro I ever go on, I'm always going to see if it has du from this point forward in my life.

Marc: Does K0s maintain patches on top of upstream Kubernetes? Talk more about that. Is it just vanilla Kubernetes, or do you maintain any long-lived patches on it?

Jussi: We don't maintain any patches on it, so we don't have any patches currently. At some point in time we had two custom patches to fix some ridiculous ARMv7 compilation issue or something, but that's about it. Those were fixed upstream quickly so we could ditch the patches and whatnot, so we don't really maintain any forks or patches or anything.

Marc: So no forks, no patches. The way you're installing it out of the box, vanilla, I start K0s, it'll pass Sonobuoy conformance tests and things like this? It's a totally valid Kubernetes cluster, nothing special?

Jussi: Yes, yes. And we run extensive conformance testing on every single release and we have nightly runs for conformance and everything.
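
For anyone who wants to reproduce that kind of check on their own cluster, the standard CNCF tooling is Sonobuoy; this is generic usage, nothing K0s-specific:

```sh
sonobuoy run --mode=certified-conformance   # kick off the conformance suite
sonobuoy status                             # poll until the run completes
results=$(sonobuoy retrieve)                # download the results tarball
sonobuoy results "$results"                 # summarize pass/fail
```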

Marc: That's cool.

Benjie: So 100% coverage of the Kubernetes API?

Jussi: Yes, from the conformance testing point of view. If you look at the full Kubernetes end-to-end test suite, I think it's currently 3,500 test cases, and the conformance part is currently like 300-something test cases.

Benjie: Yeah, of course. Even so that's super impressive. I have to ask a question, I am an avid K3S fan. Will you talk me through the full Kubernetes distro, K3S and then K0s, and where you see the different use cases lying for those different things? Or maybe they're all the same, I'd just love to hear your take on that.

Jussi: Right. Of course that's a question where we kind of painted ourselves into a corner with the naming of the product, or project. I see that now, afterwards.

One of the differentiating features is the full control plane isolation. I haven't been playing with K3s in a good while, but I believe still today you cannot do a pure controller node without running kubelets and containers and whatnot on there.

Benjie: I think that's right, I'm not 100% on that one but that sounds right to me. So the control plane isolation without dependencies, that's a big one. But it sounds to me like the use case here is super duper, and you don't like the term, but edge for K0s. K3s is maybe a little bit underpowered, less complication, less moving parts, and then Kubernetes, the regular distro Kubernetes is... I don't know. Hey, Google and I'm sure everyone else will take care of that for me. Is that kind of the way to look at it as a developer, or what do you think?

Jussi: I think that's quite a fair assessment of it. Yeah, we don't want to basically compete with, say, EKS or AKS or whatever, because those work well in that cloud environment and they are built for that environment. So why should we compete there? Unless you have very specific use cases where you actually need to be in control of everything. One of the problems with EKS and AKS and friends is that the control plane is like a black box for you. There's very little configuration that you can actually do on the control plane on, say, EKS or AKS.

Benjie: Yeah. And there's all kinds of weird magic, sometimes broken magic. I don't want to be mean, but I swear to god the control planes on Azure go down every Friday. I don't know why they do. There's some weird network stuff that happens on Fridays to Azure. I see different things with EKS, I see different things with GKE. So having that granular control is a big, big plus. But it seems to me, the more you're telling me this, maybe I should be running K0s locally on my laptop and that's a really good local development experience. Would you say that that's a use case?

Jussi: Absolutely, absolutely. And that's a very common use case where we see people using K0s. The organization where my team sits at Mirantis, we have basically a neighboring team that works on Lens. Have you guys used Lens at all? Tried it out?

Marc: Yeah, that's the UI for managing a Kubernetes cluster, right?

Jussi: Yeah. The newer versions of Lens actually come with an embedded Kubernetes environment, for obvious local development and testing purposes and, surprise, surprise, it's running K0s.

Benjie: That's a pretty big one. That's cool. I didn't know that. I have some folks over at Shipyard, they really like K9s. Just a quick PSA, the reason it's called K8s is because there's eight characters between the K and the S, and that's not the same thing for K0s, which is fine. Or actually it is, it's zero friction so it's K, zero, S. I have no idea of the etymology of K9s or K3s. But I like the name, now that I understand the origin story I like the name K0s more and more. I think it's really interesting. So the guarantee that I have is that I can use K0s on the lightest weight machine possible and I know it's going to work when I move that load all the way up to GKE or some managed Kubernetes distro?

Marc: Let's talk about that a little bit more. First, control plane isolation, that's a great feature. But it's not a requirement of K0s, right? I can still run the control plane and the workload on one node. I can have a single node and do everything there if I choose to, maybe for dev, test, or other purposes.

Jussi: Yeah, absolutely. And you can run even bigger clusters, say 10 machines, where you run your "normal" Kubernetes setup where you actually run workloads on the controller nodes too. So everything is possible, it's just that in the default setup the control plane is isolated.
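
As a rough sketch, and going from memory on the exact flag names (check `k0s install controller --help`), a single-box or mixed setup looks something like this:

```sh
# One machine doing everything: controller plus worker on a single node.
sudo k0s install controller --single
sudo k0s start

# Or keep a multi-node layout but let a controller also run workloads:
sudo k0s install controller --enable-worker
```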

Marc: Got it. For the record, that's generally a good practice because you don't want workloads to be able to interfere with the control plane and then bring down the control plane and then the entire cluster fails. It's just a bad practice. A good, reliable and scalable and highly available way to run Kubernetes is to separate the control plane.

Jussi: Yeah, absolutely. If you look at EKS or AKS, that's how they do it. You cannot run anything on the controller nodes.

Marc: Yeah, you don't even have access to them.

Benjie: Well, we don't know, to be fair. There may be things.

Jussi: Well, at least you cannot run your workload on there.

Marc: On the other end there, though, are there limitations on the maximum size of a cluster you would recommend before saying, "No, go to a managed Kubernetes cluster"? Can I run a 1,000, a 10,000 node cluster on K0s?

Jussi: Well, probably yes. We've tested up to 1,000 nodes ourselves and that worked pretty okay. Of course we were running some simulated workloads and your typical dummy NGINX pods and whatnot. But I think what the isolation also enables for you is more predictable scaling for the control plane, because you're not running any random workloads on there, so it's more predictable how it scales and how far it scales than the normal case where you actually run a mix of workloads on the same nodes.

Benjie: Yeah. So what about etcd? How does that work? Can you shim that in, or? How does the control plane handle the key value store? Is it etcd? Is it something else?

Jussi: It's all etcd, and K0s manages etcd fully, and it's as elastic as etcd can be. So you have one controller node, and when you want to scale out, what you can do is create a controller join token on the first controller, then you can join X number of new controllers and K0s scales up etcd, reconfigures all the memberships and whatnot automatically.
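
Sketching that join flow from memory (flag names may have drifted, so verify against `k0s token create --help`):

```sh
# On an existing controller: mint a join token for a new controller node.
sudo k0s token create --role=controller > controller.token

# On the new machine, using the copied token:
sudo k0s install controller --token-file /path/to/controller.token
sudo k0s start

# Workers join the same way, just with a worker-role token.
sudo k0s token create --role=worker > worker.token
```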

Marc: Yeah, I've seen that recently. There's a lot of stuff that K0s does in that case, right? You have etcd running and then when you add the second node in, it reconfigures etcd to be distributed, and there's a lot of things that I don't have to think about when I add that second node in.

Jussi: We wanted to implement it that way based on history; this is actually already the third Kubernetes distro that I'm working on. What I've seen in the past years is that etcd is tricky to maintain and manage, so we wanted to build all of that into K0s, everything that makes sense.

Benjie: Okay, wait. So you're saying that I install a binary that has etcd compiled into it, so it literally just turns on, gives it to me, and I get high availability if I give it two control plane nodes, just literally by installing K0s?

Jussi: Yes.

Benjie: Your yes was very low friction. That's crazy. That's really cool. I did not understand that. Wait, quick question here and this is a little off topic, but can I mix and match? Can I have a K3s node working with a K0s control plane, for example?

Jussi: Hypothetically, yes. But with a very, very strong, strong-

Marc: What problem are you trying to solve here?

Benjie: Well, maybe I'm a listener and I'm like, "Holy crap, this is really cool. I want to get K0s. I want to use that for some nodes, and I already have my K3S or even my GKE and I want to try and attach some remote compute or edge compute stuff to existing clusters to start experimenting with it." Have you seen anyone do that yet or if someone does that they're going to reach out to you?

Jussi: I'd guess we would see that in the GitHub issues, someone asking questions, "Okay, is this possible and how do I do it?"

Benjie: All right, well, you might see someone from Shipyard trying to do that. Just to warn you ahead of time. I don't want to make the whole transition, I want to practice. Start with the node.

Marc: I'm going to go back and talk about the zero friction. There's more than just, "It's really easy to install and it's statically compiled in a single binary." There's functionality you've added around remote node management, as an example. Normally, in the examples you gave earlier, I can go grab a join token and run it on a second node and now I've added another control plane or another worker node or whatever I want. But I think K0s has other ways, right? Like centralized ways to manage remote nodes via SSH?

Jussi: Yeah, yeah. One of the, I call it a helper tool, though that might actually be underselling it a bit. But we have a helper tool called K0sctl, that's basically a command line tool to manage a cluster, a single cluster. Basically what you can do is say that you have a couple of Raspberry Pis, you have the IP addresses of those, you have the SSH key for those. You punch those into a YAML document saying, "Okay, this IP is a controller. These two IPs are workers. These are the SSH keys," and K0sctl SSHes in and installs everything, configures everything for you out of the box.

Marc: So I can have my servers, Raspberry Pis or real servers or whatever, set up and then I have SSH keys, public keys on them. I have the private key on my machine, I can write a YAML from my machine. I can say, "Go make these servers a K0s cluster. This one should be the control plane, these two should be worker nodes, these are the labels," and then I run one command and it's done?

Jussi: Yeah, absolutely. And not only the installation part, because that's always the easy part, but what about upgrades? During upgrades you have to do the cordon, drain, uncordon dance, as I call it nowadays. K0sctl does all of that for you.
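
A rough sketch of what that YAML might look like, with hypothetical addresses and field names recalled from memory (the k0sctl documentation is the authority):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-edge-cluster
spec:
  hosts:
    - role: controller
      ssh:
        address: 10.0.0.10
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.11
        user: root
        keyPath: ~/.ssh/id_rsa
  k0s:
    version: v1.28.2+k0s.0
```

Running `k0sctl apply --config k0sctl.yaml` installs the cluster, and re-running it after bumping the version is what drives the cordon, drain, uncordon upgrade Jussi describes.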

Marc: And handles the scenario where one node is unreachable or there's a failure and one didn't upgrade properly? Like, I want to go from 1.27 to 1.28 or whatever version, I run this one command and it's either going to be, my cluster is 1.28 now, or, "Nope, the upgrade failed and here are the results." And then I have to obviously figure out whether it was a connectivity problem or whatever it was.

Benjie: And that's all through SSH keys?

Jussi: Yes.

Benjie: So you basically wrote a really, really good one-command Ansible?

Jussi: Yes, yes. Essentially that's what it is. To be honest, why we did it that way and not with Ansible and whatnot, doing anything like a rollout type of a thing where you do something on one node, drain it, uncordon it and then move to the next node, doing stuff like that with Ansible is super, super difficult.

Benjie: Right. And there's the obvious added dependency of Ansible, whereas this is zero friction.

Marc: Ansible is good at deploying things, but what you're describing is an orchestration task, and so you needed to write an orchestration runtime for it. That's cool. Wow.

Benjie: Marc, you knew about this. I didn't know about this so you've got to give me a second here to process what you're saying. So these SSH keys, okay. One of the things that drives me insane about gcloud and all these other tools is it takes forever to get into these nodes because you download a kubeconfig and all this stuff. So is it also just SSH, so it's a lot faster, this tool? And you said it was K0s "cuddle," by the way? I believe.

Jussi: Sure.

Benjie: I have a thing, you see, is it "cuddle" or control? You already said control, but obviously I've corrected you now so you know it's "cuddle." So "K0s cuddle," that thing will literally just SSH into whatever, and that's going to be a lot faster than using these tools to just upgrade and stuff like that, or even get into the cluster itself, right? Or the node, I guess?

Jussi: Yes.

Benjie: Wow. If I have a ten node K0s cluster and I want to use K0sctl and I just do an upgrade, is it going to do that in parallel or one by one, or how does that work?

Jussi: You can actually configure how much it does in parallel, and whether it does all that draining and cordoning dance or not. But in the default settings, I might remember wrong here, but I think we have a default policy of 10% of the nodes being done in parallel.

Benjie: Right. But there is the capability to have multiple SSH sessions connecting to different nodes at the same time and then you can tweak it the way that you want to do it. So it's like an even better Ansible.

Marc: It's not Ansible. It's not Ansible.

Benjie: I love Ansible. I'm sorry. I don't mean to pick on Ansible.

Marc: I guess the point is, if I have a properly provisioned and highly available K0s cluster I can use K0sctl to upgrade it with zero downtime, both the control plane and then the nodes and everything. There's the possibility to orchestrate a zero-downtime upgrade from one minor version of Kubernetes to the next minor version?

Jussi: Yes, absolutely. Of course assuming that your workload is implemented in a correct way to do zero downtime. But from the Kubernetes point of view, yes, it does have zero downtime.

Marc: Yeah, but that's the same on EKS or GKE or anything, right? If I'm going to roll a node group to a new version, my workload better be able to handle it or that pod might not be available for a period of time while the new node is spinning up?

Jussi: Yeah, exactly.

Benjie: So it's also a chaos tool in that sense as well, built into K0s. You can do upgrades and see how your stuff handles it.

Jussi: Yeah. Actually back in the day before all this Kubernetes stuff when we were building the container platform on top of Docker, what we suggested to many, many of the customers and people using it back then, CoreOS was a cool thing and we were heavily using CoreOS internally in all our stuff. What we did and what we suggested to all the people using it, have one or two nodes in the cluster running some Alpha channel of CoreOS as a tool for chaos and as a tool for picking up when things will break in the next release and whatnot.

Benjie: Okay. So Jussi, is there any other spectacular feature that solves a massive problem for me that you forgot about that you want to tell me about? Jussi, you go first, but you seem to not know how amazing your own product is. Is there anything, Jussi, you want to bring up or should I ask Marc?

Jussi: I want to highlight one thing about the zero dependencies. K0s doesn't only package up and build the Kubernetes things, as the obvious part. We actually build a lot of other things into K0s, like, for example, iptables. K0s comes with its own version of iptables.

Marc: Not its own rules, its own version of iptables?

Jussi: Its own version of iptables.

Marc: Let's talk more about that.

Jussi: There's a simple reason for that. Well, simple-ish. About a year ago we found a bug in iptables, basically a versioning incompatibility in iptables itself. So the kubelet, I might remember wrong, but they maybe finally got it out in 1.28 now, but a couple of versions ago there was a dependency on iptables, so the kubelet was basically setting up a couple of those marker rules in iptables.

If you had a certain combination of things, like iptables on the host working in nftables mode, a specific version of iptables on the host which the kubelet was using by default, and then your CNI provider, Calico or whatever, using a different version of iptables. Voila. The first rule in iptables got transformed to drop all.

Marc: Oh wow. That's the worst kind of failure mode.

Jussi: Yeah, exactly. And you can imagine how fun it was to debug that, so when you hit that problem your SSH connection dropped and everything was dropped. Okay, this is interesting.

Marc: You're like, "Why is this not working?" You're debugging this thing and it's probably a black box at that point.

Jussi: Yeah, exactly. So what we ended up doing is... because from that point on we knew that, okay, we can't only keep the Kubernetes versions and the different component versions in sync, but also the underlying tools they use. Those versions have to be in sync as well.

So what we nowadays do with K0s is we bundle in a certain version of iptables, we ensure that all the components that work with iptables pick up the same mode, whether it's legacy iptables mode or nftables mode, and we make sure that every single component has the exact same version of iptables bundled in. That includes kube-proxy, the iptables embedded in K0s itself, all the CNI providers that we support, and so on.
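
If you want to check for that kind of mismatch on your own hosts, a generic diagnostic (not part of K0s itself) is to ask each iptables binary which backend and version it is:

```sh
iptables --version
# e.g. "iptables v1.8.7 (nf_tables)"  or  "iptables v1.8.4 (legacy)"

# On Debian/Ubuntu-style hosts the active backend is chosen via alternatives:
update-alternatives --display iptables
```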

Marc: And that guarantees, I mean it does a lot, but the primary benefit is that it guarantees the iptables chains and the rules are going to work, because you're not mixing the legacy and nftables implementations.

Jussi: Yeah, exactly, exactly.

Marc: Do you deploy the standard kube-proxy with K0s?

Jussi: Yes.

Marc: Okay. And does that use what you were just describing there with your custom version of iptables, or does it just use standard iptables?

Jussi: It uses standard iptables, but it uses a certain version of it.

Marc: Okay. The statically compiled or pinned version?

Benjie: The standardized version, if you will.

Jussi: Pinned version, I think that's the most correct word here.

Benjie: Okay. So that's cool, I didn't think we'd ever interview someone who found a bug in iptables. I'm putting that one on my wall, that's pretty cool. Okay, so that sounds cool. Marc, when I asked if there was another cool feature I saw your eyes open wide. Is there another cool feature of K0s that you're interested in?

Marc: Well, another feature that we use that I'd love to hear how it got into the product: it's more than just a Kubernetes distribution. You have a Helm deployer built into it, you can deploy applications along with it. I know other distros do that, like K3s has a directory and if you leave Helm charts in there it'll automatically bootstrap them. What was the motivation for having that type of functionality and bundling it directly into the distribution?

Jussi: Right. The main motivation was looking at the different use cases where people use K0s and how easy it is to spin up new clusters, for example for development purposes. In a development environment, for example, there's a certain kind of standard stuff that you always want to have in there. We wanted a way where the ops people, or whoever is spinning up the clusters, have a simple way to get those standard building blocks into their clusters, things like Prometheus, cert-manager and those kinds of things.

Marc: But it's not just for those. Is it a stable and reliable feature that's documented and exposed? And if I want to use it to deploy whatever application I want, not just a building block, maybe I want to put WordPress in there as a Helm chart. I can do that?

Jussi: Yeah, of course. Of course.

Marc: And you can just specify whatever values, YAML, whatever that is and it'll just configure it?

Jussi: Yeah.

Marc: That's cool. And that's a way to avoid running helm install, kube-cuddle deploy, or having Argo or Flux or something like this in order to bootstrap a cluster?

Jussi: Yeah.

Marc: That's cool. It's kind of actually statically compiling the chart itself into the distro in a way, because you don't have that external dependency of how am I going to get this thing installed?

Jussi: In a way, yeah.
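
A sketch of what that looks like in the K0s cluster config, with an illustrative chart and version; the field names under `spec.extensions.helm` are from memory, so check the K0s docs before relying on them:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  extensions:
    helm:
      repositories:
        - name: prometheus-community
          url: https://prometheus-community.github.io/helm-charts
      charts:
        - name: prometheus
          chartname: prometheus-community/prometheus
          version: "25.0.0"        # illustrative version, not a recommendation
          namespace: monitoring
          values: |
            server:
              persistentVolume:
                enabled: false
```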

Benjie: So my bootstrapability with K0s is pretty easy. You're making me into a fanboy here a little bit, Jussi, and I don't always get fanboyish, so this is really cool stuff I'm learning.

Marc: He does.

Benjie: I agree, we get fan boy sometimes, but not every time.

Marc: But actually, do you have a way to also bootstrap the images, the container images that might be distributed either as part of the Helm chart I'm deploying or the Kubernetes control plane itself? Sometimes it needs images. Do you run etcd in the cluster?

Jussi: No. etcd is running as a plain Linux process.

Marc: But there's CoreDNS, something like this, that's running in the cluster. How do you actually handle distributing all those images, or do you require that all the nodes have internet access so they can pull those images when they boot up?

Jussi: What we do is, in the K0s worker node there's functionality where, when K0s boots up, it looks in a certain directory for tarball files containing images, and if it sees those files it'll automatically import them. For every single release that we do, we actually build an air gap tarball containing all the needed system images. But you can, of course, use that same functionality for your own applications, say you're running in an air gapped environment in some bunker somewhere. So you just have your USB stick and punch in a new image tarball for your own workloads and then restart K0s.

Marc: And is that as simple as basically, "Hey, I have this image, I'm just going to export it to a file system, create a tarball of all my images," and then it'll load those into the container runtime on all the nodes.

Jussi: Yeah.
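
A hypothetical end-to-end example of that workflow; the image names are made up and the exact directory is from memory (the K0s airgap docs are the authority):

```sh
# On a machine with registry access, export the images you need into one tarball.
docker save -o my-bundle.tar nginx:1.25 registry.example.com/my-app:1.2.3

# Copy the tarball to the air-gapped node and drop it where k0s looks for bundles,
# which should be /var/lib/k0s/images/.
sudo mkdir -p /var/lib/k0s/images/
sudo cp my-bundle.tar /var/lib/k0s/images/

# On the next (re)start, k0s imports anything it finds there into containerd.
sudo systemctl restart k0sworker   # service name depends on how k0s was installed
```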

Marc: That's cool.

Benjie: So there's also a registry proxy built into this thing also?

Jussi: No, I wouldn't-

Benjie: I know it's not actually, but I'm saying functionally I can do some crazy stuff with it?

Jussi: Yeah, in a way, yeah.

Benjie: Don't take me seriously about that, I know it's not a full registry. I'm just saying it's pretty cool what I can get away with. Where my brain goes to, which I feel like some of our listeners' brains go to, is, okay, I'm giving you my on-prem version of something or whatnot, and I can bake in my operator or my CRD or whatever into the actual binary and have it all there.

Let alone the images that I need to run my application and all kinds of stuff, so you can just do that, so it really, really simplifies... One of the things that's very frustrating for us at Shipyard, not frustrating but time consuming, is bootstrapping these clusters and then getting everything installed on them and configured correctly. But if you just had something that's literally, "Hey, here you go. A single binary with a folder attached to it in a tarball," and you get everything.

Jussi: Yeah. That's pretty much it, and we actually have one customer that's using K0s in that way. What they do, basically, is they create one huge tarball out of everything, including K0s, including K0s system images, including their own application YAMLs, including their own application images in a tarball and then bundle that as a huge tarball and ship that to their own customers.

Marc: Yeah. At Replicated we've been helping folks ship on-prem versions of their software on Kubernetes for a long time, and I 100% see the value in that. We are looking at adopting K0s, we have early versions of it. It's definitely solving something. It's a really good and creative solution to some of the hardest problems about application distribution into some of these difficult-to-reach enterprise environments.

Benjie: What I really like about this, just to be a bit computer sciencey for a second, is using Go as the basis of the CNCF, well, not the CNCF itself, but a lot of the ecosystem is based on Go and this statically compiled stuff. Obviously you're not going to write Kubernetes in Python, but it's taking these core computer science principles and then finding these really cool ways to use them.

It still blows my mind that you can just give me a binary and it's Kubernetes. Obviously this is not a completely brand new concept, but I think it's really cool to be building on the backs of all this computer science work that's been going on for the last 60 years, and ultimately we got so complicated that we then get back to actually really simple. So that's really, really cool, when it actually works, which it seems like it really does. I think that I want to know what's on the roadmap.

Jussi: Right. There are a few things that we're already at least partially working on and a couple of things that we have our eyes on. One of the things that we already started to work on, and that actually already partially shipped with our 1.28 release a couple of weeks ago, is that we started to create a full software bill of materials for K0s. And because K0s bundles all of Kubernetes, that now contains basically everything.

We're now looking to provide fully signed artifacts, like the binaries and everything, also as part of the release. Basically just making sure that when people are actually downloading K0s binaries they can be sure that it actually is what it says it is.

Marc: You said that was in the 1.28 release or coming in the next release?

Jussi: The SBOM part is part of 1.28. The signing part will be part of the next minor release.

Benjie: Are you going to extend that so I can do that for those images that I might be adding in a tarball?

Jussi: That's a good question, we haven't thought about it from the workload point of view. At least not yet.

Benjie: Well, I know Replicated would want that functionality so you should put that in there for Marc and his team.

Marc: So I'd like to dive into the details a little bit, I know a couple of versions of Kubernetes ago they started doing some of that also in upstream Kubernetes. Is this a different implementation that you had to use for K0s that you had to build yourself? If so, what tools are you using to generate the SBOM or sign and verify the signature of images?

Jussi: For the SBOM we use Anchore's Syft to gather basically all that information and then-

Marc: Anchore Syft, you said?

Jussi: Yeah. S-Y-F-T. And then for signing we're probably going to use Cosign.

Marc: Okay. Then verifying the signatures, you'll publish the public key and then somebody is kind of like, "Here's a way you could verify it, but you have to verify it at install time."

Jussi: Yeah, exactly. Exactly.
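
A hypothetical version of that flow with the tools named here, Syft for the SBOM and Cosign for signatures; the artifact and key file names are illustrative, not K0s's published ones:

```sh
# Generate an SPDX SBOM for a source checkout or build directory with Syft.
syft dir:. -o spdx-json > sbom.spdx.json

# Verify a downloaded binary against a published signature and public key with Cosign.
cosign verify-blob --key cosign.pub --signature k0s.sig k0s
```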

Marc: Cool. How long does it take to get a new version of K0s out? When Kubernetes 1.29 ships, which I think is early December, what's the delay, the latency, to get the next K0s out to support 1.29?

Jussi: Usually it's like two weeks to a month after the first minor release.

Marc: After the first minor release?

Jussi: Yeah. So when, say, 1.29.0 ships out then it's usually two to four weeks after that when we have our first release.

Marc: Yeah, it probably changes depending on what that minor release changed in Kubernetes. The effort is probably just variable.

Jussi: Yeah, exactly. And of course it depends on a number of factors, like the amount of sunspots. For example, our 1.28 release actually got delayed quite a few weeks because we had another low-level incompatibility issue. This time not with iptables, but with the tool called ipset. During the final steps of the 1.28 release we actually found a Linux kernel ipset version incompatibility issue, very, very similar to the iptables thing, which we had to work around.

Marc: But that's the value of zero friction. So I'm sure the extra couple of weeks of delay, A, it's super understandable and also that's needed. That's the value that you're adding on top of upstream Kubernetes so you have to do it.

Benjie: I hope no one's actually trying to install a brand new release in a production environment. Maybe CNCF is going to get mad at me for saying that, but whatever. You should be very careful if you're doing that.

Marc: The way that we actually started talking, before we actually set up a time to record this podcast is there's an issue in the K0s repo where you're discussing, "Should we contribute this project as a CNCF project versus keeping it as a Mirantis project with an open source license?" I'd love to hear the latest on that, what are y'all thinking? What are the challenges? How can people help if they really love K0s, but like, "Oh man, if it was a CNCF project it'd be easier for me to make that long term commitment to the project"? Yeah, I'd love to hear more about that.

Jussi: Yeah, and I guess you actually covered the main motivation for why we're even thinking about that. Having a CNCF stamp on it proves that we're actually here for the long haul and not just staying as a somewhat proprietary, although of course open source, Mirantis project. Also, what having K0s as a CNCF project would bring is more open collaboration with and between the different CNCF projects, so that's what we're looking to get out of it.

Of course, like in any decision, there are also downsides. It does require a bit more governance and whatnot, and setting up all of that machinery, so I think that's the main thing we're currently worried about. Does it create too much, for lack of a better word, bureaucracy in a way?

Marc: Sure. So Mirantis supports making the project a CNCF project, you're just weighing the cost of doing that, making sure it's justified in the end, that the benefits outweigh the costs?

Jussi: Yeah, exactly. So we still haven't made the final go/no-go decision, but seeing all the feedback and hearing from different folks on that topic, I think people would be fairly happy to see K0s as a CNCF project. But that of course means that we have to apply, so there's no guarantee whether we'd be approved or not.

Benjie: There would be some more friction, I hate to say it, sorry to be that guy, but there'd definitely be some more friction if you were a CNCF project but there's also a great deal of benefit and so that makes a lot of sense. If we wanted to contribute right now or be a part of that discussion, where do we go?

Jussi: If you want to be part of the CNCF discussion then there's a pinned issue in the K0s repo in GitHub, so add your thoughts there and whatever you have your mind on that topic. Then of course in general, it's an open source GitHub project so it's typical when you see problems, issues, challenges, open an issue and we and the community will try to figure out what's going sideways. Then, even better, if you can figure out why things go sideways then open PRs.

Marc: Do you have community calls right now, even though it's not a CNCF project? Is Mirantis still hosting a regular community call?

Jussi: We actually don't currently do that. We started to do it very, very early on, but there were only a few people, and the same people always on the call, so we didn't see the value back then at least. But now that there are a lot more people using K0s, it probably makes more sense, and that's something that has been on our minds for a couple of months already, whether we should do it or not.

Marc: I would say that even without a community call, you and the team are pretty responsive in GitHub issues and I think you're definitely paying attention to the community out there.

Jussi: Yeah. We try our best, at least, and of course, like in any open source project, every now and then there will be some issues and comments and whatnot. There are actually quite a few different repos that we maintain, and a couple of months ago we launched a new project called k0smotron, and we're trying to keep up with that and everything. But like any open source project, I think we'd like to see more people contributing and throwing out PRs and whatnot.

Marc: Of course. All open source projects, I think. No, no, no, don't contribute to this open source project. No, you definitely want that.

Benjie: I really like the name k0smotron but I'm going to save that for the next time we have you on, Jussi. Everyone else can go and do their own research on that one, I'm going to do it after this podcast.

Jussi: Just as a teaser, it's throwing Cluster API on top of all this that we've already talked about.

Benjie: You're killing me. All right, now I got to go look at this too. It's too much.

Marc: We'll put links in the show notes here for some of these.

Benjie: For me, you're going to put links in the show notes so I can go learn some more of this stuff. Well, look, we really appreciate you coming on, really had a great time. I learned a whole lot and I am going to be checking out K0s very soon. Really appreciate it and keep doing what you're doing, and we'll be following the project closely.

Jussi: Excellent. Thanks for having me.

Benjie: Thank you.