Ep. #39, Live From KubeCon 2023
In episode 39 of The Kubelist Podcast, Marc and Benjie recount their experience at KubeCon 2023 and share interviews from the event with guests like Matt Butcher and Radu Matei of Fermyon, Umair Khan of Stacklet, Anna Reale of Keptn, Solomon Hykes of Dagger, Bailey Hayes of wasmCloud, and many more.
transcript
Benjie De Groot: All right, so this is the first episode that we've done right after we got back from KubeCon. Marc and myself were both there on the floor. We got some great interviews, we talked to a bunch of projects and a few companies, got updates from former Kubelist guests, which was great. We'll get to those in a second, but just wanted to take a second to talk about our KubeCon experience. Marc, how was your KubeCon experience?
Marc Campbell: It was great. It was good to be... a good turnout, the project pavilion was great, there were a lot of good events at KubeCon overall. I think the event was really good.
Benjie: Yeah. I'm going to say that I think KubeCon is back. I'm just going to say it. I feel like the last really great one before this one was San Diego, and obviously with the pandemic and all this other stuff there's been hit and miss stuff.
I was not at Amsterdam, I heard that was actually pretty great. So I'm going to say KubeCon North America is back and it was great. I really connected and saw all the old faces and a bunch of new ones, learned about a whole bunch of projects and really, really liked a lot of that first day track that was not the actual KubeCon-KubeCon stuff. I feel like WASM was really hot.
Now, is WASM going to be... you know I'm a fanboy of WASM but it really felt like a lot more people were talking about it and a lot of really cool projects were there. People were talking about those projects. I talked to Matt, will bring that up in a second in the episode.
Marc: From Fermyon.
Benjie: Matt Butcher, yeah, from Fermyon. Exactly. We've got an update there. The other thing is it seems that Platform was a big talking point. Now, what Platform means is always interesting for me, especially as Shipyard's in that space. There's two versions of Platform in my mind, and so I think it's really interesting that Platform is becoming a big, big conversation. Any buzz that you heard?
Marc: Yeah. There were definitely a lot of Platform teams there, and I think the other part that I'd add in is especially on the Monday, the day before the sessions actually started, there were a lot of conversations around GitOps and Argo CD and Flux. That was definitely a big trend too.
Benjie: Yeah. Backstage, a lot of talk. Which, by the way, Kubelist scooped that one four years ago. Marc, you did an episode with the Backstage folks. But yeah, I saw a lot of companies talking about Backstage and obviously companies that are enterprise on top of Backstage, building off of Backstage. Were there any other projects that you didn't know about that you learned about?
Marc: There were definitely some in the project pavilion. Actually, let's dive in here for a second here, Benjie. I think you recorded quite a bit of audio on the floor, interviewing some of these different projects.
Benjie: I mean, the interviews will speak for themselves but I learned a lot. I talked to Cloud Custodian, I talked to Kubevirt, I talked to Keptn, I talked to Porter, the wasmCloud folks. I also spoke with the Buildpacks people, and the cert-manager people. Those two actually did not make it into this episode. But we're going to bring those folks on to a future episode of Kubelist.
Yeah, I mean it's just kind of crazy how far folks have taken stuff. Kubevirt I think is really cool. You can use the Kubernetes API to manage virtual machines, I thought that was really cool so excited for you guys to listen to that stuff. Then also we had some, like we said, some updates with Matt Butcher and Radu from Fermyon, and also Solomon at Dagger giving us some updates as well. So I guess we can just dive into those interviews and take a listen.
That was our KubeCon. Great job, CNCF. I will say we got to work on the food, we got to work on the food. I'm speaking for the people, the food, there's work to be done. There's improvements that can be made, we're always iterating, folks. We love you, CNCF. We got to work on it. It's hard to feed 12,000 people. I will say on the food point, the buffet, the snack buffet in the evening was great.
Marc: All right. Let's jump into the interviews that you had on the floor.
Benjie: Thanks for having us, KubeCon, and enjoy these check-ins we did. So we're here with Matt and Radu from Fermyon, and yeah, you guys are at KubeCon 2023 in Chicago, and some really cool stuff that's going on. We want to hear the latest and greatest updates, so, Matt, why don't you tell us what's going on?
Matt Butcher: Yeah. I think the show floor just opened a little less than an hour ago and it's already nice and busy around here.
KubeCon is always a really great conference for us. We all came out of the container ecosystem, many of us were in the virtual machine ecosystem before, and so as we began developing Web Assembly technology really what we were looking for was the next complementary technology that would be part of this ecosystem. So every time we come here, not only is it great because we're seeing old friends and still staying connected with communities like Helm and Akri and all of those, but we're also able to bring a new and novel technology.
So Radu, he's the CTO of Fermyon. On Friday they just cut a major release of Spin. Nothing like cutting a release at 5:30 PM on a Friday, before half the team leaves to go to a big conference, but that's what they did and it's ended up being a big success.
Benjie: Wait, so this is the Spin 2.0 stuff that just came out, right?
Radu Matei: Correct.
Benjie: Okay. Tell us about this, Radu.
Radu: Yeah. So at the end of last week on Friday at 5:00 PM we launched Spin 2.0, which is the latest major release for Spin, our developer tool for building and running Web Assembly applications in the cloud. The headline features for Spin revolve around the component model and finally being able to make use of things like component composition and polyglot component composition, and new features like streaming that are coming from WASI HTTP, which is a specification happening in Web Assembly.
And so Spin 2.0 is really the first release of Spin that exposes the way we run components. Spin has been running components for about four or five months now, but this is the first release where we actually make it available for users to have a stable target to compose components and to deploy those into Fermyon Cloud.
Benjie: Okay, so Radu, give me a tangible example of what I can do today that I couldn't do a week ago with Fermyon.
Radu: Yeah. The simplest example that we go on to demo in the blog post announcement of Spin 2.0 is being able to take a component written, for example, in a high performance, memory safe language like Rust and then import that from your web handler written in Python, for example, or JavaScript. Essentially taking those two components and linking them and running them as a single unit, whether you're running in Spin or in Fermyon cloud or somewhere in Kubernetes somewhere, being able to start composing applications from these polyglot components.
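A rough sketch of what that cross-language contract can look like, using a hypothetical WIT interface (the package and names here are illustrative, not Fermyon's actual API): a Rust component exports a function that a Python or JavaScript handler can then import and call.

```wit
// Hypothetical WIT sketch: the Rust component implements this world,
// and a Python or JavaScript HTTP handler imports the interface.
package example:tokens;

interface checks {
  // Implemented in Rust for performance and memory safety.
  validate-token: func(token: string) -> bool;
}

world validator {
  export checks;
}
```

Once both components target the same interface, a composition tool can link them into a single unit that runs anywhere a component runtime is available.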
Benjie: Okay. So we're talking about being able to have multiple language components, tying them together, and running them anywhere you want?
Radu: Correct.
Benjie: So this sounds like the whole evolution of WASM and the whole idea behind what we're trying to do here. So what's the coolest application you've seen so far? It's been three days or whatever, but tell us anything really cool that surprised you?
Radu: Well, I've been traveling for most of those three days, but I think one of the things that we worked really hard to make sure is possible is being able to do things like streaming from one component to another.
The type of scenarios that enables are pretty exciting, in particular because we wanted to make sure that, one, we enable these new functionalities to users but also make that in a standard way that builds on top of the standards and builds on top of the stable targets that we want to make sure the community builds on top of. And so, I think the underlying cool thing that I'm looking for and I'm seeing people build is revolving around streaming and composition.
I haven't really seen a lot of actual applications. Well, one that we've been showing people is the Auth component and being able to inject an authentication middleware built with a Web Assembly component, and just bring that wherever you might want to run your components.
Benjie: Okay. So for that example we have... I've got my Python app and the auth mechanism is some Go microservice, so I can stream those credentials safely, securely, over to my Python app and then I could log in. And so it's really the evolution of the microservice or the service... Let's not use microservice. We've all learned our lesson.
The service based architecture being really independent, and that's running in WASM so that can go anywhere, so that's really cool. Is there any specific security stuff that Spin gave us to enable this? How do I know that that's going to be safe to stream between two processes? Matt, why don't you take this?
Matt: Yeah, yeah. Sure. I think the cool way to think about components is you're essentially packaging up multiple Web Assembly binaries into one particular application, and then as they execute, they execute in isolation from each other. The attractiveness of this from a security perspective cannot be overstated because now we can take this OAuth example, we know we're going to be dealing with some sensitive exchange of token information and things like that but we can assert, for sure, that information is never making its way outside of that one component.
As we get going and this is why we're excited about the future of this, Web Assembly is a sandboxed environment, it uses capabilities style modeling to turn on and off features. So we can literally get to this place where we can say, "Hey. This untrusted or semi-trusted YAML parser component that I grabbed off the interweb..." Because we all do this, all the time. None of us audit the code that we're downloading off the internet.
But I can say, "Hey, I know what method I need to use and I need to use the parse one." I'm not giving it access to the network, I'm not giving it access to the file system, I'm not giving it access to any of those kind of system databases, any of those resources. It can only do these kinds of things. So when you think about the attack surface that you're changing there, we've all seen malicious code injection attacks through public repositories of code, and there's no way to mediate those in a traditional programming environment.
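As a rough illustration of that opt-in capability model, here is a hypothetical Spin manifest fragment; the component name is illustrative, and the point is simply that a component gets nothing it is not explicitly granted:

```toml
# Hypothetical spin.toml fragment (Spin 2 manifest format; names illustrative).
spin_manifest_version = 2

[component.yaml-parser]
source = "parser.wasm"
# No "allowed_outbound_hosts" entry -> the component gets no network access.
# No "files" entry                  -> the component gets no filesystem access.
# The untrusted parser can only do what it is explicitly handed.
```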
With a component model, we have a way of mediating those at least to some degree. So we've vastly reduced the attack surface that a third party library will have, and that I think is going to be a very profound change.
So, today, you already are reaping the benefits of this because the components are already running in isolation meaning tomorrow we can turn this into the kind of thing where, operationally, the platform engineering team can take an application and say, "These are the security parameters we're going to wrap around each of these particular components, and have this very auditable, configurable security posture that is just..."
I mean, there's nothing like that out there at the library level anymore, and I think that's going to be the evolution of where software supply chain really ought to go, where we can make much stronger assertions about the security of each piece of software that we use, each library that we use to assemble our application in.
Benjie: So SBOM plus WASM. You know for me, I'm the hype guy, so where's my eBPF in all this? You're missing the one... Maybe get Backstage in there somehow. For those not listening, Backstage seems to be the hotness for KubeCon so everyone is talking about Backstage, which is a really cool open source project. But, more importantly, the security model is really cool, what you guys are talking about.
So it's kind of like you get a sandbox at the service level, you can guarantee what's coming out of there is hopefully safe, and you get this component level security sandbox and then you can really trace the entire security posture of the whole thing. Let alone the RBAC side of that where each service can have different permission levels baked in. Yeah, okay. Wow, that's really cool. Okay, so Spin 2.0, is there anything else to highlight with what's going on with Fermyon?
Matt: Yeah. I think there's one other thing that we haven't really talked about too much in the initial blog post but, ironically, Radu actually demonstrated it in the code sample of the blog post. So for the next thing that components also allows us to do is you can take an off the shelf component and you can write a thin wrapper layer around it so when you think about classic object oriented design patterns, you can use patterns like decorators and interceptors and things like that and be able to write those and configure them at the application level.
Now, that sounds very abstract and very architecty, but when you think about some of the things that you can do with that... Again, these are things you can do right now. Imagine you've got a component that you want to figure out why is this thing... what does the data look like as it crosses over here? We can basically write a lightweight wrapper around it as another component, and be able to instrument some of that there.
Say you have a problematic component that has a zero day and the upstream hasn't been patched. Well, you can write... What do they call them? An individual firewall, basically a component that wraps that one and says, "Oh, if this malicious looking pattern comes in, stop it and don't even pass it on to the real host component." Again, another really interesting thing today.
These are things that haven't been really exposed at that particular level. So we're looking at those, and Radu demonstrated how to wrap a file server and add a middleware in front of it in his recent blog post introducing Spin 2.0.
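The wrapping pattern Matt describes can be sketched in WIT as a world that both imports and exports the same handler interface, so the wrapper sits in front of the inner component. This is a hypothetical sketch, not the code from the blog post:

```wit
// Hypothetical sketch of an interceptor component: it exports the same
// interface it imports, so it can inspect or filter traffic before
// delegating to the wrapped (inner) component.
package example:firewall;

world interceptor {
  import wasi:http/incoming-handler;
  export wasi:http/incoming-handler;
}
```

At composition time, the interceptor's import is satisfied by the inner component, so callers only ever see the wrapper.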
Benjie: Wait, so that's really cool so hold on. You're saying that I'm using a component and I'm using some library and there's a CVE that comes out but no one even knows, there's no patch for it yet. In the Fermyon ecosystem, maybe one day soon, we're going to have a way to be like, "hey, wrap this around this particular service, and I can guarantee that you're safe. You'll be a little hobbled, but it's safe."
And so it's kind of like bringing object orientation to the service level. Okay. I went to computer science school so I understand what that means. Sometimes I feel old every time I start talking about inheritance, but I love the idea of bringing that back. So Radu, I feel like maybe you've got one or two other cool things to tell before we wrap up here. You've got a look on your face. By the way, this is weird for me. I'm looking at people while talking to them, so now I'm trying to intuit what's going on. But, Radu, tell us a few more cool things and we'll wrap up.
Radu: The one other thing that gets me really, really excited about Web Assembly and about talking to so many humans at this conference is around... So for the first part of this conversation we primarily talked about a new way of building applications, and how components and Web Assembly enable us to think differently about how we build software, off the shelf components, wrapping software in components and being able to analyze and think about, reason about what a component does.
The second part to this conversation revolves around, well, I built my application, where do I deploy this? What does the operational aspect look like in a Web Assembly world? The thing that makes me really excited about this is if you want to build a radical new platform, you can. But you can also incrementally adopt this wherever you might happen to run your applications.
Whether that's a systemd job on a Linux box somewhere, or whether that's with Docker Desktop or whether that's with OpenShift or Kubernetes or Nomad or Fermyon Cloud, being able to take your Web Assembly app and deploy it in any of those places without changing anything about the rest of your operation is something that we've been really passionate about for the last year and a bit, and it's starting to be the case that you can pretty much run Spin applications anywhere, and that makes me really happy.
Benjie: Wait, so I just thought of something and I want to make sure I understand this right. So hypothetically, I could have a component, say I build tractors and I've got a machine on the assembly line floor, and it's got an ARM processor so somehow I've got WASM or WASI or whatever the right... I've got it running out there.
I could have some little component that's checking the serial number reader for every screw that gets put into this tractor. I don't know where this example is coming from, but we're tracking screws in tractors, apparently, and I've got a barcode reader. That component can stream that information across the interwebs from the edge to a different thing?
Radu: And not even just that, you're at the point where you can actually take that component that has to, for now, run on a constrained ARM device and just move that to somewhere in your other parts of the infrastructure that might run Linux on x86, for example. Web Assembly is pretty much the first technology that lets you do that.
Benjie: Yeah. That's really cool. I know we've talked to some folks where edge actually really matters, environments where you get very little compute and you need to have it on-prem for lack of a better term. And so Spin 2 is actually also a transport layer almost, is that right? To talk securely between the two? That's my question, I'm not saying the portability between architectures isn't super cool as well, but is it natively doing that or how does this work?
Radu: If you want to move an application with Spin, either 1.0 or 2.0, you currently have to go through a registry to do that, which is like your regular container infrastructure registry that you can reuse for Web Assembly applications. We have been thinking about the ability to stream content from one place to another for a really long time, and component composition was the final thing that was required for us to be able to achieve that.
And so don't be surprised if in the near future we are seriously thinking about moving running applications from one construct to another, from one run time to another, and be able to stream that content back and forth.
Benjie: Okay. That's really cool. For the listeners, the eyes were very shifty when he said that so it seems like an exciting thing that might be on the roadmap, so that's really cool as a fan of what you are doing. I'm looking forward to that day.
Radu: Really, getting the ability to run Web Assembly and Spin applications in all the places that I listed was really step number one in making sure that we can start moving those around. So we are really getting close to the point where we can really start achieving that.
Matt: I can tell that Radu is excited because this is the longest I've ever heard him talk without saying AI.
Benjie: Matt and Radu, it was really good to meet you. Radu, it was good to meet you. Matt I've met before. Return guest on the podcast, really excited, Radu Matei.
Radu: Surprisingly well pronounced, thank you.
Benjie: And Matt Butcher, that one's easy. So last thing as we always do, if we want to get more involved, learn more about what you guys are doing, what's the way to get more involved and find out stuff?
Matt: If you want to just get a quick glance of what's happening, Fermyon.com/blog is where we post really regularly, all the things that are happening. If you want to engage or interact with the community, we have about 1,000 plus in our Discord community if you go to Fermyon.com/Discord, that's where you'll get an invite to join our server and talk to a lot of people who are enthusiastic about Web Assembly.
Benjie: Awesome, guys. Thank you so much for Spin 2.0 and the update and good to see you again, and we'll talk later.
Radu: Thanks a lot, this was fun.
Benjie: All right. I am at the Cloud Native Project Pavilion and I'm going around talking to some of these projects, and the first one we have here is Cloud Custodian with Umair Khan, and you're with Stacklet. Umair, tell me a little bit about Cloud Custodian. What is Cloud Custodian? What stage is it at? Is it incubating? Where is it?
Umair Khan: Cloud Custodian is an incubating CNCF project. We are looking to graduate hopefully in the next few months. Custodian came originally out of Capital One. It's a governance-as-code engine that allows you to write simple policies for security, compliance, cost and operations. It doesn't matter what type of policy it is, you can write it up for your cloud infrastructure and you can also define actions in it.
So it's not only showing you red lights and green lights, but you can take actions in it. For example, you can identify these under utilized resources, notify the owner, wait a week, notify a manager, and then eventually turn it off. So you can have different sorts of actions as well, especially when you start scaling your operation to the cloud. You need something like Custodian, same with security, same with compliance, same with cost management.
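The workflow Umair describes maps directly onto Custodian's YAML policies. A hedged sketch of the "find it, give owners time, then act" pattern, using Custodian's mark-for-op action (the thresholds and policy name are illustrative):

```yaml
policies:
  # Find EC2 instances averaging under 10% CPU over two weeks
  # and mark them to be stopped in 7 days, giving owners time to react.
  - name: mark-underutilized-ec2
    resource: aws.ec2
    filters:
      - type: metrics
        name: CPUUtilization
        days: 14
        value: 10
        op: less-than
    actions:
      - type: mark-for-op
        op: stop
        days: 7
```

A companion policy with a marked-for-op filter would then perform the actual stop once the grace period expires.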
Custodian now has over 200 million downloads. We just crossed 400+ contributors from all types of organizations as well. So yeah, the project is getting more and more steam, especially I would say these days when everyone is trying to save costs on the cloud. Custodian really helps you save money and at the same time stay more compliant against security and compliance rules as well.
Benjie: Super cool. So is this a CRD? Is it a daemon set? How does Cloud Custodian get into my cluster, or how do I use it? How do I install it?
Umair: Cloud Custodian is more for cloud platforms, so it's Azure, GCP, AWS, Oracle Cloud now as well, Tencent Cloud. Then we have a Terraform provider as well, so you can scan Terraform for your policy and compliance as well.
Benjie: Okay. So this actually plugs into my cloud and I can programmatically understand what's going on. Give me a tangible example, or one that you've heard of one of the customers or people that are using Cloud Custodian that you can share with us, just to give people an understanding of what would I use Cloud Custodian... Give me a tangible example if you don't mind?
Umair: I think tagging is a big one. So before you want to do anything with security or cost or compliance, you need to have the right tagging information on your resources, so that's where Cloud Custodian comes and helps you as well. You can define your tag policies, it can identify which resources are not rightly tagged or Kubernetes clusters are not rightly labeled.
It can even take actions in notifying people, or actually updating them with temporary standard tags as well. So I think that's one of the biggest use cases Custodian has, even before you start taking actions you need to know more things about the resource so tagging is big use case that everyone uses Cloud Custodian for.
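A minimal sketch of that tagging use case: a policy that finds untagged instances and applies a temporary placeholder tag (the tag key and value here are illustrative):

```yaml
policies:
  # Flag EC2 instances missing an "owner" tag and apply a
  # temporary placeholder so they can be tracked down later.
  - name: enforce-owner-tag
    resource: aws.ec2
    filters:
      - "tag:owner": absent
    actions:
      - type: tag
        key: owner
        value: unassigned
```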
Benjie: Okay, so I'm tagging my resources. Now, is this cloud native specific or can I use this for my S3 buckets? That doesn't really make sense, but my EC2 instances, for example? Or does that not cover that? It's just in the EKS world, or what am I tagging here? Anything?
Umair: Any resources, any resource that's on AWS' catalog or GCP's catalog, or Azure's catalog, and typically Custodian ends up supporting more resources than even some cloud provider tools. A lot of times those product teams are different, but in our open source community, someone wants to use that resource and they contribute that resource type to us and that comes upstream. So Custodian probably has the most depth and breadth in terms of resources covered across all the major clouds because we have such a big, vibrant community that's using it.
Benjie: You said you had 400 contributors?
Umair: We just crossed 400, probably 420-ish now. The number keeps changing, but I think 400 was a big number for us a month ago that we crossed.
Benjie: When did you get into incubating? How long has this project been around for?
Umair: The project has actually been around since 2016 or '17, I believe. But initially it was inside Capital One, then it was open sourced into the CNCF Sandbox. Last year we became incubating, I think our numbers are pretty close. We're going through a process, we have to officially go through the process of graduation. But we are hoping by summer next year we can be a graduated project too.
Benjie: So that's interesting. We've talked to a bunch of people who are in the process of that. So you have to go through the audit, the security audit, tell me a few other things that you have to go through to get to graduation?
Umair: I think the requirements haven't really evolved a lot as well, but security audit is one. We were just talking to one of the CNCF partner companies who audit as well, and then your governance needs to be certain things. We updated our maintainers too, like we have from different companies like 23andMe, Intuit, Capital One, Stacklet is one of the maintainers, so the maintainership is pretty diverse, different companies as well.
So that's a checklist. I don't know the exact checklist but I know audit is one of them and the governing itself is different. But we added two new maintainers last year as well, so there are different types of companies who are contributing and maintaining the project nowadays.
Benjie: All right, super cool. All right, last, quick question, if we wanted to check out more information about Cloud Custodian or wanted to contribute, do you guys have weekly meetings? Monthly meetings? Where do I go to check it out?
Umair: I think you go to CloudCustodian.io, we have a very active Slack channel as well that you can join and we do have a biweekly meeting that you can join to learn about how you can contribute, the latest things, we talk issues with maintainers and so on. But we are pretty active on Slack as well, that would be the best place for you to join. If you-
Benjie: Sorry, that's the CNCF Slack?
Umair: We have a Cloud Custodian Slack.
Benjie: Okay, so I'd go to CloudCustodian.io and then I would find the Slack there. Are you guys on the CNCF Kubernetes Slack channel or not really?
Umair: We are on the CNCF Slack, but I think there are more people on ours. We do support it, but I think every project has their own Slack, right? As you get bigger you need multiple channels, so it becomes challenging to maintain there. But naturally that channel is managed by CNCF, they have everything.
Benjie: Okay. So check it out, CloudCustodian.io. Every other week meetings. I hate biweekly because it could mean two things, I hate that. So it's every other week I take it, okay? And the Slack, the Cloud Custodian Slack that you can find at CloudCustodian.io, that's the place to find you guys?
Umair: Right.
Benjie: Okay, cool. Thank you so much, Umair, for your time and we'll check it out. Thanks.
Umair: Thank you so much.
Benjie: All right. So found the Kubevirt guys and we're going to talk and understand what Kubevirt is a little bit, get some updates real quick. Okay, so first we've got Ryan Hallisey, okay. We've got Andrew Burden, and we've got Vladik Romanovsky. All right, that was just on the fly there. Okay. I'm going to start with Ryan or anybody. Tell me real quick what is Kubevirt?
Ryan Hallisey: Kubevirt is a way that you can run a virtual machine on top of Kubernetes. The way that I describe it to people is that you've got Kubernetes that you run right now, you've got pods, you've got containers, you can run them, you like Kubernetes. You can take what you have now that you run in virtual machines, and you can port that over to run on top of Kubernetes.
Essentially, behind the scenes what you get underneath the hood is a virtual machine, a Kubevirt process running inside a pod, and so ultimately it's going to look and feel a lot like what you have in your traditional virtualized environment. But now, because you like Kubernetes, you really like this architecture and you want to use it for all of your systems, you now can have the ability to get those same APIs to run your virtual machines alongside your pods.
Benjie: So I could use the Kubernetes orchestrator and scheduler to orchestrate and schedule my virtual machines, basically?
Ryan: Yeah. That's correct. Yeah, so I guess another way to think about it is you've got your pod API, right? Everyone is familiar with that. Well, Kubevirt has a virtual machine instance, a virtual machine API. They're all the same, you can use the kubectl client to access these things. They all go through the API server, so that's exactly right. You can use a kubectl create command just like you would for a pod, it goes through the API server and it would create this object for you. Under the hood it's just a virtual machine.
Benjie: Interesting. I'm trying to think of the security implications. Does that mean that you could use the Hypervisor? Do you get a cooler security layer based on that at all, from inside the cluster?
Ryan: Well, your workload is going to be running inside the guest, it's now going to have a kernel layer that's going to protect it from breaking out to the host. So one of the especially really appealing things, at NVIDIA one of our use cases is we really like the idea of having a kernel layer to run our workloads.
It's something that we like that protection, we feel more comfortable with it. We like virtual machines, a lot of people do and that's one of the things we like about it. So you get that additional layer having the kernel there actually inside the virtual machine inside the pod, so you get that additional protection for having it.
Benjie: Right. And you said that you work at NVIDIA so I assume there's a good chunk of running GPUs in a Kubernetes cluster. I assume you get some benefits out of going straight to virtualization. Okay, cool. So I can use the Kube API to bootstrap and maintain, turn on and off these things. Now, what if I want to turn on a virtual machine with a certain amount of resources? Can I do that same type of limit, memory limit type of thing with Kubevirt?
Ryan: Yeah. Yeah, you can. Essentially the APIs have a similar feel to pods. A lot of the inspiration for this stuff is that we want to use the same concepts. Essentially, if you're familiar with running your deployments and running your pods, a lot of those concepts have been taken, ported over and applied, so it's just like running another application in Kubernetes, it's just that now this is a virtual machine.
You get the kernel layer, you can now take your application, you've brought it over and we run it in a virtual machine, so it's the same kind of look and feel that you would as your Kubernetes application, but now it's just in a VM.
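To make that concrete, here is a hedged sketch of a KubeVirt VirtualMachine manifest. The field layout follows KubeVirt's kubevirt.io/v1 API, with illustrative names and an example containerDisk image:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm            # illustrative name
spec:
  running: true            # start the VM when the object is created
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi    # resource requests, just like a pod
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:   # boot image shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

Applying this with kubectl creates the object through the API server, and the VM then shows up alongside your other Kubernetes resources.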
Benjie: That's super cool. Okay, so some quick questions around the project. How long has Kubevirt been around? I'm going to ask Andrew, how long has Kubevirt been around for?
Andrew Burden: It started around May, 2016. It started as an idea within Red Hat amongst some engineers as like, "Ah, this sounds like a thing that could happen," and it turns out it could happen. They got it working and running I think in 2019, it was successful enough that they decided to donate it to the CNCF as a sandbox project.
Benjie: Is it still sandbox or is it incubating?
Andrew: As of last year, I want to say May 2022, it became an incubating project.
Benjie: Congratulations. So are you guys trying to graduate right now, or?
Andrew: Yeah. So earlier this year we had version 1.0 released, today we hit our version 1.1 release. Hooray. And, yeah, our eyes are now firmly set on graduation.
Benjie: So come Paris, you think you're going to be graduated or later in the summer?
Andrew: As I understand it, there is somewhat of a queue. We don't yet have all the criteria in place, there's a few things we still need to get like the Plus Badge and the security audit. Once those are done we make our submission and then it's up to the CNCF to process that.
Benjie: Super cool. Yeah, I've talked to a few folks and they're all dealing with the security audit. It seems like a lot of people are trying to graduate, which is really cool. Okay, I'm going to ask Vladik: if I wanted to check out KubeVirt, if I wanted to contribute, tell me about where you guys meet, when you meet, how to get involved?
Vladik Romanovsky: Yeah. So we have a page, we have a number of ways to follow us. First, we are on Slack. There is the virtualization channel in the Kubernetes Slack, and there's also a KubeVirt dev channel specific to this. We also have a KubeVirt dev mailing list, and we have a regular community meeting. You can find all the information about this in our GitHub repository, so that's GitHub.com/kubevirt.
Benjie: Okay. So GitHub.com/KubeVirt. Do you guys have a website as well, or do you just-
Vladik: Kubevirt.io.
Benjie: Okay. And KubeVirt.io is the place to find all this stuff out. Okay, cool. Thank you, guys. I know some of you have to go to a talk right now. Appreciate the time. All right, I've got Anna Reale of Keptn. Is that the right way to say Keptn? Cool. She's a maintainer. I'm in the project pavilion with Anna Reale, and she's going to tell us a little bit about what Keptn is and how to use it, and we're going to have a quick chat.
Anna Reale: First of all, Keptn is an incubating project of the CNCF. We are fully cloud native. What we do, out of the box, is give your developers traces and the DORA metrics for day-one operations. So before you deploy, while you deploy, and right after you deploy, you can get information such as: how are my little microservices, connected together, doing? Which one is causing my crash, my error? And you see everything connected in a single trace.
You do see DORA metrics, so things such as what was the lead time of deployment, how much it is compared to before or after, so that you can react to this. Then as a second use case, we provide you with another nice tool, which is the metrics operator. This is basically a broker that allows you to have metrics coming from many different providers: Prometheus, Dynatrace, Datadog.
It's very easy to integrate more, in a way that you can use them as if they are all the same inside Kubernetes, to react on things, to do things with these metrics, with this information that is spread all over the place across different providers. In the demo here at the booth, for instance, we show you how to use this to configure an HPA, the Horizontal Pod Autoscaler.
What you can do is register multiple metrics, one from Prometheus, one from Dynatrace and whatnot. You establish SLOs, objectives that you want to analyze, and you apply a custom resource from Keptn. This objective is analyzed, it's pass or fail, and based on that, other tools that we easily integrate with can react. For instance, Argo can say, "Okay, let's roll back. This deployment is very bad," or, "This one is very good." Yeah, that's Keptn.
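To make Anna's description concrete, here is a hedged sketch of the two custom resources she is describing: a provider telling Keptn where metrics live, and a metric the Keptn metrics operator can serve to other tools such as an HPA. The API group, version, and field names are recalled from memory and may differ between Keptn releases; names and the query are hypothetical:

```yaml
# Where to fetch metrics from (one of several supported providers)
apiVersion: metrics.keptn.sh/v1alpha3
kind: KeptnMetricsProvider
metadata:
  name: prometheus-provider
  namespace: demo
spec:
  type: prometheus
  targetServer: http://prometheus.monitoring.svc:9090
---
# A metric resolved through that provider, consumable by an HPA
apiVersion: metrics.keptn.sh/v1alpha3
kind: KeptnMetric
metadata:
  name: request-latency-p95
  namespace: demo
spec:
  provider:
    name: prometheus-provider
  query: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
  fetchIntervalSeconds: 30
```

The point of the broker design is that a second metric could name a Dynatrace or Datadog provider instead, and consumers would treat both identically.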
Benjie: Cool. So Keptn, is that a CRD or an operator? A daemon?
Anna: No, it's a toolkit, so it's a couple of operators. You can install them individually for each of the functions I've mentioned, and it basically interacts with the Kubernetes scheduler and uses Kubernetes webhooks.
Benjie: Cool. And you said it works with Dynatrace and all this other stuff, so you actually are setting your SLOs in Keptn itself?
Anna: Yeah. You have a custom resource which you can use to set up failure and warning criteria, and you can combine them. In this resource you can associate each of these objectives with a different provider, which is yet again just a custom resource that tells you, "Okay, this is the type: a Prometheus provider, and you will find it at localhost blah blah blah, or you will find it on this remote server."
Benjie: Okay, cool. And you said that it's an incubating project. When did it start and where are you guys in the CNCF journey?
Anna: We have been incubating for about a year. Before that we were sandbox; I think that goes back to probably 2018 or so. I am a new maintainer, I've only been contributing for two years, so I was not there when it was created. But 2018.
Benjie: Okay, so 2018 it's been around, it's been incubated for a year, you said. Are you guys looking to graduate soon or what's going on with that?
Anna: I think the dream would be that part of what we are working on would be integrated in the Kubernetes instrumentation. That would be awesome for us.
Benjie: Okay, so the dream is to get this all mainlined.
Anna: Streamlined, yeah.
Benjie: Mainline right in there. Okay, cool. So if I wanted to check more out or join a community, tell me about how I get involved with Keptn?
Anna: So, first of all, on the GitHub we always have the calendar with the community meetings. We have one community meeting every week, every Wednesday, alternating between European and American timezones, so every other Wednesday you can find us in a community meeting. The calendar is on the Keptn GitHub homepage, and on Keptn.sh you find the documentation and the information for contributors.
Benjie: Okay. So Keptn, K-E-P-T-N dot S-H, or GitHub.com/Keptn. If I wanted to come chat with you guys on Slack, which Slack are you in? Do you have your own Slack?
Anna: We have our own Slack for Keptn, and the CNCF Slack has a Keptn channel.
Benjie: Okay, cool. Thank you for the time, Anna, I really appreciate that. All right, back at the project pavilion, talking to Tony over at TiKV, or Titanium KV I believe. It's T-I-K-V. This is another project here, we're going to find out a little bit about it. So Tony, tell me a little bit about TiKV?
Tony: So yeah, TiKV is a distributed, transactional KV store, fully open source and written in Rust. It was initially developed as part of the TiDB storage engine. Internally, it provides a Redis-compatible API, and it uses the Raft protocol to replicate the data across replicas, so it's strongly consistent and transactional.
Also, we use RocksDB as the storage engine. It's very scalable in terms of data size. Some of our customers use TiKV as part of TiDB; they store something like 500 terabytes and have a few hundred nodes. So yeah, you can think of it as a standalone product, an open source DynamoDB, and it's very fast and very scalable. It's also very secure because it's written in Rust, so there are no memory-safety issues there.
That's pretty much it. And of course, if your workload is more than the KV scenario, then you can use our SQL layer, which is TiDB. It's built on top of TiKV.
Benjie: Sorry, so quickly, is this an operator? How do I install TiKV into-
Tony: It can be used with the TiDB operator, so it's a Kubernetes operator, or you can install it with our script. We have another tool called TiUP which can be used to install TiDB or TiKV on virtual machines. So in short, we support installing on virtual machines or in Kubernetes clusters.
Benjie: Now, this is a CNCF project, correct?
Tony: Yeah, yeah. It's a CNCF project.
Benjie: Is it incubating, sandbox? Where is it?
Tony: It's graduated.
Benjie: Oh wow. Okay, so you've graduated. When did you guys graduate?
Tony: I don't remember exactly, because I joined the organization about two years ago. It had already graduated before I joined the company, so at least two years ago.
Benjie: Okay, cool. So this is a very reliable, key value store. Tell me why, and this is maybe a little bit of a dumb question, but why wouldn't I just use the Kube API and etcd if I was doing something with... yeah, if I wanted to use etcd?
Tony: I think we are more scalable than etcd, and function-wise we are richer. Our API has richer functionality, and you can even push some computation down. We have the coprocessor, which lets you push computation down into the storage layer.
Benjie: Okay. So that means I can actually do transformations on the key values?
Tony: Yeah, that's true.
Benjie: Cool. Okay. Now, I see that you're using Raft. For those people that don't know the Raft protocol, I think it's a pretty cool one. This is actually a little bit of an unfair question, but maybe a 30 second overview of what you mean by saying that you use Raft?
Tony: Raft is a consensus protocol which allows you to achieve consensus across different nodes. So in this example, when I write the data to three nodes, the writer can know that the replicas got the data persisted. At least a quorum of the nodes have the data persisted, and that's guaranteed by the Raft protocol. If I implemented this manually, it would be hard to keep the data correct and very time consuming, so Raft is a mathematically proven protocol that achieves strongly consistent consensus among distributed nodes.
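Tony's point about quorums can be sketched in a few lines. This is a toy illustration of the majority rule Raft relies on, not TiKV code: any two majorities of the same cluster must overlap, which is why an acknowledged write survives node failures.

```python
# Toy illustration (not TiKV code) of the majority-quorum rule in Raft.

def quorum(n: int) -> int:
    """Smallest number of nodes forming a majority of an n-node cluster."""
    return n // 2 + 1

def write_committed(acks: int, n: int) -> bool:
    """A write is committed once a majority of the n replicas persist it."""
    return acks >= quorum(n)

# With 3 replicas, 2 acknowledgements commit a write, so the cluster
# tolerates the loss of any single node: any surviving majority is
# guaranteed to overlap with the majority that persisted the write.
print(quorum(3))              # → 2
print(write_committed(2, 3))  # → True
print(write_committed(1, 3))  # → False
```

The same arithmetic explains why clusters usually have an odd replica count: 4 nodes need a quorum of 3 and tolerate no more failures than 3 nodes do.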
Benjie: Okay. So KV, TiKV implements the Raft protocol which is a consensus mechanism so you can guarantee that whatever you've written is distributed and it's correct.
Tony: Yeah, reliably on the nodes. Yeah.
Benjie: Right. So that could be really helpful for distributed-
Tony: Yeah, in those situations where there's a network partition or a few nodes go down, right? So it can handle this problem automatically.
Benjie: Okay. So KV in particular is very useful for both sharded, well, maybe not sharded but distributed nodes across multiple places to make it larger, but also redundancy using Raft?
Tony: Yeah, yeah. That's true.
Benjie: Okay, cool. If I'm listening to this and I want to take a look at KV, how do I find it? Where do I find you guys?
Tony: In the GitHub, it's a project in the GitHub. So GitHub.com/TiKV is the repo.
Benjie: So GitHub.com/TiKV/TiKV. Is there a website as well?
Tony: Yes, it's at TiKV.org.
Benjie: Okay. So TiKV.org. Okay, cool. Do you guys still have community meetings or you're graduated so you don't do that anymore? How does that work?
Tony: It's continuously evolving, we keep adding features to it, and we basically have online communication for that. Yeah.
Benjie: So is there a Slack? Where would I get you on Slack?
Tony: I'm not sure we have the official Slack channel for that. I think all the communication is on the GitHub.
Benjie: Okay. So GitHub is where to go. All right, well, Tony, thank you so much for your time. Yeah, I'm going to make sure to check out KV.
Tony: Thank you very much.
Benjie: I've got Solomon Hykes back on the Kubelist Podcast, and we're just doing a quick check-in on where Dagger's at. They've had some pretty exciting releases here, so welcome back, Solomon. Give us a quick KubeCon update in 2023.
Solomon Hykes: Yeah. Welcome to Chicago. Let's see, since we spoke. Well, two big things. One is we launched Dagger Cloud, or we soft launched Dagger Cloud, so now you can actually buy something from us. We've got this open source engine that will run your CI pipelines as code. That's all free and open source. But then if you want to manage all those engines, see what's going on, visualize your pipelines, and also distribute cache across all the machines running these pipelines to make everything faster, we sell that as part of Dagger Cloud.
Benjie: Okay, so hold on. So with Dagger Cloud now you've got distributed caching, so does that mean when I'm building my Dagger pipelines locally I'm able to leverage that cache the same with the CI stuff?
Solomon: Yeah, you can do that. You can definitely share your distributed cache between developer machines. So if you're a large team, for example, and the first time you're running a really massive build, if someone else on the team already completed the same build, boom, it's going to be instant. The other main benefit is when you're running Dagger on an ephemeral CI runner machine. What happens is the storage of that machine is empty every time, so you don't get any cache unless you get it from somewhere else, and that's a complicated distributed systems problem, to orchestrate the movement of that cache data across machines. That's where Dagger Cloud helps: it just moves the data around so you get cache.
Benjie: That's really cool. So distributed caching for my CI, that's going to save me a lot of time. Especially with the whole promise of Dagger, being able to test these things locally. I have to ask, how do you do that securely? What's the authentication mechanism you use?
Solomon: We're trying to make that implementation as straightforward as possible so there's an API endpoint, you authenticate with an API token so you pass that token to the Dagger tool. Then it'll go and communicate with our service and ask for data and get it back, and that's it.
Benjie: Okay. So basically using the CLI to authenticate, and TLS, all that good stuff, so all of it's secure. Interesting. I think about egress costs when you talk about this, and that's always scary.
Solomon: Yeah. One question is what about egress costs? If I'm uploading all this cache data and downloading it back, is that going to cost me a fortune? The answer is no. The way we make that work is we basically detect where your engine is running, which cloud provider, and then we'll make sure that the data is uploaded to the nearest storage bucket so if you're running your CI in AWS US-East, that data is going to automatically get routed to a storage bucket in US-East, et cetera.
Benjie: Okay. Super cool. So this cloud product is actually conscientious of real workloads, that's great to hear. So it sounds like it's already getting pretty mature. Is there anything else interesting going on with the Dagger world on the open source side? Any new toolkits? What else is going on?
Solomon: Definitely, yeah.
There is one feature I'm insanely excited about, it's called Project Zenith. It's a future release of the Dagger engine, but it's actually already being used by our community because they're crazy and they just want to play with it. So this is a feature that adds the concept of cross-language functions, packaged in cross-language modules.
So that means in addition to writing your CI logic in code in your favorite language, Python, Go, TypeScript, et cetera, now you can package those functions that you wrote in a module that can be called by other functions written in other languages. So this is a big, big deal for the Dagger community because most DevOps teams have to connect artifacts and tools from different language silos to ship the apps.
So you'll have a frontend team that uses JavaScript and JavaScript tooling, a data team that uses Python, a backend team that uses Go, and they all like the idea of using Dagger and writing their own pipeline logic in code. But they don't agree on which language to use, they each want to use their favorite language. So then the problem is how do you collaborate? How do you compose one pipeline out of these different functions? Well, that's the problem that we solve with Zenith.
Benjie: Okay. That's super cool. So I can have a frontend data pipeline that runs my linter, it's JavaScript or TypeScript, it runs my linter and does some RegEx to deal with some CORS issue. Then my backend Go service, I can write my unit test integration, because of course in JavaScript you don't do unit testing. Sorry, JavaScript people.
I forgot about that. But I use some integration test and you have that, and that's all written in the Go module, but those two things can interact and say, "Hey, JavaScript. You're ready for the Go backend and whatnot." How do those things talk to each other? What's the mechanism there?
Solomon: So first you write your functions. If it's Go, a few Go files, a few Go functions, you use our SDK for that. Then you package that into a module, and it's basically a directory with your code and a little JSON file that says, "I'm a module and here's my SDK." It's like two lines. Then you point the Dagger tool at that module and say, "Load this module," and it's going to do a whole bunch of building and preparing, and then it's going to give you an HTTP API. A GraphQL API, to be precise.
Then you can query that API to call the functions in any way you want. So you can actually do that from the CLI, you can take any module from the Daggerverse. There's a universe of Dagger modules and you can load one from the CLI and look at it and say, "Okay, what's in there? Oh, there's a build function in there. What are the arguments? I need a directory. Okay, build with this directory over there," and it'll work.
You can also do that in code. So if you're writing code for your module, writing the function, and you say, "I want to call this other function from this other module," you'll just type dagger mod install for that module, and then it'll generate a little client for you in your language of choice with all the functions ready to call. But each time a function calls another function, it goes through this HTTP API.
Benjie: Okay. That's super cool, so definitely check that out. I also see that on your shirt here that you're proud to be a Daggernaut. A Daggernaut is the new term for a Dagger user, is that correct?
Solomon: Yes, correct. A Daggernaut is someone who uses Dagger to improve application delivery.
Benjie: Okay. Well, I'm a partial Daggernaut at times. Is there anything else to talk about? Anything else to announce? You're at KubeCon, these are very exciting things. But is there anything else? You got a lot of energy going, you seem very excited.
Solomon: Yeah, that's it. This module thing, really it's hard to explain but you've got to try it. It's just very addictive because we're all building now, whenever we have a little free time, we go and build a little module. I have a few side projects. One of them is a reimplementation of a Docker Compose compatibility layer, so you'll be excited about that.
Benjie: Tell me more, Docker Compose, what?
Solomon: Yeah. So you can add a module, it's on my personal GitHub, and there's a function in there, a Dagger function. You give it a directory, it'll parse the YAML, and it'll just run the Docker Compose project for you.
Benjie: All right. Well, you know I'm going to check that out very soon. Is there going to be a Dagger Hub to look at all these modules?
Solomon: Yes. There will be a marketplace of all these modules. Remember, it's a feature that's still in development so none of this is stable or supported, but we do have this website called Daggerverse.dev.
Benjie: All right. Spell that.
Solomon: Okay. Daggerverse, like the Dagger Universe, D-A-G-G-E-R-V-E-R-S-E, Daggerverse.dev. D-E-V.
Benjie: All right. So I want to check this out, I go to Daggerverse.dev.
Solomon: And search for Docker Compose.
Benjie: Well, yeah, I'm going to be looking at that.
Solomon: There's all sorts of crazy stuff in there. It's really fun. It's just fast moving, experimental, fun stuff.
Benjie: All right. Well, we're going to let you get back to the booth, Solomon. I know you gave a talk yesterday, I believe. Is that right?
Solomon: Yeah, yeah.
Benjie: What was your talk on? Just tell us real quick.
Solomon: The talk was about my experience with open source startups. Docker, Dagger, lessons learned, differences, et cetera.
Benjie: All right. Cool. I'm pretty sure that'll be published eventually, so we can all check that one out as well. Well, thank you as always, Solomon. Long time fan of all your things, and so really excited to hear what's going on with Dagger.
Solomon: Always fun to talk to you.
Benjie: All right. I am at the project pavilion with Sarah Christoff. She is the maintainer of Porter. Sarah, tell us a little bit about Porter.
Sarah Christoff: Yeah. So Porter basically takes your app and everything it needs, that includes environment variables, credentials and parameters, and all the other cool DevOps tools that you have laying around, and it puts them in a bundle. What a bundle is, is a container image along with a JSON schema, and that becomes an OCI artifact. So you can put that in any OCI registry, pull it down, and deploy it wherever you need.
Benjie: Okay. That's cool. And this is a CNCF project, so are you incubating? Where are you guys?
Sarah: We are incubating, and so Porter is a bundle installer, and this comes out of CNAB, the Cloud Native Application Bundle spec that is also within the CNCF.
Benjie: How long has Porter been a project?
Sarah: For four years.
Benjie: And when did you guys get into incubation?
Sarah: I don't know. Probably about two or three years ago.
Benjie: Okay, cool. So give me a little bit of a tangible example of how can I use Porter?
Sarah: Right. So we have a couple of really cool customers right now. One is Azure Research, which uses Porter to create trusted research environments. These research environments were used during COVID and given to the NHS, colleges, and other hospital systems to expedite COVID research between these people, and so that they could share information. It also gives them one environment that is shared across, so that we knew all of that research was reliable.
Benjie: Okay. Explain this to me a little more, though. How? What did Porter enable there?
Sarah: So what they did was make a bundle that had Python and any other security things they needed, and also any other parameters or credentials. They created basically, like, "here is our base image," and gave those to all of these hospital systems and colleges, and said, "Go ahead, go in here and do whatever researchers do."
Benjie: Okay, cool. So how would I compare Porter to a Dockerfile?
Sarah: So with a Dockerfile you're basically saying, "Here is my container image and how to build it." What Porter does on top of that is say, "Here are all of my deployment tools and this is what I want to happen, what I want to run on an install. Here are all my deployment tools when I run an upgrade or an uninstall."
But on top of that, Porter gives you the option to create custom actions, so you could say, with Terraform, Spin and deploying to Fermyon Cloud: what if I just want to do a test of a WASI app in my cloud? It gives you that option, so it gives you a lot of flexibility. On one hand it's super unopinionated, so you can kind of roll with it how you want. On the other hand, unopinionated stuff is sometimes sorta vast.
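A hedged sketch of what Sarah describes in a porter.yaml: declarative install, upgrade, and uninstall actions plus a custom action, here using the generic exec mixin. The schema fields are from memory and may differ by Porter version; the registry and script paths are hypothetical:

```yaml
schemaVersion: 1.0.1
name: example-app
version: 0.1.0
registry: ghcr.io/example      # hypothetical registry

mixins:
  - exec

install:
  - exec:
      description: "Deploy the app"
      command: ./scripts/install.sh    # hypothetical script

upgrade:
  - exec:
      description: "Upgrade the app"
      command: ./scripts/upgrade.sh

uninstall:
  - exec:
      description: "Tear everything down"
      command: ./scripts/uninstall.sh

# A custom action beyond the standard three, as Sarah mentions
customActions:
  smoke-test:
    description: "Quick check of the deployed app"

smoke-test:
  - exec:
      description: "Run the smoke test"
      command: ./scripts/smoke-test.sh
```

The bundle built from this is pushed to an OCI registry and then run with commands like `porter install`, as discussed below.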
Benjie: Yeah. That's exactly right, but this sounds really cool. Okay, so what does it look like if I'm using Porter in my tool chain as a developer? Is it a CLI? What is it?
Sarah: We aren't replacing anything. We are kind of wrapping everything up in a nice bow. So if I'm a developer and I just started a new job, and my first task is to get the app running locally, well, that's a great time to pull in Porter because you have a fresh machine.
So when you have to run an npm install, everything you have to brew install, and all of those pulls you have to pull down, you could just create a bundle then. So when you get more new people on your team, you don't have to have them do all these steps. You just hand them the bundle, have Docker running, and there you go.
Benjie: Okay. So it's a bundle, I do Porter up. What do I-
Sarah: Porter install.
Benjie: Okay. So Porter, it is a CLI?
Sarah: Oh yes, it is a CLI.
Benjie: Okay, cool. And then I just have it running locally, and this is completely portable, thus Porter. Is that right?
Sarah: It is completely portable, yeah. Agreed.
Benjie: Okay. And then you can just get this anywhere you want, okay, cool. And are you guys looking to graduate soon or what's the plan?
Sarah: I haven't told anyone yet, but we are looking at graduating. We do have a lot of the customers and all of the things we need, but we're looking for more maintainers to make sure when we do graduate we have that support.
Benjie: Okay. So you're looking for maintainers, that's interesting. So if I wanted to contribute or I wanted to get more involved, where do I go?
Sarah: Yeah. We are in the CNCF Slack, just go to the Porter channel. We have community meetings every other Thursday at 10:00 AM Mountain Standard Time, and we're very welcoming, we're very cool. We'll help you get started, even if you don't know Go. We're just a fun group to hang out with.
Benjie: Okay, super cool. And do you guys have a website, or?
Sarah: Yeah. So if you go to Porter.sh we're all up in there. You can come see what we're all about. We have a lot of great quick starts, and I think it'd be really great for people who are interested in WASM right now and learning how to deploy WASM apps.
Benjie: Okay. And is this a big WASM ecosystem thing? Or it's OCI?
Sarah: It's OCI, yeah. We're adapting to the WASM space. Because we are unopinionated we can do that, we're very flexible. So we're coming into that space.
Benjie: Very interesting. Okay, Sarah. Well, I really appreciate that, so check out Porter.sh, and I'm going to take a look at that more. Thanks, Sarah. All right, I am here with Bailey Hayes and she's going to tell me a little bit about wasmCloud.
Bailey Hayes: wasmCloud is a CNCF project. We recently applied to move to incubating, but right now we're a sandbox project. We've been around for several years now. It's a distributed app platform that helps you simply build complex applications with WebAssembly, hence the name wasmCloud.
Benjie: Cool. Okay, so give me a real world example of what I would use wasmCloud for?
Bailey: If you've ever had a situation where you need a hybrid cloud, like I have my own data center on-prem and I also want to connect to other clouds, with wasmCloud that's really easy and simple to do. The other reason you would be interested in using wasmCloud is that it takes away a lot of the complexities that developers experience when they're writing apps today. The code the developer writes focuses on their business logic, and they don't build all of the libraries and non-functional pieces into their Wasm module. That's separate and decoupled away.
Benjie: Okay. So is this a command line tool or how do I use wasmCloud? What do I actually do to use it?
Bailey: There is a command line tool. It's wash, the wasmCloud shell, right? Of course it's cute. Everything in the Wasm ecosystem typically starts with a "wa", so you'll see a common thread here. wash is our CLI that lets you interact with a wasmCloud host. You may have lots of wasmCloud hosts, and the way to orchestrate that is with wadm, the wasmCloud application deployment manager. Some people pronounce it differently, but I say "wadm" because it sounds like a punch, like, "God damn." Awesome.
Benjie: We got a Street Fighter reference right there.
Bailey: Yeah. So with wadm you use that to do a lot of the orchestration and automatic scale-out.
Benjie: Okay. So where does wasmCloud run? Do I run this on-prem myself? Is it in my Kubernetes cluster? Is it an operator? How does this work?
Bailey: Yes.
Benjie: Tell me more.
Bailey: You can run it on Kubernetes. It is compatible with Kubernetes, but not dependent upon it. So if you wanted to run it on your own VM without Kubernetes, you would run a command called wash up, and that will launch you a host. It spins up NATS, and then it connects across clouds.
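For a rough sense of the wadm manifests Bailey mentions, here is a hedged sketch of an application manifest, which borrows the Open Application Model format. The API version, component types, trait names, and image reference are recalled from memory and vary across wadm versions; names are illustrative:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: hello
  annotations:
    description: "Illustrative wadm manifest (not from the episode)"
spec:
  components:
    - name: echo
      type: actor
      properties:
        image: wasmcloud.azurecr.io/echo:0.3.8   # illustrative image
      traits:
        # Ask wadm to spread three instances across available hosts
        - type: spreadscaler
          properties:
            replicas: 3
```

A manifest like this is handed to wadm through wash's app deploy subcommand (exact invocation varies by version), and wadm handles the scale-out across hosts.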
Benjie: Okay. So I literally run my own cloud and I can deploy any WASM app I want to that, and that's what wasmCloud is?
Bailey: Anywhere, yeah.
Benjie: Okay, that makes sense. If I wanted to be involved in wasmCloud or check it out, where would I go?
Bailey: I would say there's the CNCF Slack, where you can jump into our channel. We also have our own Slack, and that's a great place to chat. We have a really great open community. We went from about 80 contributors a year ago to 440 now. So I would say we're a great community to join if you want to learn Rust, WebAssembly, the WebAssembly component model, distributed systems problems. We've got all kinds of goodies. I would say GitHub as well: GitHub.com/wasmCloud/wasmCloud. Our main readme there has all the information you would need.
Benjie: Okay. So GitHub.com/ W-A-S-M-C-L-O-U-D. Is there a website for you guys?
Bailey: There is. wasmCloud.com.
Benjie: W-A-S-M-C-L-O-U-D dot com? Okay, cool. Now it's getting a little loud in here so I want to stop here. But is there anything else that's really cool that you want to tell me about wasmCloud and why I should check it out?
Bailey: We recently added support for the component model, which is an up and coming proposal within the W3C. I am the WASI co-chair inside the W3C's working group for WASI. That's where we've been working on it for the past three years, and over the next month I think we're going to finally launch it, which is so exciting. So if you want to learn a new way to build software that's built on these open standards we're developing, wasmCloud is one of the best places to get started.
Benjie: Okay, cool. And I have to ask, I assume wasmCloud is Rust?
Bailey: Of course it is.
Benjie: You heard it here first. Okay, cool. Bailey, thank you so much for your time. Really appreciate it. I'm going to check out wasmCloud pretty darn soon. Thanks, Bailey.
Bailey: Thank you.