Ep. #44, Service Mesh Evolution with Idit Levine of Solo
In episode 44 of The Kubelist Podcast, Marc Campbell and Benjie De Groot sit down with Idit Levine, Founder and CEO of Solo.io, to explore the evolving world of service mesh technology. Idit shares her unconventional journey into tech, the founding of Solo, and innovations like Ambient Mesh that simplify Kubernetes networking. Dive into the challenges of microservices, open source collaboration, and the future of AI in networking.
Idit Levine is the Founder and CEO of Solo.io, a leading innovator in cloud-native networking and service mesh technologies. With a career spanning groundbreaking projects at Docker, Kubernetes, and Dell EMC, Idit is passionate about simplifying complex infrastructure. Her work with Solo has redefined how organizations manage connectivity, observability, and security in modern cloud environments.
transcript
Marc Campbell: In this episode of The Kubelist Podcast, Benjie and I had Idit Levine, the founder and CEO of Solo.io, to really talk about service meshes.
It was a really technical episode, but it was super fun. She clearly knows what she's talking about when it comes to service meshes, has a lot of opinions, kind of walks through the history of them.
Benjie De Groot: Solo.io is, it seems, home to the vast majority of maintainers for Istio. What I really liked about the conversation was that they addressed a lot of the issues I had early on with service meshes.
I really found some of the new stuff they're doing — Ambient, the sidecarless service mesh — to be super fascinating.
As a very, very big service mesh skeptic, as I would call myself — everyone knows that about me — I'm going to take another look.
So, I thought this was a really interesting conversation. Got to know the history of Solo.io, Istio, and just kind of the state of technology at these various levels. I was really impressed.
Marc: Yeah, it felt like a very unfiltered conversation about service meshes, kind of like transparent about the problems that we've had in the past with them and the complexity, and then kind of taking us to the state where we are today, and what problems are solved and what aren't solved.
I really hope you enjoy this episode, and we'll kick it off.
Hey, welcome back to another episode of The Kubelist Podcast. Today, we're here with Idit Levine, the founder and CEO of Solo.io, to talk about Solo and service meshes. Welcome, Idit.
Idit Levine: Hey, thanks so much for having me.
Marc: So, I'd love just to kick it off and get started by hearing the story about how you founded Solo.io.
Idit: Yeah, so I mean, look, I'm pretty sure that you're talking to a lot of founders, and most of them probably have a story about, or this like brilliant idea that they had in their mind, that definitely wasn't the case in Solo.
So, I can tell you that I worked in the open source community for a long time. I was at Docker when it started, and I was around the CNCF, Kubernetes, and KubeCon when they started.
So I was in that area, and what I know very well is how to come up with ideas — I'm very good at collecting data and kind of deciding what to do next.
Before I did Solo, I was actually working at a company called DynamicOps, which got acquired by VMware. We were doing cloud before cloud was called cloud.
And I moved on to more companies, like, you know, a startup that got acquired by Verizon, and we built the next generation cloud for them.
And then I also worked at one big company in my life, which was EMC. Even then I was in the CTO office, reporting to the CTO of EMC and now Dell. So I was always at the point of doing new stuff and innovation.
So when I started Solo, it was very clear to me that, you know, innovation and technology is something I know how to do, but what I was more interested in creating was a startup that would work differently. And when I say differently, I mean it's going to be all about the technology — no politics, just technology. And that's what I did.
So I started the company. When I looked at the market back then, it was just when people were kind of leaning toward Kubernetes, but Mesosphere and Docker were still in the picture, and what I identified back then, after I got the money to start a company, is that networking would become the next big problem to solve.
And the reason is that if you're taking something that is, you know, a big binary and cutting it into small pieces, somehow you need to reconnect them. So I understood that that's going to be a very, very important piece of the infrastructure.
So I went and attacked it, even though if you look at what I was actually raising money for, I think back then I told the VCs something about serverless or something, whatever they wanted to hear. Then I got the money and decided what I wanted to do.
So, then I decided to work on networking. But again, that was always my strength to figure out what is the gap in the market and how to innovate. That was all my career, that's what I was doing.
Benjie: Okay, so that's super interesting and we're going to get back to Solo in a second, but I want to zoom back, back for a second. I know Solo is an open source related company. We'll talk about how in a second, but I want to go back to the beginning with you.
When did you get started in open source? Kind of talk about how your career took off, maybe a little bit about your background, where you grew up and how you got into computer stuff.
Idit: Oh yeah. So again, nothing in my life is what you would expect if you're looking at a founder — if you're comparing me to someone like Mitchell Hashimoto, it's a very different lifestyle.
So like to me it was very different. I was a basketball player, you know, I was honestly way more passionate about sport and I wasn't actually learning and studying, but I was always the person that everybody said, "She's very smart, she just doesn't care," right?
So I went to computer science because I didn't know what else to do and all my family was there like my, my brother, my sister and I kind of like said, okay then.
First of all, I come from a family that doesn't have a lot of money. My parents did not even finish high school. So I needed to basically support myself when I went to university.
And I also wasn't 100% sure what I wanted to do. So what I did was say, well, you know, let me teach myself some computers and see if I like it. And that's exactly what I did.
So I took a book, I taught myself, and I started working in the industry before I actually had a degree — building a lot of websites in the dot-com days — and then went directly to the backend, all the way down, you know, the framework if we needed to...
So I basically started all the way on one side and went all the way down. And, you know, generally I'm a very competitive person — that's why I was into sports.
But I also really like innovation. I'm bored easily, so I really need to see what's next, what's next, what is cool, what we can do. So this is why I went to computer science, went very, very deep into the backend, always worked more at startup companies, and did innovation most of my career.
In open source, I think I started around when Docker started — I was very excited about that movement. I got very close to it, you know.
Like, I have a very close relationship with Solomon Hykes, the founder. I worked a lot with them and got excited about it. And then, you know, when Kubernetes started, as I said, I was already speaking at KubeCon.
At EMC, when I was there, interestingly enough, I kind of hacked up a team with two more people and we did innovation in open source, and we did some projects that were very interesting — back then it was unikernels.
Marc: Yep.
Idit: We were basically building the Docker for unikernels, and that was a relatively very successful project, which gave us a lot of visibility. So that's why we knew a lot of people.
And at that point, I could have basically worked for whoever I wanted — Docker, Google, they all made me offers.
But I also kind of felt that I wanted to build something for myself, surrounded by people who are as passionate as me about technology, and build something where all of that isn't the focus, you know.
The focus is to make customers successful and users successful, because we are passionate about the software itself. And I think we did very well. I mean, relative to the size of the company when we started, people came here, the engineering team was amazing.
You know, we very quickly found customers that we solved a real problem for, and they helped us build it. So altogether, still to date, I can tell you very, very emphatically: in Solo, there is no politics.
Benjie: That's wonderful. We're going to dive into that. So you were actually committing code or working and actually we had Solomon on this podcast a while ago, maybe a year or two ago.
And so yeah, we know all about the Docker story and the Kubernetes, that whole thing. We got into that in a prior episode, which is super interesting.
So you're the CEO of Solo, but you do have a highly technical background.
Idit: Oh, I'm an engineer.
Benjie: Yeah, you're an engineer.
Idit: I'm not writing a lot of code right now, but I'm very, very technical and I'm an engineer and like, you know, I'm working very closely with engineering.
Marc: Like never writing code anymore or like just not as much?
Idit: Nah, not much. I wish I had more time, but I am definitely, you know, more on the architectural level, like I'm aware of every component in our product and how and why like, you know, like I'm very involved. I'm just not writing code myself anymore.
Benjie: I understand. I too used to write code and now I'm unfortunately a CEO of a company and I gave myself the title of chief architect.
Idit: I have an amazing chief architect, and honestly, an amazing engineering team in general, so.
Benjie: Well, we'll find another title to make you feel better — senior chief architect. So, okay, we've been dancing around it. What is Solo? Tell us what it is.
Idit: Yeah, so as I said, when we started Solo, I got the money, and I recognized that networking would be the biggest problem in the market.
Ridiculously enough, I will claim that it's still the biggest problem in the market right now, because I don't think we've fixed it, let's be honest. So let me explain what I mean by that.
Usually, when you're talking about networking, you are always talking about three things, right? The first thing that you're talking about is observability. Like people saying, "Oh, you know the network, I want to see what's going on."
Definitely very important when you're talking about a distributed application, right? Because then your code is everywhere — you don't even know what the request path is, because it can go anywhere.
That's very, very useful, so that's the first thing. The second thing you're usually talking about is security. Now every communication happens on the wire, basically, so theoretically anyone can come in the middle.
So it's very, very important to make sure that it's secure and zero-trust. And the last one is, of course, you still need the connection, right? So the connection itself is very important.
So if we go back to how people did it back in the day, usually they embedded libraries inside the code. Not the most efficient thing.
First of all, definitely when you're in an organization, a lot of those libraries are basically, let's be honest, operational code, not business logic.
So now there's no real separation between "I am the IT organization, my job is to operate" versus "I'm an engineer and my job is to make a damn good application run."
How do you make sure that there's no overlap in the responsibility? So putting it there is not great. Therefore, a lot of people went to the next level, which is API gateway.
An API gateway is not going to work east-west, but at least the idea of the API gateway was that for north-south, everything going to your cluster, everything coming into your infrastructure, can be managed and abstracted by a proxy.
So all the configuration for who is allowed to reach that code, you know, and metrics and so on — the proxy takes care of that. So the API gateway was kind of the first evolution, and it took time, but it's a thing that everybody is using today.
Okay, so that worked really, really well when you're outside your infrastructure, outside your cluster, going in. But what's going on when you are inside, right?
Because we took that binary and split it to small pieces so you're already inside. So there is also traffic going right now between the microservices. Again, it is still important to be secure and observe and connect.
How do you make sure that, you know, this is being done — and again, not done by the application developer, but by the people who own those responsibilities in your organization? That's where the service mesh came into place, right?
So can we somehow abstract these things away from the developer and give the power back to whatever DevOps or platform team is there? That's basically what that move was.
Okay, but then again, there are a lot of questions about how it should be done. What is the implementation of all of this? The first service mesh was done by Buoyant, the Linkerd guys.
They basically brought it from Twitter — they were a very heavy Java shop — and so they did this notion of a proxy; in service mesh, there's the concept of the sidecar.
It was this monster proxy that was very, very heavy. Okay, so that was version number one. Then Google came in with Istio, and Istio was the next version, and that was basically all about the sidecar, right?
And then a lot of people followed, creating a lot of sidecar service meshes. Okay, so that was kind of a very nice evolution.
Benjie: Idit, sorry to interrupt you. Quick question. Tell me about years, tell me about years of this. I think I know but I'd love to hear, I think it's helpful for folks, so-
Idit: When did it start?
Benjie: Yeah, Linkerd, the first one was like 2016, 2017, is that right?
Idit: Yes, yes. I think this makes sense.
Benjie: Okay.
Idit: And then I think, I'm not 100% sure, but I think around 2017 or 2018 was the first Istio announcement, in my opinion. And again, the idea was that it was basically going to re-envision the networking market.
The problem is — and I think this is what we got wrong as an ecosystem — that we forgot who we were building it for, and we were focused way more on making it cool.
I think in general at that time, everybody was very busy with "we don't want abstraction, we just want it as complex as we can, and we want to go all the way to the kernel" — which has totally changed now, by the way, right?
I'm totally the opposite, but that's what people wanted — that's what the cool guys wanted to work on. So we built it relatively hard, right? I mean, the sidecar is a complex concept.
That means there are a lot of dependencies. So every time you wanted to put in the sidecar, for instance, you needed to redeploy the application. The whole experience wasn't great. I think the first API definitely wasn't great.
So there was a lot of the challenge of: we, the cool guys, tried to build something very cool for ourselves and forgot that we are not the customer. The customer is actually someone else who has to run it, and there's a lot of complexity involved in this thing. So I think we totally got it wrong.
Benjie: Idit, we're going to, real quick.
Idit: Yeah.
Benjie: So we have all types of folks in our audience here. And just to talk real quick because you're an expert on this, we're talking about the Kubernetes ecosystem, obviously.
There's these pods and there's these sidecars. Will you tell us just high level for folks that might not be as familiar with the mesh ecosystem, we're talking about Kubernetes, we're talking about pods, we're talking about sidecars.
Will you just high level explain to me what a sidecar is and what a pod is, relative to the ecosystem just to set the base?
Idit: Yes, yes, yes. So I mean, Kubernetes is basically an orchestrator, right?
So in a nutshell, you have a container, a Docker container, and you need to somehow run it on some platform. You need something to do this, and Kubernetes is the best way to do it.
There's the concept of services there, right? Which is basically an abstraction on top of a bunch of containers that it will load balance between, right?
Okay, so that's kind of the idea in Kubernetes itself. Okay, so now the thing is this: in Kubernetes — I'm not saying it's a good pattern, but this is a pattern that unfortunately got introduced.
It's basically the notion of the sidecar, and the idea with the sidecar is: next to this container of whatever service of the application, can I put something that will be in charge of something, right, in the same pod?
For instance, in the service mesh ecosystem we were using it pretty heavily. What we were doing is basically putting a proxy in a sidecar next to that application.
And what we do is basically play with the iptables of your operating system to say that everything going to this application or from this application has to go through that proxy — everything has to pass through that thing.
Now why is that so important? Because you're putting all those things everywhere next to your application. Now basically, you abstract the network.
What do I mean by abstracting the network? A proxy, for those who don't know, is a very, very, very powerful tool. It's basically a thing that can do a lot of stuff, but it's also pretty dumb — it will do whatever you tell it.
So there is the concept of the data plane, which is the proxy itself, which we're putting in the sidecar — and again, that's where the data is actually flying, right?
But there's also the concept of what's called the control plane, and the control plane's responsibility is to watch the environment all the time, and every time something changes, it takes the configuration — it's exploring, translating, creating the snapshot for the proxy.
And then those are the instructions, and it basically pushes them to all those proxies, so that when a request comes, the proxy knows what to do with it. It will say, "Ah, this is Idit, let's go for it, she can go. But this is John, you cannot let him through."
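To make the control plane / data plane split concrete, here is a minimal sketch in Python. It's a toy illustration only — not Istio's actual machinery or Envoy's config protocol — and the class names and policy format are invented: a control plane watches for changes, builds a config snapshot, and pushes it to dumb proxies that simply allow or deny callers by identity.

```python
# Toy illustration of the control-plane / data-plane split described above.
# Not Istio or Envoy code; the class names and policy format are invented.
from dataclasses import dataclass, field


@dataclass
class ConfigSnapshot:
    # caller identity -> set of services that caller may reach
    allow: dict[str, set[str]] = field(default_factory=dict)


class Proxy:
    """Data plane: dumb and fast, does exactly what it was told."""
    def __init__(self) -> None:
        self.snapshot = ConfigSnapshot()

    def receive(self, snapshot: ConfigSnapshot) -> None:
        self.snapshot = snapshot                      # config pushed from above

    def handle(self, caller: str, target: str) -> str:
        if target in self.snapshot.allow.get(caller, set()):
            return f"forward {caller} -> {target}"
        return f"deny {caller} -> {target}"


class ControlPlane:
    """Watches for changes, translates policy, pushes snapshots to proxies."""
    def __init__(self, proxies: list[Proxy]) -> None:
        self.proxies = proxies

    def on_change(self, allow: dict[str, set[str]]) -> None:
        snapshot = ConfigSnapshot(allow=allow)
        for proxy in self.proxies:                    # push; proxies don't think
            proxy.receive(snapshot)


sidecar = Proxy()
ControlPlane([sidecar]).on_change({"idit": {"payments"}})
print(sidecar.handle("idit", "payments"))   # forward
print(sidecar.handle("john", "payments"))   # deny
```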
Benjie: Right, so high, high level, I've got a Python application and it needs to talk to a Redis service and by using the sidecar, I can force that Python application, the only way it can get out to Redis is through that sidecar.
And in that sidecar, which is your proxy, I can force it to do certain things like mTLS and all these other like things.
And the other thing is, is say that Redis now becomes corrupted for some reason and Kubernetes puts a new Redis that I need to route to, the sidecar, the mesh in this particular example, will automatically know where to route.
And that's basically what a sidecar is in our ecosystem.
Idit: Yes, and it can do way more — basically the proxy is as powerful as a gateway proxy. So it can do whatever you can imagine. It can do blue-green deployments, it can do canary, it can do retries, timeouts — it can do whatever you basically want.
It can also do mTLS, right? So you can actually configure it to do whatever you want. It's very powerful. It has a lot of stuff related to, you know, networking functionality.
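For a concrete feel for one of those traffic-management features, here's a tiny, purely illustrative Python sketch of weighted canary routing of the kind a proxy might apply. In a real mesh this is declarative configuration handed to the proxy, not code you write; the weights and version names here are made up.

```python
import collections
import random


def pick_version(weights: dict[str, int]) -> str:
    """Weighted canary routing, e.g. send ~10% of requests to v2."""
    versions = list(weights)
    return random.choices(versions, weights=[weights[v] for v in versions])[0]


# Simulate 10,000 requests with a 90/10 split between v1 and v2.
counts = collections.Counter(pick_version({"v1": 90, "v2": 10}) for _ in range(10_000))
print(counts)
```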
Marc: And just to state what is maybe obvious here — there are a lot of benefits actually, and we should talk about them — but one benefit of the architecture that Benjie just laid out is that the application doesn't have to know about any of it, because it's happening at runtime, in the platform.
My application thinks it's talking to Redis as a developer, I just like talk to Redis. I have the two pods running, I have the two services running.
When it goes into production, this gets injected in at the network layer, at the runtime layer. So I don't have to go modify my application stack to do that, that's-
Idit: Not at all. The application is only focusing on the application's business use cases. And we abstract that, you know, that operational tooling, let's call it, and put it under the proxy's responsibility.
So now, 100%, you as an engineer do not need to worry about it. The platform team is responsible for making sure that it's zero-trust.
Marc: And then that platform team can add observability at that proxy layer. They can add security in like at that layer and like let's talk about how that actually happens often.
Is it eBPF? Like what technologies are you using to like do that at the proxy?
Idit: So the proxy — most of the time, you actually want it in user space, honestly. eBPF is actually extremely, extremely limited in terms of what you can do, because it can only see layers three and four, right?
It can't see layer seven. It doesn't understand requests. So you get way more advantage by actually putting the proxy in user space, because that way you can also do layer seven functionality.
So that's not it, you know... So let's go back to what makes a good proxy. The most famous one today is basically the Envoy proxy. Envoy came from Lyft; it's C++, async, on purpose.
You know, when you're talking about the data plane, performance is extremely important, right? If you're writing it in Go, it's not as performant as it can be in C++.
So you basically want to make sure this is in a language that is very performant, you know, so that's one thing that is very important. I think that, you know, a lot of people before that were using Nginx or HAProxy, all written in C or C++.
I think with what Envoy introduced — and Matt Klein did it so well; he had built this kind of thing before at different organizations — he understood a few things.
Number one, that it has to perform. Number two, back in the day, with those proxies you needed to give them a configuration and then restart the proxy so it would pick up the configuration.
So every time you wanted to make a change, you basically needed to shut down the proxy and bring it back up, which again is fine if you don't have a lot of changes.
In the previous environments, everything was very static, 100%. Not the case in our environment, right? Kubernetes is up and running, stuff is changing all the time. So you do want it to be API-driven, and that's one of the biggest advantages of Envoy.
And the last one: Envoy introduced the idea of a filter chain. When a request comes in, it goes through a chain of filters — some are built in and some you can create yourself. So it's kind of like a plugin model for the proxy. Now you can actually customize it for your own needs, or invent stuff, and so on and so on.
So that's basically the stuff Matt Klein designed really, really well — after building it like three times, he learned.
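Conceptually, a filter chain looks something like this toy Python sketch — not Envoy's real C++ filter API, just the shape of the idea: each filter can inspect, modify, or reject the request before the proxy forwards it, and some filters are "built in" while others are user-supplied.

```python
# Toy filter chain: a request flows through a list of filters in order.
# Purely illustrative; filter names and the Request shape are invented.

class Request:
    def __init__(self, path: str, headers: dict[str, str]):
        self.path = path
        self.headers = headers


def auth_filter(req: Request) -> Request:
    """A "built-in" style filter: reject requests with no credentials."""
    if "authorization" not in req.headers:
        raise PermissionError("no credentials")
    return req


def add_header_filter(req: Request) -> Request:
    """A custom, user-written filter: annotate the request."""
    req.headers["x-handled-by"] = "toy-proxy"
    return req


FILTER_CHAIN = [auth_filter, add_header_filter]


def handle(req: Request) -> Request:
    for f in FILTER_CHAIN:        # each filter can inspect, mutate, or reject
        req = f(req)
    return req                    # then the proxy forwards upstream


out = handle(Request("/orders", {"authorization": "Bearer abc"}))
print(out.headers["x-handled-by"])
```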
So I think Envoy was a huge leapfrog ahead of the other proxies, like Nginx and HAProxy. That worked really well as an all-purpose layer seven proxy. Right now we're doing another leapfrog, which is basically a sidecar-less mesh — I can explain that.
In that sidecar-less model, we're basically separating layer seven and layer four, on purpose, because we want to make sure it will be very fast to adopt layer four. Layer four is way simpler, honestly.
So we built a Rust proxy that is very dedicated to, you know, zero-trust — we're calling it ztunnel because of that, right? So zero-trust and very basic layer four functionality. Rust is amazing, it's fast, it's purpose-built.
We can go faster with it, so you can see some crazy innovation there. That way we separate it: if all you want to do is layer four, you use that proxy.
But if you want to go to more complex use cases, the ztunnel will call out to an Envoy proxy. So that way we separate it.
I think that gives you amazing functionality. Like, if you go right now with what we did with Ambient, which is the new Istio mode for sidecar-less, we have a customer that seriously went into production in less than three weeks, because it's that simple — and they went with it.
So I think the advantage of it — and I can talk more about it — is that if we're not using the sidecar model but the sidecar-less model, which basically means we're putting one proxy per node, the user experience is way easier.
Marc: So sidecar-less, like it's not going back into embedding into the application, it's the proxy per node. So is that like, is there like a daemon set that you're running inside the Kubernetes world?
Idit: Exactly. So it is. But that's only for layer four, and that's very important to say — it differentiates us, for instance, from what's been done in the Cilium ecosystem. So I want to kind of double-click on this, because it's important.
There is a problem with taking a proxy and putting it on one node, and why is that? Because you have a lot of applications there, right? And in layer seven, you can potentially do a lot of very complex stuff.
You can write a WASM filter, you can do a lot of very expensive operations in layer seven. You're not going to do that in layer four. Layer four is simple, it's very, very simple — there's not a lot to do there.
So we can't take a proxy that also does layer seven and put it on the node, because what will happen is a noisy neighbor problem, and one thing can potentially even take the proxy down because it's so busy, and then you've lost all networking for your connections.
So what we did was separate that on purpose. We basically created a very small daemon set proxy. It's only doing one thing and doing it very, very well, which is basically layer four, which is very simple.
It's only going to do mTLS, only very basic layer four stuff. Now when you have that, that's fantastic. So it's one per node, and the user experience is insane because, as you said, it's a daemon set. So it's very simple and you get quite a lot from that.
But if you want layer seven, for that you probably will want to separate it by, you know, a namespace or something like that. Because you'll want it per host or service account — you'll want to separate it out to a group that owns that functionality.
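A rough sketch of the traffic split being described, in Python and purely illustrative: every flow gets layer-four/mTLS handling in the node-local ztunnel, and only destinations with layer-seven policy attached are sent on through a dedicated layer-seven proxy (what Idit later calls a waypoint). The service names and the policy set here are made up.

```python
# Toy sketch of the Ambient-style split: L4/mTLS always in the node-local
# ztunnel, L7 only via a dedicated proxy when policy requires it.

L7_POLICIES = {"payments"}          # destinations that have layer-7 policy


def route(source: str, destination: str) -> list[str]:
    hops = [f"ztunnel(node of {source})"]          # always: mTLS + basic L4
    if destination in L7_POLICIES:
        hops.append(f"waypoint({destination})")    # only when L7 rules exist
    hops.append(destination)
    return hops


print(route("frontend", "catalog"))    # no waypoint needed
print(route("frontend", "payments"))   # goes via the payments waypoint
```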
Benjie: Okay, so if I understand this correctly...
Idit: Sounds very complex but it's actually extremely simple.
Marc: No, it actually is, right? I can appreciate that explaining it is complex, but it kind of maps together into a simple architecture in the end. But I think we do want to talk more about it to help paint that picture.
Benjie: Right? So by the way, am I supposed to use this sidecar-less thing for my mTLS, but also use the sidecar version for my application layer stuff?
Idit: No, so I mean, you don't need the sidecar anymore, basically, right? Because what we're doing is this: we're saying, look, what we want to make sure is that the first and most basic thing you're getting is that everything needs to be mTLS.
I think this is a prerequisite for a lot of the stuff. This is the zero-trust kind of thing: you want to make sure it's mTLS. So what we did, we said okay, everything going out of an application will go through this, and we make sure it's mTLS.
So that's always the ztunnel's responsibility. Now, a lot of customers — and we have a lot of them — only need this; that's good enough for them. They don't care about the observability, they don't care about the connectivity. What they really, really care about is making sure it's mTLS, and for a lot of customers that's all they need.
So now if you take those customers — all they need is mTLS, very simple, basic functionality — and you bring them to the sidecar architecture, that means they need to put a proxy next to every application, which is expensive, by the way, right?
It's another piece you're putting on your hardware, it also gives you a performance regression, and it's just overkill, because every time you want to actually put in a sidecar, you need to re-deploy your application.
So every time, for instance, there was a CVE in Envoy, I needed to call my application team and say, "Hey guys, listen, there's a CVE in the proxy, we'll have to restart your application."
No one likes that, right? The whole idea of what we tried to do with this software is to abstract and separate the business logic from the operations, and in a way we didn't really do that with the sidecar, because they're still very, very attached together and they really need to communicate and so on.
So what we did, we said okay, the customer only needs mTLS, all this other functionality is overkill for them. Let's just put one daemon there that will make sure to do it. Boom, you have mTLS across all your infrastructure. And I'm not kidding you.
When I say "boom," I mean you can put it in your infrastructure in like less than a minute. Boom, you're all mTLS. Simple, unbelievably easy to use, and the application doesn't even know — suddenly it's mTLS.
You don't even know. We seriously have customers calling us saying, "Prove to us that it's mTLS."
Marc: So yeah, exactly. So it doesn't come with that, you know, real or perceived complexity of Istio — switching from services to virtual services and all those kinds of changes. You just deploy it and it works, is what you're saying.
Idit: Seriously — your application has been running for maybe two years, you come in and say, "I want to install the mesh," it installs those ztunnels, and that's it. You're getting mTLS for everything.
If you want to say, "I don't want this to talk to that," or whatever else you want, you get it immediately — you don't need to do anything. It's that simple.
Marc: Is that all open source?
Idit: Yes, it's called Ambient and it's the ambient mode of Istio, so yeah, it's all open source. It's part of the Istio community. Solo specifically partnered with Google, and we built it together. Think about it like Istio 2.0.
Marc: Okay, I want to go a little less technical for a minute. So you're contributing pretty heavily into the Istio. Like do you have full-time engineers that that's their job is just contributing upstream or how do you actually manage that organizationally?
Idit: Yeah, so we do have a team where that's all they're doing. And don't forget, we also have a gateway that is built on Envoy.
Every time one of our customers has a problem related to this, we basically contribute the fix back. With Ambient specifically, we got a lot of feedback from our customers about the complexity of Istio. So we had that vision.
By chance we discovered that Google had the same vision. So we partnered behind the scenes — I had a team dedicated to working with the Google team, we built it together behind the scenes, and then contributed it to the community when it was ready.
And we continue right now with the community, contributing a lot, and you know, we are the lead contributor right now.
Benjie: Okay, so by the way we kind of skipped over this, but Solo is Istio corporate, kind of, right?
Idit: Not only. If I have to describe what Solo is doing: Solo is basically doing the networking and securing it, right? You're moving to the cloud, the market got disrupted.
Right now, you need to do stuff differently. You're basically taking one tool to keep your security, another to automate your clusters, right?
Benjie: Right.
Idit: You're taking Solo to take care of your networking.
Benjie: But you are, or some folks from Solo are like the maintainers of Istio, correct?
Idit: Oh, yeah. And we also have people on it, but honestly, I know people make this association with Istio-
Benjie: It's not one-to-one. Yeah, it's not one-to-one.
Idit: I mean, first of all, it's a community project for real. It's not only Solo working on this. We're working with Google; Microsoft is a big contributor there, Red Hat too. So there are a lot of other companies.
It's true that Solo is the major one and we are very dominant right now. But this is only part of our product. If you're looking at our product, we also have a gateway that is not based on Istio — it's all on Envoy, not on Istio, it's all on Envoy.
And you're probably using it every day without knowing. We actually have a lot of customers using only this, without the mesh.
We have a lot of customers using only the mesh without this, and we have customers that basically buy into the whole vision of both of them together.
Benjie: Sure, absolutely.
Idit: Yeah. But I mean, we contribute to whatever we need. We have people contributing to the Kubernetes Gateway API, we contribute to Backstage — whatever we need, we're doing it, right? I mean, to me this is like-
Benjie: How many developers are at Solo? How many developers?
Idit: Oh, the majority of the team is developers. I mean, we have go-to-market and we're growing the go-to-market drastically because we're doing really well right now. But in terms of the structure of the engineering group, I would say maybe 70% of the company is engineers.
Marc: Okay.
Benjie: Wonderful.
Marc: That's a good startup by the way. That's good. You should be heavily engineering.
Idit: What can I say, right.
Marc: Exactly.
Benjie: So by the way, I want to go way back for a second. I remember KubeCon San Diego, at Marc's booth, at Replicated's booth.
Marc: The one right before Covid.
Benjie: The one right before Covid. Was that 2019? That was 2019. And Shipyard had just started and I was playing with Istio a whole bunch and I got to be honest, I kind of hated it. The main reason I hated it was because it would literally take twice as much cluster to run.
So whatever service I had, I'd have to double the capacity to run Istio. And I'm trying to say this with love and respect — I think it's super cool.
Idit: No, dude, we got it wrong. I'm the first one to admit it.
Benjie: No, I'm not. This is not a hit job on you.
Marc: It's a learning like we have to like, you know, maturity takes a while in projects, right? We have to solve hard problems and then simplify.
Benjie: Absolutely. So that was the first iteration of Istio. Now I know that there was an Istio 2.0
Idit: There's not really — I mean, I'm calling it 2.0, but we did announce, together with Google, something called Ambient. And Ambient, the idea of it, is actually really what we always planned it to be.
The service mesh should be ambient to your cluster, right? You just get all the functionality without needing a PhD to operate it, which is what you need with regular Istio.
So 100%, the advantage that the new architecture gives you is a few things.
Number one, as you said, is cost. Now, instead of having a sidecar next to every application, you have one simple Rust daemon set that, by the way, runs on the extra capacity of the cluster itself.
So it basically utilizes that — it's not costing you more than your cluster itself; the cost is negligible. So cost, 100%. We seriously have a customer that told us that the money they will no longer need to pay the cloud provider for the Istio sidecars is going to sponsor our license. That much.
Marc: Nice.
Idit: Right? So yeah, 100% it's better. I think the second thing that I feel is very, very important is speed and performance. You know, people are very confused about eBPF. Now look, I was doing operating systems. I'm telling you, I know and appreciate what should be there.
eBPF is a very good thing but a very limited thing. There are a lot of limitations there. We shouldn't be overly optimistic. I kind of feel that we have, you know, a hammer and we put it everywhere because we're very excited about it.
Let's be honest, it's not in user space. So everything related to, you know, requests, all this stuff — it's not relevant there. That's not going to be done in the kernel. I know there was this perception being spread that you can take the proxy and put it in the kernel.
Now, can you take Envoy and put it in the kernel? You do not want to do that. That doesn't make any sense, right? This is just confusion and marketing being-
Marc: Is that because it's like just so, the complexity of it or is there just really no value? You're not going to get any performance wins or?
Idit: No — I don't even think that you can, sometimes, right? I mean, think about it: layer seven isn't the stuff that you have at layer three. Your operating system is working at layers three and four. You don't have the concept of a request, you know what I mean? You're very limited, so it doesn't make any sense.
Benjie: Well, I mean, I think to get what you want from the layer seven stuff, you'd then have to go back to the old model of embedding stuff into the application itself to tell the proxy-
Idit: That's option number one. And option number two, which is what the Isovalent guys tried to do, is basically saying, "Well, when we need to, we will translate it to layer seven."
But if you're translating to layer seven, what's the point? You know what I mean? Eventually it's not in eBPF, so what's the point? So there's a lot of complexity, and by the way, I think there was a lot of confusion created in the market, on purpose, that eBPF will solve all the problems in the future.
I think eBPF is a very powerful tool, don't get me wrong. I'm a big fan of it. We wrote a lot of stuff there; it makes a lot of sense. I would just say, the concept that it's going to replace the service mesh — even doing mTLS there, God help you, how are you going to do that?
It's going to be horrible. There is a lot of confusion. I can explain the technical part of it to you, but honestly, it's pretty much a joke.
Marc: But so you're doing everything in user space including like observability?
Idit: Not everything. I mean, you know, I'm moving the packet to where I need to, and if I need to, I will use it — like, the CNI should be there, right?
So you do want your CNI to be in eBPF, it makes sense. I will argue that your policy maybe should actually be in user space, because it's something you might want to customize, and that will be very hard if you put it in the eBPF code.
So again, we do need to use it — eBPF is very powerful and it does help us get better performance sometimes. But I can tell you that running Ambient mesh, we're getting the same performance as an eBPF CNI — it's not less performant than that, actually sometimes it's more.
Marc: I think taking that super pragmatic approach — let's look at the problem we're having, and here's the set of tools we have that we can use to solve it — is good.
Idit: Exactly. And I feel we need to, mainly we need a lot of education. I will be honest.
Marc: Do you think that there's like, the story that Benjie shared about the early days of playing with Istio and kind of the cost and the overhead and the confusion and the complexity of it kind of turned him away from it.
Do you think there's like a, you have to like kind of reeducate the market a little bit that this is not as hard anymore? We've done a lot to make this easier?
Idit: 100% and we also know that we have only one last chance. But here's the way I'm looking at this. When I told the story, if you remember in the beginning, I specifically said I don't think we fixed that problem in the networking.
Let's be honest. Today, most customers are still using something like Apigee.
Marc: Mm-hmm.
Idit: And doing hairpinning — every request going outside of the cluster. That does not make sense. If you look at what the Cilium team tried to do, they basically tried to absorb the service mesh into the CNI.
But let's be honest, it's not a service mesh. I mean, even mTLS is not functionality that can be created that way.
So again, it's probably not a solved thing. I think what we did with Ambient exactly solves that pain. There were three pains that I saw — and again, let me know if there are more. The first is cost.
That was eliminated immediately by moving to Ambient, because now it's a daemon set, it's not costing you anything. The second thing is performance and security.
Again, no problem with Ambient — it's as good and as secure as the sidecar. So it really comes down to the last one, which I'm going to push as much as I can: to me, the biggest advantage of Ambient is the operational side.
In my opinion, the problem with Istio was, as I said, that to be successful with it you needed to have a PhD. Seriously, it's so hard, so hard to understand. It was just crazy.
I think, again, we started it six, seven years ago, when the market was, "Let's just do stuff as complex as we can," and everybody was, "No, don't give me abstraction, I want to go all the way to the kernel, I want to do all this stuff."
That made sense then, but it's definitely not what we see in the market anymore. When I look at the market right now, what I see is that people really care about adoptability — they want to make sure people can adopt it. The platform team is building this amazing platform, and they want to make sure that other people in the organization will be able to use it easily.
The application team does not care about Kubernetes, and they do not want to know about all this stuff. So with Istio, how do you make it easy to consume?
And I will tell you, this is why, in my opinion, projects like Backstage are so trendy right now. Why so trendy? Because people don't want this complexity. They want to make it very easy to adopt.
So I feel that this is, to me, the biggest thing about Ambient, and I can tell you, talking to our customers, it's the same thing — it's about the ease of use.
I think this is something other people got right, and that's why I'm focusing, in Istio and in Solo right now, on making this thing adoptable so that everybody can run it.
Marc: That totally makes sense. And I think like going back, you said like the three biggest problems in networking that you wanted to solve were observability, security and you know, connection, right?
Like just obviously having a connection, the network has to network. I think it would make sense. I'd like to dive into the weeds a little bit and talk about each of those.
Like what are you doing for observability of the network? Like how? And like what functionality do you offer if I'm using the Ambient mesh or what would be your recommendation for somebody who's deploying Kubernetes and wants observability in the network layer?
Idit: Yeah, so think about it — what the service mesh gives you is the piping, right? I mean, we are not competing with Datadog. You should still use Prometheus and Datadog, or Grafana. But you want this to pipe the data somewhere.
I think there are two problems with observability in general in microservices. Number one is, you know, getting the data — you want to make sure that you're piping the data. Number two, which I think is even more important, is the context.
So, you know, there is OpenTracing, which attacks exactly those problems. Ben Sigelman, who basically started that concept, is amazing. And basically the idea is that when you have so many services, which means you have so many containers of the same type, how do you know where the request went?
It can basically go to any of them. Before, you had a monolithic application sitting on your hardware, you know, on a virtual machine. You knew exactly where it was — you put an agent there and you sucked out the data, because you knew it was there.
That's not the case with microservices. A request starts somewhere, and it can go anywhere — to one service, then to the next service, and everywhere. So how do you actually follow the logs eventually?
And that's what I think OpenTracing did very well: the context. Where the service mesh can help there is basically starting that context — it puts in the thing that will make it easier to understand where the request is coming from.
So that's number one. And also piping it. So think about it, every request in and out from your application is going through the proxy. So the proxy can easily pipe it, right, to any observability tool that you have.
So that's basically giving you all the piping to understand everything that happened in your cluster, let's say, as well as the context of where it's happening.
So those are the two things that I think are very, very powerful in terms of observability for the Kubernetes ecosystem, and I think the service mesh does them really, really, really well.
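Here's a minimal sketch of the context idea: the first proxy mints a trace ID and every hop forwards it, so spans reported by different services can be stitched into one trace. This loosely mirrors W3C trace-context / OpenTracing-style propagation, but it is not any real tracing library's API; the header name and span format are made up.

```python
# Minimal sketch of trace-context propagation through proxies.
# Not a real tracing library; header and span shapes are invented.
import uuid

SPANS = []   # stand-in for "pipe it to your observability tool"


def proxy_forward(service: str, headers: dict[str, str]) -> dict[str, str]:
    trace_id = headers.get("x-trace-id") or uuid.uuid4().hex
    SPANS.append({"trace": trace_id, "service": service})   # report a span
    return {**headers, "x-trace-id": trace_id}              # context travels on


h = proxy_forward("frontend", {})
h = proxy_forward("cart", h)
h = proxy_forward("payments", h)
assert len({s["trace"] for s in SPANS}) == 1   # one request, one trace id
print(SPANS)
```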
Benjie: Right, and the key there is that I'm not doing any of that tooling inside my application code itself. I'm just getting it automagically by leveraging a service mesh in this case.
Idit: Exactly.
Benjie: Yeah.
Idit: And I will say more than this — there is other stuff we can do because of it, everything related to resiliency. So as I said, we can do, of course, retries, right, and timeouts, but we can also do stuff like the tap filter, and the tap filter basically means I'm able to actually record the request.
So think about what's going on: everything going in and out of this proxy, I can see — I own it. Can I potentially record a log of everything going in and out with those requests?
And then if I have a problem, maybe rerun it outside my environment and try to understand what's wrong, you know, debugging. So with everything related to observability, there's way more we can do there, and I'm really excited about that.
Benjie: So like replay is something else that you-
Idit: Replay basically every problem you have in production, with all the data that came to those applications, from everywhere.
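A toy sketch of that record-and-replay idea, purely illustrative (the function names and data are made up): the proxy keeps a log of the requests it forwarded, and the same requests can later be re-sent against a debug copy of the service.

```python
# Toy record-and-replay: log forwarded requests, then re-send them against a
# debug environment. Illustrative only; function names and data are invented.

RECORDED: list[dict] = []


def record(method: str, path: str, body: str) -> None:
    RECORDED.append({"method": method, "path": path, "body": body})


def replay(send) -> None:
    """Re-send every recorded request through a caller-supplied send()."""
    for r in RECORDED:
        send(r["method"], r["path"], r["body"])


record("POST", "/charge", '{"amount": 42}')
replay(lambda m, p, b: print(f"replaying {m} {p} {b}"))  # point at a debug copy
```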
Benjie: Wait, Idit, Idit, that's not observability, you're skipping ahead. All right, tell us about security.
Idit: Okay, so observability I think we covered, that's good. Yeah, the second one is security.
So security — again, mTLS is basically where it lies. First of all, we need to make sure that you're giving an identity to each of those services and verifying that when this service is talking to that service, it's actually the real service and not me coming in the middle.
We actually do this on the request path itself. So basically we're verifying that it's not someone in the middle coming between us — that's the mTLS, and I think it's very, very important. So that's number one.
Other security stuff related to the service mesh is everything related to authentication and authorization — whether that caller is allowed to write to this or not. I mean, think about it, you can also put that in a Kubernetes network policy.
Marc: In the CNI.
Idit: In the CNI. But think about it — for defense in depth, you can also put it on the gateway. The best way I can describe it: imagine that I have two houses, right, you know?
What we're doing with the service mesh is — usually what people are doing is creating the road between them and putting basically two guards there, you know.
If we really don't want them to talk, right, we can actually not create the road at all — don't even create that connectivity at the CNI layer. But to double-check, still put the guard there, right?
So it's kind of a defense in depth, making sure they can never talk to each other. We can do it in two layers: at the CNI layer, as well as in the service mesh.
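The "two guards" idea can be sketched in a few lines — illustrative only, with made-up policies: traffic must pass both the CNI-level check (does the road exist at all?) and the mesh-level authorization (does the guard let this particular request in?).

```python
# Toy defense-in-depth check across two layers. The policy data is invented;
# real systems express this as NetworkPolicy and mesh authorization config.

CNI_ALLOWED_ROADS = {("frontend", "payments")}             # L3/L4 "roads"
MESH_AUTHZ = {("frontend", "payments"): {"POST /charge"}}  # L7 "guards"


def allowed(src: str, dst: str, request: str) -> bool:
    road_exists = (src, dst) in CNI_ALLOWED_ROADS
    guard_says_yes = request in MESH_AUTHZ.get((src, dst), set())
    return road_exists and guard_says_yes                  # both layers agree


print(allowed("frontend", "payments", "POST /charge"))   # True
print(allowed("frontend", "payments", "DELETE /db"))      # False: guard blocks
print(allowed("batch-job", "payments", "POST /charge"))   # False: no road
```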
Marc: And so like if I have a more custom policy, I probably don't want to implement the same policy in both of those, like in the CNI?
Idit: Yeah, so with us, someone just creates one policy and we update both, yeah.
Marc: Oh, okay.
Idit: With us though, we know both of them are locked, right? So if you locked it, it's really locked. And the worry is that sometimes the CNI is run by a different team even, right? So you usually would rather do both.
Marc: Okay, and then the last one was like network connections and I think you talked earlier about how API gateways, you know they were like north south ingress and egress into the cluster but like not handling connections pod-to-pod.
So obviously, like that's the thing that the service mesh is like capable of doing and really excels at doing. Like what else does it do to just enable general connectivity between different services?
Idit: So honestly, what I'm excited about — if you think for one second about what we're doing with Ambient: we have a customer, a huge bank, and we basically did it with our own gateway in the beginning.
And with Ambient we kind of came up with that concept based on what they did. It basically becomes a micro-gateway architecture. And what does that mean? If you think about it, we're taking the gateway — by the way, the same gateway that Solo is using for north-south — and we are putting it also on east-west, okay.
Now it can live anywhere — it can live in your cluster. And what happens is that when the ztunnel gets the request — because again, we have to mTLS it — if there is a layer seven policy attached to it, it will go to the relevant endpoint; we're calling it a waypoint.
So we basically took exactly the same gateway that we put in for north-south, and we put it on the east-west of the service mesh — when you need it. You don't always need it, right?
So that's basically the way Ambient works. Maybe I'm confusing you even more, but don't worry — that's the architecture. For you to use, it's very simple.
Benjie: I look forward to listening to this episode like four times just to understand everything you said 'cause you're packing it all up, but this is great.
Idit: If I can draw it, it probably will be easier, but-
Benjie: We're not going to guarantee a diagram! But maybe Idit will give us a diagram for the show notes.
Idit: I will give you a diagram. It's pretty simple, but in the next-
Benjie: No, it is simple. I'm joking. You're doing a wonderful job of explaining. I'm just joking.
Idit: It's hard to explain, but honestly, what you need to know is this: number one, for the user, it's damn simple. It's damn simple.
And the second thing you need to know — it's the same thing. You just say what you want and it happens.
And in terms of how it works behind the scenes, the only thing you need to know is that we took the same functionality of the gateway, and instead of only doing ingress, we're also doing it on egress, we're also doing it east-west, and we're also doing multi-cluster, by the way.
Benjie: Okay, this is great. Honestly, I'm going to be completely transparent here. I kind of like gave up on Istio altogether a while ago and now I'm ungiving up. You're convincing.
Idit: Give it a try. Try to put mTLS in your cluster and tell me if it's hard.
Benjie: No, I'm just telling you, you're very convincing. So here's my question, now as a founder. How does Solo make money off of all this?
Now, don't give us any specifics, but how do you guys monetize all the things you're talking about? Just to give the audience an understanding of that.
Idit: So I think in open source there is, you know, Kubernetes for instance, right? It's a very, very powerful orchestrator, but it's per cluster, right?
It doesn't bring you, out of the box, the multi-cluster experience, for instance. Same thing with Istio. Istio, or most of the service meshes, gives you that on a cluster — you install it on a cluster, and on that cluster you get all the functionality.
But what if you have a fleet of clusters? And guess what, everybody has more than one cluster, right? So how is that going to work? How do you make sure that, for instance, if you have a service running in more than one place, you first go to the instance that is close to your application?
But what if that service fails? You probably have a failover service. How do you actually move the request to the right service and basically fail over?
How do you make sure that this whole fleet of clusters looks like one big cluster, that they're all communicating and they're all basically secured and observed and connected?
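A minimal sketch of that locality-plus-failover routing, purely illustrative and not Solo's actual implementation: prefer a healthy endpoint in the local cluster, and fail over to a remote cluster only when none is available. The cluster names and addresses are made up.

```python
# Toy locality-aware routing with cross-cluster failover. Data is invented.

ENDPOINTS = [
    {"cluster": "us-east", "addr": "10.0.1.5", "healthy": False},
    {"cluster": "us-east", "addr": "10.0.1.6", "healthy": True},
    {"cluster": "eu-west", "addr": "10.1.2.7", "healthy": True},
]


def pick_endpoint(local_cluster: str) -> str:
    local = [e for e in ENDPOINTS if e["cluster"] == local_cluster and e["healthy"]]
    remote = [e for e in ENDPOINTS if e["cluster"] != local_cluster and e["healthy"]]
    chosen = (local or remote)[0]        # local first, then fail over
    return chosen["addr"]


print(pick_endpoint("us-east"))   # 10.0.1.6 (healthy local endpoint)
ENDPOINTS[1]["healthy"] = False
print(pick_endpoint("us-east"))   # 10.1.2.7 (failed over to eu-west)
```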
That's one thing that Solo is doing on top of it — basically bringing it to that level. The other thing is a lot of enterprise use cases: we put a developer portal on top of the service mesh.
Let's see — we have some functionality that doesn't exist in the open source, like, for instance, filters that we built for Envoy.
So we have a WAF filter, we have data loss prevention, we have a lot of that kind of stuff that you wouldn't expect to have in an open source project, but our customers do need it. So we basically have a lot of layers.
The focus is either functionality that is not in the open source, and probably should not be in the open source, but is needed by our enterprise customers, or anything related to user experience that we can really, really make better outside the open source community.
So those two are basically where we're focusing.
Benjie: Right, and it sounds like you guys are doing pretty well now, so-
Idit: Yeah, we are.
Benjie: Yeah, that's great. That sounds great. What stage company are you guys from a funding round perspective?
Idit: So we took C round maybe a year and a half ago.
Benjie: Okay, so series C funded, been around for a while, doing well, pretty mature on the product side and on the go-to-market side and all that stuff. So, you know, people can leverage Solo with confidence that it's going to be here for a while.
Idit: Yes, we have tons of customers and very, very happy customers.
As I said, to me it's like we basically become part of the customer's team, and we help them make sure they're successful with the software, even if they need hand-holding.
Benjie: Okay, I have a random question. Over the years, whenever I'm trying to explain what a sidecar is to anybody, the only use case I can ever come up with is a service mesh.
Can you help me with this? I always, I'm like, okay, so I think we need a sidecar here and the only example I ever come up with is basically Istio or Linkerd.
Idit: So first of all, you don't even need it for service mesh now — with Ambient, we don't even use it there, but-
Benjie: Forget Ambient for one second! I'm just saying, every time I try to explain what a sidecar is, I can't come up with anything other than what service meshes do.
Can you tell me, personally, so that when I'm explaining what a sidecar is to someone on a whiteboard, what's another example that is not a mesh?
Idit: Oh my God. I'm not sure I'm the right person to ask, but based on what I know, and from talking to the team, Hopkins from Google for instance, unfortunately it's being used like crazy, and they hate it because this is not what it was intended for.
I think, and I'm not 100% sure, you would need to ask more people, but sometimes people add a sidecar because they need proximity to the database or something, so sometimes they're doing some crazy stuff, putting things together. I think some agents run as sidecars, but honestly, there are better people in the company to answer that question.
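As a concrete non-mesh answer to Benjie's question, one common sidecar pattern is a log-shipping or other agent container that shares a volume with the application container. The sketch below is a hypothetical Pod manifest, not something discussed in the episode; the image names, paths, and flags are placeholders.

```python
# Sketch: a non-service-mesh sidecar, a log-shipping agent that tails the
# app's log file from a shared emptyDir volume. All names are placeholders.
import yaml  # pip install pyyaml

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-log-agent"},
    "spec": {
        "volumes": [{"name": "app-logs", "emptyDir": {}}],
        "containers": [
            {
                # Main application writes its logs to the shared volume.
                "name": "web",
                "image": "example.com/web:1.0",  # placeholder image
                "volumeMounts": [{"name": "app-logs", "mountPath": "/var/log/app"}],
            },
            {
                # Sidecar: ships whatever the app writes; no mesh involved.
                "name": "log-agent",
                "image": "example.com/log-agent:1.0",  # placeholder image
                "args": ["--follow", "/var/log/app/access.log"],  # hypothetical flag
                "volumeMounts": [{"name": "app-logs", "mountPath": "/var/log/app"}],
            },
        ],
    },
}

# Print a manifest you could pipe to `kubectl apply -f -`.
print(yaml.safe_dump(pod, sort_keys=False))
```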
Benjie: Alright, here's another important question. Kube Cuddle, Kube C-T-L, Kube Control: how do you say it?
Idit: I'm the one who goes with Kube C-T-L, but I think I'm the only one, because no one in my company says it like that. They all call it Kube Cuddle, as you said.
Benjie: Correct, Kube Cuddle's the correct answer.
Idit: I don't know, I'm old school. So me, I'm saying C-T-L.
Benjie: All right, well, that's fine. That's a question I always ask everybody. I'm a big Kube Cuddle fan.
Another thing I wanted to say as we start to wrap up here is that I'm glad we're going to have the Java, C, C++, and Rust people all chiming in on this episode.
You covered all of them when it comes to performance, and I was just waiting to ask, well, why not Rust, when you started talking about C and C++, and you hit it, so it was wonderful.
Idit: Yeah, if we were building Envoy right now, we would probably build it in Rust. It's just a lot of work to rebuild Envoy now. Envoy is such a solid piece of software.
Benjie: No, no, no, I'm not pushing anything, and I have been using Nginx and OpenResty for years, which is the Lua-scripted Nginx.
Idit: Yeah, yeah, yeah, I know exactly what it is.
Benjie: Yeah, we all know how this works. So, two questions. One is on the Istio side, the Ambient side: roadmap, announcements around that. And then the same question, but for Solo. Is there anything coming up in the next few months?
Idit: Yes, there's a lot of stuff coming up, so let's see. As I said, we always had the vision, but we never fully did it, right?
If you think about it, in the beginning there was the gateway, then there was the mesh, and now there is Ambient, which is kind of in the middle.
I feel that what we got wrong, and I would argue the Cilium guys didn't, is that people want simplicity. What Cilium did very nicely is they understood that everybody needs a CNI, so what if they could pull the service mesh into that?
I have to say it didn't work out well for them, but it was still a nice way to think about it. I personally want to think the same way at the level of the gateway. The way I look at this, eventually what you need is very simple.
You need, as I said, ingress, egress, which is very important for the AI use cases we're working on a lot right now, and east-west, and it's basically all the same thing. What you need is to just come and say what you want, and that should be enforced everywhere it makes sense.
So we are calling it Omni-Gateway, and what it is, is basically the everything-gateway. That's really what it is. The service mesh is a very small piece of it, but it's basically the everything-gateway, right?
That's the vision of what we did. And I feel that right now, to me, Omni-Gateway starts with the idea that it needs to be cloud native. I'm a big fan of that.
We started that initiative, what we called the API gateway as a cloud native gateway, so I'm still going to defend that this is something very important.
The second thing is that I think zero-trust should be the responsibility of the gateway. I'm doubling down on this. I feel that everybody needs mTLS, and I think everybody knows they need mTLS, but not everybody wants a full service mesh. What if the gateway gave it to you out of the box, as a very simple thing? So that's number two.
And the third thing, which I think is very important, is everything related to Federated. If you look at Gartner, for instance, what they report is that there are a lot of proxies out there. We talked about it: there is Rust and there is Envoy.
Eventually they all need to be federated and do the job they need to do, but they all need to be owned by one API that is very simple to operate, which is basically the Kubernetes Gateway API. We're a big fan of this, so we're calling it Federated.
And last but not least is everything related to unbundling, which is the idea of ecosystem. It should work with everything, seamlessly. Going back to adoption, we have to make it work seamlessly with everything, because if it works seamlessly with everything it's very easy to adopt, and more people will use it.
To me, that is Omni-Gateway, and I think that's the direction the market will go. This is the next evolution: there was the gateway, then the cloud native gateway, and now there is Omni-Gateway. That's the way I'm looking at this.
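For context on the "one API" Idit mentions, the Kubernetes Gateway API expresses routing as Gateway and HTTPRoute resources that any conformant proxy implementation can honor. The manifest below is a hypothetical illustration of that idea, not configuration from the episode; the Gateway name, path, and backend are placeholders.

```python
# Sketch: a Kubernetes Gateway API HTTPRoute, one portable routing API that
# different proxy implementations (Envoy-based and others) can all implement.
import yaml  # pip install pyyaml

http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "reviews-route", "namespace": "default"},
    "spec": {
        # Attach to a Gateway managed by whichever implementation you run.
        "parentRefs": [{"name": "shared-gateway"}],  # placeholder Gateway name
        "rules": [
            {
                "matches": [{"path": {"type": "PathPrefix", "value": "/reviews"}}],
                "backendRefs": [{"name": "reviews", "port": 9080}],  # placeholder backend
            }
        ],
    },
}

# Print a manifest you could pipe to `kubectl apply -f -`.
print(yaml.safe_dump(http_route, sort_keys=False))
```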
Marc: I'm actually curious. A lot of what we've been talking about is infrastructure, layer four and layer seven, and how this all works. But in that explanation you threw out some of the changes you have to make specifically for AI applications.
I mean, they generally operate at the same layers, but clearly there's a different problem you're trying to solve there. Is there anything more you can share about that?
Idit: Oh yeah, sure. We're working with a lot of customers right now. We actually needed to change Envoy a little bit and add a lot of other stuff to the gateway in order to support it, so there are some changes.
Let's just talk about something simple like semantic caching. What does it mean? Today there is caching in the gateway, right? So if I get a request like "give me the name of John's pet," I can cache that request.
Every time the same request comes again, "give me the name of John's pet," I know exactly how to handle it and I return it. That gives you performance, right? You don't need to go to the application at all.
The gateway basically gives it back to you. But what's going on with AI is that now it's become about language, right? LLMs bring this idea that I can ask the same question in a lot of different ways.
I can ask, "give me the name of John's pet," but I can also say, "give me the name of my best friend John's pet." Is that the same request? Yes, it is, but not from the API gateway's point of view.
The API gateway will see these as two different requests and go to the server for each one. The solution is what's called semantic caching, which means we can group together the requests going into the LLM and identify which of them are the same.
And that means semantic caching works a little differently: you actually need to make sure you're grouping the requests, and sometimes there's a human involved, whereas before, the API gateway decided what the cache was by itself.
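A minimal sketch of the semantic caching idea Idit describes: embed each prompt, and treat a new prompt as a cache hit when it is close enough, in cosine similarity, to a previously answered one. The `embed_fn`, the threshold, and the stand-in LLM call are all illustrative assumptions, not Solo's implementation.

```python
# Sketch: semantic caching for LLM requests. Prompts that mean the same thing
# should hit the same cache entry even when they are worded differently.
import numpy as np

class SemanticCache:
    def __init__(self, embed_fn, threshold=0.9):
        self.embed_fn = embed_fn    # maps text -> 1-D numpy vector (assumed)
        self.threshold = threshold  # cosine-similarity cutoff for a "hit"
        self.entries = []           # list of (embedding, cached_response)

    def lookup(self, prompt):
        q = self.embed_fn(prompt)
        for emb, response in self.entries:
            sim = float(np.dot(q, emb) / (np.linalg.norm(q) * np.linalg.norm(emb)))
            if sim >= self.threshold:
                return response     # semantically similar prompt seen before
        return None

    def store(self, prompt, response):
        self.entries.append((self.embed_fn(prompt), response))

# Usage sketch: a real deployment would plug in a sentence-embedding model;
# the random vector here is only a placeholder so the example runs.
cache = SemanticCache(embed_fn=lambda text: np.random.rand(384))
prompt = "Give me the name of John's pet"
answer = cache.lookup(prompt)
if answer is None:
    answer = "Rex"                  # placeholder for the actual LLM call
    cache.store(prompt, answer)
print(answer)
```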
So that's only one example, but I could give you a hundred more examples of how it's changing. From my side, the thing you need to know is that we are ramping up really, really nicely right now.
So you will see us everywhere, making a lot of announcements. AI is a big thing we're doing right now, working with a lot of customers, like T-Mobile and others. I have a talk on this at KubeCon, for instance.
Besides that, Ambient is a big thing for us that I think everybody should at least take a look at, and Omni-Gateway, which is the thing that pulls all of this together. It's also something we're going to do very well.
There's a lot of stuff we are doing, and you will see it coming. Again, I don't know exactly when each release lands.
Marc: Is some of it going to come in KubeCon here in Utah?
Idit: Before or after? That's the question.
Marc: During?
Idit: No, no. The reason I ask is because we will make a lot of those announcements around KubeCon. But I do think everybody should go and look at Ambient again.
I think AI is a very interesting space where we're playing a lot on the networking piece of it, right? That's the stuff we are experts in, and also the interface between the agent and the AI gateway; I think there's some interesting work we're doing there.
And the last one is, as I said, we believe everybody needs a gateway, and the gateway needs the functionality that makes it very, very useful, which is what we're calling Omni-Gateway.
Benjie: The last question that I have to ask, which we always ask: if folks want to contribute to Istio or any of these open source projects, are there community meetings? What's the typical way to contribute?
Idit: So Istio is actually a CNCF project, and therefore there is a lot of community work being done. I 100% recommend people go. Funny enough, it's a very friendly community, where people really like each other and want new people to come and join.
There is a weekly community meeting that you can just come to and listen, and then start. You know, it's open source, all of it.
There is a Slack channel for it in the CNCF, the Istio Slack channel, and also, always, people can come talk to us at Solo. We have Slack channels too.
So if you come to us, we'd love to help you ramp up and do whatever you want. But yeah, it's all open source. The same goes for contributing to Envoy and anything else, honestly.
Benjie: Wonderful, okay, so Idit, thank you so much for coming on.
Idit: Yeah, so nice to meet you. Are you going to be in KubeCon?
Benjie: I will be at KubeCon. Marc will be at KubeCon. We will see you there. Super excited. Really appreciate you coming on and I'm going to reserve the right to bring you back in like a year.
Idit: Oh, I love that.
Benjie: To talk about all this AI stuff because normally, I'm always like, "I don't want to talk about AI stuff," but this is actually interesting.
Idit: Oh, I swear to God, I was exactly the same. Everybody was doing AI. But no, I'm telling you, we're working with real customers that are actually doing AI, big ones, and we learn so much from them.
So we adjust based on the stuff we learn from them. We actually didn't start it; it came from the customers.
Benjie: Well, we're going to save that for the next time when we get you back, Idit. Thank you so much for coming out. We really appreciate the time.
Idit: Thanks so much guys, appreciate it.