
Ep. #11, Unpacking MCP with Steve Manuel
In episode 11 of Open Source Ready, Brian Douglas and John McBride sit down with AI expert Steve Manuel to explore the Model Context Protocol (MCP)—a framework that enhances how models interact with their environments. They break down why context-awareness is crucial for machine learning and how MCP is transforming open source AI.
Steve Manuel is the Co-Founder & CEO of Dylibso and was previously an engineer at companies like Cloudflare and Rigetti Computing. With a strong background in open-source AI, Steve has worked on projects that push the boundaries of model adaptability and efficiency.
Transcript
Brian Douglas: Welcome to another installment of Open Source Ready. Man, I am so ready. John, are you ready?
John McBride: Hey, I'm super ready. Caffeinated up. I got both the coffee and the Yerba Mate. Let's get into it.
Brian: Oh, that might be a bit intense, John.
John: It's a lot.
Brian: I want to check on you tonight when you're not sleeping and writing code in a fever dream.
John: That's when I'm at my best.
Brian: Excellent, well I've been writing code at night actually recently 'cause I've been building a bunch of agents as of recent.
And then it feels like Twitter has been like taken over with this whole acronym around MCP.
So if you finally caught up what RAG was, and then also what LLMs are, congrats, there's another acronym and it's MCP.
And we have the MCP master here Steve, welcome to the show.
Steve Manuel: Thanks for having me. Excited to talk MCP.
Brian: Excellent, so Steve, you want to give us the manual of who are you Steve and what are you doing?
Steve: Yeah, sure. My name's Steve. I'm a Co-Founder and CEO of a company called Dylibso.
We started this company a couple years ago to basically bring extensibility into every application, largely through the advent of WebAssembly as a virtual machine and execution format.
So you can execute code inside your apps that wasn't written by you safely and take code from your users and finally give, you know, that "eval this JavaScript" without all the concerns that come along with that largely for plugin systems.
And you know, we've iterated quite a bit on great developer experience for WebAssembly, which was largely one of the things that I think held most people back from trying to leverage the technology and have kind of found an interesting inlet to expand the world of function calling and extend AI with secure and portable tools.
And it just so happens to have kind of caught on fire online as more and more developers have found how useful MCP is.
Brian: Yeah, so like you started this two years ago. Like I've been adjacent to Wasm like I've definitely built and leveraged Wasm style applications. Why Wasm, like why the interest?
Steve: Well initially WebAssembly caught my attention when it was first announced for the browser and it was basically a way to execute code alongside JavaScript inside the browser that wasn't necessarily written in JavaScript.
So really largely for the first time, if you want to kind of ignore Flash and other Silverlight style kind of embedding environments inside of the browser, which were not official but largely just kind of-
Brian: Yeah, Silverlights, that's a callback. Is that still a thing that Microsoft's using?
Steve: I think they've officially deprecated that.
John: I feel like I've recently seen it. I feel like I can imagine the logo as it's popping up on my browser.
Steve: Yep, so this is the kind of the original way to like run code that wasn't JavaScript inside the browser.
And you know, the browser vendors got together, largely after Mozilla kind of proposed this concept, and agreed on a new standard called WebAssembly.
And this allowed you to take code written in other languages largely like static compiled languages like C and C++, and compile that down to a target format called WebAssembly, which you can kind of think of as like X86 or ARM instruction set and paired with a virtual machine that can execute those instructions.
With this unique characteristic that by default the virtual machine has no access to the host environment whatsoever.
So it can't read your environment variables, it can't talk to the network, it can't read and write to the disk unless you explicitly supply capabilities through functions that are added to the virtual machine's code.
And then the guest code that executes inside the virtual machine can talk to the real world.
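To make that capability model concrete, here is a minimal sketch using the standard WebAssembly JavaScript API in TypeScript. The module path, the http_get import, and the run export are all hypothetical illustrations, not Dylibso's actual API:

```typescript
import { readFile } from "node:fs/promises";

// The import object is the entire surface area the guest can reach.
// Supply nothing and the module gets no filesystem, no network, no env vars.
async function runSandboxedPlugin(wasmPath: string) {
  const bytes = await readFile(wasmPath);

  const imports = {
    env: {
      // Hypothetical capability: the host decides what "http_get" means and
      // could whitelist URLs, rate-limit, or log every call made here.
      http_get: (urlPtr: number, urlLen: number): number => {
        console.log(`guest asked for http_get(ptr=${urlPtr}, len=${urlLen})`);
        return 0; // status code handed back into the sandbox
      },
    },
  };

  const { instance } = await WebAssembly.instantiate(bytes, imports);
  // Call a guest export; "run" is a hypothetical entry point.
  (instance.exports.run as () => void)();
}
```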
And I thought, okay, I like JavaScript, I don't love JavaScript. I'd love to be able to write other languages and run them inside the browser.
But furthermore, it was always the intention for WebAssembly to live outside the browser, it just took a little bit longer to get an implementation outside of the browser itself.
And so when I saw that I realized WebAssembly is this universal language, it's safe to embed inside of a host application. To me this seems like it's the last plugin system that ever needs to exist.
And so finally when some folks did work to implement Wasm virtual machines that lived outside the browser, I thought okay now I can embed a Wasm runtime inside of my program and then let my users reprogram my code dynamically.
And that really was kind of the first instinct like okay, this is going to be a big deal. We need to be able to make this developer experience great.
We need to make it easy to embed these runtimes into every application, independent of the language it's written in.
And then make it easy to interop between the guest and the host. 'Cause Wasm is a very low-level technology, you're literally managing memory and passing pointers.
Wasm only has support for numbers and no strings or anything like that. So it can be a challenge to work with sometimes.
And so we've worked to really improve the developer experience through a project called Extism, which is an open source framework for working with WebAssembly.
Brian: Okay, excellent. So like you spent the last two years working on this and then you got into like the MCP thing.
So like I want you to explain what MCP is, but also I want to like put some date and timestamps on like when you started working on it because like we're a week into like this craze on Twitter and like newsletters all talking about MCP but you've been already messing around with this stuff for a while.
Steve: Yeah, I mean the overlap with what we've been doing and with MCP is not immediately obvious but actually just like makes a ton of sense.
MCP broadly is like a plugin system for your AI. It gives an LLM, any kind of model that can do function calling, the ability to understand tools and then execute those tools just by reading data from an MCP server.
And so you can kind of think of like an LLM as this like super intelligent being, but it can't really do anything. It only has access to its own knowledge.
But if you wanted to try to, like, go to a website and read some information, or the classic agent example of like book my flight for me, it's just unable to do those things, because it's just data that's compressed into a binary that's executed.
And so function calling emerged as this technique to be able to give a model the ability to say, hey you know what, you told me that you have a web search tool, this is a point in time where I think you should try to call that thing.
And lots of different frameworks and models then supported function calling. But everybody had their own paradigm to do so.
They had a description format to say here are the tools and the function signatures, the name and the kind of data that comes into the function.
Here's the function calling model to determine when I should extract the parameters from the prompt and suggest them to you.
The kind of application layer that's talking to the model to actually execute that code.
And because of that fragmentation, if you were trying to build infrastructure in the space, you basically needed to spend all your time writing adapter code to make the OpenAI format work with the Mistral, with the Gemini, with the Claude format.
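To see the fragmentation Steve describes, here is the same hypothetical web_search tool declared two ways in TypeScript. The first shape follows OpenAI's function-calling format; the second is a generic stand-in for another provider's envelope, not any specific vendor's API:

```typescript
// OpenAI-style declaration: the tool sits inside a "function" envelope
// with a JSON Schema under "parameters".
const openAIStyleTool = {
  type: "function",
  function: {
    name: "web_search",
    description: "Search the web and return the top results",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search terms" },
      },
      required: ["query"],
    },
  },
};

// A hypothetical other provider: the same information, different envelope
// ("input_schema" instead of "parameters", no wrapper object). Adapter
// code has to translate between shapes like these for every model.
const otherProviderStyleTool = {
  name: "web_search",
  description: "Search the web and return the top results",
  input_schema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
};
```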
And so we actually looked into this as a way to demonstrate what a WebAssembly powered kind of extensible layer for AI function calling would look like.
And we realized immediately, this is maybe six months ago, that with the fragmentation we'd be re-implementing the interface over and over and over again for every single model that we wanted to bring this tool into.
And that just like doesn't fly, this is too much work to support all that stuff.
So fast forward to November, November 25th I think it was. You know, we had just kind of come out of this spike to say, you know, this really isn't going to work 'cause of all the fragmentation, and then suddenly Anthropic, you know, released the Model Context Protocol.
And that provided this universal format, this universal description layer, which was only a protocol, meaning it didn't have a dependent transport, it didn't have any dependent implementation, it was just a protocol.
So anything on either side of the sending or receiving end of that protocol could be implemented any way you'd like.
And it just like smacked me in the face that this was that missing piece that removed that fragmentation, that gave us a standard to implement. And because it was a protocol we could actually implement the server in WebAssembly.
It didn't have to be in TypeScript or Python, you know, leveraging one of the SDKs Anthropic released. As long as the server side implemented the correct response format for the protocol, then it would interop perfectly.
So that day we literally were like, let's build that thing. Put up a registry, leverage all of our WebAssembly tooling to provide portable cross language AI function calling through MCP.
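Because MCP is "just a protocol," the wire format is easy to show. Here is a sketch of the JSON-RPC 2.0 exchange a client uses to discover tools, following the published MCP spec; the web_search tool itself is a hypothetical example:

```typescript
// Client asks the MCP server what tools it offers.
const toolsListRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Server replies with tool names, descriptions, and JSON Schema inputs.
// Any language and any transport can produce or consume these frames.
const toolsListResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "web_search",
        description: "Search the web and return the top results",
        inputSchema: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    ],
  },
};
```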
John: One of the questions that I have kind of around all of this, you know, all this was happening over kind of Christmas break, and I was like, I want to learn about this, so I'm going to go, like, read the spec and start building a bit of it, just as an exercise in understanding the spec.
Two things: I was very surprised that they picked JSON-RPC 2.0.
It seems like kind of a goofy... And maybe they're just trying to get away from, you know, having to say like, yeah, you're going to use HTTPS or something, or like this protocol has to look like this in a byte stream.
Or, you know, even further down the road where Google was like, protobufs all the way, you know, everything's got to be protobufs. You know, maybe they just wanted to get away from that.
But I'm curious your take on why, you know, they chose that as kind of the underlying transport. And then my hot take for you is: why would a company want to, like, open up, you know, their MCP server to large language models to potentially, you know, exfiltrate a bunch of data?
That seemed like the biggest kind of missing piece of this of like we want to create a standard, but so many things in Web 2.0 seem to be kind of shut down anyways because they don't want a bunch of this like retraining scraping happening, right?
Steve: Yeah, totally. So the first part of the question, with regards to the JSON-RPC: we've talked a lot to the Anthropic team who, you know, made MCP, I think just from being early to implement it, and we've worked together to kind of figure out how to refine the spec and add things like authentication and so on and so forth.
And from their perspective, and I don't want to speak on behalf of the team, but from what I've gathered in our conversations, is that the agnostic, you know, implementation for JSON is kind of available in every language already.
You don't need to have some kind of special parser or anything, usually it just comes as part of a standard library, which made it a very attractive format.
So immediately it would be available to use in any language, regardless of, you know, the language's, you know, interest in implementing MCP. And it's not the lightest weight, you know, format, for sure.
You know, you can definitely get more concise with something like protobuf, but I think that because we are in the phase we are with AI and context window sizes, that was less of a concern, you know, if you are going to put any of the message or data format into the model, which you don't need to do with MCP.
But I think thinking about it from like a-
John: Usability.
Steve: Usability.
John: Yeah.
Steve: You know the context windows are so large, JSON is fine.
And from a readability perspective too, so like for debugging, for introspection, and for security, having a human readable format where the transport does not have to have any kind of special decoding or parsing on either side and a human can just like look and scan, I think is a pretty attractive component of it as well.
Yeah, I love JSON, I think it's a great format. I think, you know, lots of developers are familiar with it.
So I'm happy with the choice, but I agree it's not going to be what you would pick if you were trying to optimize for performance, speed, compactness.
John: I don't know your familiarity with Go, but I mean, dealing with JSON in Go is a bit gross.
I mean, there's like three different standard libraries for doing it, and I don't know, some people are going to flame me for saying this, but I wish the JSON implementation in Go was a little easier.
So that's super fair though. Yeah, it was very easy to read and debug as I was trying to work through implementing the protocol.
Steve: Totally, generally speaking an MCP server from an enterprise or an organization that provides an API is largely going to mirror that API.
Possibly with some compression of the steps that an API may provide, which are usually extremely atomic, you know, get a user by an ID, find that username or other property and use it in a follow-on request, and so on and so forth, this back and forth and back and forth.
With MCP you could, you know, probably wrap several of those very atomic operations into kind of one more broad one.
So the AI only has to actually understand one operation that converts into several. So I think there's lots of reasons why you may want to implement an MCP server for your... Just as a wrapper over your API.
And you're not exfiltrating any data that you otherwise wouldn't if that API were accessed by a program, just any general HTTP client.
So using the same kind of authorization, using the same kind of, you know, control mechanisms you have for your HTTP API just kind of pairs very nicely within MCP.
So I don't think it's really much different than implementing an API, it's just that it's ready to go for any LLM to use.
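Here is a sketch of that wrapping idea: one broad MCP tool that rolls up two atomic REST calls, reusing the API's existing bearer-token auth. The api.example.com endpoints are hypothetical:

```typescript
// Exposed as a single tool ("get_orders_for_username"), the model only has
// to understand one operation that internally performs two API requests.
async function getOrdersForUsername(
  username: string,
  apiToken: string
): Promise<unknown> {
  const headers = { Authorization: `Bearer ${apiToken}` };

  // Atomic step 1: resolve the username to a user ID.
  const userRes = await fetch(
    `https://api.example.com/users?username=${encodeURIComponent(username)}`,
    { headers }
  );
  const user = (await userRes.json()) as { id: string };

  // Atomic step 2: fetch that user's orders with the resolved ID.
  const ordersRes = await fetch(
    `https://api.example.com/users/${user.id}/orders`,
    { headers }
  );
  return ordersRes.json();
}
```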
Brian: Yeah, so I saw Harrison, CEO of LangChain, maybe in like a stream of consciousness, send out a little X post, formerly known as a Tweet, conflating what MCP is with basically a Zapier Zap.
And like, would you think that's more of a crude explanation of MCP and what its capabilities are?
Or do you feel like that's the path that this thing's kind of going down?
Steve: Yeah, I think of an MCP server as more of, like, the integration that the Zap, in Zapier parlance, may leverage. And just for transparency as well, we have a product on mcp.run called Tasks.
Which basically lets you roll up all of your MCP servers into integrations that a prompt can just execute magically.
So if you want to think of it like Zapier, that's very okay with me. But the point is that you actually can totally string together chains of these MCP servers from a prompt because the model knows exactly which tool to call based on what you need to get done.
And which parameters to pass into it based on the context that it has from that prompt itself. Or from previous results from an MCP server response.
So instead of a developer having to painstakingly build all of the integration code themselves for every single platform.
Now broadly any platform can implement an MCP client, which is the thing that knows how to make calls to the MCP server and it gets to use all the MCP servers automatically, which from an integrations perspective is just a very, very beautiful thing.
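The invocation half of that client/server exchange is one more JSON-RPC round trip, per the MCP spec; the tool name and arguments continue the hypothetical web_search example from earlier:

```typescript
// Client asks the server to execute a tool with the extracted parameters.
const toolsCallRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "web_search",
    arguments: { query: "model context protocol" },
  },
};

// The result comes back as a list of typed content blocks the model can read.
const toolsCallResponse = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    content: [{ type: "text", text: "Top results: ..." }],
  },
};
```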
Brian: Yeah, yeah, I mean it's like very much... And John, when we were talking about this like last week or a couple weeks before, I guess before the hype on X, the idea of like Web 2.0 and how APIs were, and funny enough we started talking about Netflix and Silverlight.
Like my intro into engineering like full stack engineering was Ruby on Rails and like using the Twitter API which is no longer as accessible.
And also using Netflix to basically find out what shows to watch. That's like how I learned how to code, and I no longer have access to any of the same data.
So now my brain's turning, like, man, this is great, and the adoption, it seems like, or at least the hype, looks like it could lead to adoption of this protocol, and this might be the standard that we look forward to.
Like when building agents. I dunno what your over/under is on, like... you've mentioned mcp.run, it sounds like you're fully invested, like this is now how you could get the WebAssembly stuff in front of people but also hitch your wagon to this new standard.
Steve: Yeah, we're definitely bought in. I mean having gone through that exercise and spike to do the research to determine that there's so much fragmentation in this tool calling part of AI.
Wanting to build infrastructure for developers there, wanting to build infrastructure just generally in that space, the juice wasn't worth the squeeze, you know. And so, having gone through all the research and used all the existing things out there, to finally see a standard that was extremely well thought out and extremely well documented just out of the gates.
With tools, with inspectors, with all these other kinds of just like helpful components ready to use and backed by one of the largest, you know, foundation model labs on the planet who puts out some of the best, you know, technology out there.
That was a very strong signal that like this was going to be a big opportunity. And before we went too, too deep we tried to implement MCP in a bunch of different contexts.
So we wrote our own client, rebuilt our own transport. Largely you're going to use a stdio or an SSE transport, because that's kind of what's documented, but we have an in-memory transport, so you don't actually have to pass anything across a network.
'Cause our code is in Wasm, you know, it's portable and it can actually be embedded directly where the AI is. So having all those things in place, it just felt like this was well set up.
And remember, this is days after it was announced, we could already tell this was well set up to become what we thought would be a de facto standard.
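Since the protocol doesn't mandate a transport, a stdio server can be a bare loop over newline-delimited JSON-RPC. Here is a minimal sketch in Node-flavored TypeScript, not the official SDK, that only answers tools/list:

```typescript
import * as readline from "node:readline";

const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line: string) => {
  const msg = JSON.parse(line);

  // A real server would also handle "initialize", "tools/call", and errors;
  // this sketch advertises an empty tool list to show the shape of the loop.
  if (msg.method === "tools/list") {
    const response = {
      jsonrpc: "2.0",
      id: msg.id,
      result: { tools: [] },
    };
    process.stdout.write(JSON.stringify(response) + "\n");
  }
});
```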
And then you just saw a ton of, a wave of adoption on the client side. So the AI IDEs like Cursor and Windsurf, they implemented it.
Block released an awesome application, an agent called Goose, and it was MCP-native from the very beginning.
And then just more and more and more this massive swell of adoption came. And you're hard pressed to find something that's not trying to go for a large broad adoption that doesn't have MCP now already. And it's only been a few months since people have been aware it's available.
But I think the Wasm thing for us is really more about like if you think about this from first principles and you think about how do I leverage this technology securely?
And how do I maximize the mileage I get as a developer to implement an MCP server? I want my MCP server to be as useful in every context as I possibly can make it.
So I'm not rewriting this MCP server all the time, or porting it into different languages, or compiling it into different targets. It's literally that "write once, run everywhere" thing.
And I hate to say that, but it's true, right? It's like, I could implement an MCP server now one way, and it's going to work in my local environment, running next to Claude Desktop.
It's going to work in my hosted environment and I can host it myself inside my enterprise, behind my VPC.
It's going to work in the browser if you want to do a demo or do a pure browser based AI application 'cause we're getting models that work in the browser.
It's going to run there too. And an extra kind of fun piece of magic, which I think we're quite a bit ahead on, is mobile.
And like as models get small enough and performant enough to run well on device, you have all these kind of privacy oriented application use cases where I want my tools to run on the device too.
I don't want to exfiltrate my data, my banking data off my phone or my health data off my phone just to go up to some fitness application in the cloud to crunch the numbers, that should just run locally.
So because we do have all this, you know, primitive technology for WebAssembly, one of our projects is Chicory, which is this pure Java WebAssembly runtime and it runs on Android super well.
And we also wrote a backend compiler for Chicory that will translate Wasm instructions into Dalvik instructions, which is Android's kind of flavor of the JVM bytecode.
So that you can actually run your MCP server on your Android device at native speed.
You know, so eventually when these models get capable and powerful enough to be on device, there's going to be a really interesting story to tell around local function calling on the phone.
John: I think the "write once, run everywhere" actually feels like a good kind of framework to think about.
It sort of makes me think that, like, OpenAI kind of dropped the ball on this opportunity, at least, or maybe they weren't targeting this. But for a while, the "write once, run anywhere" for at least AI agent kind of things was the OpenAI API, and so many of these things like Ollama and llama.cpp and some of these other inference engines were just like, ah, we have an OpenAI-like API thing, just run your inference on that. And, you know, that gets a little lower level, where you're actually trying to do, you know, generation and stuff.
But I really love the idea that, you know, you wouldn't have to, you know, lift and shift a bunch of that boilerplate to different APIs or different, you know, tool calling things.
You can just have those little servlets even local, right next to the agent or whatever, to be able to do those things. So yeah, 100% heard on the "write once, run anywhere" idea.
Steve: One of the original inspirations to even do this in the first place was just, like, the horror we felt when we read the original suggested "getting started with Model Context Protocol," which was like, npx install an executable Node script.
John: Woof.
Steve: Which like come on, yeah this thing could do anything that I can do on my machine and I'm giving it to my artificial intelligence.
So it's like, no that's not going to fly. You know, this might be fun to play with right now, but like when we bring this into like real use cases, sorry, not going to happen.
So we thought, okay, there needs to be a more secure way to bring these servers into the context in which they're used. And obviously, being like Wasm-pilled for years, it was just like, duh, this makes a lot of sense. But now that we've actually put it through tons of use, and it's being adopted and implemented in these enterprise settings, it just makes a ton of sense.
It's like way easier to get the security approval. It's way easier to move these modules around. I don't need some huge container or, you know, Node to be installed somewhere.
They run in Cloudflare Workers, they run all over the place. So we're very excited to see, I think, Wasm find another place where it's useful.
I kind of mention Wasm as little as possible in any of it, because unfortunately it carries with it some baggage that doesn't need to kind of live on in this space, but.
John: One of the things that I think, you know, probably rightly they left out of the protocol was authentication and authorization.
Yeah, it'd probably be too much to put into that initially, at least. But the gears kind of started rolling for me, where I was like, okay, what would this look like at bigger and bigger scales?
Do you imagine that eventually there'll start to be components or, like, plugins to, you know, MCP or something that are like AI service accounts, or something cloud scale, where, you know, these things can actually have some true authorization and authentication to be able to, you know, do different things?
You know, I think you're solving for kind of the low level memory and actually on the piece of hardware, you know, not being able to like read files or whatever from the Wasm runtime.
But you know, for being able to say like, no, I actually don't want this AI to reach into the bank data.
You know, the workout AI shouldn't be able to reach into that. Only the one with the service account for, you know, doing my finances for me that AI should have access to that.
How do you envision that future?
Steve: Absolutely.
I mean, I think there will be a substantial effort from kind of the existing auth companies, you think about like Okta and Auth0, to put work into applying that kind of role-based access control onto an agent as an entity that is deployed into your cloud just like any other program is.
So yes, a service account for the thing, and apply it to the whole, you know, whatever the spec is that is deployed into Kubernetes, for example.
But I think that like, I can only give you kind of where I've thought most around this and for me, I think that MCP has the largest opportunity for non-technical users.
I think, like, it's an interesting wave the developers are kind of being hit with, this cool new protocol, we love protocols. But because it is this containerization of an integration or of some utility that can just be leveraged by anybody, I think it's more interesting to someone who can't write code, or doesn't want to write code, or writing code is not a function of their role.
They just want to get work done. And for them, what we've tried to put in place is a notion of like a containment group around a set of tools.
And so if you think about an application that's going to load these tools because it wants to read data from Notion, and then it wants to summarize that or generate new content and push it up to WordPress or send it to Slack or kick off some other kind of workflow.
Each individual user, you know, shouldn't be responsible for like maintaining credentials and identity.
So we've kind of rolled these tools up into what we call a profile, which is like, you know, an organization of tools that are specifically useful for a particular task at hand.
And that profile carries with it the authentication and credentials for all of those tools that it needs access to.
So it's less about like, which resources can this use inside my cloud?
More of what do I want to give this profile access to, to go and talk to the real world so that my, you know, marketing team can just start using those tools pre-authenticated before they have to like go to Notion and find the API key and copy and paste it into, you know, some configuration page.
So it's different because it's a different paradigm. It's not just a resource in your cloud, it is this amorphous collection of tools and capabilities that now can be shared across teams and departments in your company.
And how do you control that while making it very easy for that group of people to just like pick it up and start working with it.
So yeah, we're working on a bunch of stuff. MCP does now have OAuth support in its spec, but it's largely more to, like, kick off an auth flow from something like: I've installed the tool, now Claude Desktop wants to use it, it realizes it does not have access on authentication errors, and it's going to initialize a sequence to run through, like, the OAuth flow for that particular API or endpoint.
John: Okay, so more on the MCP server side.
Steve: Exactly. But yeah, there's still a lot up in the air. It's an exciting time to be building in this space right now 'cause you're literally at like HTTP 0.9 of like the lifespan of the protocol.
So yeah, more people should get involved and come check it out and join the discussion.
Brian: Yeah. Is there a community for MCP now? Like I've looked through mcp.run, there's definitely a lot of good examples there, but like is there like a Discord where the Claude folks are like delivering the stone tablets?
Steve: There's a behind the scenes one for people who are working in kind of the working groups. And that I think will be made public sooner than later.
But for people who want to get involved the best place to go is the Model Context Protocol Organization on GitHub. And look at the discussions that are going on there.
'Cause that's really the entry point for everything. Everything gets posted there and starts there. So yeah, it's totally open.
I think Anthropic's done a really good job of kind of stewarding this protocol into the open and trying to make it an everybody thing, not an Anthropic thing. And I think it's worked out really well so far.
Brian: Yeah, do you think Anthropic's winning in the AI arms race?
Steve: You know, I do. I think that they have some of the best models. I think that they have maybe the most kind of pure intentions of any of the others.
Not to name any of the other ones, but yeah, I trust them. But I do think there's, you know, a tension between the developer community and any of the foundation, the large model foundation labs, because they have to eat into the application layer.
There's no way they can remain low level infrastructure just because of the revenue pressure they have based on their financing and the way the companies are set up.
So yeah, I think there's this uneasy tension that is yet to really explode, but, you know, it's percolating.
Brian: Yeah. Excellent. Yeah, I feel like I've got so much more I want to ask, but I think we got to give ourselves like a few months to see like when 1.0 hits for MCP. And then see how much more of this adoption happens.
Because I think, like, congrats on being in the right place at the right time, to hang out with the Anthropic team and work with MCP, and give developers something that they could get their hands on sooner.
I was early days GraphQL, and I remember being at the first GraphQL Summit before 1.0. And then GitHub was like one of the first like biggest enterprises to implement it outside of Facebook.
And even today, like it's a bit of a mess because like no one really kind of talked about tooling and et cetera about GraphQL.
And like I say that because that's the one I experience firsthand in my career. But yeah, I think there's like a lot of things to like about the current state of MCP.
Steve: Yeah, I mean we have plenty of time ahead of us, you know, then there's lots of things that will still be figured out. But yeah, it's early and it's a fun time to build.
I actually want to get your opinion on GraphQL really quickly. 'Cause I think that there should be more done with GraphQL at MCP.
Predominantly because, like, the reason GraphQL was created was client-side selection and restriction of the payload that is responded with.
And as context windows, you know, are precious and tokens are precious, the results from these tool calls start to eat into those context windows.
So the ability to refine and restrict the response that GraphQL provides kind of out of the box, I think should be something we look at more closely as a feature for tool call responses.
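A sketch of what that could look like: a tool handler that sends a GraphQL query so only the fields the model needs come back, sparing the context window. The endpoint and schema are hypothetical:

```typescript
// Ask for just two fields per issue instead of entire issue objects.
async function searchIssuesTrimmed(repo: string): Promise<unknown> {
  const query = `
    query ($repo: String!) {
      issues(repo: $repo, first: 5) {
        title  # only these two fields are returned,
        url    # instead of the full payload
      }
    }`;

  const res = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { repo } }),
  });
  return res.json();
}
```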
Brian: Yeah, and I think it's like, so there's the GraphQL Foundation now, and so there's an outlet to start driving some of this interest.
The challenge though is there's a lot less enterprise support and buy-in on things like GraphQL now. And it's mainly just that the state of the world has shifted fast into new avenues.
So 100%, like, if anybody from The Guild is listening, who are like big consultants in the GraphQL space and help support and maintain a ton of open source, I'd love to know if those folks are eyeing MCP and what they're doing in AI.
I haven't really paid attention to like what Yuri... Yuri's been on a previous podcast of mine. So maybe I got to catch up with Yuri and bring them on the podcast to see what they're doing in the space.
But 100% with you, there needs to be more buy-in. I think Shopify is still heavily involved and invested in GraphQL, so perhaps they will build an MCP server if they haven't already, and make it GraphQL-heavy.
Steve: Yeah, absolutely. If anybody in the GraphQL world wants to chat, just slide in my DMs.
Brian: Excellent, well speaking of DMs, well actually I don't know how to segue after that awkwardness, but we want to transition to Read.
So these are things, articles, that John and I have been reading in the past couple weeks. And question for you, Steve: are you ready to read?
Steve: I'm ready to read.
Brian: Excellent, so my first pick, which I'm excited to share, which is, I wasn't even aware of this, but apparently TypeScript's going to be rewritten in Go.
So if you're a Wasm fan and you don't like TypeScript, turns out it's going to be underwritten by Go.
And they've actually cited performance improvements coming as soon as November of 2025.
So I'm curious, Steve or John, do you guys have a comment or have you... John, you've been following the story?
John: Yeah, I think importantly it's a port, I guess maybe not quite a rewrite. Like they're not going to be exactly like rewriting, you know, github.com/microsoft/typescript or whatever.
Which is important because, you know, trying to piecemeal things out that would be like part of the compiler, where you have a little bit of Go and slowly that Go starts to just eat up more and more and more of it until the rewrite is complete, is very different from, like, ground up, we're rebuilding the compiler as, I guess, you know, its own thing. I think Go is an excellent choice for this.
You know, I've been on Go Teams and you can move really fast on stuff like this without having to deal with some of the low level crap that you get in Rust and you know, even lower level and like pointers in memory with C and all that stuff.
But I'm a little surprised they didn't pick something like Zig, which seems to be kind of the opinionated, like we're building a language, it's going to be, you know, Zig, which has some of the like tight matching and like enums and you know, some of the things that I think are a little nicer.
I read a book, now I think I maybe made it halfway through, it's called "Writing a C Compiler," and it was really gnarly, like it'd been forever since I looked at any of this stuff like my compiler class at school.
And you know, the person in it basically recommended OCaml. Because I think she was saying that like a lot of the just things in OCaml just make it so much easier to do the like, you know, big switch type matching on like big statements and stuff, which you don't really get in C or C like languages and even in Go kind of is a mess.
So I'm really curious how they're going to actually approach this. But I would love Steve's hot takes, 'cause you know, he's deep in Rust and Wasm land.
Steve: Well, when I originally scanned this read, I thought that they were talking about building a runtime in Go, and, you know, not like Node.js or any kind of JavaScript engine that exists today, but to actually execute the TypeScript natively in a runtime written in Go.
But they're talking about the compiler.
John: The TSC compiler. Yeah.
Steve: So that makes more sense. You know, aren't there other TypeScript compilers, or does everybody shell out to TSC?
Like, but I mean esbuild is a good example of something that's like crazy fast that works well for JavaScript and TypeScript. So I think like Go's a great language.
I would actually love to see an alternative runtime built in Go where TypeScript gets to actually leverage some of Go's native capabilities.
So like, if I could actually spawn goroutines as, you know, I'm air-quoting, proper threads inside of TypeScript, and not have, like, child process or whatever the JavaScript APIs are for kind of launching threads, again, air quotes, it would be very cool.
But hey, good luck on the rewrite. Those are always fun and I hope they ship it. 'Cause that sounds like a great idea.
John: I mean one of the things I don't know, like it's hard for me not to just be like why? Like there's Deno at this point.
Like you kind of called it out, there's a bunch of these that sort of end up being, you know, do they just shell out to TSC? And granted, I'm not in this ecosystem enough to probably have an opinion.
So again, people please don't blame me, but why not take something off the shelf is kind of my next thing. Why not try to unify the community more?
Like, I don't understand why the front end and TypeScript ecosystems seem to want more of these frameworks that just keep kind of fracturing things and fracturing things and fracturing things, when, you know, in ecosystems I've been a part of, like Rust and Go, you know, if there's an open source library for it, people are like, yeah, take that off the shelf.
We are not doing our own thing. And maybe it's just in some of these ecosystems, right?
Steve: Yeah. Well Deno was originally implemented in Go, like the very first version of it, but it still used like a V8 binding.
I think for the Go fans out there, if you have to touch CGo usually you're an unhappy person.
And so I think that affected their decision, and I think they were just passing, like, Protobufs across the process boundary.
John: Yeah, that's exactly how it goes. It's awful. You have a channel open that's just sending all this garbage back and forth and you're just like, oh gosh, yeah, it's not fun.
Steve: Hey, maybe we'll get a new framework out of it. Who knows.
Brian: Yeah, I mean, I'm excited to see progress, and the TypeScript team's huge. And I end up running into the same TypeScript team every time I go to a GitHub Universe event.
So it'd be nice to catch up with them next fall and be like, hey, how's that life going? Though I doubt they're working on this exact problem.
But I know GitHub's been doing a lot of rewriting in Go as of recent, some of their, like, Actions runtimes.
So Go is still strong and tried and true, and it's a choice out there. And the reason Zig wasn't the choice is probably the maturity.
It's like how anybody who's buying into Zig right now is like investing as an early adopter and investing in building an ecosystem.
And I don't think Microsoft is interested in that. Like, they love open source.
That's what it says on the front of the box, but I don't know if they're at the point where they're looking to build out the next Zig conf or whatever.
Like someone else is going to have to pick up the pieces there.
Steve: I was just going to say, I think Go actually is very well tuned for this. Like, their own tokenizer and parser inside the Go compiler is, obviously, written in Go.
So you don't need quite the pattern-matching magic of lots of other languages that might look like a better choice to really build a high-quality, mature parser and tokenizer.
I would love to see this, actually, because it's in Go, also compiled to WebAssembly. It could be the last, you know, TSC implementation that needs to be done. Like, my Ruby app that for some reason needs to compile TypeScript code can do so, leveraging the Go version of TSC compiled and run as Wasm inside the Ruby process.
So I think there's some more WebAssembly to be had if they do rewrite this in Go which I'd love to see.
John: That was exactly what I was going to say, is that the standard library in Go is incredible for all the different utilities and different things they keep bringing into it.
Like recently introducing generics I think was huge. And honestly could have been a part of, you know, why they're picking Go because that was probably a big missing piece in some of this, but you know.
Go is also... It was built at Google because, you know, and a lot of people say that this is a criticism of Go, it's not, they built it at Google because they wanted to bootstrap people really, really quickly to be able to build huge-scale services at Google.
So it's a Google-ish like language that looks like C, and they didn't want people writing C++ anymore.
You know, and a lot of the people who were there building it, like Ken Thompson and Russ Cox, were like, we want people who, you know, don't have a PhD in languages and don't have, you know, like a crazy vast understanding of a bunch of this stuff to be able to build these huge-scale services, which people would say is a criticism.
It's like, well it's kind of a dumb person language, right? And no, it just means that you can be really, really efficient in a huge team and you can scale to Google's engineering type of culture really easily using Go.
So that's probably why this huge TypeScript team wants to do this rewrite, it's an efficient language.
Hot take, hot take.
Brian: It is. Speaking of hot takes, John, you got a read for us?
John: I got a real hot one today. So this is "Why Layoffs Don't Work."
And this was something that I saw on Hacker News, it had, you know, a couple hundred upvotes and a bunch of people waxing poetic in the comments, going back and forth.
But I thought it was a great read. I actually have a friend who works at Southwest, and you know, at the beginning of the article they talk about how, you know, in 2001 when there was 9/11 and the .com bubble burst, a bunch of the airlines did these huge layoffs, but Southwest was one of the only ones that didn't do these layoffs.
And it was Herb Kelleher at Southwest, the CEO at the time, who said that nothing kills your company culture like layoffs.
You know, this thing goes on to talk about, and it's kind of more an opinion piece than anything, but talk about how, you know, since the 1980s layoffs have just like, kept increasing, kept increasing, kept increasing.
And stock prices for a lot of these companies that end up doing layoffs, you know, seem to drop and there'd be some sort of correlation there.
Is it correlation? Is it causation? You know, maybe that's up to the reader, but there's been a lot of layoffs in tech and I felt like it was a very, very good read for kind of where we're at right now.
But, you know, I like to believe that layoffs maybe aren't as efficient and effective as, you know, big tech CEOs and billionaires would like us to believe.
I don't know, I'm still waiting to see the tech market wrap back around, I guess. What do you think about this, Steve?
Steve: You know, without reading the full article, I feel bad answering, but I think like layoffs are always hard, right? It's not something that any company really truthfully wants to do.
I think there are maybe some, you know, motivations from executive teams to show improvement across different dimensions.
Some of those are cost savings, especially when you have pressure from Wall Street and stakeholders that depend on number go up effectively to put it in too few words.
So yeah, I mean there's hard choices that are made all the time and I think it's up to the management team to figure out how to most gracefully implement something like that.
And I don't think they're ever implemented extremely well. And you know, I don't think anybody ever really wants to do them.
I think there's forcing functions that make them do or die, but it's oftentimes a failure of the management team who's overhired or allowed overhiring to happen.
And truthfully, I think that most people who are employed in positions and companies they want to work for, really want to do the work. And if they're not enabled to do that work, if they're not given the tools or the environment to do that work, that's a failure of management.
And so usually I look at those stories and say, well, it's really not the employee's fault, it's probably the company and the direction they were moving in.
John: Yeah, that was probably the biggest criticism in the "Hacker News," comments was that a lot of these companies whose stock prices were going down after layoffs anyways, ended up going bankrupt like Kmart, Sears, et cetera, et cetera.
So it was kind of more probably a reflection of just the failing business in general and just like that the market sort of, you know, washing them away essentially.
But yeah. Brian, what do you think?
Brian: Yeah, so I think listeners know, like, I have a finance degree, I graduated in 2008, which was like during the recession times as well, so.
And I also studied the Southwest case study, and Jack Welch, if you did business management. Like, you've got to read a Jack Welch book. Like, it's like required.
So I understand, like, the economics of the situation. But also, like in the more recent layoffs, like with Meta, Facebook, they have these performance-type layoffs, which is like, hey, you could leave, like, here, take the money and leave, take your severance.
But then there's also a way to do it when there's, like, performance, where I read articles about Netflix, around how every engineering team's built like an NBA squad, and it'd be like, you're not performing, it's time for you to start seeking employment elsewhere.
So like there are ways to approach this and like get ahead and I think when you're managed... Like when the company's managed well, like you can get ahead of this and be like, hey...
Actually as a manager at my last managing role at GitHub, like it was always said, hey, if you get a bonus, congratulations. Like you did a great job. If you get equity as part of your bonus, it means we want you to stick around for four years.
So like, if you're getting your review and you're like, oh wow, I've got a new equity grant, and it's going to vest over four years, well, that's because you did a great job and they don't want to lose you.
And like those are kind of the signals that are set in place and like the OKRs and like the review cycles that you can kind of see the writing on the wall.
And I'm a bit of a person who loves reading the writing on the wall and kind of understanding the situation, the economics.
'Cause I hate being blindsided about things. So you can kind of, like I've shared this story with you John, but like I was at a startup and we had like six months of runway and I was like, okay, cool, that means I'll be interviewing.
Because there was no real success that was coming out of the six months. It was more of like, these things are not working.
Six months is not like... we got six months. So I knew layoffs would happen. So I ended up getting a new job, which kind of set me up for the rest of my career.
So I love reading business books and I love these case studies and like history sometimes repeats itself and like Jack Welch is an amazing person, but also Jack Welch is Jack Welch, so like, and Elon's Elon.
So like, I'm not going to put Elon on my wall, not putting Jack Welch on my wall. But it's good to know like historical reasons and why things happen so that way as a manager you can do better.
John: I think one of the things that, I don't know, I feel like my brain's been breaking a lot frequently just with like things happening and just not computing with like what, you know, maybe I personally believe in stuff.
But Brian, you also know that I'm a huge fan of Cal Newport, and a lot of his writing and sort of his ideas around productivity and I just don't understand how quarter by quarter by quarter you could really weigh and determine the value of somebody in knowledge work, you know, like software engineering.
Because it seems like, really, to deliver incredible value takes, like, years, you know, sort of, I feel like, you know, beyond, like, I'm changing the color on a button or something.
And you know, Steve, like, working on a WebAssembly compiler, I'm imagining that was not something that just happened within the span of a year even.
Like that was, I even remember looking at WebAssembly stuff when I was at Pivotal on Cloud Foundry.
And thinking about how could we like shift the VM paradigm onto WebAssembly eight years ago, right?
Steve: Yeah, that particular compiler was actually done within a year. We have a phenomenal team and a shout out to Eduardo on our team who joined us recently.
Brian: Oh wow.
Steve: And yeah, you know, because we have a great setup.
We're very engineering oriented, and I think this goes back to like, it's the management's job to set up the employees for success. You need the right kind of planning in place. You need to see opportunity and exactly know what to execute and how, and how to bring in the right people in small enough teams to give them the leverage they need to get that job done.
And so, like to your point about how do you evaluate quarter by quarter, like the evaluation needs to start like six months prior. And by the time you get to evaluating at the actual quarter, like you already need to know what should have been done, how it should have been done, who is, you know, being provided to and who the value is accruing to.
So that you can have those kinds of accurate comparisons and benchmarks, but it's not an easy thing to do, especially for these very, very large companies that are moving in all kinds of different directions. So yeah, it's hard. Always hard.
Brian: Yeah, it is. I had a great dinner with the CEO of HackerOne, and he was talking about, like, yeah, 'cause he ran MySQL and ran another company that was very similar to Kubernetes.
I forgot what it was called, but it was like Kubernetes, or one of these other 10 things. And he was one of the 10 things.
And he had a great story about how he has sort of navigated that, and I would love to have him on the podcast to talk about that story, because he changed my thinking when managing folks, and, like, believing in people is 100%. Like, if you can believe in people and give them the right tools and let them work,
It's amazing like what you can accomplish that way. But with that said, I do want to go ahead and close out this podcast 'cause we're the top of the hour.
And Steve, I just want to thank you for coming on and chatting about MCP. I'm very excited about like what Dylibso is doing and also what you sort of stumbled into at mcp.run.
Folks, if you haven't typed in mcp.run, that is an actual URL, check it out. And start giving feedback, and jump into the MCP discussions as well.
And with that, listeners, stay ready.