
Ep. #8, Bridging Software & Hardware with Daniel Mangum of Golioth
In episode 8 of Open Source Ready, Brian and John sit down with Daniel Mangum, CTO of Golioth, to discuss his journey from distributed systems and Kubernetes to open source hardware. They explore the rise of RISC-V, the potential of decentralized social media, and how AI is shaping the future of computing. Plus, Daniel shares insights into FPGAs, the AT Protocol, and why open source innovation matters more than ever.
Daniel Mangum is the CTO of Golioth and a leading voice in open source infrastructure, distributed systems, and hardware innovation. With a background in Kubernetes, cloud-native technology, and open-source hardware like RISC-V and FPGAs, Daniel is passionate about bridging the gap between software and hardware. His work explores the future of decentralized computing, AI, and embedded systems.
transcript
John McBride: Welcome back to the Open Source Ready podcast. I'm here again with Brian. How are you doing?
Brian Douglas: I'm fantastic. You know, I just had a coffee downtown and it's actually raining out here, so it's a bit of a mess. But other than that, inside it's not raining, so, enjoying that.
John: Nice. I got my classic Liquid Death Sparkling Water, not a beer, although it looks like a beer.
Brian: Not a sponsor either.
John: Not a sponsor yet. Liquid Death reach out to us. But today, I'm very, very excited.
We're here with Dan Mangum, who is the CTO at Golioth.io and has a long history in the Kubernetes ecosystem, and is now a hardware hacker extraordinaire.
Very excited to have you on, Dan, how are you doing?
Daniel Mangum: I'm doing good, I'm excited to be here. I've listened to y'all's back catalog, which I think was six episodes or so, so it was fun catching up on that and yeah, excited to be here.
John: Yeah, we're building the backlog of episodes. We've had Adam Jacob on, and a couple of other people. What's been your biggest takeaway from the backlog so far?
Daniel: You know, my biggest takeaway is that most of your guests are probably more interested in open source licenses than I am, but that is a very high bar. I am also interested in open-source licenses, but maybe not to their degree. So we'll see if I can match their intensity.
John: Incredible, I love it. Well, I've known you for a very long time and it's been awesome to see your incredible rise in these various technology realms. I gave you a bit of an intro, but I'd love if you gave our audience kind of some things that you're excited about, what you're doing these days and give us the overview.
Daniel: Yeah, absolutely. I studied computer science and have kind of always been interested in computing.
Very much started out in kind of like the theoretical realm, which then I think took me sort of towards distributed systems and you know, we intersected in an internship in college and I think from there, both kind of ended up going into the Kubernetes world.
I kind of did it a little bit through the infrastructure as code path, I think. And there was a bit of overlap with the first startup that I worked for, they were building an open-source project called Crossplane.
It's a CNCF project that basically did infrastructure as code, but modeled everything as Kubernetes resources. So it was a nice kind of merging there of interest for me and spent some time in that space, started working in upstream Kubernetes, had a lot of fun.
What a great community. I mean, just incredible group of people and it's always incredible in any large software project to see the coordination, everything that goes into releases and things like that. So had a ton of fun with it.
That whole time though, I've always been someone who really struggles with just accepting abstractions. I have an appreciation for abstractions, absolutely, and what they enable us to do. But I find myself having to understand what's below the abstraction even if I'm not reproducing it.
So you know, despite having kind of this theoretical background, I ventured further and further down the stack and I really think the thing that got me into processor architecture and getting into hardware and that sort of thing was FPGAs.
So I kind of discovered FPGAs kind of towards the end of college and started to play around with them. And this was just like a magical world to me because all of a sudden I could take all of my knowledge of software and then reprogram hardware. And I started learning a lot.
I'd attribute kind of the rise of RISC-V, around the same time that I was in college and the couple of years after, as being a huge educational accelerant for me. So I kind of feel like I started on one side of like doing distributed systems software stuff and then got into processor design and then like met in the middle at hardware, sort of.
So I went like silicon before I went PCB, if you will. But yeah, ventured down that path and then eventually decided that I wanted my role to be a little more oriented towards hardware. So then I joined Golioth, which is where I'm at now.
And I've been there for a couple of years and now working with a very full-stack team. We do a hardware, software, firmware, everything, so it's been a lot of fun.
Brian: What's an FPGA, for folks who are not familiar?
Daniel: Absolutely, it's a Field Programmable Gate Array, which basically is, you know, with hardware you have a bunch of transistors and you can imagine modeling that with SRAM, basically.
And so you create a hardware design, typically using a hardware description language, an HDL, like Verilog or SystemVerilog, or there's lots of new alternatives as well. You can do a bunch of stuff in Scala and higher level languages now, and that gets translated through a variety of steps.
It gets synthesized into what's called a bitstream, and then that basically gets loaded onto the FPGA. And then you can actually write software that runs on the hardware that you've designed, but, as the name implies, you can reprogram the hardware in that case. So you can try out what you'd typically call a softcore, which would basically be a software-defined processor.
And so you can learn a lot and iterate on things really quickly. A trade-off is that with that reprogrammability, you do sacrifice some performance as opposed to an ASIC, an application-specific integrated circuit. And so there's kind of a spectrum of use cases where FPGAs would be applicable.
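For readers wanting a mental model of what a bitstream actually configures: an FPGA's logic fabric is largely made of small SRAM-backed lookup tables, and "reprogramming the hardware" amounts to rewriting those tables. The toy Python below illustrates only that idea; the LUT4 class is invented for this sketch and has nothing to do with real FPGA tooling.

```python
# Toy illustration of an FPGA-style lookup table (LUT): the "hardware"
# behavior is just SRAM contents indexed by the input bits, so loading a
# new bitstream amounts to rewriting the table. Not real tooling.

class LUT4:
    """A 4-input lookup table: 16 configuration bits define any 4-input logic function."""

    def __init__(self, config_bits):
        assert len(config_bits) == 16
        self.config = config_bits  # this is what a bitstream would program

    def evaluate(self, a, b, c, d):
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.config[index]

# Configure the LUT as a 4-input AND gate...
and_gate = LUT4([0] * 15 + [1])
print(and_gate.evaluate(1, 1, 1, 1))  # 1

# ...then "reprogram the hardware" as a 4-input XOR by swapping the table.
xor_gate = LUT4([bin(i).count("1") % 2 for i in range(16)])
print(xor_gate.evaluate(1, 0, 1, 1))  # 1 (odd number of set inputs)
```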
John: You mentioned the rise of RISC-V, kind of, you know, definitely in the open-source ecosystem, but also being impactful for your own personal learning in this.
Why don't you give people a kind of an overview of what that is, what RISC-V is and yeah, the impact that it has had 'cause I think for a long time there were these existing hardware players in the ecosystem, right?
Intel, ARM, AMD, et cetera, making chipsets, and now, we have new players playing with these things. Why don't you give us an overview of what that is?
Daniel: Yeah, so RISC-V is the 5th RISC, if you will, as obviously, there's been many more than that. But I think probably late '60s, early '70s, I might have my decades skewed a little bit there, there is kind of this debate going on and you'll hear about it, you know, in various contexts of RISC versus CISC.
So just defining that RISC is a reduced instruction set computer and CISC is a complex instruction set computer and basically, it's exactly what it sounds like. A RISC instruction set, which is like the fundamental operations that the processor can perform. It has very simple operations.
So things like, you know, add, subtract, multiply, divide, load, store from memory, those are the things that a RISC instruction set would do. And then CISC is more complex, so it has higher level instructions.
So you know, you give your software from your high-level language to a compiler and, through a number of steps, it eventually gets translated into machine code. And that machine code basically adheres to whatever the instruction set specification is.
So you know, early on, when you were writing Assembly by hand, which is kind of like the lowest level, high-level code where you're actually writing code that mirrors the instructions that the processor supports.
If you're writing that code by hand, it makes sense to have complex instructions. It's like having robust libraries for your favorite language today. But as we got more and more capable compilers then you know, you started to think about, hey, is it actually necessary to have these complex instructions and what is the performance difference between having really simple instructions that can be executed in a predictable number of cycles and can be targeted by a compiler versus more complex ones.
And so this was a debate that kind of went back and forth. Also there's, you have to remember, there's like the physical implementation of these things.
So there's lots of factors that can improve performance, but basically, if you can retire more simple instructions faster than you can execute a more complex instruction, then you can get more performance out of it.
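As a loose illustration of the trade-off being described, and not modeled on any real instruction set, the Python sketch below contrasts one "complex" memory-to-memory add with the equivalent handful of "simple" load/add/store instructions that a compiler can emit and a pipeline can retire predictably.

```python
# Toy comparison of a complex memory-to-memory ADD versus the equivalent
# sequence of simple load/add/store instructions. Purely illustrative;
# not based on any real ISA.

memory = {0x10: 7, 0x14: 35}
registers = {"r1": 0, "r2": 0}

def cisc_add_mem(dst_addr, src_addr):
    # One "complex" instruction: two memory reads and a memory write in a single op.
    memory[dst_addr] += memory[src_addr]

def risc_add_mem(dst_addr, src_addr):
    # The same work as four "simple" instructions, each doing one predictable thing.
    registers["r1"] = memory[dst_addr]                   # LOAD  r1, [dst]
    registers["r2"] = memory[src_addr]                   # LOAD  r2, [src]
    registers["r1"] = registers["r1"] + registers["r2"]  # ADD   r1, r1, r2
    memory[dst_addr] = registers["r1"]                   # STORE [dst], r1

risc_add_mem(0x10, 0x14)
print(memory[0x10])  # 42: same result, just expressed as simpler steps
```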
There's also lots of implications on density and things like that. So anyway, this was this huge debate and there's apparently like very famous public debates that happened between, for instance David Patterson who is kind of known as the creator of RISC, if you will.
He is a professor at Berkeley and he's been very involved in RISC-V now. And Pat Gelsinger who was your boss's boss's boss at one point, I think John, at VMware.
John: Good old VMware.
Daniel: Yeah, exactly. And then obviously, he had worked at Intel and just left Intel, but apparently, there were some very heated debates about RISC versus CISC back then.
So as you might imagine, x86 is a CISC or complex instruction set architecture, which some folks have some opinions about how that maybe has influenced its success long-term.
ARM for example, is a RISC. And so it started out in lower power context and some folks would attribute some of the lower complexity of the instruction set to its ability to perform well in those settings. But there's lots of caveats about 'em.
So anyway, RISC-V was basically, there were some grad students at Berkeley who were working with David Patterson in, I would say it's the early 2000s, I think, or maybe mid-2000s. So this kind of happened over a period of time into the 2010s.
And they wanted to build on RISC I, II, III, IV, that had been, you know, back in the '70s and '80s, I believe. And so they started RISC-V and the idea is that it would be an open source instruction set architecture.
So what does that mean? Someone could argue that like all instruction set architectures are open-source, right? It's an API effectively, right? And RISC-V is just the specification, it's not an implementation of the instruction set.
So what does it mean that it's open-source? Well, it means that you don't have to license it. Intel and AMD basically have the two licenses to make x86 processors.
ARM has a very different model where they don't produce processors at all, they just either license out their instruction set to someone like Apple and Apple goes and designs it.
And the benefit you get from that is that there's software support for that interface, 'cause the instruction set is the interface between software and hardware.
And so because that software support exists, Apple can, you know, produce their M series of processors for example, have them be really performant, but already have software that supports it and compilers and things of that nature.
And then they also will license IP. So if you're maybe not as big a company as Apple, you might take some IP off the shelf from ARM, piece it together kind of into a processor, a system-on-chip and then go that route as well, once again, having the same support for the software.
So from that point anyway, RISC-V, you don't have to license it. So basically, it's enabled a lot of implementations of open-source processors.
So if you go on GitHub, you can look at something like Verilog, which we were talking about earlier with FPGAs and you can see how a processor is actually implemented and see an implementation, the hardware side of that instruction set.
And then there's also been a lot of work on the software side to enable RISC-V for compilers and things of that nature. And being able to kind of see behind that instruction set is why I felt like personally it was really valuable for me because I could understand, you know, hey, this is why this instruction is implemented this way 'cause it has these implications on the hardware side that is usually opaque to me.
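To make "open instruction set" concrete: the RV32I encodings are published, so anyone can write tooling against them without a license. The snippet below is a small teaching sketch, not a real assembler, that encodes a single ADDI instruction from the publicly documented I-type format.

```python
# Minimal encoder for the RV32I ADDI instruction (I-type format), written
# straight from the openly published RISC-V spec. Teaching sketch only.

def encode_addi(rd, rs1, imm):
    """Encode `addi rd, rs1, imm` as a 32-bit instruction word."""
    assert -2048 <= imm <= 2047, "I-type immediates are 12-bit signed values"
    opcode = 0b0010011           # OP-IMM major opcode
    funct3 = 0b000               # ADDI
    imm12 = imm & 0xFFF          # two's complement, truncated to 12 bits
    return (imm12 << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode

# `addi x5, x0, 42` -- load the constant 42 into register x5 (t0).
print(hex(encode_addi(rd=5, rs1=0, imm=42)))  # 0x2a00293
```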
So that's kind of my journey with it. And it's continued to have a rise to some degree. It's still mostly popular in like the microcontroller context or in supporting processors.
So for example, I believe there's some RISC-V processors in like Chromebooks and things of that nature and supporting processors on like server racks and things.
But many folks prognosticate that it's going to be, you know, take over the world effectively and we're all going to have RISC-V servers everywhere and all data centers are going to be full of them.
We'll see where it goes, I think that'll take a while, but it's definitely, in my opinion, been a great enabler of innovation.
John: Yeah, that's actually amazing context. 'Cause I ran into this on a silly little side project where there was a meme at some point where somebody was trying to iterate like a billion billion ints or something in a programming language and see how many if statements you could have on that.
And I was trying to do it in Go and obviously it was hitting limitations. So the next step was like drop in a little bit of assembly, you know, and I'm trying to do that on an M4 MacBook and I'm like, where are the assembly instructions for this architecture?
And I quickly realized that Apple does not want you to do that just on your own. They want you to go through special Xcode sets of things and it's not as open as I was hoping it would be.
So it sounds like that's primarily a licensing thing or why wouldn't Apple want me to do this?
Daniel: So I would expect in that case that they may not provide great documentation, that sort of thing. I'm sure you probably can execute it 'cause like your Go binary for example is getting compiled down to assembly. There might be some issues with like permissions and things of that nature on Apple because-
John: It was mostly that I gave up 'cause I couldn't find the instructions. It wasn't that it wouldn't work, it was that, I couldn't find the right jumps and adds and all that stuff in Assembly.
Daniel: Sure, sure. And you know, to some extent, like they want a different interface than Assembly to their operating system and their machines, which is a choice, so.
John: Yeah, that's great. So shifting gears a little bit here, you had an awesome post recently about hosting a website on Bluesky, specifically their AT protocol.
I'm curious if you could give us kind of an overview of what that was and maybe at a very broad, philosophical overview, give us an idea of, you know, where you see distributed social media going, why this protocol is important and what you see the future of all that as?
Daniel: Yeah, so I think that I would trace that blog post back to this series of talks I gave at KubeCon alongside some other folks called Registries After Dark.
So I spent a lot of time working with OCI registries or container image registries in my Kubernetes and adjacent field time. And one of the things that I kind of like to do is boil things down to like what they fundamentally are and what that means you can do with them.
So for example, in these Registries After Dark presentations, we would do things like turn a container registry into a chat server by having the image that you pull down actually connect back to the registry and push itself back with the new message history.
So whenever you pulled it, you always had the latest flow. And we did another thing where we modeled a registry as a CPU itself and tried to like execute instructions on it and that sort of thing.
John: Wow.
Daniel: It was all quite silly.
John: That's wild. How fast would something like that be?
Daniel: Oh, extremely slow because you know, every instruction was a network call.
John: Yeah, love that.
Daniel: But we had some toy programs that sort of worked.
John: Nice.
Brian: The classic, you asked if you could do it, not if you should.
Daniel: Right, right, exactly. And I think that's the same thing with the kind of Bluesky situation here is I'm, you know, hopelessly addicted to social media, although in a very strange way, like I only love text-based social media, I don't have like an Instagram.
I've certainly never been on TikTok so I didn't get to see John's rise to stardom there. But I do love text-based social media and I love blogs and things of that nature.
So I was very intrigued by Bluesky and interested in the underlying protocol. And when I started looking at the AT protocol, it seemed a lot like a container registry to me. So I thought, well, you know, I know you can do lots of interesting things on a container registry. Surely they're not actually allowing, and this is now talking more about Bluesky rather than the AT protocol specifically, surely they're not allowing me to just upload arbitrary content, right?
You could kind of argue that any social media platform is allowing you to do something similar, but the way that they store it and let you access it kind of forces you down different channels; you're not going to be able to, for instance, host what this blog was on Twitter. But it turns out they are allowing it, and that's part of kind of like enabling the innovation.
They're allowing you to kind of put content and address it on a personal data server. And so there's some restrictions around that, some checks around content types and things of that nature, but I was able to circumvent them and effectively just upload arbitrary HTML and then that could be served in the browser because you know, it's served at a URL.
And so typically you'd upload, you know, text or images or things of that nature. So this was basically just saying, I'm going to take that same path, but use my own content types and then just point folks directly to the PDS, the personal data server as opposed to the Bluesky application, which I believe would be considered an app view.
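The rough shape of that trick, reconstructed loosely from the description above rather than from Daniel's actual code, looks something like the sketch below: authenticate against a PDS, then push bytes through the com.atproto.repo.uploadBlob XRPC endpoint with your own content type. The handle, app password, and PDS URL are placeholders, and a real PDS may restrict content types or garbage-collect blobs that aren't referenced by a record.

```python
# Hedged sketch: uploading an HTML page as a blob to an AT protocol PDS.
# Endpoint names come from the public AT protocol XRPC lexicons; credentials
# and the PDS URL are placeholders. Real PDSs may reject this content type
# or garbage-collect blobs that no record references.
import requests

PDS = "https://bsky.social"                       # placeholder PDS
HANDLE, APP_PASSWORD = "you.example.com", "app-password-here"

# 1. Create a session to get an access token and our DID.
session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={"identifier": HANDLE, "password": APP_PASSWORD},
).json()

# 2. Upload raw HTML as a blob, declaring our own content type.
html = b"<!doctype html><html><body><h1>Hello from a PDS</h1></body></html>"
blob = requests.post(
    f"{PDS}/xrpc/com.atproto.repo.uploadBlob",
    data=html,
    headers={
        "Content-Type": "text/html",
        "Authorization": f"Bearer {session['accessJwt']}",
    },
).json()["blob"]

# 3. The blob is now addressable by DID + CID, e.g. via com.atproto.sync.getBlob.
cid = blob["ref"]["$link"]
print(f"{PDS}/xrpc/com.atproto.sync.getBlob?did={session['did']}&cid={cid}")
```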
The AT proto terminology has already gotten a little fuzzy in my brain at this point. But anyway, so I did that, posted a blog about how you could do that.
Actually before posting it, I did reach out to the Bluesky team, not because this was a security vulnerability per se, but it was definitely an abuse vector.
And you know, being in a place where I've implemented container registries and implemented services that allow you to upload arbitrary content and in some cases, execute programs remotely and that sort of thing.
Very aware of some of the implications of that. So I just wanted to let them know, hey, I figured this could get a little bit of run on Hacker News. I don't want to, you know, put you in a bad spot. I think what you're doing is interesting.
And they were like, no, this is, you know, what this is all about. Which I really appreciated, that kind of response. And so yeah, like I said, I posted it and I think some folks have gone and done some more interesting things with it.
And a lot of people have actually done more formalized versions, basically, of what I did. I know there's kind of like a blogging service that's built on the AT protocol and a bunch of other things. So this was just the hackiest version of it.
John: Yeah, it was fascinating to me 'cause it seemed like the opening of the floodgates almost on what was possible with this protocol and people realizing like, oh, hosting arbitrary content, sure. What can we do with that to build a better messaging service or build some sort of micro-blog platform, I did see that as well.
But that was also my first thought when I saw it pop up on Hacker News I was like, oh, this sounds like a vulnerability, but I think that spin on like using this protocol in the way that it can serve arbitrary content is really powerful, right?
Daniel: Yeah, absolutely. And yeah, I think that's kind of like my philosophy. Like you can just put bytes places, you know? Like, and when that's the case, basically, anything is open to you. So they'll have to do more work to, you know, not have obscene bills in the future, I'm sure.
Although I know they've already made some decisions with their infrastructure in terms of what they're putting in the cloud versus self-hosting, that sort of thing, which should help with things like egress data charges, because I know, you all have obviously worked on cloud services and the egress data is the tax that we all pay for not running our own servers, so.
John: Exactly, exactly.
Daniel: Yeah, it seems like they're thinking about that, so.
John: Yeah, so where do you see it, you know, these sort of open protocols maybe could almost liken this to kind of a RISC-V where it's, you know, open in the sense that people can host arbitrary content or start building their own platforms on top of AT protocol.
You know, Bluesky really is just kind of the face of this thing that in theory, could just be additional personal data servers. Is that what PDS is? It reminds me of sort of the beginning days of Mastodon, but like maybe a little more sane or like easy to kind of grok and think about, right?
Daniel: Yeah, you know, I don't know exactly where it will go. Social media is just a wild thing, right? So I don't know where that side of it will go.
I do think there will continue to be interesting experiments on it. It will be interesting to see... Well, let me take a tangible example.
So my favorite social media, which folks who are listening to this are going to say that's not a social media site, but my favorite social media site is Strava. I don't know if you all have ever used Strava, but effectively, for folks who aren't familiar--
Brian: I'm very familiar, I know where you're going with this. I've heard this pitch from other people before. And I'm still not on Strava.
Daniel: Great, great. So if you never use Strava, it's basically you go and you upload your run or your workout, or whatever your activity to Strava. And when you go to Strava, it has all these great stats, you know, about how you've progressed over time.
It even has AI just like everything now and it gives very core summaries of your workouts and things like that. But effectively what it is, is the people that you follow, you see their activities all in a chronological feed. I'm sure they have ways you can tune that, but that's the default.
And there's only two things that you can do outside of uploading your own activities on Strava. You can give kudos, which is a like, or you can comment.
And this is the hack, they have cracked the code on making the most positive social media experience that you will ever see because I mean, I'm sure it's some of, you know, the folks that I surround myself with to some degree, but like it's basically just a feed of people telling each other they're awesome.
Now, there's not a lot of discussion or anything like that on it, but it is the most uplifting social media experience. You know, if you have a terrible race or something like that and you upload it, everyone's like, you know what, you got out there and you're still incredible and all that kind of thing.
So great social media. So I'm interested in a more open version of that. Strava recently made some changes that were pretty detrimental to a lot of developers that build platforms on top of Strava or the Strava API, I should really say.
So there's coaching services that will pull data from Strava, or advanced, you know, statistical visualization services that will pull out your data and that sort of thing.
And it's not really that Strava has anything special, it's just that all of like the wearables and trackers and things like that all sync to Strava, which means that no matter what equipment that you have, you can get your data on Strava and then other services can get the data from Strava.
So it's basically this meeting point of this activity data. And so it was made very apparent a few months ago, when they changed the terms on their API, that that can easily be taken away, and that can really damage an entire ecosystem.
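For context on what those third-party tools are doing under the hood, the core of it is usually just pulling an athlete's activity feed from Strava's REST API, roughly as in the sketch below. The access token is a placeholder; getting one requires Strava's OAuth flow, which is omitted here.

```python
# Rough sketch of how a third-party service pulls activity data from the
# Strava API. The token is a placeholder; obtaining a real one requires
# Strava's OAuth authorization flow, which is omitted here.
import requests

ACCESS_TOKEN = "replace-with-oauth-access-token"

resp = requests.get(
    "https://www.strava.com/api/v3/athlete/activities",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"per_page": 30, "page": 1},
)
resp.raise_for_status()

for activity in resp.json():
    # Each activity includes fields like name, distance (meters), and moving_time (seconds).
    print(activity["name"], activity["distance"], activity["moving_time"])
```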
So this feels to me like a perfect example of something that would be great to have distributed. Also, there's the added complexity of, if you're uploading activities, this is literally tracking your person going around the world, right?
Like there's lots of privacy concerns around that. And so actually owning that data yourself and potentially hosting it yourself could be really valuable. All the caveats around, like, people don't want to host their own stuff unless they're like, you know, us and people listening to this podcast, but the larger world doesn't want to do that.
So I'm interested in it for that, but I think it's so convenient to just, you know, rely on a venture-backed company that is going to be willing to burn money and take away all of your pain, right?
And so I'm very bullish on the ideas and for the most part technical design of AT proto and what the Bluesky team is doing, but I'm not as bullish on humanity, unfortunately.
So that's kind of the limiter in my mind on it. I think, you know, technically, sure, we can make it work and it'll be fun.
Brian: So like I'm a Twitter user or X user I guess, I'm not sure what to call it, but like since the early 2010s, so like pretty early. When I first was learning how to code, I got super excited about like plugging into the Twitter API, Netflix API, like all these APIs were just accessible as like a brand new developer; today, no longer accessible.
So like I wanted to go back to Bluesky and like you were hosting websites on this stuff, but like, do you feel like Bluesky has a chance?
I think they're VC-backed as well, but like with this open protocol, like with this API with like being able to build stuff and like you can now see analytics, you can do all the stuff we used to do on Twitter, you can now do on Bluesky.
So like it's sort of like, see if you had any interest there or excitement around this open ecosystem, I guess.
Daniel: Yeah, I think for the most part I haven't found things built on top of the social aspects of Bluesky, super compelling. But I think that's more of like a personal thing rather than a market evaluation if you will.
I think you know, it ultimately all comes down to can they build a sustainable business? And I promise you all I'm not getting paid by Strava, but I will say a lot of people pay money, I pay money every year to have advanced features on Strava.
And it makes a lot of sense if you're very interested in activity because they just basically have a ton of data, right? And they can deliver value on top of that.
So for the kind of like subscription, there's not like a bunch of alternatives to go to instead, as opposed to with, you know, text, video, images, where the cheaper one kind of wins. It's sort of a race to the bottom there.
So I think the most important thing unfortunately for them is like, how are they going to make money? Which actually, this feels like analogous to the open-source sustainability conversations you all typically have on this podcast. So maybe we're coming full circle here. I think you all tricked me into this actually, but-
John: We did, you've been tricked.
Daniel: No, but from a, just like posting text and seeing what folks are hacking on on the weekends, which is like what my social media consumption looks like, I don't know exactly what value they could provide to me, I guess.
I'm very fine with an RSS feed and blog posts, so I'm probably not their target market. That being said, if you just told me that like the base service to get that was, you know, a couple dollars a month, I think I'd probably pay for that.
I definitely would've paid for Twitter. I never did pay for Twitter, but I definitely would have at a point in time where I felt like it was providing a lot of value. So I don't know if that's an indicator, but at least with the architecture and distributed nature of it, they have more vectors to explore for monetization.
Brian: Yeah, and I feel like, so like my Twitter usage, like pre-Elon, well, actually like maybe like 2018, I ended up like starting to follow more like authors and blogosphere people because I was a heavy Google Reader user, like RSS, that's how I consumed my media, and that world kind of shifted on me a little bit.
So then Twitter became the place of like, oh, if I just want to find articles from like the Verge and like specific authors on the Verge, like that's the only beat I want to look at is this tech and like what's happening for developers, like I got that.
And I got excited about Bluesky in the last couple weeks because I found like their starter pack, which had like all the authors that I wanted to follow, so I didn't have to get another RSS feed reader, 'cause I've always picked them up and then dropped them because it was complicated.
And like, they've also got to figure out how to make money. I don't know, I feel like there's a world where Bluesky does have an awesome opportunity, but then I also wonder if like, okay, well, when's the other shoe going to drop?
So like here's an example, I don't know if you've used the latest X, but they have Grok, and Grok is like basically, take every tweet and you'd be like, oh, what's happening? So unfortunately there was a plane that crashed in DC, I was like, hey, what happened there?
And you'll catch up and you'll see the tweets and you kind of like get through the noise, which is not easy. Grok's always good at that, though you can always get the jokes and the weird meme stuff in there as well.
But at least you can get quick context of like, hey, why is this trending? And that was like, basically that's the answer is like Grok, you could find out why things are trending, could be for the wrong reasons.
But I wonder with Bluesky, if there's like a way where they could also, like, you could opt in your data to be part of like the Bluesky Borg to get your same Grok experience, and maybe you get a bonus, like stuff will show up in the algorithm because it's good.
And I don't know, maybe I'm pitching the wrong person, but I should be pitching the Bluesky team.
John: I mean, they got to call it Skynet, right?
Brian: It was just right there too.
John: Easy, easy.
Daniel: Yeah. I feel like, you know, for better or worse, I very much enjoy when I post something and it gets a lot of traction online. I enjoy the dopamine hit, you know, I'll say it, but early with Bluesky, I didn't really feel like that was possible.
I was still enjoying it because, honestly, I was reconnecting with a bunch of people that I used to interact with on Twitter, and that had a lot of value to me personally. But I wasn't getting that high of, you know, having the post trend or whatever, you know?
To be very clear for all listeners, I have a very, very small social media following. So we're talking about like, when I say trending, I mean, like, you know, someone who's not my best friend liked it as well.
But I do feel like more recently, with some of the feeds and that sort of thing, there's this possibility for an individual on Bluesky of creating something that like escapes your orbit, I feel like.
And if they can get that down, I feel like it can have the same sort of adoption and maybe it is the custom feeds, maybe it's getting, you know, pulled in to some sort of like synthesis of what's going on, right?
A la the Grok kind of comparison where, I actually haven't used that, but I would assume that it like maybe references, pulls in specific tweets and it's like, hey, these kind of compose the narrative of what's happening here.
Something like that I feel like could be interesting. I feel like I'm more interested in it from like the supply side than the demand side maybe, which I'm not sure how common that is.
I do very much identify with like looking for the Verge articles though, or a big one for me is, I really like sports, but I don't have a lot of time to watch them in my life these days.
And so like the NBA, I'm mostly an NBA fan going and like watching the highlights and things like that, having those curated really well, which that doesn't mean pull from like the NBA account.
It means like, you know, go and pull from like these random Twitter accounts that somehow are NBA thought leaders and piece them together and find that, I think that's an opportunity and maybe something more niche that the distributed nature of Bluesky could enable.
Brian: I mean, with Bluesky and their sort of open protocol sounds like a weekend project. And I'm also, I feel like NBA Twitter is like very different than it used to be.
Daniel: Yeah, it's a bummer.
Brian: Yeah, it's very different. Like if you want to find out what's happening in the trade rumors and who's going to be in the All-Star game, but also then you have this random OF girl also commenting on there, it's like, okay, what's going on here?
Daniel: Yeah, I miss some of the discourse that used to happen. It felt analogous to some of like tech discourse where it's just like very entertaining and you're excited to see what tomorrow will bring. So maybe we can return to that.
Brian: Yeah, nostalgia.
John: Yeah, I just need a Bluesky feed that'll just give me Jokic things. You know, him making silly ol' shots at the two-point line or whatever.
Daniel: Yeah. I forgot that you're in Denver so you get the Nuggets fandom.
John: I know, I'm sorry.
Daniel: I'm jealous.
Brian: It must be nice to have a team that's actually winning games, so.
John: It is nice. Anyways, Dan, we're going to move on to the next thing. Are you ready to read?
Daniel: I'm so ready.
John: All right, why don't you kick us off, Dan with your read this week.
Daniel: Yeah, absolutely. So I really love a Substack from, I'm not sure if this pronunciation is correct, but it's Lcamtuf, L-C-A-M-T-U-F.
This person is, you know, kind of niche-notable, I guess, in the spheres that I run in, but just writes really good posts at a pretty high frequency. And one that came out this week was called "PCBs, ground planes and you."
And it basically talks about the evolution of trends in circuit board design and specifically, it focuses in on copper pours, which I think is maybe something if you're getting into PCB design that feels a little bit confusing.
Because you look at a PCB, you obviously have like components and vias and things like that and you see routing between them and you're like, all right, sure, I'm making an electrical connection between, you know, this pin and this pin, and it's pretty straightforward.
But then you look and you see this huge pour of copper on there and you're like, what is that doing, you know? Like why do I need to do that? And that's actually kind of the approach that I feel like they take in this post.
So there's a good explanation of... It's very short as well. I don't know if it has like a, you know, however many minutes read on here, but it's on the order of like five.
And it basically explains why copper pours kind of came in vogue, which feels like a funny thing to say about PCB design, but it talks about how they work and why they're used frequently and why we may not need to use them in some cases as well. So I thought that was a great quick hitter.
John: Very nice. At one point I was getting into doing small breadboards for guitar pedals. Because you can implement like a distortion pedal or a fuzz pretty easily.
And I remember people saying like, oh, in a few years, we're going to have our in-home 3D printers for PCBs and everyone's going to be printing their own guitar pedals and these own little things.
Where is it, Dan? Where's my Home 3D printer for PCBs?
Daniel: Well, I don't know if you've ever gotten a PCB made, but this goes back to like, the convenience always wins kind of thing.
But you can get a PCB printed and delivered to your front door in days for, you know, tens of dollars, like a run of a handful of PCBs. And so like why would you even want it in your house?
It's kind of like the... Now, I say that as general humanity, I very much would like it in my house, right? I'm sure you would also like it in your house, but I think that's generally the way it goes.
I will say that there is, I think it's somewhat open source, it's called Opulo, it's a pick and place machine that you could have in your house. So it's kind of like 3D printer form factor.
So maybe for folks who aren't in this, pick and place is basically like you have your PCB with the pads for putting components down on it, but putting your resistors down and all of your surface mount components is quite laborious.
And if you're using a soldering iron, it's going to take you a long time if it's, you know, of any complexity. So basically a pick and place machine is a robot effectively that does that for you.
It looks a lot like a 3D printer. So there is some advancement that way, but I think that's why it hasn't come to fruition yet.
John: Wow, man, I had no idea. I always thought it was a very expensive endeavor, but I guess it makes sense 'cause like I go to conferences and I see people with like PCB badges.
I'm like, oh, wouldn't that be like 100 bucks or something? But, I guess I wouldn't know.
Daniel: Yeah, I mean, the components are, well, it depends on the badge, you know, like you have a display and that's probably, you know, depending on its quality could drive some of the costs, but yeah, no, it's very cheap.
In fact, actually, I want to say it was Lcamtuf, I'm almost certain that I'm pronouncing that incorrectly, but it's also impossible to pronounce.
So I'm pretty sure they also had a post about like getting into PCB design from a while back. That's really good. If people are interested, maybe we can drop that in the show notes as well.
John: Yeah, we'll definitely grab that for the show notes. Let's go on to Brian, what did you have for reads this week?
Brian: Yeah, it's a book that I read actually last month. And I've been doing this thing where I'm trying to read a fictional and a nonfictional book each week and like really just use paper, like get in a coffee shop, read a book.
So I read this book "Datapreneurs," I don't think I've actually said that word out loud. Yeah, Datapreneurs, which is a book from Bob Muglia who's a former CEO of Snowflake. So another name I haven't said out loud, so apologies Bob.
But yeah, so basically, there's a lot of historical record on just sort of data infrastructure, MySQL, Postgres, and that kind of helped. Like I actually didn't live through any of that stuff, so I just sort of piecemeal learned about that, and then had a book where I just got to catch up.
And then it gets to modern times, to like what Snowflake is today, how they sort of pivoted to become the behemoth that they are today.
And I just enjoyed the read 'cause I felt like it was kind of a catch-up on what's happening in that space, since I don't work in data and I didn't work in databases, so highly recommend, it's a short read.
And then there was like, I did work at GitHub, so there was like a short ending of a chapter, I think chapter eight where it mentions GitHub and why Microsoft acquired it in... Surprise! Data.
So yeah, it makes a ton of sense, I mean, I lived through that. So highly recommend the book for anybody to, if you're interested in catching up with databases.
Daniel: I can't remember where I was hearing about this, but it was a podcast where someone was talking to one of the early investors in Snowflake. I'm fairly certain it was Snowflake and it was kind of a large investment given their status at the time.
And the reason why they invested is because they had kind of like a back office for the firm. I think it was a hedge fund rather than a VC.
And they had some engineers in their back office and basically like were doing some data analysis or reorganization, or something, right? And they used Snowflake for it and they were like, oh my goodness.
Like this engineer came to the, you know, the hedge fund manager and was like, this is incredible. Like I promise you this is like going to change the game and everything. And I actually haven't used Snowflake, but I've heard it's a game changer.
I don't know, actually, that's an interesting question. What did they kind of say in the book that was maybe the differentiator for Snowflake?
Brian: Yeah, it is basically like lowering the bar of entry. So I'm conflating the book with my personal conversations with people, but like, things like dbt and like giving someone a language to interact with the database, so it's not just SQL, like that changed the game.
So then now it's like the democratization of databases was like the beginning of the book. Like everyone could do data, everyone could do Web 2.0 and now what they've set themselves up to is like now everyone's got data, what do you do with it type of deal?
So the data science and now AI and ML, is like where it's driving the future of Snowflake.
Daniel: That's really interesting. I feel like I haven't, like I said, I don't have a lot of experience with Snowflake, but the interface being the differentiator is kind of interesting for a data product, right?
Because like for the most part you think like we have standardization around SQL or you know, like as long as it scales, I'm okay with getting data in and out however is necessary.
But yeah, that's really interesting. Especially, I feel like, if your kind of like target user is more of like a data lake user than, you know, a transactional database user or something like that, then the interface becomes even more important 'cause they're maybe not a developer.
John: That last part is huge where you might have the business stakeholders interfacing with the data, product managers, obviously, the developers, AI consumers now with like agents slurping up data and stuff, it's a lot of players, you know, across kind of a flat plane.
I think another piece with Snowflake and I too, am probably also conflating personal conversations with, you know, what I know from Snowflake in the past, the scale, the scalability is huge.
Where maybe, you know, like at OpenSauced we had some Postgres databases sitting around and we more or less scaled those and you know, that's fine, I can do that.
But you know, if we had hit any kind of hyperscale or started to go out to bigger and bigger enterprise customers or something, we definitely would've needed a solution that probably would've taken care of that for us, 'cause I don't think anybody wants to be an expert in scaling databases or managing redundancy across database clusters or something.
Just let Snowflake or some other company en masse, handle your data that way.
Brian: Yeah, it's full circle to like getting your PCBs delivered versus like, hey, I've got everything set up in my basement, come down here, we'll make you a board. It's just going to be a couple hours.
Daniel: Yeah, yeah, for sure.
John: My read this week is a bit out there, but you know, I've been thinking about it all week, so it's worth chatting about. It's something called Roko's, I also don't know if I'm pronouncing that right, "Roko's basilisk," which is a sort of mind virus thought experiment.
So listener be warned, if you don't want to hear about this, now's your chance to turn it off. But what the thought experiment is, is around what could happen with some future super AI intelligence and framing it in the sort of prisoner's dilemma thought experiment and around game mechanics.
So really into kind of the philosophy of it, but the thought experiment basically goes that some super, mega AI intelligence in the future is going to put everybody into one of two buckets.
Either you assisted in its creation in the past and therefore you can go on living your normal life and continuing to serve it, basically. Or you somehow didn't assist in the creation of this super intelligence.
So then you have to be tortured forever, essentially. And it's crazy because, you know, it breaks down like the really, I guess the game mechanics of like, am I using that word right? The-
Daniel: Game theory?
John: The game theory, exactly the game theory of how a super intelligence would arrive at that conclusion.
And the mind virus part of it is basically that, now that all of us know about this thought experiment, we are now sort of knowledgeable about not having helped create the future basilisk, so we're automatically in the bucket of being tortured.
So now we should dedicate our lives to building this super intelligence now so we can land in the other one. I sound a little crazy talking about it, but it's worth a read, it's very interesting.
Daniel: Is the author, I mean, I'm not asking you to ascribe ethics to them necessarily, but is this a thought experiment to them or is it an actual recommendation of like, you know, you should start thinking about this and orienting your life around not being tortured for eternity?
John: That's a great question. No, it's more on the philosophy side. It's not actually like, oh, we should start living this way.
The person who talks about it a lot, and I also don't pronounce his name correctly, is Eliezer Yudkowsky, who is very big in the, like, AI safety space, essentially.
You know, we can think through this thought experiment almost like an AI alignment problem where, you know, this could be sort of the direction that something like this would head if we didn't do anything to create AI super alignment so that, you know, it's not going to try to put everybody into two buckets based on just game theory around how something like this could go.
So yeah, definitely not like somebody's trying to start a new religion, but more like a warning using this thought experiment as a framework for how AI super alignment could go terribly wrong.
Daniel: This brings to mind, and this may totally hijack the podcast, so maybe you all can cut it later if you want, but I mean, it's almost a crime that we haven't talked about DeepSeek yet, right?
I mean, I know we've all just been holding it back this whole time and we can't be the only podcast this week that doesn't spend, you know, 45 minutes on it. We certainly don't have to, this isn't my podcast.
But I'm curious, you know, that just brought it to mind. I'm curious what y'all's takes on it. I know you both kind of have had experience leveraging models and, you know, building on top of them and that sort of thing.
I'm curious what y'all's thoughts were around it?
John: That's a good question. We recently had a guest on where, I think we were just briefly mentioning it, and I think it was right around the like, market crash with Nvidia. It's very good.
My hot take the last few days is that it's just been down. Like I can't even get on it. And obviously, there's the open-source models and if I wanted, I could go run the 600 billion plus parameter model with enough hardware.
But I'm so curious what's going on behind the scenes with their service if there's like legitimate problems, you know, on their status page they're saying that there's like a mass malicious attack happening.
So like there could be nation state actors involved at this point trying to bring the thing down, trying to like disrupt it as much as possible. We are living in crazy times. That's what I'll say.
Daniel: How about you Brian?
Brian: Yeah, I mean, I've been running it locally, so I think I logged in once and then I'm like, oh, China, hey, how you doing? I mean, just to test it, but then I'm like, oh, let me just go down the rabbit hole of like, oh, Llama and a bunch of other stuff out there.
So I've been like building applications with it. It's like just as good as anything else. And I'm not like, I'm not a scientist or anything like that, so I'm not like, oh man, look at this. And based on the parameters here, I'm just like, Hey, I'm a developer, it gave me a response.
So I think it's good enough. I think the open source angle of it is like, it's more of like, can we do it or should we do it? It's done, like someone's made this available, and the question of like, does OpenAI need $500 billion from the US government to continue to train models and expand its land grab?
Sure, but I'm going to go use the free thing. And now that I've been exposed to the free thing, I'm now using Llama 3.2 and testing that locally.
So on this sort of like advancement of what's happening in the future and like accessible to AI and like should we use AI? I think that's being now broadened.
So now, I'm not a DevOps professional or a database administrator, a data scientist or anything like that. I'm just a developer that knows JavaScript and I'm throwing together some cool little side projects.
John: It's funny you bring that up 'cause that post on the basilisk and what a lot of AI safety proponents would say is that much like a snake in a box, these things are very, very hard to put back.
So you're right, like it's out there, it's done. There's no putting that back in Pandora's Box. One could maybe argue that Meta was the first one to kind of, you know, start opening up the box of open-source AI.
I'm personally a huge proponent of it 'cause I think like in the open is where things seem to have the most innovation possible. You know, if I had to put on like philosophical AI safety John hat on, you know, it's a little scary that these things just continue to get more and more powerful.
I don't know, where's the safety consideration with these things? But my mind's just been on the basilisks, so that's where I go. What about you, Dan? What do you think about R1?
Daniel: You know--
I have a deep insecurity as an AI user in that I don't find it compelling, and I don't like this about myself. I don't want to be contrarian. That's not my goal. I want to like it, I want to like it more than I do, but I just don't.
I don't know what it is, I feel like really everyone I encounter seems to be raving about the capabilities, which I agree are impressive. I feel like there's some disconnect. But I'm going to put that to the side for a moment.
I am very interested in the models from a technical perspective and I thought the technique, so they did like the whole mixture of experts and kind of like took it to another degree, and you know, that was allegedly inspired by their memory bandwidth constraints 'cause they were using the H800s instead of the H100s. I thought that was interesting.
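For listeners unfamiliar with the term, the routing idea behind mixture of experts can be sketched in a few lines: a small gate scores a set of expert networks for each token and only the top-scoring few actually run. The NumPy toy below shows only that routing idea; it is not DeepSeek's architecture, and every size and weight in it is arbitrary.

```python
# Toy top-k mixture-of-experts routing: a gate scores every expert for a
# token, only the top-k experts run, and their outputs are blended by the
# normalized gate weights. Illustrative only; all sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

token = rng.normal(size=d_model)                          # one token's hidden state
gate_w = rng.normal(size=(d_model, n_experts))            # gating projection
experts = rng.normal(size=(n_experts, d_model, d_model))  # one weight matrix per "expert"

scores = token @ gate_w                  # score each expert for this token
chosen = np.argsort(scores)[-top_k:]     # indices of the top-k experts
weights = np.exp(scores[chosen])
weights /= weights.sum()                 # softmax over the chosen experts only

# Only the chosen experts do any work; the rest are skipped entirely.
output = sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))
print(output.shape)  # (16,)
```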
John: Supposedly.
Daniel: Yeah, supposedly. Right, right, no comments from me on geopolitical trade here.
John: Oh good, good.
Daniel: I really found the compression of the caching and the key-value store aspect of it, though, very interesting, and kind of like weirdly obvious, but it also had a lot of ingenuity behind it, I guess.
John: There's been some crazy commentary I've seen as well, because these people are basically quant traders, right? Like writing C and C++, so of course, to do, you know, a trade half a millisecond faster than somebody else, you'd need that level of optimization.
Part of me has wondered, you know, 'cause I've been writing more and more Python for AI agents and all this stuff and it feels like you're flying, like you're just writing Python.
But like I know under the hood it's just wildly unoptimized and it's just got to be wasting all these resources.
Daniel: Yeah.
John: So Dan, like how much more can we squeeze out of these things with better optimizations and software from the hardware?
Daniel: Yeah, I'm certainly not an expert on model development. I will say, some folks have refuted that they're just a bunch of quant traders, and that they actually are a more traditional kind of like, you know, group of AI scientists and that sort of thing that may have spun out. But regardless, lots of interesting optimizations.
The other big one that folks are pointing to is them using PTX, which is like basically, the assembly language under CUDA. And so that was interesting as well.
I don't think that we are going to be limited by hardware or software in this development. I think that it's possible that...
I actually don't understand the other components as much, the actual reasoning portion of it and the kind of like compression of knowledge, which is like effectively what I feel like these models are doing.
So I don't know what the limits of that are. I imagine that it's going to have to fragment into, which I guess is kind of like the mixture of experts approach anyway, it's kind of like a veneer over a bunch of microservices, if you will.
But I imagine that, you know, in order to continue getting better on certain domains, we'll have more specialized models. That's not rooted in any sort of technical analysis, that's just observational.
So that's kind of where I think it will go. The thing that this, I guess this is sort of going back to the Meta conversation about AI, the thing that I think is somewhat fundamental, so like, let's assume we're not going to speed run like the end of the world here, which I don't know for sure that we're not, but like, there's really no use in thinking that's going to happen. So let's put that to the side for a moment.
I think that people are just way more interested in holding people accountable than we think that they will be. And so if AI gets, you know, sufficiently advanced that, like, we all live in a utopia where everything is done by robots that are, you know, leveraging incredible models and that sort of thing, we're still going to want someone to blame when something goes wrong. So who is that going to be? I promise you, we will figure out someone to blame, right?
Maybe it'll be the big companies, right? Maybe it will be the individual lawyer who like, you know, summarizes a case that didn't exist with the AI.
That is going to be the limiter in my mind. Humanity is going to want someone to be angry at when things go wrong or when there's a tragedy, you know, or anything like that.
And so to me, I'm like, I'm not sure if there is a limit on how sophisticated these models will become, but eventually, someone wants a human to get upset with.
And maybe that's small-minded of me to believe that like we have that fundamental need, but seems to be consistent throughout history so far.
Brian: Yeah, it's funny 'cause like I keep avoiding doing this pick 'cause I have a couple chapters left in this Asimov book, "I, Robot." But like literally that's the book. It's like, whose fault is it?
And yeah, it's fascinating because you hit it right on the head. Like, we're waiting for the other shoe to drop in the AI race, and like we're going to figure out if it is a large legal case that is thrown out because of some hallucination or, you know, hopefully that's as bad as it gets.
But there's probably other things that could be happening.
Daniel: Yeah, it'll be fascinating. But yeah, like I said, I'm primarily interested in the optimizations that go into the models and maybe eventually, I'm going to keep reading y'all's posts and eventually you're going to convince me that I need to spend more time with these chatbots, but I'm just not quite there yet.
Brian: Yeah, that's a good segue to say like and subscribe.
Daniel: Yeah, right.
John: Exactly. I do understand what you mean, you know, feeling like sometimes it just, it does feel a little bad sometimes, you know, like where I feel like I'm getting kind of a sum average of just a lot of things that doesn't always get me to maybe the level of excellence that I would want for like a personal project or software, or something I'm writing.
But I've found for like, you know, just very generally, like knowledge discovery, it's really nice. Like sometimes I could just ask it about like, what's like a vague approach to something, even at a pseudo code level for like technical things.
And again, it can get me there part of the way, you know, that sum average is enough to start. But I do struggle with like the more in-depth, ingrained tools that kind of take over.
I've actually been sitting on a blog post that I've been writing kind of on like AI product design and how I'm seeing this kind of fracture and shift where there's like the copilot method, which copilot as a word just gets so overused it seems today.
But really, the traditional sense where it's like you're still in the cockpit, you're still driving the car, but you know, there's somebody alongside of you that can help and assist and guide as needed.
And then the other side of it is like just totally taking over and that was kind of the ChatGPT experience that was prevalent in a lot of products where it's just like, here's a chat interface, it's just going to do a bunch of stuff.
It's very hard for you to take control or for you to have, like, your hands on the knobs to actually steer where this thing is headed. The two sort of different examples that I give are: Bolt.new just takes over completely and feels a little obnoxious, honestly.
But then Supabase's AI product feels very nice 'cause it's just like a sole sidebar, and you can ask it to do things, but you still have full control within the whole interface, right?
So that maybe has been kind of something I've been musing on, around just how these things feel as you use them, that like user product interface, right?
Daniel: Yeah, I think that's really interesting. And you know, I guess a lot of people assume that at some point the models will become so sophisticated that like that relationship will flip.
But right now it is very much like it's most valuable when it enables a human to do more, right? So yeah, I'm not sure what the inflection point is, but everyone seems to believe there is one. So I guess we'll see.
Some people believe we've already hit it, but it hasn't been there for me. I will say that, you know, in the context of like my day job, we had a big AI launch actually last summer and it was not a generative AI launch necessarily though.
So there are lots of applicability. I will say that like, I guess it's really just the chatbots that I'm not compelled by. But one example is, you know, we had a customer who was sending images of wildlife up through our product, basically, we are a...
I don't mean to be pitching my day job now, but we're a device platform. So basically connected devices usually over cellular. So the bandwidth constrained devices, basically send data to the cloud via us.
Anyway, we had a customer who was taking pictures of wildlife 'cause they were basically preventing the wildlife from getting on like runways and roads and things like that. So protecting the wildlife and also protecting humans who were leveraging that infrastructure.
So one of the cool examples was we have like a data processing product and pulling in, I think we were using Replicate, but also OpenAI and a variety of others to do like image interpretation.
That was incredibly useful 'cause that's not something that you can, you know, write a program to do, right? That's like something that is, I guess that is generative to some degree there.
But those sorts of use cases, I'm like, oh, okay, yeah, this is really valuable. Now you couldn't take that as gospel truth, but changing the data medium, that feels really compelling to me.
And I guess that's kind of like Sora or something like that where you're going from text to video. That could be another example, that feels magical to me.
But that's not the thing where people are saying like, oh, this is going to govern us now, so.
John: It's true, yeah. Well, we'll definitely have to have you on for part two where we continue on our discussion of AI/ML.
Brian: Yeah, the indoctrination of AI to Dan.
Daniel: Yeah, right, right. Have me on and you have to get me to being a ChatGPT fan by the end.
Brian: And then also, we will get you your first TikTok on live too as well.
Daniel: Yeah, if it's still here, you know?
John: People want to know about PCBs, man. Yeah, it'd be good stuff.
Daniel: Someone actually told me once that I should post more soldering content and I was like, that can't be true. The algorithm loves it.
John: I'm sure there's a niche for it. I'm sure, people just love to watch anybody do anything.
Brian: Soldering and ASMR at the same time, I think that would work.
John: Well, you heard it here first, folks, check out Dan's soldering ASMR post. But remember everybody, stay ready and we'll see you on the next one.