
Ep. #30, Possibilities with Ty Dunn of Continue
In episode 30 of Generationship, Rachel welcomes Ty Dunn to explore his journey and insights as the Co-founder and CEO of Continue, a company focused on creating AI tools to amplify developers rather than automate them out of their roles. Ty unpacks his background, the motivations behind founding Continue, the trajectory of AI-driven software development, and the philosophy of open source technology.
Ty Dunn is the Co-founder and CEO of Continue, where he’s on a mission to amplify developers through innovative AI tools. Previously a product leader at Rasa, Ty has deep expertise in conversational AI and machine learning, which he leverages to build tools that empower creativity and collaboration in software development.
Transcript
Rachel Chalmers: Today I am thrilled to welcome Tyler Dunn to the podcast. Ty is the co-founder and CEO of Continue.dev.
While studying the intersection of language and computation at the University of Michigan, Ty built dialogue management systems as a software engineer.
Motivated to make them leverage machine learning more, he grew from first product manager to group PM at Rasa, whose open source ML framework for conversational interfaces had millions of downloads, 17,000 stars on GitHub, and was used by about 10% of the Fortune 500.
Ty is now on a mission to ensure developers are amplified, not automated. Thank you so much for coming on the show.
Ty Dunn: Thank you for having me, Rachel. Super excited.
Rachel: Can you tell us the story of how you and Nate were inspired to start Continue?
Ty: Yeah, definitely. Really appreciate the nice introduction.
Like you mentioned, prior to this I worked as the first product manager at a startup called Rasa, and while we were there, we were actually doing research with OpenAI on how to use large language models, in an era prior to the invention of something called reinforcement learning from human feedback, or RLHF.
So basically the thing that allowed ChatGPT to happen in November 2022 didn't exist in 2020 or 2021. Using these models was actually quite tricky, so we couldn't take these large language models and put them into production talking to customers. But our thesis was that we might be able to fine-tune them to act like our users, a user simulator almost, and have that talk to our supervised machine learning chatbot, so that we didn't have to create bad customer experiences by deploying and iterating on it in production.
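To make that concrete, here's a minimal sketch of the user-simulator idea, with hypothetical stubs standing in for the fine-tuned LLM and the supervised chatbot; it's an illustration, not Rasa's actual implementation:

```python
# Illustrative sketch only, not Rasa's actual code: a fine-tuned LLM plays the
# customer, the supervised chatbot under test plays the assistant, and the
# resulting dialogue is reviewed offline instead of in front of real customers.

def simulator_reply(history):
    """Hypothetical stub for the fine-tuned LLM acting as a simulated user."""
    return "Hi, I'd like to change my delivery address."

def chatbot_reply(history):
    """Hypothetical stub for the supervised ML chatbot being tested."""
    return "Sure, what's the new address?"

def run_simulated_dialogue(turns=3):
    """Alternate simulator and chatbot turns, collecting the transcript."""
    history = []
    for _ in range(turns):
        history.append(("user", simulator_reply(history)))
        history.append(("bot", chatbot_reply(history)))
    return history

# Review the generated transcript offline to spot bad experiences early.
for speaker, message in run_simulated_dialogue():
    print(f"{speaker}: {message}")
```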
Of course, we would eventually do that, but the idea was that at first we could develop it with large language models. And as we were working on that, we got to know some of the OpenAI folks, and at this time Codex, the model that originally powered GitHub Copilot, came out.
It was kind of an access program only. Since I knew some folks there, I asked, "Can I get access? Can some of our developers get access?" So I got access for about five of our developers, and we started playing with it, and it was really impressive how you could generate... At the time it was only Python code, but we were generating action code for our Python SDK.
And so given my experience in conversational interfaces, given my experience with LLMs in general, and specifically using this model that would eventually power GitHub Copilot, I started to really think about large language models as general text manipulation tools, and that kind of thinking really got me started.
And then in parallel, my co-founder, Nate, was really using GitHub Copilot. Maybe this is a little bit after what I just described, and he was really using it as we were working on side projects together, and he got me to use it. But he couldn't use it at NASA, where he was working at the time, because of privacy and security concerns.
But when we were using it on our side projects together, what we often found was that it gave us suggestions that we thought could be better, right? We were like, "Hey, if it uses a new library, it should know to import the dependency at the top of the file, right? If you're going to use something, you need to make sure that you bring it into that file." That was the core frustration, I think, that motivated us.
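As a toy illustration of that frustration (the endpoint and function here are made up), the suggestion they wanted is one where the completion brings its dependency along with it:

```python
import requests  # the import a good suggestion adds along with the body

def fetch_user(user_id):
    """The kind of snippet an assistant might complete. Without the import
    above, the suggested body would fail with a NameError at runtime."""
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()
```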
So we kind of had these backgrounds where we were in the information flow, right? We were actively exploring this technology, aware of what it could and couldn't do, and seeing it used by ourselves and by other people, along with that core frustration of, "Hey, I wish I could do something about bad suggestions or wrong suggestions."
And that kind of inspired us to eventually work on side projects in the direction of Continue, and the last side project we worked on was the one that became Continue.
Rachel: Very cool, and that trajectory kind of sums up for me what the language models are good at and what they're less good at.
You know, that they're not especially good at writing accurate legal briefs with real citations, but they are very good at suggesting possible pieces of Python code that you could incorporate into your product.
That they're good at taking lots of unstructured data and turning it into a little bit of structured data with human intervention and guidance. And the trajectory of your projects has gone from things they're less good at to things they're more good at, which is pretty cool.
Ty: Yeah, it is. And you know, I mean, it's only been a few years, so I'm super excited, right?
We're so early in the maturation of this technology. I'm excited about what comes in the next three decades.
Rachel: Now that computers can generate high quality code, which people didn't think would ever happen, lots of programmers are worried that they're going to be out of a job. You've probably seen those billboards in San Francisco saying, "Stop hiring humans." Are we obsolete?
Ty: No.
Rachel: Phew!
Ty: My view goes back to what I was saying about LLMs being general text manipulation tools. They can write high quality code when there's a human very much in the process, deciding when to use them and whether this is a task the model is capable of helping out with.
So I kind of view them as operations, and you can break those operations down into the step-by-step processes that make them up.
So like when GitHub Copilot first came out, right? It gave you like a ghost suggestion and you decided, "Do I hit tab or do I not hit tab? Is that a quality suggestion or not?" Right?
Then over time, we started to integrate more chat experiences, then edit experiences, and now multi-file edit is the newest one.
The way I view that is: increasingly, instead of the model just taking one step forward for you, it's taking maybe two or three steps, and the operations it's capable of doing for many developers are becoming more sophisticated and more complex. But it's still super important that a developer at some point decided, "Instead of me manually doing this, I know from my experience, or maybe I don't, but I'm going to try it and see if it works, and see if it's capable of creating a quality suggestion here."
And if it works, then I'm going to do more of that. If it doesn't work, I'm probably not going to use the LLMs for that, or I'm going to try a different way of using them in the future.
And what we've seen is that the folks who are really far along in their journey of generating lots of high quality code as they build software are the folks who have really embraced it, have taken an open approach, have started to play with it, and have started to figure out for themselves, personally, where it's able to automate a lot of what they were doing manually before.
But it very much is a human really driving that, right? The will behind it. I mean, think about software engineering: you're given some text, often in English, say, if you work in English, right?
And a lot of your work is to translate that into very precise code that ultimately gets merged, right? That's where something like LLMs are so powerful, and I think why everyone is so excited about AI in general: if you have something that can really accelerate that process, a general text manipulation tool, maybe at some point it gets to the point where you're able to just give it the same English input you received and have it create the same precise output that you ultimately merge.
But in the near term, it's very much you and the model being interleaved, right? Maybe you begin it, you provide a little more context beyond the English input so that you can guide the model.
It gets started, maybe with the boilerplate; you fill some things in, you correct some stuff. But ultimately, in the end, you're responsible for any code you ship, so you at least need to review it. Maybe one day the models will get so good that you don't have humans necessarily generating much of the code, or any of the code.
I still think the humans are involved, like similar to where DevOps went, where you're in charge of kind of monitoring the system at the very least and deciding where to point the system next, right?
Because that's the part these models aren't particularly great at, right? Knowing what humans want, in terms of what we want next, or what goal we're trying to achieve.
What is that goal, right? What is the relevant thing for us to do? So, yeah, maybe one day everyone just sits around a table arguing about where to point the system next.
But I think we're very much involved in reviewing today and we will continue to review in the future, though that might change.
Rachel: Right, and this comes back to your mission statement about amplification, not automation, that the developers who are using this are like guitar players with an amp. You can hear them further away. Do you want to talk about the amplified manifesto?
Ty: Yeah, yeah, definitely. So it's available at amplify.dev. It's a piece we wrote with maybe a dozen platform development teams.
So for example, a number of folks on the Siemens code.siemens.com team have put their names in support at the bottom of it, and anyone can. So give it a read, and if you support it, add your support. But the basic premise behind it is where that core frustration came from, right?
If you get a bad or wrong suggestion, you should be able to do something about it. And for that to be the case, developers need to be amplified, not automated.
A lot of their work is to automate many things, but they themselves are not being automated, right? They're automating their work.
And for us, that means developers are amplified, right? They're involved not just in the process of building software with these AI systems, but also in the process of building the AI systems that do the jobs they used to do, right?
So I kind of view it like this: at some point, a lot of programmers programmed in assembly, right? And new technology, in this case higher-level programming languages, came along, right? And those offered a sort of amplification, right?
You were able to write better software, however you define that. Maybe it's faster, or higher quality, or more reliable, or capable of new things. And a lot of folks moved away from programming in assembly.
Some people still do, right? But the vast majority of programmers now program in high-level programming languages. I think we're in the very early innings of a similar transition, one where developers become amplified developers: in the future, instead of, say, writing the API service at their organization, they're working on the system that writes the API services.
And so they move more into monitoring, maintaining, and improving that system, which does considerably larger amounts of the code generation, right? And in some ways this is brand new, like you were saying, right?
It was unexpected that we would get computers that could generate high quality code themselves, but in some ways this is not unexpected. In many ways, it's just a continuation of what we've been doing in the software industry for decades, right? Basically building ourselves better tools so that we can create better software, and then not only better software but new software, software we never imagined, right?
If you didn't have the transition from assembly to high-level programming languages, we probably wouldn't have built some of the awesome web apps that we all use on a day-to-day basis, right?
And so I'm super excited: if we move to a world where developers are building the systems that write a lot of the software, what are the kinds of new experiences we might be able to create for people?
So that people, not just software engineers, but all people, can spend more of their time working on things that excite them, that involve their creativity, that require their full human potential, and that we automate many of the rote, repetitive things that none of us want to do but that need to be done in order for our society to function.
Rachel: Yeah, I want the computers to do the boring stuff. I want them to file my taxes. I don't need them to draw or write; I'm quite happy doing that myself. But I would love them to automate a lot of my administrivia.
Ty: Agreed.
Rachel: And we never thought computers would get good at Go either, but they're wildly good at Go. One of the interesting stories I like to tell is that since AlphaGo became a Go master, human Go players have become wildly more creative, which I think is incredibly cool. They're taking bigger risks.
Ty: Yeah, there was that move the computer made against Lee Sedol, I think it was move 37 or whatever, right? It played a move that expert Go players, probably everybody, weren't expecting, right?
And then the interesting thing is, I think Lee Sedol made a similar move later on, inspired by it, that allowed him to win a game, or at least put up a better fight, right?
And so that's what I'm super excited about too, where it's like, I think when we get new technologies, it reorients how we think about the world, and makes us potentially, like you were saying, better Go players or better programmers, right?
Because it allows us to think about the world in a slightly different way, which is often helpful for unlocking new ways of doing things.
Rachel: It's a perfect example of amplification, you know? It increases the tools at our disposal to solve our wicked problems.
Ty: Definitely.
Rachel: Not to get all horse race about it, but I am still listening to engineers I know going on and on and on about Cursor and how great it is. Does that tool have an unassailable lead, or do you think other code gen tools are equally exciting?
Ty: Yeah, yeah. I mean, I'm a bit biased, right? As a co-founder of Continue. But I don't think they have an unassailable lead. I'm very much of the opinion that we're in the very early innings of this transition.
And so they've done a great job of building an awesome product that... I mean many of my friends that are software engineers get excited about too. And so, you know, I mean, I have a lot of respect for the founders and the team over there that's creating an awesome product.
But I think we're at the stage where we're just really kind of trying to figure out what are the affordances, right? What are the ways that developers want to use these technologies to help them get their job done?
And so there have been a lot of good ideas from... I mean, the Cursor team, from the Copilot team who came even earlier than the Cursor team, from Codeium, from... Many of them have Cs in their name, from Sourcegraph Cody.
So I don't think anybody has quite a lead yet. And I think the big reason is that we're at the stage where we're building these generic, general AI assistant tools, right?
Which if you massage them a lot and you do a lot of human work on top of them, you can get them to be super helpful for you, and that's super exciting, and that's why people are pumped.
But I think the future comes from like, how do we actually take these general tools and enable developers to make them custom to their organization, custom to their environments, to the way that they build software.
And so in that sense, very few people, if anyone, have made that much progress in that direction, and I think that's where maybe somebody will be able to create an unassailable lead. But at the moment, we're all just very early in figuring out what the affordances are and what ways we can enable people to customize.
And Cursor, at least so far, is very much not heading in that direction. So in that case, maybe they're very behind in terms of where I think the future is going.
Rachel: From your lips to God's ears. So how does open source play into this world? Is it meaningful to talk about an open source foundation model?
Ty: I think with open source in general at the moment, there's lots of conversation about it, right? Whether it's license changes or trademark disputes or what the definition of open source AI is.
There are lots of things going on that I think are causing us to question what open source is in general, but especially in this world. And so the OSI has a definition, right?
Where you not only have to include all of the information that's needed to reproduce the model that's been created, but you also need, to some extent, detailed enough information about the data to reproduce the model. That's what they're calling open source AI, or an open source foundation model.
For me, it's like very much an open question, right? 'Cause like the detailed information, right? Does that mean you actually have to have the data in order for it to be detailed enough?
It's a very open piece. My entire career after university has been in open source AI, and it's interesting that these questions are coming up now. The way I think about it is more in terms of the principles of open source, right?
Say, the transparency and the modularity that come from open source, and the democratization of technology that comes from open source. Many of those things are definitely things we need to talk about, that continue to be important, and that have a big role to play.
The degree to which the open source community and the open source software projects that have existed are going to support the evolution of that term in this world is not clear to me at this point.
And so I can speak to how we think about open source and why it's so important. For me, there are two pieces. I think in order for us to actually take advantage of the technology, we have to learn together, work together.
I think there are too many people at the moment who are trying to build their own... say, in the AI code assistant space, their own code assistant that's going to automate everything somehow. I just don't believe in that future, right? I think you need to find a way for the ecosystem to work, and historically, open source is one of the key ways for an ecosystem to actually work together without someone who centrally coordinates it.
And so in our case, right? We make our VS Code extension and our JetBrains extension Apache 2.0 licensed, so that people can treat this building block as a foundation that they can build on.
And so that's one piece of it. The other piece of it is, a lot of these tools are... As you use them, you're creating a lot of data that's going to be used to make foundation models much better.
I think that customization we were talking about earlier, it's super critical that you have that data to be able to customize things, and so if you're using kind of a non-open source offering that takes your data and doesn't give it to you, I think that's incredibly problematic for the future.
And so another reason why we're open source is so that the data generated by the use of the interface, by the use of these VS Code and JetBrains extensions, is something that individuals can keep, and that they can decide how to pool within their organization and maybe across organizations.
And so I think on those fronts, having open source extensions like we do is super critical. The extent to which the open source term and the open source principles are going to be applied to this world is definitely an ongoing discussion, and something I'm not sure I have clear ideas on quite yet.
Rachel: Yeah, I think it speaks to a really deep philosophical argument over zero sum games versus win-win outcomes. And open source is definitely on the side of, give a little, get a little.
And the problem with that, as you know very well, is that your good nature can be taken advantage of. How could a small company like yours protect and defend its open source in such a hotly competitive space?
Ty: Yeah, that's a great question, right? What Rachel is referencing is that we've had folks in the past who have really challenged us, in terms of taking our code and potentially using it or reusing it or rebranding it, right?
We don't necessarily encourage forks, but that's definitely something that's allowed by the Apache 2.0 License. And so for me, there are a couple of pieces I think I've learned as I've started to deal with this more, in terms of protecting and defending.
Maybe I'll make it three. One, you have to start from the premise that this is not just protecting your work; it's a community of people's work. I think that has been really huge for me.
I always try to think in terms of, "Okay, we have a whole community of people who are constantly making Continue better, who believe in it. And when someone comes along and challenges the protections and defenses of that community, you have to start from the place of thinking it through as that community. It's not necessarily about defending yourself or your company, but your community."
So I think that has been super helpful for thinking through how to defend it. The second one is being really thoughtful about what license you choose, right?
This is something that, when I first got into open source in the ML world, I didn't really understand. I was just like, "Oh yeah, Rasa was also Apache 2.0," right? And I didn't really appreciate how important that was.
Given that experience, I got to learn what a CLA, a contributor license agreement, is, and I got to watch Elastic and MongoDB go through the license changes that they had.
And as you start to experience that, you realize how important it is to make sure you pick a license that fits with where you want to go, right?
And so for us it was very intentional to choose the Apache 2.0 License, to encourage folks to build in and around Continue and to trust that the VS Code and JetBrains extensions can be a foundational piece of something they're working on.
And so I think that's very important. And then there's what happens if people violate that license. Even though Apache 2.0 is very permissive, there are things that you need to do, like keeping that code licensed as Apache 2.0 going forward.
You can't just take out that license and replace it with whatever you want. You have to make sure to respect the copyright of the people behind that license.
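For reference, this is the standard Apache 2.0 source header; when downstream users reuse Apache-licensed files, notices like this (along with the license text itself) are what must stay intact:

```
Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```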
So when folks decide not to respect that license, you take the steps to make sure that they follow through. And then the third one for me is thinking about the trademarks around your open source project, right?
Making sure that the company has its trademarks and the logos are trademarked, and not only having them, but then making sure that everyone who uses them is following the guidelines and doing it in the way that you want. Because, from what I've learned, the trademark office requires you to make sure that you're upholding it.
And so from the very early days, you make sure you're the person who's there, making sure it's used in the right way.
Usually in the beginning, we found, it's more the smaller players who are potentially abusing or misusing your trademarks and your projects and stuff like that, so that you're ready for the larger players who might come in the future, who have many more legal resources than you.
Rachel: It's been fascinating to watch as somebody who grew up with open source and like saw a lot of the licenses get created. They really are social contracts and they were always conceived as social contracts among the developers contributing to a project.
And there are younger devs coming into the market who didn't have that historical sense of what the licenses were created for and to do, and who do things like confusing a fork, which is a big social rebellion, you know, against the creators of a project, with the button in GitHub that just lets you fork code.
I mean, those are two different things with the same name. First of all, I must say, you handled that whole incident very skillfully and I was very impressed by the dignity with which Continue comported itself.
But second, I think it was a really healthy and productive conversation to have. I think it reasserted a bunch of, what are usually unspoken norms, about what the contributors to a project deserve, what they are entitled to expect from the leaders of that project in terms of copyright enforcement and trademark enforcement.
As the leader of an open source project, you do have a responsibility to make sure that all of the work that people have shared with you is appropriately protected, and that the assumptions under which they contributed to the work are upheld. So well done.
Ty: Thanks.
Rachel: Onto something a little lighter. What are some of your favorite sources for learning about AI?
Ty: Lately it's been Bluesky, to be honest. When I first got into ML and AI, Twitter, I guess now X, was a much different place, especially in the ML and AI world.
Rachel: Yeah.
Ty: And it was so cool to be on there, right? To be part of that community where there was just such active conversation, and at the time, there was also just in general a super open culture in ML, even among like the top companies and labs, where like the transformer paper was just something that Google published in, I think 2017, right?
That they just openly put out there, which allowed OpenAI to ultimately create GPT and then eventually ChatGPT.
And so that originally was the place where I learned a ton about ML and AI. Everyone would just put up their papers on arXiv, and then everybody in the ML and AI Twitter sphere would share those papers with each other and talk to each other, and you got to know everyone.
And you know, when you were in town, you could say, "Hey, I'm going to be here," and you'd meet up with people.
What I've found recently, only really in the last few weeks, is that more of that community is moving to Bluesky. So I'm quite excited about that, to follow many of the people that I used to follow and not have a bunch of the other nonsense that I seem to get recommended on Twitter these days.
That said, Twitter still has, in many cases, if you can see through the nonsense, a lot of very interesting papers and takes and things like that. Beyond that, I'm a big fan of email newsletters.
So Jack Clark, who was at OpenAI for a long time and then became a co-founder of Anthropic, has been writing a newsletter called Import AI that I've probably subscribed to for, I don't know, eight years or whatever.
That's quite good, so I enjoy reading that one each week. Then there's The Human in the Loop, which Andrew at Heavybit, I believe-
Rachel: That's right.
Ty: Is the editor of. I enjoy reading that one. Turing Post, I think they do a pretty good job; that's another email newsletter that comes into my inbox. I'm sure there's more that I'm missing, but these days it's primarily arXiv and Bluesky and Twitter for the more active "let me go opt into learning some more AI and ML stuff" mode, and then the more passive mode, when someone writes a newsletter and sends it to me and I'll read that as well.
So those are the two places where I keep up. But in some ways I'm not the best person to ask for recommendations, because I've been in this space for so long that when I got started, there were only one or two building blocks, right?
And then new ones emerged while I was there, so I got to learn about them as they appeared. So I have a very good sense of what is noise and what is signal, whereas, if you just go on Twitter or Bluesky now, it might be really overwhelming to figure out what is worth paying attention to and what is not.
Rachel: The thing I'm really enjoying about Bluesky is the starter packs. When you want to get into one of those communities, when you want to start learning like who the players are and what the conversations are, just being able to subscribe to one of those starter packs is great.
Ty: Yeah.
Rachel: It's been really good for my feed.
Ty: Definitely. It's exciting. I found myself excited this weekend. I own the Ty.energy domain, so I went and renamed myself to @Ty.energy. I'm like, "This is so cool."
Rachel: A fresh start.
Ty: Exactly.
Rachel: Ty, for your hard work this year, you've earned it. I'm making you god emperor of the solar system. For the next five years, everything's going to go exactly how you would like it to go. What does the future look like?
Ty: Yeah, I'm going to keep it to AI code assistants, 'cause I think we'd be here all day if I attempted to describe the next five years beyond that.
So in the world of Continue, my hope is that within five years we move from the very competitive market we're in at the moment to a very collaborative market, where there's an ecosystem of folks building different bricks that make up really important components of the AI code assistant and that are interoperable with the other components of that system, so that folks can create those custom AI code assistants. Maybe at that point we'd even stop calling them AI code assistants; they'd just be viewed as really important AI software development systems within organizations, within teams, even for individuals, systems that are really critical in helping them build software. And people would be able to take those different Lego bricks and put them together in a way that enables them to get the suggestions they want.
And when they get suggestions that they feel are bad or wrong or not what they want, they'd have the ability to adjust those Lego bricks, to swap them out, so that they're ultimately able to create a custom system that gives them suggestions they use and like, and that allows them to build awesome software.
And we'd move to a future where we're able to build better software, where better is not defined by any particular company or even individual, right? Better is for each of us to define for ourselves.
The definition of great software is something that we, or our team, or our organization, or in some cases a particular language or framework or library community, says is great software.
And we'd be able to reflect that into the AI code assistants, into the AI software development systems, easily, and that would accelerate the automation of the repetitive, boring things so that we can spend more and more of our time on the creative things that really engage us, that we find to be worthwhile.
And so I think it's probably the next 50 years, but hopefully we're taking a step towards that in the next five years.
Rachel: That sounds completely amazing. A generation ship is a ship flying to the stars on a journey that takes hundreds of years, and so multiple human generations. If you had such a ship, what would you name it?
Ty: All right, I've got two answers here. One, I think I already have a generation ship. It's called Continue. It's in many ways metaphorical, right? But given where we want to go in the next five years and five decades, my hope is that it's a good name.
'Cause in a metaphorical sense, that's part of it. But to give you a more concrete, very non-metaphorical answer, it'd probably be Possibilities.
Rachel: Mm-hmm.
Ty: I think that for me, life expanding beyond Earth, going to the outer solar system and into the universe, would be about enabling, hopefully, a lot of play and possibilities for people. And we shouldn't impose our current limited horizons on those people, or maybe even on the evolution of people, whatever they become, with a name that is too limiting, right?
And so I think Possibilities is a great word that hopefully encourages people, and whatever comes after people or whatever people evolve into, to never stop expanding their horizons.
Rachel: Those are both beautiful names. Ty, it's been wonderful to have you on the show. Thank you so much for bringing your insights. Love the work that you're doing. Please continue.
Ty: Thank you so much, Rachel, for having me on, this has been fun.