Generationship
23 MIN

Ep. #5, Live from DevGuild: AI Summit

Guests: Christine Spang, Heidi Waterhouse, Paul Biggar, Raiya Kind, Seema Patel
about the episode

In episode 5 of Generationship, Rachel Chalmers shares interviews from Heavybit’s 2023 DevGuild: AI Summit on October 19th, 2023. This Open Space unconference brought together a community of 200+ to discuss how AI will change the face of software development. This episode features event highlights and insights from industry experts: Christine Spang of Nylas, Heidi Waterhouse of Sym, Paul Biggar of Darklang, Raiya Kind of Code and Concept, and Seema Patel of Stifel Venture Banking.

transcript

Rachel Chalmers: I'm Rachel Chalmers. I'm here at Heavybit's AI Summit with the amazing Christine Spang. Christine, can you introduce yourself?

Christine Spang: Hi, everybody. My name is Christine Spang. I'm the founder and CTO of a company called Nylas.

Rachel: Christine, what are three things that have surprised or delighted you today?

Christine: One is that there are a lot of people here. I thought it was going to be a bit smaller, but there's just so much buzz around the topic right now. The reason I was really excited about coming was the way you guys framed things around nobody having the answers. I think that really spoke to a lot of people, and there's a big crowd.

Rachel: Yeah. Backstory here, Jesse originally conceived this as a regular conference and just got overwhelmed with the amount of signal versus noise and so we're running this as an open space technology conference. It's super cool.

Christine: Yeah. How can you put an agenda together when last week like five new things got announced and everybody's just running as fast as they can to keep up?

Rachel: And the answer to that is you crowdsource it, you let people write their own agendas. It's working great so far.

Christine: Yeah. I've been really impressed.

Rachel: What else has surprised you today?

Christine: One is that there's a really broad set of people here: folks who are working on developing the technology, a lot of practitioners, and then folks interested in the go-to-market side, figuring out which industries and niches the state of the technology today is ready to make a big difference in.

So I guess I've been surprised and excited to see a lot of cross-pollination between different roles, everything from someone saying, "I reverse engineered GitHub Copilot, and here's what happens behind the scenes, what's going into the context window, and how you can think about the black box happening under the hood with this one particular technology," to folks who are here to learn or who are thinking with more of a business-oriented mindset.

Rachel: I'm going to make you king of AI for the next five years, everything is going to go exactly how you want it to go. What does the world look like in five years time?

Christine: I think the thing that's most exciting to me is... at this point I'm a technology executive-

Rachel: I'm sorry, condolences.

Christine: What that has meant is that you don't get to do any of the fun stuff, but you have to make all the high level decisions.

What's most exciting to me is that with a lot of the tools that are coming out and being developed today, it's actually a lot easier for someone who has that high level perspective to be involved in the creation on the ground as well.

And so I really want to see all these different pieces come together so that it's not just people like me, but more people can harness the power of computing, I would say. To me, Large Language Models are essentially this one new technology. We like to rebrand AI every five years or whatever, which means five years ago it was neural nets or deep learning or whatever.

Rachel: Five years before that it was Bayesian Stats.

Christine: Yeah. Exactly, like old school machine learning, and now you say AI and everyone assumes you're talking about Large Language Models or generative AI. I guess Large Language Models for the text part, and then there's images and voice and all that stuff, and people just say AI and they mean that right now. But it's a label that has changed definitions every X number of years.

My company builds APIs and developer tools, and I'm really excited that Large Language Models are essentially a high-level programming language now that every person already knows. It's like meeting people where they are, in that people learn to speak and to use language from the time they were toddlers. So I guess, long story short, the thing I'm most excited about in five years, as king and queen of everything, is just a much broader swathe of everyday people being able to get things done with computing just by telling a computer what to do. It's like all those Sci-Fi movies that we grew up with where it's, "Computer-"

Rachel: "Enhance."

Christine: Yeah. "Do thing X, Y or Z." And I think that's going to be really powerful in ways we don't understand because there's been this narrow subset of the human population that's been able to tell the computer what to do, and we're just making that much more broad now.

Rachel: We've been talking so long about the next 500 million developers and now it feels like we have line of sight.

Christine: Yeah. This is like a huge jump that I think was counterintuitive to a lot of people, even the people who made it didn't really... There's emergent behaviors and people didn't really know what the results were going to be. But it's also kind of interesting in that when we have explored and done science about the natural world, it's understood that you're not going to necessarily know what the output or the results of a thing is. But this is a thing that people created and so I think it's been shocking to a lot of people to find that even things that people make, we don't necessarily know what they're going to do before we actually make them and just observe what happens.

Rachel: It is super cool. If you had a colony ship to the stars, what would you call it and why?

Christine: Destiny.

Rachel: Why?

Christine: It was the first thing that popped into my head, and it's because I think that people really want to live forever, and it almost seems like it must be our destiny to go beyond the bounds of Earth in order to accomplish that. Given what we know about certain stars, they only live so long; eventually it's going to explode and envelop the Earth, so we've got to get out of here at some point. The wheel probably won't help.

Rachel: Happy hour on Trappist-D, everybody. Be there.

Christine: Yeah. I'll see you there.

Rachel: Spang, it's been a delight as ever. Thank you.

Christine: Yeah. It was great catching up.

Rachel: I'm Rachel Chalmers and I'm here with one of my favorite people in the tech industry or, in fact, the world. The amazing Heidi Waterhouse. Heidi, can you introduce yourself?

Heidi Waterhouse: Hi. I'm Heidi Waterhouse. I'm an advisor to startups and I'm coming out of tech writing and DevRel. Basically, I just like making sure people are telling better stories about what they're doing.

Rachel: So Heidi, you said you came cynically today and you're feeling more optimistic now. What happened?

Heidi: I think what happened is that the people who are doing this thing are not just AI boosters, but AI questioners. We're asking hard questions about whether AI is useful in any given application and what we're going to do with it and how it's actually worth its weight.

Rachel: Can you tell me three things that have surprised you from the talks that you've been in today?

Heidi: Yeah. I think one of the things that I thought was surprising was that a couple of different talks converged on the idea that smaller, more focused LLMs are going to be more useful than the very large, general LLMs. You don't want to train your Terraform LLM on Don Quixote. That's a bad plan. So the smaller ones that are easier and cheaper to compute are going to be more useful. Another thing that we talked about was making sure that what we're taking out is only the toil.

We want to take out the boring stuff, and not the important human construction of meaning and ontology and semantics. There's a lot of philosophy in what we're trying to do and doing that carefully and mindfully, and then not having to write a bunch of API queries to support that is a way to really maximize human potential.

The third thing that I thought was really interesting is that nobody has really addressed the problem that we can't unmix the paint. When we put data into machine learning, we can never extract it and we have to recompute the entire model.

And so, with all of the advances coming from the EU in data privacy, we're really going to have to think carefully about how compute costs work and what kind of data we ingest, because otherwise we're going to throw away some very expensive models.

Rachel: Super cool. I love the idea of the advantage of working on smaller LLMs because it does seem like a redistribution of power and agency in the industry that we haven't seen, I think, since the early 90s and the days of the open web. Very excited about the opportunities that creates.

Heidi: Yes. There was a session on running LLMs on your laptop, which I thought was super cool. Yeah, let's use these smaller compute resources instead of these enormously hungry, general resources to answer more specific questions.

Rachel: One of my friends came into a meeting recently with an LLM installed on his phone, which was just such an insane power move, I cannot even-

Heidi: I want that phone.

Rachel: Heidi, I'm going to make you king of AI for the next five years. Everything in the industry goes exactly how you want it to. What does the world look like?

Heidi: In five years we are using a lot of natural language to ask questions about how we could do better, and we're doing it in a very specific, hyper-local way. So it's not that we have these giant collections, we have these tiny households where we are producing home grown, home spun wool. With that, we are making the clothes that we want, and we are not outsourcing everything to the point that we don't understand where it comes from.

So the same way that my family has a Discord now, they could have a model and a database for the family that says, "How is it mom makes that again? How is it we make chicken cacciatore? Or what is that thing that she's singing?" Asking questions like that will help us better understand what it is that we want to do. That's not going to be very profitable.

I think the profitable things are when we allow AI to design things that we couldn't conceptualize because we have a lot of constraints on our thinking. I was talking to Sam earlier and he was talking about how computers will design structures that don't make sense to humans, but do in fact fit all the parameters of what we need from them. I'd like to see us do that more with our data, and do more mining of it in a useful, productive way.

Rachel: If you had a colony ship to the stars, what would you call it?

Heidi: I think I would call it Esperanza. I want to be hopeful. I think it's very easy for all of the colony ship narratives to be about disaster and fleeing, but I think I would want it to be hopeful. I hope that this new place will be not just better, but an extension of the good things that we already have.

Rachel: I'll drink to that. Thank you, Heidi.

Heidi: Cheers.

Paul Biggar: I'm Paul Biggar, I'm the founder of Darklang and, before that, CircleCI.

Rachel: Paul, thank you so much. It's great to see you. Here we are at the AI summit. What are three things that have surprised or delighted you so far today?

Paul: I love how deep everyone is into this. A lot of these things are just people like, "Whoa, whoa. What's going to happen to this industry?" Or something like that. Here it's like, "Let me tell you about this alternate model for using weightless neural nets." Okay, okay. We're learning shit.

Rachel: We are absolutely learning shit. This has come up in a bunch of conversations today: do you worry that AI is going to eliminate a bunch of jobs, it's going to put people out of work, it's going to-

Paul: Oh yeah, it's going to fuck everything up.

Rachel: Right. Okay, we're done here.

Paul: Yeah. We've talked about this before, how we're both from socialist countries and we have these feelings about how the world should be and who should benefit from them and-

Rachel: Oh, yours are feelings, mine are policy statements.

Paul: Okay. Mine are a little bit more feelings, but we saw tech do, what? 50% layoffs in the last year, and a lot of it was like, "We think the AI will be able to do this better." And maybe that was unstated publicly, but privately we're seeing a lot of people referring to those conversations. It's in every industry, like the writers who went on strike to basically protect the existence of their industry, and maybe the existence of many other industries.

I think we're just going to see that for years and years because everyone is just trying to be... This Gen AI thing is really about who owns the models and who owns the inputs to the models. But really they're arguing about who gets to benefit from the culture that we have built up for 100 years, and the people who are being told, "You don't get to own any of it," are angry, and you'd expect that.

Rachel: And a big unspoken factor in all of this is we lost a million people from the workforce over the last three years. That has led to a really unprecedented shift of the balance of power in favor of labor. That's why you're seeing not just the writers, but the autoworkers and the Amazon workers.

Paul: That's why you're also seeing the bosses, I say it as if I'm not one of them, but the bosses being like, "Well, the power has shifted and we don't like it. We need this AI to let the shift be in the direction that we want it to be."

Rachel: Tech billionaires are writing tragic manifestos. Think about their feelings.

Paul: It's so, so touching. Yeah. I read about Marc Andreessen's thing because I will not read anything that he writes anymore. But it seems like a very sad time; it must be very difficult for him.

Rachel: It's a cry for help.

Paul: Poor guy.

Rachel: How do you see this moving forward? I think you and I are both invigorated by the rise of this labor movement. I think software engineers are workers too, and I think there should be coalitionist politics and we should support one another. Do you feel hopeful about this?

Paul: The unions for tech workers have been going for, I want to say, seven or eight years now. We've started seeing a lot of movement, and that really wasn't there before. We're not seeing that many unions actually land, but I think the good times went on for so long that engineers in particular were just like, "We're becoming millionaires. Who cares?"

And then all of a sudden you're seeing the junior engineers can't get a job, senior engineers are stuck in the same place for much longer than they would like to be. They're not getting the mobility that they're looking for, and then less in tech than other places, but there's a lot of, "Come back to work. The happy hours and all the things that were fun are gone. It's back to fucking work. Get the nose to the grindstone."

And I don't know if that is enough to push it over the edge, but I think that as the other industries keep unionizing, which I think they will, it will reach us eventually one way or the other.

Rachel: Yeah. I think how you feel about it depends on whether you want things to be good for a few people or whether you want things to be good for everyone. I call the podcast Generationship because we're all in this together.

Paul: Well, I presume from your accent that you were not raised in America.

Rachel: I was not.

Paul: And neither was I, and things are different here. There's a lot of belief that the people at the top should make the money. That is not the case in the cultures that we grew up in. I think the country is fully shaped around that idea, and tech being centered here, I'm not sure it gets to escape.

Rachel: Well, we'll certainly see how this plays out.

Paul: Fingers crossed.

Rachel: Paul, it's been a great pleasure catching up.

Paul: This was lovely.

Rachel: Take care.

Paul: Thank you.

Rachel: I'm Rachel Chalmers and I'm here with Raiya Kind. Raiya, would you introduce yourself?

Raiya Kind: Hi. Great to be here, Rachel. I'm Raiya Kind. I am the founder of Code and Concept where I use gen AI to help people decode humanity, working with conceptual metaphors and linguistics.

Rachel: So cool. Raiya, we're here at the AI summit. Can you tell us about three things that have surprised or delighted you so far today?

Raiya: Yeah. So the first thing I found surprising is how much people care but don't really care about finding a plan of action. In a couple of the circles I've specifically asked, "Hey, that's really great. What do we do about it?" Or, "I see what you need. How can we crowdsource to find a solution?" And there has been very little follow-up on that, which is interesting because people really care.

The second thing that surprised or delighted me is how much philosophy is actually involved in the discussions. It's not just the hard tech of what needs to be done, or even why, but the whole question of how does this inflect on humanity? What does this say about us as a species and a collective?

Rachel: I do think it's an existential moment for us. It's a moment for reflection.

Raiya: Oh, 100%. The thing that everyone agrees on, which maybe is a third surprising thing, is that this is the cusp of a paradigm shift. That's surprising to me because out in the world I hear it both ways. I hear some people say, "Oh no, ML has been happening since the 70s. It's very slow, it's going to be a long time." And then other people being like, "The robots are going to kill us all." So there's a happy medium here, I think, where it's like, yes, this is monumental, and what do we do about it? We're not afraid of it, we're leaning in to engage with it.

Rachel: A little afraid of it, but you can lean in to engage with something you're a little afraid of.

Raiya: Yeah, lean into the fear. Exactly, exactly.

Rachel: I do think that it's because of the fear that it's incumbent on us to lean in. I think we have the opportunity to affect the outcome of what happens now.

Raiya: 100%. This is the moment. I remember you saying in our panel earlier that your generation is kicking itself for not doing more with the web to make the web a more hospitable place to foster siloed growth, and this is that moment again.

Rachel: Learn from my mistakes, kid.

Raiya: Exactly. On the fear part, I heard this really good quote once that, "Fear is just excitement without the breath." So if we center ourselves, I mean really breathe through this and internalize it, instead of distracting ourselves and dissociating from it and pretending like it's not there, we can really do something about it and get excited about this change that we're a part of.

Rachel: Everybody take a breath. Raiya, if you had a colony ship to the stars, what would you name it?

Raiya: So this is going to sound very Silicon Valley woo, but I would name it We Are. We Are because it's not just I versus you, it's not a separate thing. We're a collective, we're a species, and we are, without anything after, because we are not defined by our identity of mother, father, child, worker, et cetera. We just are. We're here to be, we're human beings.

Rachel: I love that. I have a story to go with it if you'd like.

Raiya: Oh yeah? I would love that.

Rachel: Descartes says, "I think therefore I am." But the translation of the Swahili word Ubuntu is, "I am because we are."

Raiya: Oh, I love that.

Rachel: Humans exist in community, we're defined by our relationships, we're defined by the network of people around us.

Raiya: Yeah, exactly. We're not made to be in a vacuum and we're not in a vacuum, no matter how much we might think we are. We're all connected.

Rachel: I would like a ticket on your generation ship, please.

Raiya: Yes, coming right up. First class. Everyone is first class, we're all equal.

Rachel: Everyone is first class. I love that. Thank you so much, Raiya.

Raiya: Thank you, Rachel.

Rachel: Hey, I'm Rachel Chalmers. I'm here at Heavybit's AI Summit with Seema Patel. Seema, can you introduce yourself for our guests?

Seema Patel: Absolutely. I'm Seema Patel, I'm with Stifel Venture Banking and I cover the enterprise software portfolio. I work with a lot of companies within the DevOps space, cybersecurity, and AI, so coming to this conference was really important to me, and of course the response as well.

Rachel: Fantastic. So what are three things that have surprised or intrigued or delighted you so far today?

Seema: I'd say, to start off with, the diversity of conversation has been really exciting. I was just in a conversation around what some of the trends are and where capital should be funded, as well as some of the more nuanced, technical aspects of the conversation. Secondly, I'd say the folks here: diversity is a big area of focus for me, and so having exposure to different pockets and seeing that there were a lot of international people here as well, because-

Rachel: Yeah, because you and I were comparing accents. I'm from Sydney, and you have a really interesting background.

Seema: Yeah, absolutely. I was born in England, raised there for a bit and then grew up for the rest of my life in Zimbabwe, and then have been between Zimbabwe, England and America for several years now.

Rachel: It's such an exciting position to be in. As we were talking about at lunch, you get to be here in San Francisco, looking at things emerging, and then you get to take information about that home, and you also get to bring information about what's happening elsewhere into these amazing conversations in San Francisco.

Seema: Absolutely, yeah. It's really important that some of the conversation here gets elevated and pushed beyond the geographies around it, and so that's something that's really important.

Rachel: So that was two things, then I interrupted you. Do you have a third thing that surprised or delighted you?

Seema: Yeah. I think the quality of conversation as well has been... not a surprise, I would say. It's an absolute delight. I came here with that exact expectation and it's been interesting to hear everyone's perspective.

Rachel: Seema, thank you so much for joining us today.

Seema: Absolutely. Thank you.