
Ep. #18, Intelligence on Tap with Shawn "swyx" Wang

about the episode

In episode 18 of Generationship, Rachel Chalmers sits down with Shawn "swyx" Wang to delve into AI Engineering. Shawn shares his journey from popularizing the term "AI Engineer" to navigating the rapid advancements in AI technology. Together, they explore the evolving demands and opportunities in AI, offering unparalleled insights into the future of this transformative field.

Shawn “Swyx” Wang helps build great developer experiences, from demos to docs to dev tools to dev communities. He’s currently working on the Smol AI Company in San Francisco and Singapore. He’s also a frequent writer and speaker, best known for The Rise of the AI Engineer on Latent Space, The Self Provisioning Runtime and The Third Age of JS, and The End of Localhost on DX.Tips, and, on the non-technical side, the Learn in Public movement and The Coding Career Handbook.

transcript

Rachel Chalmers: Today, I'm delighted to have Shawn Wang on the show. Shawn helps build great developer experiences, from demos to docs to dev tools to dev communities. He's currently working on The Smol AI Company in San Francisco and Singapore.

He's also a frequent writer and speaker, best known for "The Rise of the AI Engineer" on Latent Space, "The Self Provisioning Runtime" and "The Third Age of JS", and "The End of Localhost" on DX.Tips. And finally on the non-technical side, the Learn in Public movement and the "Coding Career Handbook."

Shawn also goes by Swyx, which are the initials of his English and Chinese names. He's a developer, founder, and angel investor, primarily active in the AI and DevTools communities.

Swyx is a GitHub Star and Stripe community expert who helped run the React subreddit for over 200,000 developers and grew the Svelte Society from zero to over 15,000 developers. He grew up in Singapore but has worked mostly in the US and UK, on developer experience for Netlify, Amazon Web Services, Temporal, and Airbyte.

And he's now independently working on Latent Space, the AI Engineer Summit and Foundation, and The Smol AI Company. Shawn, Swyx, thank you so much for being on the show.

Shawn "Swyx" Wang: Yeah, it's a pleasure. And yeah, happy to dive in.

Rachel: In "The Rise of the AI Engineer," which is the essay you're probably best known for, you wrote that we're observing a once-in-a-generation "shift right" of Applied AI, fueled by the emergent capabilities and open source API availability of foundation models.

A wide range of AI tasks that used to take five years and a research team to accomplish in 2013 now require just API docs and a spare afternoon in 2023. That's as true now as it was a year ago when you wrote it.

Can you tell us how you came to write this very influential essay? And a confession from me, it did help inspire the name of this podcast.

Swyx: I didn't know that. So, that's a fantastic revelation. To me, it was a realization as well. I was basically talking to a lot of companies and a lot of engineers as part of my work on the Latent Space podcast and newsletter.

And I noticed that everyone was trying to find each other basically. Like, the companies are trying to find the talented hackers that would be very well read on the emerging AI stack. And the engineers wanted to work more on using AI and building AI products, but didn't have the right terms to describe themselves.

Classically, the terms to describe themselves would be ML engineer or research engineer. But none of those, as I saw it, was sufficient for describing the kind of background that was uniquely suited to doing well with the current generation of foundation models and large language models.

So effectively, I was like, "We need to coin a new term. We need to popularize a new term. This is going to be a job title." And quite honestly, this has been a big debate as to whether or not there should be some rising class that is sort of dedicated to specializing in AI.

You know, there's a lot of people who argue that all software engineers will be using AI, so it's ridiculous to coin a special term like "AI engineer." I ended up thinking this through really, really hard. The essay took two months to write. And I was just researching and talking with a lot of people in San Francisco and abroad, and realized that--

The pace that this industry is moving, the amount of papers you have to read, the number of libraries to keep up with, the terminology that's starting to pile up: it's basically a full-time job. And if you're not taking this seriously and spending your full time on it, you will be out-competed by somebody who is taking it seriously, spending their full time on it, and specializing in it.

So, I basically focused on this term AI engineer as a Schelling point: a way for companies to have a shorthand for what they're looking to hire, a way for the engineers to identify themselves, and a way to legitimize the engineers who identify as working on AI even though they don't have the same background as the data scientists and ML engineers that preceded them.

They are going to be more of the product hackers. They're going to be less steeped in the sort of PhD and background knowledge and big data knowledge that you typically expect out of an ML engineer because you don't need that anymore with foundation models.

Rachel: Were there alternative terms that you considered and rejected? A lot of people are talking about prompt engineering. Was that not broad enough for you?

Swyx: Yeah, I had a list of terms at the bottom of the essay. A lot of people were suggesting LLM engineer, cognition engineer, cognitive engineer. There's just a bunch of other terms that people were trying to propose.

Because I talked to a lot of people to try to really figure out this term, 'cause I was going to make a serious investment in it. And I think there's just a "worse is better" phenomenon when it comes to naming things.

No name will be perfect for everybody, and everyone reduces to the shortest, most pronounceable title for a thing. And it's valuable to try to popularize that anyway because it's going to happen anyway.

Rachel: How have things changed since you wrote "The Rise of the AI Engineer" for you personally?

Swyx: Well, you know, when I realized I was going to write the essay, I actually also started the conference. And for me, I would safely say that I think it's going to be my life's work, my sort of 10-year journey of building this thing up into an industry.

So, it's changed a lot. That essay has become very famous. It's cited in a number of hiring posts, and I always get invited for my thoughts on stuff, even though I'm not necessarily qualified to talk about it, right?

You know, I'm just one of many, many people who are all observing this trend and realizing that there is something here and I'm just a participant that has a point of view. But there are many other people who are also going to be playing in this field that disagree with me. And I think that's completely okay.

Like, if you start something in an industry, you should never claim to do anything else apart from just trying to do what you can to help it grow, you know? So, I started the podcast in order to interview people and serve as a source of information.

I started AI News, which is something that is not in the bio because it's so new, which is the daily newsletter for people to keep up to date. And then for me, the conference is the preeminent place to get together in person, and to get the best talks out of the people that I've observed doing great work in the industry.

Rachel: And this is your second big career pivot, isn't it? Your third career. You started out as a quant working in finance, then you moved into software engineering. What keeps you moving? What keeps you searching for the next thing?

Swyx: Yeah, it's not like I'm trying to search for the next thing. When I was in finance, I was in sort of quantitative trading. So, they're not that different, right? Like, I'm not trying to pretend that, you know, I went from like baking to tech or anything like that.

They're all kind of similar. They do feel like pivots because I had to do a lot of work to study, to rebrand myself, and to reintegrate myself with a completely new network of people that I considered to be my mentors and peers.

And yeah, the motivation for moving from finance to tech was basically just burnout from finance. I didn't like anyone that I worked with. I didn't admire the people who worked above me. And so, I think it's important in any job to look up your chain and go, "Do you want to be that person?"

Because if you do well and work hard, you're going to be at least in the company of these people, if not directly similar to them. And I noticed that everyone above me was rich and miserable. So I realized, this is not the way to go.

And also, I had some stress issues that caused me to have heart palpitations and I realized that if I died on the job with a heart attack, then I would have a lot of money in my grave.

And then, from software to AI. Again, it's not that different. But I do think that I have to learn a lot that is not typically covered in the normal web development stack. And that is to me roughly the same distance as it has been from finance into software engineering.

Rachel: Wow. So, that's a big step up from just regular software engineering to AI, adding your knowledge of these LLMs and how to incorporate them into larger applications.

Swyx: Yeah. But at the same time I enjoy learning.

I think that if you're involved in tech in this time, in this moment in human history, I can't imagine working on anything else because we are so special to be alive and to see this happen during this time.

So, I had been looking for something to start, and looking for a broader trend to really sink my teeth into, for a while prior to this AI wave kicking off. And the proximate cause of it was Stable Diffusion, by the way. Being able to download a generative image model onto your laptop and run it. I did not know that was possible. And so, that was a very, very big wake up call.

So, I realized that even though, you know, I don't have the same sort of PhD background as some of the researchers, it doesn't matter, because the sheer demand from people and companies wanting to do stuff with AI is going to far outstrip the supply coming online, because the traditional pipeline is designed for a much slower pace of AI development than exists today.

Rachel: Yeah, and this is how platform shifts work. This is what makes me excited about it because this is my third big platform shift, depending on how you count them.

What the platform does is it bakes in all of that expertise and makes it available to a 10x larger group of people. So, it is an incredibly democratizing moment. It throws up all of the existing power dynamics and creates opportunities for people to build new sets of expertise on top of these platforms.

It's really exciting, and I love to be alive and to watch it all happening.

Swyx: Yeah, likewise.

Rachel: Your essay goes on to say, "However, the devil is in the details. There are no end of challenges in successfully evaluating, applying and productizing AI."

These, I would argue, have changed quite a lot in the 12 months since you published. How has the specific nature of these challenges changed over the last year?

Swyx: In some sense they have changed, in some sense they haven't. So what hasn't changed, for example, is the need to instrument, is the need to make things reproducible, is the need to wield your data really well.

Whether it's for fine tuning or for collecting feedback, or for doing any sort of other product-based improvements. I don't think that's changed at all. People still need to do evals, people still need to monitor the observability and reliability of their tools, and that's not really going to change.

What has changed, and I think this will segue into something that you brought up in the show notes, is that the sheer scope of AI is changing. So, we used to only have language models, and now we have multimodal models.

And the dimensions by which we consider something to be table stakes are shifting up constantly. If you look at all the tools that are out there, if you look at all the chatbots that the big model labs are providing, they're all adding features that did not exist when ChatGPT was launched, right?

So, now you must be able to read an image as easily as you read words. You probably also should be able to generate images. You should probably be able to upload a file and read over it. And by the way, we can talk about doing RAG versus stuffing everything in long context for that kind of stuff.

Rachel: Oh yeah. I'm coming to that.

Swyx: And there's just an ongoing list of requirements that are piling up as the sort of basic expectations because the large labs are investing in those things.

So, if you're building on top of these large lab APIs, you might be steamrolled by something that they ship tomorrow.

Rachel: Yeah.

Swyx: Which is something that happened with OpenAI's GPTs, the customizable versions in the GPT Store. And you need to be okay with that. You need to have a strategy for dealing with that because it's going to happen.

But at the same time there's got to be opportunities thrown off because OpenAI doesn't focus on everything. And you also just need to achieve a very high bar with user expectations, right?

Everyone's going to have very high expectations when they come try your product. And then, they try it for a little bit and then they'll leave because the thing that people think they want is not necessarily what they want to use every single day.

So, the product challenge is still the tricky thing, right? You're building on shifting sands. And also, people don't really know what they want yet.

Rachel: Let's talk about multimodality. One of the reasons I'm excited about this more default incorporation of images is because my background is in writing, and so I'm acutely conscious of the limitations of language in describing the physical world.

I think opening these language models to other kinds of representation like images, and potentially other kinds of data will create models that are in some sense instantiating other kinds of knowledge, not just linguistic knowledge about our universe.

Do you think this is the year of multimodality? And what will that mean for AI engineering?

Swyx: Yeah, so we had a guest, Logan Kilpatrick, come and speak at our summit last year, and he actually declared 2024 the year of multimodality. And I would say like it's mostly played out.

I would definitely say that because now, the mainline GPT-4 model consumes images natively. It's not a separate model. It used to be a separate vision model, and now it's all integrated into one.

And I think the viral demos that we've seen, so I'll point people to tldraw's Make Real, where you can draw on a whiteboard, take a snapshot of it, and turn that into actual working code. And it is very indicative of where things are going, right?

Like, in the recent sort of Google I/O, they're also demonstrating Project Astra, which is combining vision and voice generation, and all that good stuff together. And I think that's obviously what people want.

I think that's the most interesting thing about general intelligence, that it should incorporate information from multiple modalities. And yes, it is a lot easier to describe some things without words. Because obviously an image is worth a thousand words.

And it's also surprisingly cheap. I think people don't understand that it wasn't a huge leap going from last year to this year adding all the other modalities. At least for the late fusion type models.

So, there's a distinction between the late fusion and early fusion. Late fusion is where you kind of freeze the image model, you freeze the text model, and you just kind of fuse them together after they've all been trained. And the early fusion models are where they're all trained natively from scratch.
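To make the distinction concrete, here's a minimal runnable sketch of the late-fusion idea in PyTorch. The toy linear layers stand in for a frozen pretrained vision encoder and language model; all the names and dimensions are illustrative assumptions, not any lab's actual architecture, though this is roughly the shape of the LLaVA recipe.

```python
import torch
import torch.nn as nn

# Toy stand-ins for pretrained models. In practice these would be, e.g.,
# a CLIP vision tower and a decoder-only LLM, both already trained.
vision_encoder = nn.Linear(3 * 16 * 16, 1024)   # toy "patch featurizer"
language_model = nn.Linear(4096, 4096)          # toy "LLM" over embeddings

for model in (vision_encoder, language_model):
    for p in model.parameters():
        p.requires_grad = False                 # late fusion: both stay frozen

# The only trainable piece: project image features into the LLM's embedding space.
projection = nn.Linear(1024, 4096)

def fuse(image_patches: torch.Tensor, text_embeddings: torch.Tensor) -> torch.Tensor:
    """image_patches: (batch, n_patches, 768); text_embeddings: (batch, n_tokens, 4096)."""
    with torch.no_grad():
        features = vision_encoder(image_patches)   # (batch, n_patches, 1024)
    image_tokens = projection(features)            # (batch, n_patches, 4096)
    # Prepend image "tokens" so the frozen LLM attends over both modalities.
    return language_model(torch.cat([image_tokens, text_embeddings], dim=1))

out = fuse(torch.randn(1, 9, 3 * 16 * 16), torch.randn(1, 5, 4096))
print(out.shape)  # torch.Size([1, 14, 4096])
```

The point is that only the small projection layer trains, which is why bolting vision onto an existing language model turned out to be so cheap.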

And obviously the early fusion is going to be better, and more expensive. But the late fusion is surprisingly good. Take Grok from xAI, the Elon version of Grok, not Groq the GPU company. They recently turned Grok into a vision model, and that was surprisingly easy because they hired the guy who invented the LLaVA technique.

And we interviewed him for my podcast. I think it's a really cheap and interesting technique for turning any model into a multimodal model. And we're likely to see a lot more of that going forward.

Rachel: And in a sense, it's like what you said about the term AI engineer. Worse is better, you know? Something that's easy and available will outdo something that's technically superior, just because of its availability, of its affordances.

Swyx: Oh, interesting. Yeah. I would support that as well. "Worse is Better" is a specific essay, by the way. I'm quoting a specific essay. I forget the author's name, but if people are interested, Google "worse is better."

Rachel: We'll put it in the show notes. You'll find it people.

Swyx: Yeah.

Rachel: Retrieval Augmented Generation, you already raised this. What do you see as the state of the art here?

Swyx: Probably use a library like LangChain or LlamaIndex. There are a lot of papers and techniques to do RAG. You do want to stick to your fundamentals in terms of not overcomplicating things.

So the state of the art would be, for example... I mean, let's separate fundamentals from state of the art, because if you chase the state of the art, there's a lot of noise, a lot of details, and we're not sure they will stick around, right? The fundamentals will stick around much longer, but the state of the art will keep shifting.

Fundamentals are that you want to chunk up your documents, embed them somewhere, and then retrieve them according to some technique. There are fancier techniques, for example hypothetical document embeddings, but it's an open question whether that works or not. You have to tune it a lot.
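Those fundamentals fit in a few lines. Here's a minimal sketch, using a toy hash-based `embed()` as a stand-in for whatever embedding model you'd actually call; the function names and chunk sizes are illustrative, not a prescribed API.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Toy stand-in for a real embedding model (hash-based bag of words).
    Swap in OpenAI, Cohere, sentence-transformers, etc. in practice."""
    dim = 256
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            out[i, hash(word) % dim] += 1.0
    return out

def chunk(document: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Naive fixed-size character chunking; real pipelines often split on structure.
    step = size - overlap
    return [document[i:i + size] for i in range(0, len(document), step)]

def build_index(documents: list[str]) -> tuple[list[str], np.ndarray]:
    chunks = [c for doc in documents for c in chunk(doc)]
    vectors = embed(chunks)
    # Normalize once so retrieval is a single dot product (cosine similarity).
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-9
    return chunks, vectors

def retrieve(query: str, chunks: list[str], vectors: np.ndarray, k: int = 3) -> list[str]:
    q = embed([query])[0]
    q /= np.linalg.norm(q) + 1e-9
    scores = vectors @ q            # cosine similarity against every chunk
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```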

Anyway, that's the sort of basic knowledge you need to understand RAG. You probably also want to eval your pipeline using some kind of framework for understanding what sources of errors come up during RAG.

And the most popular eval framework, the one that OpenAI now supports, is RAGAS, which is a sort of four-step decomposition of the sources of errors in RAG. And there you can actually start to determine what issues your RAG pipeline might be having.

Since then, I would say that the current emerging state of the art is ColBERT, which comes out of Omar Khattab's lab at Stanford. And what's going on there is that sometimes, depending on the use case, chunking with RAG is not sufficient.

So, let's say you're making a query about a film. Asking about a specific frame doesn't really help when you're asking about something that happens over time. You need to be able to retrieve mood. You need to be able to retrieve a relationship.

That doesn't quite get captured when you chunk something, embed it, store it in a vector database, and retrieve with cosine similarity, right? And if people are too distant from their RAG implementation, they don't really understand that this is actually really material.

And if you chunk things wrong, or if you ask the wrong kind of question and send it to a RAG process, it's just not going to get it, because you are asking for specific small slices of information rather than the long evolution of something over time.

So this is where ColBERT probably shines more if you want to follow the RAG path. ColBERT is a sort of late interaction paradigm where you basically chunk token by token and you retrieve them much later.

It's really wild and kind of snazzy. You basically do a lot more work upfront, so you do less work later. And you actually can much better identify which relevant parts of your query are related to the underlying documents.
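A toy version of that late-interaction scoring, assuming per-token embedding matrices have already been computed (in real ColBERT they come from a trained encoder); the MaxSim step is the whole idea:

```python
import numpy as np

def maxsim_score(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    """Late-interaction scoring: query_tokens (nq, d) and doc_tokens (nd, d),
    each row an L2-normalized per-token embedding."""
    sims = query_tokens @ doc_tokens.T   # similarity of every query/doc token pair
    # Each query token keeps only its best-matching document token (MaxSim),
    # then the per-token maxima sum into one relevance score.
    return float(sims.max(axis=1).sum())

def normalize(m: np.ndarray) -> np.ndarray:
    return m / np.linalg.norm(m, axis=1, keepdims=True)

rng = np.random.default_rng(0)
query = normalize(rng.normal(size=(4, 128)))             # 4 query tokens
docs = [normalize(rng.normal(size=(n, 128))) for n in (20, 35, 12)]
scores = [maxsim_score(query, d) for d in docs]
print(np.argsort(scores)[::-1])                          # best-matching docs first
```

Compared with pooling a whole chunk into one vector, relevance here is judged token by token, which is why the query's specific parts can latch onto specific parts of the document.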

Rachel: And that totally makes sense when you think about the context of those two information sources. The chunking is almost by definition lower dimensionality than what's in the model itself. Because the model's a huge association engine that creates links between disparate pieces of information.

And if you do that pre-work on the RAG and create tokens out of your chunks, that's bringing it up to a level of dimensionality that's closer to what's already stored in the language model.

Swyx: Yeah. I wouldn't say tokens, I'd probably say embeddings, but yeah, that's all broadly directionally correct. And by the way, there's also a lot of fascinating work in the state of the art as well.

Right now, Matryoshka embeddings are something that OpenAI recently shipped, where you can truncate your embeddings, which is super cool. Basically you can get like a 70, 80% reduction in your latency and your database storage, if you're at a big enough scale where that matters.

You can get those reductions just by lopping off the least significant, like, 70% of the embedding that you get back from your embedding model, which is fantastic. I didn't know that was possible. But some grad students published the paper called "Matryoshka Embeddings" that showed it was possible, and then OpenAI shipped it like six months later. It's really, really fast.
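The mechanics are literally truncation. A sketch, assuming the input embeddings came from a Matryoshka-trained model, which concentrates signal in the leading dimensions (OpenAI's text-embedding-3 models expose the same idea through a `dimensions` parameter):

```python
import numpy as np

def truncate_embeddings(full: np.ndarray, keep_dims: int) -> np.ndarray:
    """Keep the leading dimensions of Matryoshka-trained embeddings.

    full: (n, d) embeddings. Only Matryoshka-trained models concentrate
    signal in the prefix, so this is not safe for arbitrary embeddings."""
    shortened = full[:, :keep_dims]
    # Renormalize so cosine similarity still behaves downstream.
    return shortened / np.linalg.norm(shortened, axis=1, keepdims=True)

# e.g. going from 3072 to 768 dims cuts storage by 75% for a small quality hit
full = np.random.default_rng(0).normal(size=(10, 3072))
small = truncate_embeddings(full, keep_dims=768)
print(small.shape)  # (10, 768)
```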

Rachel: It's embeddings all the way down.

Swyx: Yeah.

Rachel: What are some of the best practices you've seen for AI engineers to accelerate themselves? How do you ingest these techniques and use them to make yourself more competitive?

Swyx: Probably the simplest best answer is to build things. Like, set yourself something challenging to build. If you don't have ideas, just clone something existing out there, and you'll eventually find something that you want to do differently, and just go pursue that.

And learn just in time rather than just in case, right? Like, there are so many things to learn, so many things to keep up with, but you'll never get your hands dirty with stuff unless you actually build with it.

And the forcing function of building forces you to choose things and go into them deeply, instead of knowing a little about a lot of things, and also to get into the nitty-gritty details that people don't really talk about in public. That's the only way you're going to get into that and ask the questions that only real builders are going to ask, because you are trying to build.

I will say that is what any software engineer should do anyway. And it's nothing special to an AI engineer.

Rachel: Yeah.

Swyx: Right? Like every engineer should be doing that.

Specifically for AI engineers though, I do think that you do need to stay informed because this field is moving so much faster than regular software engineering.

I do think there is a self-serving bias here, us being podcasters, and me having a daily newsletter and a conference where people come and listen to what's new. Obviously, there's a bit of self-interest here, but I do believe what I say.

You want to stay informed, you want to stay up-to-date, because things move very quickly. Your underlying platform, the state-of-the-art open source models, and the RAG and prompting techniques that you might want to introduce to your pipeline: they're all changing all the time. And I do think that, to some extent, you as an AI engineer are being paid to solve this for the broader company. Right?

Rachel: Yeah.

Swyx: Not everyone in the company should be up to speed on AI, but they definitely want someone in the company to specialize in it. And if you choose to do this job, you're probably the point person. If you're not the point person, someone else will be, and you're not doing your job well.

Rachel: With so many people building and so many new models coming online, have we reached peak ChatGPT?

Swyx: So, this question is interesting the way you phrased it because we have reached peak ChatGPT and we reached it a year ago.

Rachel: Hmm.

Swyx: I don't know if people know this publicly, but I have a post on this on the blog. Everyone was very excited when ChatGPT launched and reached a hundred million users in like February of 2023, right? Like, two months. Let's just call it two months into launch.

So, that will make it January 2023. And everyone's like, "Wow! That's the fastest growing product ever. This thing's going to take over. And every man, woman, and child on earth is going to use it."

And then, fast forward 10 months to OpenAI DevDay, and Sam Altman walks on stage and announces that they have a hundred million users of ChatGPT.

Rachel: Yes.

Swyx: So, ChatGPT has plateaued. We have website analytics from Similarweb and other sources that track ChatGPT usage, and basically we spent a year with zero growth in ChatGPT. And I don't think it's going to accelerate that much more.

It's probably going to still keep growing because people are slow adopters, and there are some people who are skeptical, and they'll slowly come on board over the course of like three, four years.

But peak hype, we probably reached peak hype with ChatGPT. Like, we're past it. And now, people are sort of diffusing out into different use cases. So, to me this is not a failure of OpenAI, it's an evolution of OpenAI, right?

It used to be, "We'll ship one chat app." And then, everyone will use our chat app pay us 20 bucks for it. Now, the question is much more like, "We'll support these APIs and let other people build chat apps. We are not going to build the chat app. I want you to build intelligence into your application, however you see it. It could be an IDE, it could be like a wearable device, it could be a chat app, it could be a like a little box on your website. I don't care. Just use my generally intelligent API."

Again, that's a little bit self-serving because it means that AI engineers will have a job. So yeah, we have reached peak ChatGPT because ChatGPT itself has provably peaked. But we haven't reached peak LLM, which is probably what you actually meant to ask.

Rachel: You're prompt engineering my questions. I admire that. Have we reached peak LLM?

Swyx: No.

Rachel: And why do you say that?

Swyx: Because we are only beginning to figure out how to put this to work. And this is going to take a journey of let's say, 30 years.

And being so glib as to say that, "Oh, it happened. It's a flash in the pan. And now we move on and our lives don't change because of this," is severely underestimating what this thing can do, right?

We've never had intelligence on tap, which is effectively what this is, right? Every time you used to hire a person, it would take this days, weeks, months long process to hire someone and bring them on, and they need benefits, and they need to be managed, and they need to be respected as humans. AIs don't need any of that. AIs will work in your sleep. They won't stop. And they're dumb today, but they're getting smarter every single year. So, it's up to us to figure out what parts of intelligent work we can hand off to AI.

You know, right now it's spicy autocomplete, right? Right now it's retrieval augmented generation. Right now, it's scaffolding out code but not maintaining code. Okay, fine. That's the state of AI today. Every year it's going to get better.

So, the best product people are not going to give up because AI cannot do some things today. They're going to look at where things are going. They're going to make something workable today that will improve over time, and it'll just get better and better, right?

Like, the ultimate vision for AI is autonomous agents, right? Agents that you can assign a piece of work to, then go away and come back three, four hours later, and it's mostly done or just needs a little bit of nudging from you to get done. Right. That is a virtual human working for you.

I most recently experienced that with Devin, the sort of autonomous coding agent. And the experience of working with Devin is phenomenal. I will say, they probably overdid their marketing hype videos, because people are thinking that it can do everything.

But for the things that it does do, it really felt like I was checking in on Slack with five junior engineers, because they let you run five jobs at a time. And I was just checking in on what they're doing, asking for reports. If they're going off track, I would nudge them back onto track.

So, they still need me to be kind of overseeing things, but I'm not doing like hands on keyboard coding anymore. I'm just looking at what code they're writing, what they're planning to do, and nudging them if they're going on the wrong track, which is exactly what I would do if I was an engineering manager managing a bunch of engineers.

So, have we reached peak LLM? No, because that experience is not available for most people, and that experience is not available for domains outside of coding. Like, we just need that for everything. We need that for law, we need that for my personal chores. We need that for everything. Like-

Rachel: My taxes.

Swyx: Yeah. Yeah. We are right now in this age where we're just barely getting to sort of like, everyone having like one AI to one human.

And I think about the proliferation of technology very similarly to how computers and phones rolled out, right? We used to have like, let's say, one phone in the household. Everyone shared the same phone. Then everyone got their personal phone and that was a big deal.

We used to have one computer per household. I may look young, but I even remember that time when there was the family computer, and everyone shared the family computer. And now I have like four computers just lying around me, right?

And I think that's what democratization of technology does. It goes from a ratio of many humans to one piece of technology, and then it shifts and commoditizes to many pieces of technology per human.

And so, what that really looks like is we're going to have many AIs per human. And they'll all work for us on multiple topics, and we'll work with them in our daily lives in multiple ways. And that is going to take us decades to figure out what the right paradigm is.

Rachel: What are some of your favorite sources for learning about AI?

Swyx: Yeah, I have a list of good podcasts and newsletters. Obviously, I contribute to that, but I do have a lot of people that I respect and learn from myself.

So, I encourage people to just search my name and search for "Good Podcasts and AI Newsletters." I think about 2,000 people follow that on Twitter as well. And I have a Twitter shortlist of high-signal AI people; if you join my AI News newsletter, you'll see that. That is the way that I keep up-to-date on Reddit, Discord and Twitter news.

But then, I think that is the fundamental daily news flow. You also want to have, again, the fundamentals. I put a lot of emphasis on fundamentals that don't change, rather than getting whipsawed every single day with, "Oh, look over here and then look over there."

You'll notice that a lot of headline grabbing things happen in AI where it doesn't matter the week after. And you want to try to keep yourself away from that, right? Like, you want to focus on things that last. And I'm really trying to encourage people to do that. So, favorite sources for learning about that? You know, books.

Rachel: Yep.

Swyx: One of people writing today that are, you know, they have put a lot of work in that covers fundamentals. I recently did like an NLP with Hugging Face Transformers book that was recently released by Hugging Face via O'Reilly. And that's really good. That's a really good survey of the field.

And then also, reading papers. I think once you have good fundamentals, you want to practice them on more cutting edge things. And reading papers, looking at code, and running code that's coming out is a really good way of keeping up-to-date.

Rachel: If everything goes exactly how you think it should for the next five years, what does the future look like?

Swyx: Exactly how it should? I think I already articulated a little bit of that future, right? That we have agents improving our lives in ways that we trust, that don't get in our way, because we're no longer blocked by the number of humans that are available.

The total bound of intelligence is increasing now, where it was kind of stagnating before because the global birth rate is stagnating. Now, we can have intelligence on tap. We're going to be deploying it in the rest of our lives.

I think at least, for example, for a work context, I definitely want to index everything and make notes of everything, and make action items of everything.

I've actually been working on my own boss. I think that a boss could be automated. I call it my smol CEO. Because what is a boss except something that gives directions and asks you for status updates, you know?

And obviously, it should unblock things for you. It can't do that yet, but it could in the future. You never really know. The other version of this is a therapist AI, or a coach, whatever that is.

And so, in the next five years, we should have some prototypical versions of this working for the majority of the population. I would say that is the success criterion.

Not everyone's going to embrace this, nor should they. But for the people who want it, it should exist for them. And it should be obvious what to choose, because we've worked out the downsides enough. Like, there will be downsides. There will be horrifying, really, really bad implementations, like the one Google has recently been accused of.

Rachel: Google search suggesting jumping off the Golden Gate Bridge is a cure for depression.

Swyx: Yeah. Stuff will happen. But like, don't... You know, don't throw the baby out with the bath water. Like, that was a bad implementation of it and they'll fix it.

Rachel: Yeah. Yeah.

Swyx: And so like, do you want to be on the side of people who just criticize things while they're improving? Or do you want to just join in and help to build it, right?

Like, I think a lot of the social media bias is towards just laughing at Google. But actually they're trying things. What are you doing, you know? And I'd much rather have that attitude.

Rachel: I don't think those two extremes are mutually exclusive though, are they? I think you can be very critical of the potential harms of AI while also wanting to make it as good as it can possibly be.

Swyx: Yes. I do agree with that.

The reason I care about diversity and inclusion is not because I want to be woke or virtue signal. It's because if you genuinely believe that technology is as powerful as it is that you believe it to be, then it should be designed by the people who are going to be affected by it.

And right now, it's mostly a bunch of white guys and Asian dudes, right? Like, let's just call a spade a spade. And we're not going to know what people who don't look like us need. We're not. So, we should include them in the process, just mathematically.

'Cause like if you believe that this has power, then you know, with great power comes great responsibility. And I feel like this is still not well appreciated by the Silicon Valley tech scene. And it'll forever be so, right? This'll be a never ending task, but-

Rachel: Not forever. Not forever.

Swyx: I'm doing my best. I'm trying to get that message out there. If things go the way I like it to for the next five years, that should be in there too. That it becomes more diverse and inclusive, because hey, if this thing's going to actually take over the world and actually take over humanity and serve as well, it better be more diverse.

Rachel: Yeah. Swyx, one last question. If you had a generation ship on a journey to the stars, what would you name it?

Swyx: Enterprise.

Rachel: Classic for a reason. It's a good one.

Swyx: I'm not a huge Trekkie 'cause I never really found the shows that interesting, but I like the idea that humanity has solved all their problems, and now our sole job is to go out there and see what's out there. And that is what an abundance mentality looks like.

Rachel: Yeah, absolutely.

Swyx: That you're just going out there and observing, and hopefully not intruding too much. Even though every episode, they always somehow manage to break their own rules.

But the USS Enterprise, I think, you know, it'll be iconic. If we had a generation ship to the stars, then we probably should name it after the first sort of cultural artifacts of humanity that figured this out.

Rachel: That's a very cool name. It's been a delight to have you on the show. Thank you so much. And good luck with everything that you're working on.

Swyx: Likewise.