Ep. #139, Navigating Apps with LLMs featuring Matt Dupree of ATLAS
In episode 139 of Jamstack Radio, Brian speaks with Matt Dupree of ATLAS. This talk focuses heavily on LLMs and how they're elevating UX. Additionally, Brian and Matt explore AI hallucinations, tips for predicting where LLMs might be most impactful, and insights on why most code in future apps may be generated by AI.
Matt Dupree is the founder of ATLAS, an LLM-powered app guide. He was previously a data science engineer at Heap and has a background in philosophy.
transcript
Brian Douglas: Welcome to another installment of Jamstack Radio. On the line we've got Matt Dupree. Matt, how are you doing?
Matt Dupree: I'm doing great. How are you?
Brian: I am doing fantastic. It is a wonderful, gloomy, overcast day here in Oakland, California and I'm on my first cup of coffee so I'm just getting started.
Matt: Okay, yeah. I'm only at a half cup of coffee. I try and minimize my caffeine intake so I was like, "I think I only need half today."
Brian: Normally I don't do coffee till the afternoon, I've been doing tea in the morning. But we're not here to talk about me, we're here to talk about ATLAS and what you're working on, so do you want to give us a quick intro of who you are and what you do?
Matt: Yeah, sure. So who I am in one sentence, wannabe philosophy professor turned wannabe tech entrepreneur. I've been saying that for a long time, but I guess the wannabe tech entrepreneur part is not as true. I am doing it now. Background was in philosophy, left philosophy, worked at various startups for the past decade, mostly as a programmer.
But also as a product manager, tech lead, I was the CTO of an influencer marketing startup for a little bit. Then, most recently, I was on the data science team at Heap. Heap is a product analytics company, it's kind of like an Amplitude. That's the bigger one that people are familiar with. Heap actually just got acquired.
Brian: Oh, I didn't see that. Who'd they get acquired by?
Matt: Content Square. But, in any case, yeah, joined Heap originally on the engineering team but then got my way down to the data science team there and got to play around with a lot of machine learning stuff and AI stuff, and data science. So that was, yeah, my background. Then left Heap about, let's see, wow, almost 18 months ago now to start working on ATLAS, so that's my background.
Brian: Okay, excellent. Yeah. Well, then ATLAS is your first, I guess you could say now you're a tech entrepreneur?
Matt: Yeah, I guess. I've had other failed attempts at starting something, so I guess I was still a wannabe tech entrepreneur at that point. I tried something in 2016 where I quit my job and did a little accelerator program, and ran out of money pretty quickly and realized the idea wasn't good. Then actually right as I joined Heap, me and a buddy built a Waze for grocery availability thing for the pandemic.
My buddy and I were running around looking for toilet paper, going to a bunch of different stores, and were like, "This doesn't seem like a good way to not get COVID." You could check like five different stores for the thing you're looking for. So the idea was people could just report if there was toilet paper at a particular location with a particular time stamp, and then instead of having to go to five different stores you could just be like, "Oh, so and so just said that there's toilet paper 10 minutes ago. I'm going to go there."
And that thing took off like crazy as you might imagine. The news covered it and stuff. But yeah, Apple wanted to shut it down. "This is a COVID app and you're not a doctor or any kind of medical organization," and we're like, "What? This is not a COVID app."
Brian: It's a toilet paper app.
Matt: It's related to COVID, but anyway, they didn't care so they shut it down. Blah, blah, blah, and moved on from that. So this is maybe my third attempt, I guess.
Brian: Okay. Man, that's unfortunate because, yeah, it's kind of like the... I think of Waze, I don't know if they still have this as a feature but if there's an accident or a speed trap, you tell other people by just reporting it, so kind of like Waze for toilet paper is what you built.
Matt: Yeah. Waze for toilet paper/anything you might need during the pandemic. This is what we told Apple, is it's not necessarily pandemic, it's any time there's a shortage of any good. So I live in Florida, and if there's a hurricane there's a similar dynamic. Stuff goes missing from the shelves, you need it, it'd be useful to know what's around. Yeah, that was the idea.
Brian: Yeah. So you're on a new idea, which is ATLAS, so can you explain what ATLAS is?
Matt: Yeah, I can. Actually, I'm right in the middle of changing how I talk about what we're doing, and so, yeah, I'm going to try out this new way of framing things and you can tell me if it doesn't make any sense.
Brian: Yeah, we heard it first.
Matt: Yeah. Sorry that JAMstack is the guinea pig on this. But, yeah, I think the new way that I like to frame this is basically we think that LLMs are going to change how people interact with software. Mobile happened, and that changed the way that people interact with software. The companies that really took advantage of that UI paradigm shift, they did well, there were some startups that took advantage of that and were able to compete against incumbents.
So an easy example of this is Tinder, the swipe left, swipe right. That was kind of a simple UX change, but it was just a better user experience and allowed them to take on the big guys. We think that there's a similar thing that's going to happen with LLMs where we all know that LLMs are really powerful and we don't think that UIs are going away. Not even OpenAI thinks that, by the way.
So we don't think UIs are going away, we think that there's this LLM powered UI that's going to be built in the future and we're helping companies build that.
So that's a little philosophical, right? Unsurprising from a philosophy guy. Concretely, what does that look like? So an experience that we're really focused on right now is let's say you're using a piece of software and you don't know how to do something. The experience that we're building is you can type in what you want to do, just describe the task you're trying to accomplish and then we will just take you to the part of the application where you can do that thing.
We'll just kind of teleport you there, and we think that this is just a fundamentally better experience than hunting and clicking for the place where you want to perform the thing. So let's say we just did this with GitHub. We did a little public demo, and let's say the task is enable two factor authentication, they're pushing that right now. It's going to be required soon.
Let's say you saw the banner that's like, "We're requiring this soon, you should do it." And you X out because you don't want to do that right now. But we're getting close to the end of the year and now you want to remember how to do it. Right now your options are you can look at the docs or something like that, or maybe there's some YouTube video that explains how to do it.
Or, with ATLAS, and this is what we just demoed, you just type, "Enable two factor authentication," and then we will jump you to the page where you can do that. So you don't look at the docs. GitHub actually has really good UX copy and progressive disclosure, so you can hunt and click around and find the thing pretty easily, but we think it's just fundamentally better.
It's a fundamentally better way of interacting with software to just say what you want to do and then, boom, get transported to the place where you can do it.
Brian: Yeah. I guess going back to the Waze for toilet paper, but what if there's not a way to plan to do it? So between the last time we talked because we talked before we got on this podcast a few weeks ago, probably, we actually revamped quite a bit of our docs to have a pave... What we have is guides, so we have guides now, like, "Hey, you're this persona or user, you're trying to do this thing. We know people ask for this, so here's a guide."
But the reason why those guides exist is because people keep asking us, "How do I do this thing? I saw on the front of the website I could do a thing, and I thought it was this. How do I do this?" We turned out a guide just basically saying, "Okay, do this thing." But it sounds like instead of us pointing people to guides, we could point them to the site and walk them through the process. So is that interactivity in the actual app itself?
Matt: Yes, yes. Thank you for asking that question. That's a good clarifying question. So yeah, this is all happening inside of the application because we think this is another way in which I think the way we interact with software will change. So if you think about... let's do like a map analogy piece, so let's go back to before Google Maps, before Apple CarPlay.
If you're trying to get somewhere in the world and you're going on a road trip, it's you and then somebody else is in the front seat being the copilot. They've got this big, paper map that's strewn across the dashboard and it's kind of this clunky experience. The map is divorced from the experience of driving and you need a person to translate between what they're seeing on the map and what they're seeing in the world, and it's not great.
Fast forward that to now with Google Maps and Apple CarPlay where you have a map that is updating in real time based on your location in the world and it's just really easy for you to glance at it and know where you're at and where you're going, so it's a really streamlined experience. That's kind of the vision we have too.
There's a similar shift that we think will happen with LLMs and UIs, where people, they won't really want the guide that's separate from the app that feels a little bit clunky. They'll want the streamlined, in-app experience where you're getting this guidance, it knows where you are in the app and knows where you want to go and it's in real time helping you do what you need to do.
Brian: Yeah. This is a weird flex, I drive a Tesla and I use the Tesla map thing.
Matt: Not weird, Teslas are great.
Brian: Yeah. But what I don't like about the Tesla is I think Google Maps and Apple have much better map software, I'll say that right now. I know Tesla uses maybe Google's API or somebody's but what I'm getting at is I constantly miss exits. So if I'm driving and like, "Okay, this exit is coming up. Let me prepare you for it." What I think Google does a great job of is it gives you a heads up, Waze does this as well, it's like, "Hey, get in this lane."
I don't get that from Tesla, again, my wife hates it all the time, I miss the exit, we got to turn around, do the three left turns and eventually get to the right place. So it sounds like with ATLAS you have interactivity and it's going to get you in the right lane at the right time, so I guess my question is how do you interact with someone's app then? Am I integrating this in my APIs or is it an extension? What's the deal?
Matt: Great question. So one thing I want to clarify really quickly, I like this analogy that you have here, but at some point all analogies break down. One thing that we think is going to change with LLMs is... it kind of breaks down with the moving-in-the-world analogy. When you move in the world, you don't just teleport from one place to another.
There's a bunch of intermediate steps, and that is true when we interact on the web. For the most part, if we don't know where we're going, we have to perform a series of intermediate steps and if the app is well designed we have really nice UX copy and sign posts to get us to the right place. That's the best that we can do right now without LLMs.
But with LLMs, if you have a map of the software and you have a description of where the user wants to go, the user doesn't have to do the intermediate steps anymore. They don't have to do the thing where they're like, "Take this exit and turn here," the click-by-click directions. That can go away, you can just go directly to where you want to go to, right?
Just because we're traveling digitally, we're not moving in the physical world, so it's a skeuomorphic thing that we have on the web where it's kind of modeled off of moving through physical space. But that can go away with LLMs and these maps of software. Does that make sense?
Brian: Yeah. It goes away, but then I don't know if we lean into this skeuomorphism. I know everyone has their chat interaction when it comes to LLMs of, well, not everyone. But ChatGPT gave us the mental model of, "Okay, you have an LLM. How do I interact with it? Oh, chat." And now with voice, OpenAI is the voice companion as well, where you're like, "I'm just going to tell you a thing and you talk it back to me." Now we're at a place where... Sorry, we're verging off ATLAS but we'll get back there in a second.
Matt: No, we don't have to talk about ATLAS. This AI stuff is cool, let's talk about AI.
Brian: So like the Alexa and Cortana and all these other devices that you talk to, it felt like they all had the thing that we're actually building toward, and even a better use case. But it doesn't seem like the investment is going back into those devices, despite the fact that now we're kind of just rebuilding those devices with ChatGPT and now we talk to it.
Matt: Yeah. I think that chat interaction, so part of our vision for LLMs and the future of UIs is this: a lot of people think UIs will just go away, that there's going to be a lot more of this chat interaction, and we think that that's pretty overstated. Actually, if you remember there was a demo that OpenAI did for TED and this was a while ago, this was when they were talking about ChatGPT plugins.
They demoed this feature where you could ask ChatGPT to suggest a meal and then it would give a list of recipe ingredients, and then it would hook up to the Instacart API and you would get dropped into their UI and it had all the ingredients in your shopping cart and you could click checkout, right?
Then Greg Brockman, he's like, "See, but I don't think UIs are going anywhere because if I want to change what's in my cart for Instacart I just want to tap the plus button. I want more quinoa, or I just want to remove quinoa." I don't want to type out or even say, "Add more quinoa."
It's just faster to be able to visually see what's in your cart and then, boop, tap the plus button and you're done. So I think there are so many instances like that where a user interface where we can visually see what's going on and interact with it, that's actually going to be the superior way of interacting with the software.
Maybe not all the time, but we are visual creatures, the neural pathways for vision are older, more primitive and faster than the neural pathways for language and so I think it's just going to be faster and more convenient in a lot of cases for us to use UIs so that won't go away. I don't know if I got on a soapbox or a tangent there.
Brian: No, no. I'm intrigued and honestly I agree as well. We're definitely inherently visual people. I think of the guy on TikTok, I forgot his name, but he was number one for a while, he's an Italian guy. He only interacted without speaking, so the whole gimmick was he would look at these random DIY videos of people building a thing, and then he would just show the easier way to do it.
But he never spoke in any of his videos, it was just non-speaking comedy and he rose to the top charts, and the fact is he doesn't speak English. So he only speaks Italian and he was able to break into US TikTok just by not talking.
Matt: Yeah, leaving language out of it and being visual. That's super interesting.
Brian: Khaby Lame, that's his name.
Matt: Khaby Lame?
Brian: Khaby Lame. Yeah. He's been in some Super Bowl commercials, I don't even know if he still does TikTok, to be quite honest, because I'm not as into TikTok anymore. But yeah, he exists. You had mentioned something in passing of when folks go to use a piece of software, a dev tool. My use case is I just go to Google and there's something on YouTube, I have a quick walkthrough.
My example is I was using TablePlus which lets you connect your database and then you can do some SQL commands and play around with it. They have a feature where you can add an OpenAI key and then it generates some SQL for you, and I'm like, "Ah, I know it's a thing." They had no announcement, it was like a tweet that they were like, "Oh, we have a thing."
But there's no guide, no documentation on how to do it and one of my coworkers showed us on a call and I'm like, "Dude, how did you do this thing? I'm looking everywhere, there's no docs, there's one tweet that mentions the thing at all." And he's like, "Oh no, it's this one tab up in the right hand corner, it's called Assistants and then you add a key."
It's not very intuitive, I get it, but there's tons of tabs so how would I have known about that Assistants tab? And how did you find this out? But then on YouTube, no one had a video of it, no one explained it. They've only had it around for about a year, but it seems like it's been very much beta or even alpha, which is part of the reason there's no docs. But yeah, it was just mind blowing, if I had known that tab was there.
Matt: I love that example because I feel like this has happened to me several times, where whatever software I'm using, I wish it had ATLAS because I can't figure out how to do the thing. I love the example because... so Assistants is the copy that they have on the button, the UX copy there. One of the things that I experienced when I was at Heap is that UX copywriting is really freaking hard.
Putting the right text on the buttons and tool tips and app sections so that people can scan the screen quickly and move in the right direction towards the task they want to accomplish, it's crazy hard. We did this user research thing where it took us weeks for us to figure out that... We call our data visualizations graphs but people think about them as charts.
That's the language that they use, so we need to update all of our buttons so that when they're scanning the page they're just like, "Oh yeah, I want the chart." That's what they'll see, and that was the problem with, in your case, Assistants didn't map onto your vocabulary of the task you wanted to accomplish.
Again, this is where LLMs are really interesting, LLMs are really good at language and translating between your language that you have for describing your task and maybe the language that's used in the application.
So I think UX copy is going to be really important for new users who don't even know what they don't know. They don't have a task in mind that they want to accomplish. They're just clicking around and exploring, and so the practice of UX copywriting is still going to be really important for them, but for people who have already used the application and they have a task in mind that they want to do, people like you, if you could just type in, "Generate SQL," or whatever and then get dumped to the page where you could do that. That's going to be the way that people use software.
Brian: Yeah, and that would be a magical experience because I was Googling a bunch of random stuff like, "TablePlus OpenAI, TablePlus Chat generate SQL," looking for other tools that did the same thing because I know there's quite a few tools out there that are starting this process of building this. Man, I could not get to what I needed to get to just by Googling.
I don't think I tried out ChatGPT to be like, "Hey, how do I generate SQL?" So basically the use case is I actually used ChatGPT, added my schema as like, "Here's my schema, generate SQL to solve this problem," and it was a great experience but I needed it closer to my pain point. I'm thinking out loud because if I could Google my way into like, "I need to solve this problem." And maybe there's an ATLAS link that gets me a dashboard that shows me a bunch of different roadmaps.
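For anyone who wants to recreate the schema-to-SQL workflow Brian describes, here is a minimal sketch using the openai npm package. The table schema, prompt, and model name are made-up examples rather than anything TablePlus or ChatGPT does under the hood, and an OPENAI_API_KEY is assumed to be set in the environment.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from process.env

// Hypothetical schema, standing in for whatever you would paste from your own database.
const schema = `
CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_id INT NOT NULL,
  total_cents INT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);`;

async function generateSql(question: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // any chat-capable model works; this one is just an example
    messages: [
      { role: "system", content: "You write PostgreSQL queries. Return only SQL." },
      { role: "user", content: `Schema:\n${schema}\n\nTask: ${question}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

generateSql("Total revenue per customer over the last 30 days").then(console.log);
```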
Matt: I love that. Almost like a deep link or something like that that you could get to from Google. I've definitely thought about this because one of the things we're trying to figure out as we're super early, but a key thing we're trying to figure out is how do you invoke ATLAS inside of the application? Because how does it get surfaced?
The muscle memory for everybody is just to go to Google, right? If you don't know how to do something in a piece of software, you're going to Google immediately and so it's a hard problem to change the behavior there. So I had this exact thought, I was like, "What if we can keep the behavior the same?"
You still go to Google, but we somehow augment the Google results so that you can get these ATLAS deep links back into the application that will show you how to do the thing so we could do it as a Chrome plugin or something like that. But I think you did ask a question earlier about how does this even work? How do we interface with the application? Right?
Brian: Yeah. So I know you're super early, but what's the interaction? Am I signing up to ATLAS and I'm adding it to some end points or into my UI itself?
Matt: Yeah. So right now it's not API based. Basically we build a map of your application, this is native to ATLAS, and so essentially what's happening is there's a crawler that's clicking around your application. With those interactions, it's building a map of all the different destinations in your application, of what's possible and how to move between different places, and then that's the thing that's processed by the LLM for getting you to the right place.
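To make the idea concrete, here is a rough sketch of what "a map of the app plus LLM routing" could look like. This is not ATLAS's actual implementation; the destination list, paths, and model name are hypothetical, and it again assumes the openai npm package.

```typescript
import OpenAI from "openai";

// A "map" like the one a crawler might produce: destinations plus a description
// of what a user can do at each one. These entries are made up for illustration.
interface Destination {
  path: string;        // route the app can deep-link to
  description: string; // what a user can accomplish there
}

const appMap: Destination[] = [
  { path: "/settings/security", description: "Enable two-factor authentication, manage passkeys" },
  { path: "/settings/billing", description: "Update payment method, view invoices" },
  { path: "/repositories/new", description: "Create a new repository" },
];

const client = new OpenAI();

// Ask the model which destination matches the user's task, then validate the answer
// against the map so a hallucinated path never gets used for navigation.
async function routeForTask(task: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: "Given a list of app destinations, reply with only the path that best matches the user's task, or NONE.",
      },
      { role: "user", content: `Destinations:\n${JSON.stringify(appMap, null, 2)}\n\nTask: ${task}` },
    ],
  });
  const answer = completion.choices[0].message.content?.trim() ?? "NONE";
  return appMap.some((d) => d.path === answer) ? answer : null;
}

// routeForTask("enable two factor authentication").then((path) => path && navigateTo(path));
// navigateTo is whatever your router exposes; it is not defined here.
```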
Brian: Yeah. So I guess what stage are you at right now? Can folks use it? Can we sign up today or we have to get on the waiting list?
Matt: Unfortunately not. Yeah, unfortunately there's a waiting list. We actually just this Friday, we demoed this working on top of GitHub. I'll put the link to it, but you can see how this works on top of GitHub and if you want to try it out, join the wait list. But we're hoping to have a self-serve launch very soon.
Brian: Excellent. So I did want to ask the question, we kind of already started talking to this, but the feature of LLMs and we now have a paved path thanks to OpenAI and a few other folks who are quickly iterating and taking tons of funding. Are we moving into a world where now we can have large language models of our niche experiences and how is this going to surface in applications moving forward? We have ATLAS, so onboarding, checked, but what do you see the future of LLMs really impacting? Future development?
Matt: Yeah, the future of development, obviously you have things like your Copilots or whatever, where you're getting lots of code generated for free. I think I saw that where it was like the CEO of GitHub was like, "80% of the code in your typical SaaS application is open source. We think that 80% of the code in future applications will be AI generated." This is the next step for them, so I think there is something to that and I think it's one of the more promising areas of LLMs. Do you know Airplane?
Brian: No, I don't know Airplane.
Matt: So Airplane.dev. It's similar to Retool, but the founder of Airplane was tweeting about... He had this framework for predicting where LLMs will be most impactful, and I think the framework is a little bit old, but even with some of the recent advancements it still applies. Basically, he says if there's a workflow where you can leverage something like GPT or an LLM and you have an opportunity to edit or review the output before going live, any workflow like that is going to be disrupted by GPT.
So coding is an obvious example but there are others like copywriting, UX copywriting even, right? Any instance where you can still be a human in the loop, it's just going to be transformed by GPT. I think that's even more true with some of the custom GPT stuff we saw come out with Dev Day and as these models get better. I will say I am bearish on some of the improvements in reasoning capabilities of these LLMs.
I know that there's some progress being made there, like OpenAI is, I think, hiring the big guns to get people to work on this kind of stuff. But I think I'm a little bit bearish on whether we'll be able to make drastic progress on the reasoning problem. An interesting person to look at there is Gary Marcus. He's pretty articulate about why the existing approach may not work.
I think he may be a little overly confident in the declaration that it won't work, because the fact is there's a lot we don't understand about the human brain. For example, I was just thinking about hallucination for humans. Hallucinations are a problem for LLMs and they're also a problem for humans. When we're hallucinating, there are just certain parts of the brain that are not working properly, but it's not that there's something fundamentally different that needs to happen with the brain for those to go away.
It's just that certain things are shut off. You might be able to say something similar about LLMs where it's like, "Need more neurons so that the hallucinations don't happen."
Brian: Yeah. I'm finding that folks, as the LLMs niche down into specialization, there's a way to combat against hallucinations and have some integrity on the data. Candidly, we're working on something in the open source space that's going to be surfacing data within open source, and we're working on our expertise for open source. The hallucination thing has come up, and we have a really clever way to circumvent that stuff so stay tuned in February when we actually launch this thing.
Matt: Nice.
Brian: Yeah. I'm just fascinated, but also I'm appreciative of all the work that's been done at all these larger companies, and they're all large now, they take a ton of money, because I get the benefit. As developers and folks that build things, we get to reap the benefits and be close to the solutions, so if I'm installing the OpenAI npm package and adding my tokens, I'm off to the races and I don't have to do a six-year PhD stint of learning ML. Now I'm standing on the shoulders of giants.
Matt: Yeah. There's actually two references in connection with that that support that experience. One is there's a VC, I think it's Theorem Capital or something like this, something similar, where they published a post called AI Is Having A Twilio Moment. This is a little bit old, you might have seen this. It's exactly what you just described, Twilio made it easy for ordinary developers to create these SMS experiences and now something similar is happening with AI.
Then the other thing that I saw was there's a big AI guy, Santiago, that's actually based in Miami. He was saying on LinkedIn, he's like, "I actually don't recommend that people study math anymore if they want to get into AI and machine learning. That recommendation was relevant 10 years ago. It's not now." He basically says start building something that you care about, there's going to be gaps in your knowledge, mathematical gaps.
Fill the gaps and then get back to building. That's his advice, and I think that seems right to me. That's how I broke in. I have a mathematics minor and that was enough: I got out a couple of data science textbooks and took a couple of MOOCs on machine learning, and that was enough for me to break into data science at Heap and be off to the races, building stuff.
Brian: Wow, amazing. Cool. Well, I feel like there's definitely a paved path for me now and I've been doubling down on this knowledge and trying to be well educated, but also not have to be well educated in this space. Yeah, just super happy to reap the benefits. I did want to transition us to picks, so these are your Jam picks. I appreciate you coming in and sharing a bit about what you're working on at ATLAS. Folks, JoinATLAS.ai is the URL.
Keep an eye on that URL, and perhaps you'll be able to embed some pretty wonderful onboarding experiences into your applications pretty soon. Matt, Jam Picks are things that we're jamming on. These are things that could be music, could be food related, technology, all of the above is on point for this portion of the podcast. If you don't mind, I'll go first.
I've got a pick that I also mentioned to you before we hit record, which is Detroit-style pizza. It's getting cold out here, so I've been baking a lot more. I was watching a YouTube video and learning about pizza rolls, not Totino's Pizza Rolls but pizza rolls in West Virginia. Turns out it's a coal miner go-to food. Folks who are in that region are probably like, "Yeah, of course. Totino's are underwhelming."
But I made them last weekend for my kids, which are just if you think of pizza rolls, mozzarella, pepperoni, wrapped into basically pizza dough. An amazing little snack, so now I'm on the kick of making my own Detroit-style pizza and the history of this which I was kind of explaining this before we hit record on the podcast.
Detroit, oil pans, Motor City, they make focaccia bread and pizza within these thick steel pans. I don't think they're actually the same pans you catch oil in, but it was the same sort of material.
Matt: Hopefully not.
Brian: Yeah, they're definitely cleaned out. Actually, I think the pans were actually not oil pans, they were pans that carried all the machinist's parts, so that's what they originally were. Amazing pizza, and it's definitely gotten a huge uptick in the US, at least on the West Coast, where Mike's Hot Honey is the other combo piece that everyone uses. So I am now making that stuff and I'm looking forward to slinging pizzas during the Christmas season.
Matt: Nice. Yeah, I love that pick. We were talking about how I just recently discovered Detroit-style pizza and I was born in Detroit. It's becoming a thing now and, yeah, it's pretty neat.
Brian: Yeah. Also, Little Caesars is based out of Detroit and the guy who created it was a former Detroit Tigers baseball player.
Matt: No kidding? I didn't know that. Yeah, so that was my first intro to Detroit-style pizza is when they did this Super Bowl promotion where they'd have like a 10 yard pizza or something like that. Quite silly, but it was basically Detroit-style pizza.
Brian: I did have another pick which is relevant to what we were talking about with onboarding.
I did a talk at Heavybit years ago, back in the day about onboarding and we actually talked about GitHub and some of the pain points of GitHub's onboarding and how it was really guides created by community members. GitHub actually created a whole team after that, they call it the New User Onboarding Squad and they just helped that experience of giving people a little bit of breadcrumbs to understand how to get unblocked.
Timely, since you just recently did that stream on GitHub as well for this two factor authentication stuff. But GitHub is a power tool and there's always going to be a thing that no one knows how to do or you didn't know existed in the application. I think even ATLAS could help unveil some of the things that shipped in the last 10 years that maybe you didn't see or weren't aware, or could increase your productivity.
It's more of, what is it? An Easter egg hunt of like, "Hey, here's a cool thing you didn't know." I imagine a lot of other companies would benefit from that as well.
Matt: Yeah. That's another thing that we're thinking about, this is how LLMs would change UIs a little bit. Right now you ship a new feature, and the state of the art is almost spamming your users with some sort of feature announcement banner.
They log in, it's like, "We shipped a new thing!" You can try and segment your user base so that you're not spamming them, but the best way that you can surface features that are salient for them is to actually look at their previous usage, look at what they've interacted with before.
One of the things that we're thinking about is look at all the usage data, look at everything they've interacted with before, feed that to an LLM and then use that as a filter on whether you actually announce a particular feature, whether you hit them with that announcement banner. That's something that we're thinking about.
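As a hedged sketch of that filtering idea, and not how ATLAS actually does it, you could summarize a user's recent events and ask an LLM whether a given announcement is relevant before showing the banner. The event names, feature copy, and renderBanner helper below are invented for illustration.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Decide whether to show a feature announcement to this user at all,
// based on a summary of what they have actually done in the product.
async function shouldAnnounce(featureDescription: string, recentEvents: string[]): Promise<boolean> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: "Answer YES or NO: given a user's recent product events, is this new feature likely relevant to them?",
      },
      {
        role: "user",
        content: `Feature: ${featureDescription}\nRecent events:\n- ${recentEvents.join("\n- ")}`,
      },
    ],
  });
  return /^yes/i.test(completion.choices[0].message.content?.trim() ?? "");
}

// Example usage with invented event names; renderBanner is app-specific and not defined here.
// shouldAnnounce("AI-assisted SQL generation in the query editor",
//   ["opened query editor", "ran 14 SQL queries", "exported results to CSV"])
//   .then((show) => show && renderBanner());
```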
Brian: That sounds like a million dollar idea.
Matt: The other thing that I wanted to quickly say is onboarding is kind of the right way to think about this, or it's how I was thinking about it even when we spoke a month ago. But I think it's a little bit broader than that. Think about the physical world, if you could teleport in the physical world, you wouldn't drive, right? You wouldn't take these intermediate steps.
Even if you know how to get there, it's not really about onboarding or your first time there, you just teleport, right? And I think there's something similar you can say about ATLAS where it's not just about the first time you need to get to that two factor authentication screen.
You know where it is after ATLAS shows you, but you don't really want to take the intermediate steps to get there anymore. You just want to go there, teleport immediately. It's this deeper way that we interact with software that's changing, I think. So I'll finally get to my pick now.
Brian: Yes, please.
Matt: Well, you had two. Can I have two?
Brian: You can have two. Feel free, unlimited.
Matt: I mentioned Gary Marcus earlier. Like I said, I think some of his stuff is a little bit overstated or overconfident, but I think he's a very important voice for anybody who's thinking about LLMs. He's the skeptical voice, so you have some people who are really hyped and then some people who are maybe a little bit too skeptical.
If you're listening just to the hype, you're not going to be balanced, you're going to be on the hype side. So counterbalancing that with a skeptical voice I think is incredibly useful, so anything Gary Marcus says is really interesting. He actually founded a couple of companies that are focused on machine learning and AI, one of which sold to Uber.
He's a cognitive scientist guy at NYU, he understands how the brain works and stuff, just an accomplished dude that says interesting things. So that's one pick, Gary Marcus. The other one, I have a three year old, she's getting into Pixar Shorts. You know the little six minute things before Pixar movies? One that I had never seen before is called La Luna, and it's really good.
Brian: I'm familiar.
Matt: You're familiar? Yeah, it's really nice so I recommend it. Especially for people with kids, it's a good thing to watch with your kids, I think. My interpretation, maybe you could weigh in here if you've seen it, it's about kids needing to find their own way. It's a metaphor for, "We're adults, we're parents, we have this generational way of thinking and the kids need to find their own way." There's new stuff that's going to happen to them and they need to figure it out, and not necessarily listen to us. I think that's a cool message for kids.
Brian: Yeah, it kind of sums it up. But I think a lot of the shorts recently have been in that vein. Pixar is actually out here, I'm in Oakland but Emeryville is the next city over and I have friends of friends of the folks who have been writing a lot of the new shorts and the new movies.
I think what it is, is that a lot of the folks writing are millennials, because I assume you're a millennial or within the range, who are coming of age and have stories to share, and that's what I appreciate about Pixar itself. It's all about pulling at the heartstrings, but also relevant commentary about life stories that aren't always going to be like Another Teen Movie.
It's movies for kids telling these stories, so I've been super impressed with what they've put out. Well, Matt, I loved this conversation and I loved your picks as well. I think we could talk probably for another hour. Perhaps we'll have part two some time in the New Year when you guys go live and folks can start embedding this into their products. Thanks for chatting.
Matt: All right. Thanks for having me. This was great.
Brian: And folks, keep spreading the jam.