
Ep. #73, AI’s Impact on Observability with Animesh Koratana of PlayerZero

about the episode

In episode 73 of o11ycast, Jessica Kerr, Martin Thwaites, and Austin Parker speak with Animesh Koratana, founder and CEO of PlayerZero, about the future of observability with AI. Discover how PlayerZero leverages LLMs to democratize institutional knowledge, improve software quality, and make problem-solving more accessible to all engineers.

Animesh Koratana is the founder and CEO of PlayerZero, a company revolutionizing software observability through the use of AI and LLMs. With a background in data-intensive systems, machine learning, and AI, Animesh began his journey with PlayerZero as a research project at Stanford University, focusing on the intersection of big data and modern software development.

transcript

Animesh Koratana: When something breaks and your customer calls you and says, "Hey, something is broken," how do you go from that to what broke? Why it broke? What do I need to do in order to fix it? Hell, what does the actual code commit look like?

And then if I were to take this commit and put it into production, what might break because of that? That entire process is what PlayerZero is building autonomy within.

Observability data is obviously just a really important part of that entire workflow because it's the connection between what the user's saying and what actually happened. We plug into, you know, a bunch of different Observability platforms.

But the one that, I think, Austin, given your background, you'll appreciate is OpenTelemetry. It's one that most of our customers really enjoy integrating with, and it gives really tight visibility into what's happening in the backend. And also, you know, some visibility into how that connects to the code base even.

Austin Parker: I think that's one of the really cool things with OpenTelemetry, right, that it's out of the box designed around this idea of like semantic data, right, and semantic conventions, and having super structured data with a lot of normalization, a lot of metadata about like, Hey, this is what this thing actually means.

And that's kind of catnip to an LLM, right? Like, they do really well with structured data.
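
As a concrete illustration of the structured, semantic-convention data Austin is describing, here is a minimal sketch using the OpenTelemetry Python SDK. It is not from the episode; the service name, span name, and values are hypothetical, while the attribute keys follow OpenTelemetry's HTTP semantic conventions.

```python
# A minimal sketch (not from the episode) of a span carrying
# OpenTelemetry semantic-convention attributes. The service name,
# route, and values are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("GET /cart/{cart_id}") as span:
    # Shared attribute names with agreed meanings are what make this
    # data easy for tooling (or an LLM) to parse and normalize.
    span.set_attribute("http.request.method", "GET")
    span.set_attribute("http.route", "/cart/{cart_id}")
    span.set_attribute("http.response.status_code", 200)
```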

Animesh: Well, so yeah, I mean, I think what's really beautiful about OpenTelemetry, just like you said, is it creates a shared language for how to understand, right, what is actually happening, in a way that, you know, you can almost parse it out and represent it to language models in a very specific way.

But I don't think that actually solves the cardinality problem in the observability space, right? I think that's probably where language models create the most leverage. To explain that a bit, right?

I mean, in like any modern architecture you have, you know, tons of different microservices, you have many different services talking to each other, right? And the metadata that is naturally produced as a function of those systems being online, right, is just so diverse, right, that it's high cardinality, right?

By nature, it creates a really interesting workload on anybody who's on the other end of it trying to parse through that, and trying to make any meaning out of it. And I think language models are particularly well oriented to work through that high cardinality data through what others would call intuition to actually kind of make some sort of meaning out of it and to get things into production.

Jessica Kerr: We love talking about cardinality here, so I want to ask you more about that, but first,

Animesh: Yeah.

Jessica: Would you introduce yourself?

Animesh: Yeah. My name is Animesh Koratana. I'm the founder and CEO of PlayerZero. Been building this for about three and a half years. Started off actually as a research project back when I was at Stanford.

So I used to work in data intensive systems, basically at the intersection of machine learning, AI, and really big data systems. So Apache Spark and you know, Mesos, that sort of stuff. And, you know, I have a particular empathy for the work that development and engineering teams do every single day.

My dad was also a founder of a company. He was a CTO of a company. Started it when I was in elementary school, and I watched it grow until I was, you know, eighteen or nineteen years old.

And I got to help him run support, run his engineering team from when they were two people to 50. And it was just a super fun experience. And so I just have a deep empathy for that whole journey.

Jessica: Thank you. So you said that you're using an LLM to understand the high cardinality observability data, but you also mentioned that the data was diverse.

Usually we use high cardinality to mean fields with a lot of values: customer ID, IP address.

Animesh: Yeah.

Jessica: If you have enough microservices that might be a service name.

Animesh: Yeah.

Jessica: Is it more than that? Is it also the variety of attributes you might see?

Animesh: Yeah, it's a variety of attributes. I mean, I think I used the word high cardinality here roughly to mean that there's many possible branching points, or ways to connect two discrete data points, right?

Like, there's many different ways that you can basically join two discrete data points coming in from a service.
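
To make "high cardinality" concrete, here is a minimal sketch (not from the episode) that counts distinct values per attribute key across a handful of spans; the attribute names and values are hypothetical.

```python
# Hypothetical span attributes; in practice these would come from
# an observability backend or an OTLP export.
spans = [
    {"service.name": "checkout", "user.id": "u-1042", "http.route": "/cart/{cart_id}"},
    {"service.name": "checkout", "user.id": "u-2210", "http.route": "/cart/{cart_id}"},
    {"service.name": "payments", "user.id": "u-1042", "http.route": "/charge"},
]

# Cardinality per attribute key: how many distinct values show up.
cardinality = {}
for attrs in spans:
    for key, value in attrs.items():
        cardinality.setdefault(key, set()).add(value)

for key, values in sorted(cardinality.items()):
    print(f"{key}: {len(values)} distinct value(s)")

# Keys like user.id grow with the user base (high cardinality);
# keys like service.name stay small (low cardinality).
```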

Jessica: So it's interesting.

Animesh: Yeah, I mean, it's super interesting. And the path to make it interesting is specific to the data, right, that you're sitting on top of. And by specific, I mean even situationally.

Jessica: Right, right, 'cause you're not just observing the same thing everybody else is. This isn't infrastructure, this is your application. You also use the phrase "make sense of it."

Animesh: Yeah.

Jessica: And that can be hard.

Animesh: Well, I think there's this like really interesting pattern.

Whenever you see data sets that are de facto high cardinality, there tends to be a large amount of knowledge work or intuition required to make sense of it.

You can imagine, you know, a table, let's kind of take it out of observability data for a second, like, if you had a table of every single user who purchased an iPhone, right? And every row there was a transaction, and every column was some different demographic of the user who bought the iPhone, you could have, you know, tens of thousands of columns, right?

You could have age, you could have gender, you could have, you know, zip code, you could have, you know, the square footage of their house, like all these different things.

And you'll notice, right, at some point, as the cardinality, right, of that table increases, the people who can make sense of it are the ones who can say, "Hey, you know, 18-year-old, you know, high school students who live in, you know, New York in this particular suburb just for some reason really like to buy iPhones."

They're able to tell a story a little bit more, right, about why certain combinations of these particular attributes create the emergent properties that we're really looking for.

Jessica: Such as all their friends have an iPhone, so here we have a network of people. Yeah. Whereas if you just did statistics, you'd get, well, people who were born on a Tuesday in November.

Animesh: Yeah, exactly, exactly, right?

There's a story behind the data, and as you start adding more columns to that table, we as humans, right, have basically attributed the ability to tell stories on that data to intuition.

Jessica: Which comes out of our immense background of stories that exist.

Animesh: Yes.

Jessica: And LLMs have that background too.

Animesh: Exactly, right, and so that's why LLMs, I think, are really well oriented to making sense out of observability data, right?

That among other things. But I think that is, you know, implicitly a thing that language models have some sort of context about.

Jessica: So, what kind of things can they say? What have you seen your agents say about problems that happen in software?

Animesh: So we give our agents context even outside of observability data, and that actually ends up being really interesting context to be able to tell stories within the observability data itself.

So when a customer plugs in PlayerZero, right, they're plugging in a couple of different things. First they're plugging in their code base actually, right? So that gives us, you know, essentially the blueprint to the entire application.

How is it built? What's built, right? The architecture is really kind of, you know, a static image of the ground truth of the application itself. We also plug in analytics data, right?

That gives us a sense of, you know, what users are doing, right? What clicks are they making, what pages are they visiting, stuff like that, and the user's identity. We plug in tickets, right? So these are project management tickets, right?

Or even external tickets, right, from like ServiceNow, Salesforce, stuff like that. And then at the end, right, we plug in observability data. And that context, what we found, right, tends to kind of bootstrap our models' understanding of the stories that are told around this application historically.

And that framing of, you know, "eight months ago, this type of defect had this critical path" helps us better understand how to tell the stories in the future. So in the moment that a new, you know, exception happens, or a new span has high latency, or whatever, right, if I wanted to go and tell the right story around it, we can put on our senior engineer hat, right?

And like, what would they do, right? Like they would have that context about why we built it the way that we did, and eight months ago, this is how it broke, and all these different things. And that story will be that much more meaningful because that senior engineer told it.

Martin Thwaites: And that is something that we see quite a lot where they've got that institutional knowledge inside of key individual people in an organization, where they've pulled all of those data sources that you've talked about, whether it's understanding not just where the code is now, but how the code has evolved, how the users have evolved.

Like we onboarded a customer three months ago. We have scaled up in the last four months.

Along with all of that, the tickets: they've lived through the pain of using those project management systems and understanding where all of those tickets go, and what's on those tickets, and why those tickets are there in those investigations.

And trying to pull all that data together, and trying to understand all those data sources. That's one of the problems and the reasons why senior engineers, tenured engineers in an organization are so valuable.

And if we can help junior engineers, if we can help the newer engineers on the team get up and running quicker, that's a really key piece to making observability work, and making a smooth software experience.

Animesh: Absolutely, the institutional knowledge problem that you just articulated is, I think, one of the deepest problems that enterprise software teams especially face, right?

That knowledge is so concentrated in so few people, and then taking that out of their heads to make it something that is operationally actioned on day to day.

And, you know, I think another angle to this actually is, and Austin, maybe you're contributing to this, right, with your push in OpenTelemetry, observability data by itself is becoming a commodity, right?

But the connection from the observability data to anything that is useful and anything that is actionable is that institutional knowledge. It's the story that people can tell around it to inform and to motivate the next step forward in their application.

And so if we can take the bottleneck of senior engineers, that institutional knowledge, out of that particular transition from observability data to action, then all of a sudden that same observability data is, you know, a thousand times more useful to the same organization.

Martin: And you can free up those senior engineers to do some more important things than being the person who handholds an investigation or an incident. Because everybody can have that knowledge at their fingertips. So you actually become more productive, as an organization by being able to democratize that information to everybody.

Animesh: Absolutely.

Austin: One other thing that you kind of got at like this idea of commodification is that, you know, OpenTelemetry in a lot of ways is really about giving developers this really highly structured, highly semantic way to kind of encode how their applications work, right?

Because right now, like if you want to know, you know, you can take a system diagram, or you can take feeds of data about RPC calls, or network flow logs, or whatever and throw those at an LLM and have it kind of reverse engineer that, and to say like, oh, these are the connections between things.

Or you can do it, you know, through various heuristics, other algorithms, but you don't get the why, right? And you touched on this earlier about pulling in ticket data, and pulling in all these other elements of context to describe that to the LLM, and describe that to engineers or whoever.

But OpenTelemetry actually gives you this really powerful way to, as a developer, to really say like, "Oh, not only does this talk to that, but this is why it talks to that, right?"

When the code does this or when it talks to this other service, or it makes this database call, or it goes to this caching layer, right? This is why it's doing that. This is why it's important. And I like to joke that we used to have a name for this process and that was QA.

Animesh: Yeah.

Austin: Like your integration tests and your test automation and stuff, like that was how you told humans, "Yo, this is what the code is doing. I need to tell you this so that you can test it and make sure."

Now with OpenTelemetry, we can kind of do all that as we're writing the code, and put in this sort of semantic layer of like, ah, this is what the system is actually doing.

And then we can build tools like you all are doing on top of that data to validate it, to ask and answer questions about it, dah-dah, dah-dah, dah-dah, dah.
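
As a rough sketch of what encoding the "why" alongside the telemetry can look like, not from the episode and not PlayerZero's code, here is the standard OpenTelemetry Python API with made-up app.* attribute keys and stand-in storage layers so it runs on its own.

```python
from opentelemetry import trace

tracer = trace.get_tracer("order-service")  # hypothetical service name

# Stand-in storage layers so the sketch is self-contained.
_cache = {}
_database = {"o-1": {"id": "o-1", "total": 42.0}}

def load_order(order_id):
    with tracer.start_as_current_span("load_order") as span:
        # Attributes and events record intent, not just mechanics:
        # why this code checks a cache and falls back to the database.
        span.set_attribute("app.order_id", order_id)  # made-up app.* key
        order = _cache.get(order_id)
        if order is None:
            span.add_event("cache miss, falling back to primary database")
            span.set_attribute("app.cache_hit", False)
            order = _database.get(order_id)
        else:
            span.set_attribute("app.cache_hit", True)
        return order

print(load_order("o-1"))
```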

Animesh: Yeah, now the code-base context is, you know, when we talk internally at PlayerZero, right, like we basically think of it as two centers of excellence, right?

Like one is how well can you understand the code base? And two is how well can you understand the customer, and what happened to them.

And if you can do both really well, then you basically have the bridge built there. And observability data, I think, sits right in between, right? The bridge from what happened to the customer to the code.

Jessica: It is that relationship that we're trying to really understand between the customer, probably a person, maybe software, and the software that we wrote.

Animesh: Yeah.

Jessica: Okay. I have to ask, can your agent look at a trace and explain it?

Animesh: Yes, we actually have a button in our kind of viewer, and it just says "Find in my Code."

So this agent has a tremendous number of different kinds of action spaces that it can action within. But yeah, I mean, this is actually one of our users' favorite things to do in our product, which is: find the moment where a user actually ran into the problem, and then say, "Find that in my code."

And it'll go and explain the entire critical path of how that came to be, how they got there, why they got there, and then where in the code you should look in order to fix it.

Jessica: Wow, okay, so in the trace, there's a Find in my Code. Is that like, "Find this span"?

Animesh: No, so you're looking at a trace and that trace has an identity attached to it, right? And this is also, you know, a kind of a piece of how people instrument PlayerZero into their systems.

But you can actually, from that trace, jump directly into what we call a player experience in PlayerZero, which is able to go in and pull all these different data sources to say, "How do I explain this trace, given what I understand about my customer and what I understand about my code base?"

Jessica: So in OpenTelemetry, a trace is typically one incoming request, and then it's made out of many spans?

Animesh: Mm-hmm.

Jessica: And each span is created at one point in the code, added to at various other points, and then ended somewhere.

Animesh: Mm-hmm.

Jessica: Usually in a library?

Animesh: Yep.

Jessica: But there is a section of code that is executed within any particular span?

Animesh: Mm-hmm.
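
A minimal sketch of the structure Jessica is describing, using the standard OpenTelemetry Python API with hypothetical span and service names: one incoming request becomes a root span, and the sections of code it passes through become child spans of the same trace.

```python
from opentelemetry import trace

tracer = trace.get_tracer("api-service")  # hypothetical service name

def handle_request(user_id):
    # The whole incoming request is one trace, rooted in this span.
    with tracer.start_as_current_span("GET /profile"):
        # Each section of code the request runs through gets its own child span.
        with tracer.start_as_current_span("fetch_user"):
            user = {"id": user_id, "name": "example"}  # stand-in for a database call
        with tracer.start_as_current_span("render_profile"):
            return f"profile for {user['name']}"

print(handle_request("u-1042"))
```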

Jessica: So what was the name of this button again?

Animesh: Find in my Code.

Jessica: Find in my Code, okay. So can that take you to the code where a particular span executed? 'Cause a trace crosses services, a trace executes code all over the place.

Animesh: Yeah.

Jessica: But a span is generally a unit of work.

Animesh: Yeah.

Jessica: Does it take you to a place for a particular span?

Animesh: It takes you to an understanding of that trace.

Jessica: Okay.

Animesh: If you want, you can get down to the specific code in the span.

Jessica: But it's looking at the wider picture of the whole trace?

Animesh: It's looking at the wider picture of the whole trace. You know, a lot of times when people are in this kind of debugging workflow, right?

I mean, I think, tracing and spans, while these are technical terms right, that we use to kind of like talk about specific data models here, at the end of the day, a user is looking at a moment in time, right?

And this moment in time happened in this server or happened to this user at this time, and I want to understand it, right? And I want to understand why it happened. I want to understand, is it isolated? Is it connected to other things?

Jessica: Should I care?

Animesh: Should I care? Exactly, right, and I'm trying to understand what I need to do about it, and that is essentially what that button achieves.

Jessica: That's really cool.

Martin: So essentially what that Find in my Code button is going to do at this point is it's going to take a load of data that's disparate and try and tell a story.

Animesh: Yeah.

Martin: So that the engineer has some kind of idea. It's not going to fix the problem for them, but it's going to help signpost. It's going to do what a senior engineer would've done, or a staff engineer, somebody with a ton of context is going to say, "Go look behind the counter over there. That might be where the problem is."

It's that idea of like, here's a thousand dashboards, where that senior engineer can just go, "It's probably the one on the right over here."

Jessica: So what happens when it's wrong? How do you call bullshit?

Animesh: Yeah, yeah. You know, so there's a couple different perspectives to this. When it's wrong, right, it's usually pretty obvious that it's wrong. But we find that to be usually like one in 20, one in 30, or so.

I would say, you know, maybe five times out of 30 it might be right, except not exactly aligned with what you had in your head. And so that, I think, is a much more common case, right, that somebody needs to be able to address.

And for that, right, usually, you know, it's prompting, right, or being able to ask follow up questions, and say, "Oh, like I didn't actually want you to look here, look there instead."

But, you know, I think there's this game of nines basically, right, and we talk about that in the reliability world. And I think there's also something similar in the LLM world in terms of, you know, how accurate and good these things become over time.

Jessica: Well, it doesn't have to be right to be helpful.

Animesh: Absolutely, yeah, and, yeah, I think that's just really well said. I think 29 out of 30 times, right, it is helpful, right. And I would say 1 out of 30 times it's sometimes off base.

The other angle that I would say here, and just to kind of double down on what you said about "doesn't have to be right to be helpful" every time, is this idea of like asymmetric upside, right? Like I think the best products in the world kind of have this.

Jessica: Oh yeah, you're right. I mean, what's the worst that can happen? It's full of shit, and you're like, "You're full of shit," and you go on about your business. That was not a high cost.

Animesh: Exactly, yeah, and so like, I mean, Google's a beautiful example of this, right? Like before Google, did people just go to the library, right, and spend a week looking through books trying to find the answer, right?

And like, if that was it, then if Google's wrong, right, you will have lost 30 seconds of your time. But if it's right, right, you will have saved a week.

Jessica: As long as you know that it can be wrong.

Animesh: Yeah, yeah.

Jessica: If you're convinced that everything ChatGPT says to you is true, then wrongness has a cost, so have some context, yeah.

Animesh: Yeah.

Austin: We had these people called librarians that we would ask questions to, and they would tell us.

Jessica: And that's the trick, that's where your senior experienced engineers come in, is they have to do less of being the card catalog, and more of being librarians, yeah.

Martin: Yeah, we've got to remember that, you know, a lot of senior devs are full of shit a lot of the time as well.

Jessica: I think that's true. That's true, and they take the pushback way worse.

Martin: Exactly. Yeah. You know, when your LLM turns around to you and says, "I'm not happy, I am not going to answer any more of your questions, I'm going home."

That's when I think we'll have, you know, achieved AGI. You know?

Animesh: Exactly. No, but these things are getting better every single day, right?

If you would've asked me this like eight months ago, I would've said one out of 10 times, right, it gives some sort of BS answer. Now it's like one in 30, one in 40, and I think now we're at a place where our code base understanding, at least, is one of the most sophisticated ones in the world. And fast forward another year, right, and we're hoping it'll be one in a hundred.

Martin: And the thing is, you are using a lot more data to actually answer those questions than probably a senior dev does, at least consciously anyway.

Animesh: Yeah.

Martin: You know, so you've probably got access to more information like tickets in a project management tool. You are probably scanning maybe a 100 tickets, where the dev has probably only been involved in say, 20 of those.

So you've actually got more information than an engineer would have anyway. So you're already a sort of level ahead of what that senior engineer might be able to do.

And the thing is, we're not replacing senior engineers by doing this. We are giving them tools that make them more effective.

Animesh: Yeah.

Martin: And that's, you know, one of the the key things that I hate about the narrative around AI is that it's replacing things. And from what it seems like you are describing here, this is a tool that helps people do things better.

I mean, we talk a lot at Honeycomb about how we don't build robots, we build mecha suits to make things easier. And it sounds like that's a similar thing that you're doing with all of that data.

Animesh: Yeah, no, absolutely. We're imagining a world in which if you were able to disintermediate senior developers from the path of defect resolution and making your software better over time, what would that ideal kind of world look like?

Jessica: What do you mean by disintermediate?

Animesh: Meaning, like, exactly what we've been talking about so far, right? It's just taking them out of the critical path, and saying, "You don't have to be the one, you know, leading the war room every single time something goes wrong, right?"

So that senior engineer doesn't have to be the bottleneck to solving a problem.

Jessica: But some engineer is going to be involved.

Animesh: Totally, yeah, yeah.

Jessica: Okay, so we're letting more people do the work because you don't need years and years of experience slowly accumulating.

Animesh: Exactly.

Jessica: Okay, so let's talk about how PlayerZero got this good over the course of a year. And how do you use observability, and your own tool, on your own tool?

Animesh: Well, we use it every single day. So I guess, yeah, there's a couple of questions within that. I mean, like, one is how did we get here? And then, I think, two is, how do we use our own tool every day?

You know, I've been very enamored with this idea of quality and software engineering for a very long time. And kind of thinking about what that truly means.

Jessica: What does quality in software engineering mean to you?

Animesh: Yeah.

Jessica: 'Cause there's no global definition of that.

Animesh: So quality, I mean, I think the standard definition is a lot of like, you know, we write good code and that code has fewer bugs.

The way I see quality is more of a process than a property of the code. So when I say quality, what I mean is thinking about the loops that we've built into the way that we work every single day to learn from the past to be better in the future.

And, I think, among every single world class engineering team I've ever worked with, that loop has been tighter than anything else I've seen, right? Like, that's what they're really good at doing.

And so figuring out how to take that loop and make it a part of an engineering process that doesn't have to be this heavy lift, or it doesn't have to be this aspirational thing, right?

Everyone wants high quality software, everyone wants a product that doesn't have bugs, right? Everyone wants that. But how do you do it in a way that is the path of least resistance for engineering teams to adopt? That has been the notion that I've been very, very interested in.

And then we had a very kind of winding path, right, to figuring out the product that is PlayerZero today, you know, going through QA, going through support, talking to account execs, right? Figuring out what quality means, and then what that process really needs to look like, has been the story of PlayerZero.

But I think what we've settled on and what we've found today is, you know, in a startup finding a path that is low friction enough to have a team of, you know, 10 or 15, use this thing every single day to embody that loop, right, that we were talking about earlier.

I think it's a testament to the kind of maturity of the product, and how excited we are, right, to kind of take this to the rest of the world.

Martin: I think the first step in understanding whether you've got a bug free piece of software is to understand that there will always be bugs.

Animesh: Exactly, yeah, it's futile to think that you can design software without them.

Martin: And, you know, you've got to understand the key is how do we handle those? How do we revert them quickly? How do we get that investigation side down to an art form?

And that, I think, is where your tool is going to really help: those resolutions. So it is not about bug free software, we're never going to get there.

Animesh: Yep.

Martin: But if we can help understand why things happen, if you can use the real time data from production to bring that into the development cycles, so that they can make better software, it's never going to be bug free. And I hear people who say, "Let's write bug free software," you know?

Animesh: Yeah.

Martin: We're never going to get that. But what we can do is we can empower engineers with more information, so that they can not repeat the same mistakes. They cannot make, and it's not stupid mistakes, but simple mistakes.

They can have that information that goes, if you change this thing, you know, it's connected to this other thing over here.

Jessica: Oh, yeah, you mentioned that one of the things as part of the story was here's a suggested fix and here's what might break.

Animesh: Exactly, yeah. One of the coolest parts of how we close the loop in this entire defect resolution process is understanding the risk of code changes, and understanding the risk is a function of understanding what has happened in the past, and being able to index and then quickly retrieve-

Jessica: And where the connections are.

Animesh: Exactly, right, so like, if a developer changed a line of code, right? Let's say we're changing the query that goes and pulls the user, right, when you're logging in.

Jessica: Ooh, that sounds impactful.

Animesh: Of course it's impactful, right? And, you know, a senior engineer will look at that and immediately, right, all the alarms are going off, right?

They're like, "Okay, like, hey, eight months ago, right, I changed this in that way, and, you know, here's what our customers complain about afterwards. I remember dealing with three tickets like that, you know, here's the span, right? Like I remember debugging, right? It seems like when we changed it that way last time, right, the average latency for that particular span went from-"

Jessica: And this piece of code executes 8,000 times a second? And impacts every user everywhere. And you just changed it to help one user, and-

Animesh: Exactly.
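
A hedged sketch of the kind of change being discussed (all names are hypothetical, and this is not PlayerZero's code): the user lookup at login is wrapped in its own span, so that span's latency history is what would reveal a regression if someone changed the query underneath it.

```python
import sqlite3

from opentelemetry import trace

tracer = trace.get_tracer("auth-service")  # hypothetical service name

# Stand-in user table so the sketch runs on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES ('u-1042', 'ada@example.com')")

def fetch_user_for_login(user_id):
    # This span is the unit of work whose latency history would show
    # the regression if the query underneath it changed for the worse.
    with tracer.start_as_current_span("db.fetch_user_for_login") as span:
        span.set_attribute("db.system", "sqlite")
        span.set_attribute("app.user_id", user_id)  # made-up app.* key
        return conn.execute(
            "SELECT id, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()

print(fetch_user_for_login("u-1042"))
```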

Martin: I always love that bit of, the meme that goes around the internet every now and again, where there's a bit of code, and there's a comment above it that says, "Increment this counter when you've tried to optimize this again," and it's something like 465, because everybody thinks, "It's fine, this thing's well inefficient, I can just change it like this."

And you don't realize that actually, no, it's actually in a really tight loop. Or actually, no, this, I think, the actual code was to do with like the CPU cycles, and it was specifically hitting each one of the CPU cycles, or something like that.

Animesh: Yeah.

Martin: But that idea of if that information already existed in your code base, that somebody sat on your shoulder, that senior dev, or that seasoned engineer that's been there essentially for 10 years, because you've got 10 years of information, that's sitting there on your shoulder going, "I wouldn't if I were you."

It's like, "Have a go, I'm excited to see what you're going to do with it."

Animesh: No, exactly, like we can bring this context and this learning into the code review process even, right? So that way the software that we put out gets better every time we put out software.

Jessica: Right, so the trick is not to write good software, it's to write better software.

Animesh: Yeah, better than yesterday. Better than yesterday, every single day, and that's how you win.

Austin: So I want to actually kind of take this in a slightly different direction, but not really.

Animesh: Sure.

Austin: You know, we're talking about writing better software, and we're talking about, you know, improving quality and all these things. I think there's something that's a little under explored here, and it's the value of quality communication in the development process.

So everyone's probably seen, when Google launched their AI assisted answers, that the LLMs and most forms of machine learning don't really... they understand context, but they don't understand jokes, right?

Animesh: Yeah.

Austin: So that's how you get, you know, glue as a pizza topping, and all this other stuff.

Jessica: But that's funny.

Austin: I mean, it's funny to you and me, but, you know, the LLM doesn't know. So the LLM is like, "Oh, this is the highest ranked thing, so it must be true."

You know, this is something that semantic data helps us with, obviously, on the software side, right? Because we can say, you know, going back to the OpenTelemetry stuff earlier, there's a ton of stuff in OpenTelemetry that's about really quantifying, like what does this connection mean?

What is a success, what is a failure, encoding and mapping all this stuff back to, you know, human concepts.

But as soon as you step out of telemetry, and you start getting into things like tickets, or you start getting into things like account notes, and pure human-to-human communication, I find that the quality of that communication might be great for a human, but it might be really poor for an LLM.

So how are you all thinking about really helping people write better, you know, how are you helping people think about how to have sort of internal technical communications, or technical adjacent communications that are more useful for these sort of AI assisted, you know, development tools?

Animesh: So, just so I understand the question, you're basically kind of poking at how do we make language models a part of the places that developers already live to create better communication around what is being done?

Jessica: And get developers to include them?

Austin: Yeah, I think it's, so here's a pretty common example, I think, right? Think about a small startup or even a medium-sized startup. But when you think about sort of the flow of information, you start out with field teams, right?

You start with salespeople, you start with customer success, and they're going and they're talking to users, they're talking to prospects, they're distilling that feedback, and they're making notes about it, right?

Those notes live in Slack, they live in Salesforce, they live in Zendesk, they live wherever.

Jessica: They live on my desk in my physical notebook.

Austin: Right, and sometimes they just live in your head. And then that feedback gets pushed down, and it goes into PMs, and it goes into managers, and, you know, they turn that into Jira tickets, or GitHub issues, or whatever.

And those are all like inherently lossy steps, right? Those translation steps are lossy from what you started out with, like, hey, this is my feature request, or this isn't working like I expect it to, or can you do X, Y, and Z?

And then it gets down to a developer that gets this as a ticket and it's like, okay, well I need to go change the color of a button, or I need to do these things.

Animesh: Yeah.

Austin: And, because there's all these different audiences, because there's all of these different translate, you know, these human-to-human translation steps, like, it's very easy, I feel like, for the logical context chain to get broken.

So how do we, you know, how, when we're thinking about AI, when we're thinking about LLMs, yeah, like how are we getting the LLM kind of into the process at each step so that it's able to synthesize those insights and learnings and feedback into things that are both useful for other people and also useful for the LLM.

Animesh: You know, I think, one of the first things that comes to mind, right, is this is why agentic systems are so exciting, right?

Because so far, I think what we've talked about in this conversation has been exploring the different data surfaces of what we can give language models, but the action surfaces, right, are almost equally important in order for these language models to be able to kind of fluidly move between the places the developers already live, in order to be able to participate in the creation of a ticket, or participate in the development of that ticket itself, right?

Or similarly, right, like learn from the kind of raw information that's in the code base, and try to be able to create documentation. This is where the action surfaces of these agents are really important, just as a prerequisite.

The second thing that we think about a lot, and this is, I think, a challenge in a lot of enterprises, is that we haven't, to date, documented our tickets really well, right?

You know, we have a customer, right, they have hundreds of thousands of ServiceNow tickets, right? And most of them are like, you know, five words, right? Customer got stuck on login.

Jessica: And everybody's like, "Oh yeah, that again."

Animesh: Yeah, exactly. Right? And like, everyone kind of knows like there's this like shared knowledge, but there's imperfect knowledge in these systems of record that enterprises tend to rely on.

And I think, I mean, even startups, right? Like I think you would be hard pressed to find a startup that spends time on documentation. And so, yeah.

But this also kind of like brings up the, like this whole idea of like documentation on demand, right? And documentation that is informed by their raw sources as opposed to their derivative sources.

Meaning like, you know, rather than informing what and how something works based on the documentation that somebody wrote after interpreting the code, if our fundamental understanding of the code base itself, right, is good enough, then you should be able to generate the documentation that you need in context, right?

Jessica: The same way we do with API docs, right? We generate those from the code.

Animesh: Exactly, and I think language models are essentially kind of decreasing the barrier to having these, you know, automatic, on-demand generated documentation for not only just like how the code works, but also what's happening and what has happened, and doing that in close to real time.
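
In the same spirit as generating API docs from code, here is a minimal sketch (not PlayerZero's approach) of producing documentation on demand from the raw source, in this case Python docstrings and signatures, using only the standard library; the functions are made up.

```python
import inspect

# Made-up functions standing in for "the raw source."
def create_invoice(customer_id: str, amount: float) -> dict:
    """Create an invoice for a customer and return it as a dict."""
    return {"customer_id": customer_id, "amount": amount}

def void_invoice(invoice_id: str) -> None:
    """Void an existing invoice so it is no longer payable."""

def generate_markdown_docs(functions) -> str:
    """Render on-demand docs from signatures and docstrings."""
    lines = []
    for fn in functions:
        lines.append(f"### `{fn.__name__}{inspect.signature(fn)}`")
        lines.append(inspect.getdoc(fn) or "(no description)")
        lines.append("")
    return "\n".join(lines)

print(generate_markdown_docs([create_invoice, void_invoice]))
```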

Jessica: Right, 'cause that's what the LLM does: it doesn't just give you the code you need. It writes a fricking blog post every time.

Animesh: Exactly.

Jessica: Yeah. So at least it can populate the tickets it helps with a thorough description, which will help both people and maybe itself.

Animesh: Yeah, well, there's just a ton of glue work that needs to happen for a high functioning engineering team. And this is part of it, right? It is just, you know, documenting what has happened, so that way the people who come after have a good set of shoulders to stand on.

Jessica: So it's like if you had an intern who actually wanted to go into library science, and just loves documenting everything that they find.

Animesh: Well, we call this scaling human attention, right? Which is like, language models are some base level of intelligence, right, which lack a lot of the context of the business, but have a lot of context about things that we can give it, and they're unconstrained by the limitations of time.

Whereas we as humans, right, can only look at so much context, and so much time, and create so much output given its-

Jessica: We can only read so much of what they spit out, whah!

Animesh: And so like, you know, this kind of going back into the technical world, right?

Like there's very material impacts of this kind of scaling human attention concept, even on how observability data actually needs to be processed and stored and managed, because you're no longer in a world where, you know, you can go stick this in a large aggregate, you know, an OLAP database, right?

Like, we actually need to like go and do more stuff to it because there is more attention being given to every row and every record, right, that is actually coming through. And there's just, it's a very new world in terms of the workloads and in terms of-

Jessica: Okay, so you're saying maybe we can have it do some of this stuff that we don't have time to do.

Animesh: Exactly, yeah, 'cause we can scale up the attention that we would need in order to do that task to the magnitude of data that's coming in.

Jessica: Yeah, so speaking of limited human attention, it's about time to wrap up the podcast. Animesh, how do people find out more about you, about PlayerZero? Where do they go when they're interested?

Animesh: Yep. Book a demo. You can come to our site, playerzero.ai, but, yeah, book a demo and set it up. Most of our customers are up and running by the end of the day.

Jessica: Thank you for coming to o11ycast.

Animesh: This was a ton of fun. Thank you guys for having me.