Ep. #34, Together with Nathen Harvey

about the episode

In episode 34 of Generationship, Nathen Harvey brings data, humor, and heart to a conversation about AI, DevOps, open source, and developer experience. He and Rachel dive into how AI is influencing software engineering, the role of platform engineering, metrics for assessing performance, and broader reflections on engineering culture and career growth.

Nathen Harvey is a Developer Advocate at Google. He helps teams improve software delivery performance by turning research into practice. Nathen is also a contributor to the DORA reports and co-editor of 97 Things Every Cloud Engineer Should Know.

transcript

Rachel Chalmers: Today, I am thrilled to have Nathen Harvey on the show.

Nathen is a developer relations engineer and leader of the DORA team at Google Cloud.

DORA enables teams and organizations to thrive by making industry-shaping research accessible and actionable.

Nathen has learned and shared lessons from some incredible organizations, teams, and open source communities.

He's a co-author of multiple DORA reports on software delivery performance and was a contributor and editor for "97 Things Every Cloud Engineer Should Know," published by O'Reilly in 2020.

Nathen, it's great to see you. Thanks for coming on the show.

Nathen Harvey: Oh, I'm super excited to be here. Thanks for having me, Rachel, and it's lovely to see you again.

Rachel: It's lovely to see you too. Tell us, what is DORA?

Nathen: DORA is a research program that's been running for over a decade now.

It actually started out of Puppet Labs. They were the first to really start this State of DevOps report.

A few years into it, Dr. Nicole Forsgren, who I think you had on the show recently, joined the effort. Over time, a company named DORA was founded, and then DORA was acquired by Google Cloud.

But this research program is really all about two questions: how do we, as teams, get the capabilities and conditions to be high-performing, sustainable, and technology-driven, and does that matter?

So DORA has really had this center of gravity around software delivery performance.

You know, many years ago, and unfortunately sometimes still today, getting software out the door and in the hands of customers is a real struggle in an organization.

So DORA set out to understand the impacts of that struggle and answer the question of does software delivery performance matter?

And so one of the great findings that we've had over the decade is that yes, software delivery performance matters.

It drives towards great outcomes that we all want, outcomes at an organizational level like, you know, better profitability, higher revenue, happier customers. But at least as important are the outcomes for the people in the organization.

Better wellbeing in terms of higher job satisfaction, more productivity, less burnout, and so forth. The software delivery metrics really are sort of leading indicators for those outcomes.
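
For concreteness, DORA's four key metrics are deployment frequency, lead time for changes, change failure rate, and time to restore service. Here's a minimal sketch of how the first three could be computed from deployment records; the record shape is invented for illustration, and DORA itself gathers these numbers through surveys rather than instrumentation:

```python
# A minimal sketch (not DORA's official tooling) of computing three of the
# four DORA metrics from deployment records. The record shape is invented.
from datetime import datetime

deployments = [
    # (first_commit_time, deploy_time, deployment_failed)
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 4, 15, 0), False),
    (datetime(2024, 6, 5, 11, 0), datetime(2024, 6, 6, 10, 0), True),
    (datetime(2024, 6, 9, 14, 0), datetime(2024, 6, 10, 9, 0), False),
]

window_days = 7
deployment_frequency = len(deployments) / window_days  # deploys per day

# Lead time for changes: commit-to-deploy duration, take the median.
lead_times = sorted(deploy - commit for commit, deploy, _ in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Median lead time for changes: {median_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```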

You don't get better at the metrics, though, by thinking about the metrics. You know, you don't improve your deployment frequency by showing up and thinking really hard about deployment frequency. Instead, it's about the capabilities and conditions that you have within your organization.

Some of those capabilities might be technical capabilities like using version control and having continuous integration, but others are process and culture related, and those really play a part in this.

And at a very high level, we think about them in sort of three big categories. You have to have a climate for learning, you have to have fast flow, and you have to have fast feedback.

And that's kind of a good, universal foundation that we need in order to improve our software delivery performance.

So that's what DORA is all about, doing this research, bringing these findings to teams.

We've built a community around DORA, as well, of practitioners and leaders that are taking this research and putting it into practice, which for me is the most important piece of this.

DORA, at the end of the day, is here to help you get better at getting better.

Rachel: And fortunately, those three things, an environment for learning, fast flow, and fast feedback, are incredibly simple to implement and not complex at all when working with squishy humans.

Nathen: Oh, absolutely. Absolutely. Like you snap your fingers and you're basically done.

Rachel: Even more fortunately, we don't have to worry about any of this anymore because AIs are going to write all the code and AIs are also going to be our customers and purchase all of our products. Isn't that right?

Nathen: Yes, AI has come to completely change everything that we know.

Yeah, I love that sentiment, and I think that there are a lot of great, great opportunities with AI.

Unfortunately, or maybe not unfortunately, I think that there are some foundational principles that are going to stick, principles that AI isn't going to be able to erase for us. We're still going to have those things in place for sure.

Rachel: I keep stealing Grady Booch's joke, "In the future we won't need programmers, we'll just need people who are really, really good at telling computers what to do."

Nathen: That's right, that's right. Yep.

Rachel: Are the LLM-based code gen tools like Cursor showing up in the DORA reports yet? What are you seeing?

Nathen: Yeah, that's a good question. So the tools themselves don't show up, but that's intentional.

Our research has always been program and platform agnostic. So we don't ask about specific tools, but we ask about ways of working and things that you are doing.

So this year, in our most recent research, which was published in late 2024, we did a deep dive into artificial intelligence to really understand, is it being used, how is it being used, and what are the impacts?

And in terms of, is it being used, in a surprise to no one, the answer is a solid yes. It's definitely being used. And I think there's some interesting findings that we have though.

One is, we asked a question about organizational priorities: is the organization prioritizing AI?

And across the board we saw a very strong support for that, which to me says that there's executive support and buy-in and like mandates from the top. "We have to do AI."

But we also, in our surveys, we talk to practitioners and we want to understand, how much are you relying on AI and for what types of tasks?

And across the board, again, lots of reliance across a lot of different tasks.

Not surprisingly, top in that list are generating code, summarizing documents, writing documents, writing tests, things that you would expect.

But what this tells me is that not only do we have that top-down mandate, we also have that practitioner-led or grassroots-led movement where AI is happening.

I think that those two conditions are really important for this to stick within organizations.

So I'm kind of excited about that and I'm really excited that DORA is researching this right now as this movement starts unfolding, right?

I think that we're going to learn so much over the next few years.

Rachel: Do you have any insight yet into whether the way the highest-performing teams use AI differs from the way everybody else uses AI?

Nathen: I don't know that we have any good insights there yet.

I think that we have some good insights into how AI is impacting teams as a whole and individuals.

But I don't think we're yet able to tease out, "High-performing teams are using AI in this way."

And frankly, I think that's because with high-performing teams, it's not just how they use AI, it's that all of those other conditions are in place when they bring in the AI and as they continue to use it.

So unfortunately it's not as simple as, "Well, if you just used AI like this, you would become a high-performing team." That's not how it works.

Rachel: Back to the environment for learning, fast flow, and fast feedback: these are really challenging human problems to solve, and having a new tool is not going to solve them for you.

There's really no shortcut to sitting with the problem, being curious about the problem domain, and iterating through potential solutions.

Nathen: Yeah, and I think that's so important. The role of a software engineer is to have that domain expertise, to understand who the users are and why we're building for those users.

Let the AI handle the typing of the programming bits, but we still have to be there for the creative work and the problem solving. That's really, really important.

Rachel: That's a way a lot of people conceptualize AI. I think there's also complexity on the backend that the AI doesn't really grasp.

Like the closer you get to the hardware systems, the more you're talking about low-level assembly and machine code.

Again, once the abstraction goes away and you actually need to touch grass, I wonder whether the code gen tools will get there in any reasonable timeframe.

Nathen: Yeah, that's a really good question.

I think probably our best hope for that is using an LLM that has all of the local context that it needs, right?

So if you've done that work and have good access to that code, the assembly code, and can augment the LLM that you're using with it, maybe that's a way we can help there.
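
What Nathen is describing is essentially retrieval-augmented generation: pull the most relevant local code into the prompt before asking the model anything. A rough sketch of the idea, with invented file names and a TF-IDF retriever standing in for the learned embeddings and real model call a production system would use:

```python
# A rough sketch of "augment the LLM with local context": retrieve the most
# relevant local code snippets and prepend them to the prompt. The chunks,
# the TF-IDF retriever, and all names here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

code_chunks = [
    "; boot.asm -- sets up the stack pointer before jumping to main",
    "; irq.asm -- masks interrupts while the DMA transfer is in flight",
    "; timer.asm -- configures the hardware timer for a 10 ms tick",
]

def retrieve_context(question, chunks, k=2):
    """Return the k chunks most similar to the question."""
    matrix = TfidfVectorizer().fit_transform(chunks + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [chunks[i] for i in scores.argsort()[::-1][:k]]

question = "Why does the DMA transfer sometimes corrupt memory?"
context = retrieve_context(question, code_chunks)
prompt = "Relevant local code:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
print(prompt)  # this augmented prompt is what would be sent to the LLM
```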

Rachel: What impact are these LLMs having on cloud and platform engineering? Which, as we know, is a related but separate discipline.

Nathen: Yeah, it definitely is. It's really interesting because in addition to AI, this year we looked heavily into platform engineering.

And of course we're always looking at cloud. In fact, we kind of abstract cloud away into "flexible infrastructure," because what we've learned over the years is that it doesn't matter whether you're using the cloud; it's how you're using the cloud that really determines what your outcomes are going to be.

And that looks like flexible infrastructure.

So one of the things that we're seeing is that AI is helping with a lot of things. At an individual level, it's helping with higher job satisfaction, productivity, and so forth. And it's no surprise, job satisfaction and productivity typically move together, right?

At a team-wide or application level, it's also helping: documentation quality is getting better, code complexity is going down, and change approval processes are moving faster, maybe for better or worse.

But when we look at the overall software delivery performance, as you increase your usage of AI, software delivery performance actually falters. It goes down a little bit.

Which is, on the one hand, it's a little bit surprising because of all of those conditions and capabilities that are being improved, they typically lead to better software delivery performance.

I think it's that we're just so new and we're introducing new tools and maybe we don't yet have the focus in the right area with those new tools.

When it comes to platform engineering, one of the things that we're seeing is that platform engineering is certainly taking root in many more organizations.

And of course a larger organization is more likely to have a platform engineering discipline in place.

And we're seeing that platform engineering is, again, helping with job satisfaction and productivity, and it's helping offload some of the cognitive load that a developer might have as they're running on the platform.

Interestingly though, in a similar way to AI, platform engineering is not helping software delivery performance yet. And I think part of that might be just thinking about why you're building a platform.

Many enterprises are building a platform not to increase speed or stability of changes, but rather to increase consistency across their applications and consistency in terms of ways of working.

And so with both of these, I think we're so early that it makes sense that we might see some detrimental effects.

And potentially over time we'll turn those around and really start to see the improvements.

In DORA, we often talk about this as the J-curve of transformation: things get worse before they get better.

Rachel: Yeah, super interesting. That consistency point. I sort of want to dig in a little bit more on that.

Is that so that those organizations can actually move devs around and the learning curve of getting into a new part of the organization and becoming productive is flatter?

Nathen: I think it might be partly that, but I think it's also partly, let's say you're in financial services, there are regulations and, you know, you have compliance mandates that you have to follow.

Let's build those into the platform so that we get consistent adherence to those policies. So I think there's probably a little bit of both.
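
One common way to "build those policies in" is policy-as-code, where the platform validates every deployment manifest against org-wide rules before anything ships. A hypothetical sketch, with invented field names and rules, not drawn from any real platform:

```python
# An illustrative sketch of baking compliance into a platform: every
# deployment manifest is checked against org-wide rules before it ships.
# The field names and rules below are hypothetical.
ALLOWED_REGIONS = {"us-east1", "europe-west1"}

def policy_violations(manifest):
    """Return a list of policy violations for a deployment manifest."""
    violations = []
    if not manifest.get("encryption_at_rest", False):
        violations.append("encryption at rest must be enabled")
    if manifest.get("region") not in ALLOWED_REGIONS:
        violations.append(f"region {manifest.get('region')!r} is not approved")
    if "data_owner" not in manifest:
        violations.append("every service must declare a data owner")
    return violations

print(policy_violations({"region": "asia-south1", "encryption_at_rest": True}))
# -> two violations: region not approved, missing data owner
```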

Rachel: Interesting. 'Cause as an investor, obviously I'm very excited about the potential of AI.

As a person with a ton of friends who are engineers, I get a little nauseous when I see those billboards saying, "Stop hiring people."

What's your position on, you know, the lure of AI to people who want to sack their entire software engineering department?

Nathen: Yeah, I think you would do that at your own peril.

Part of the reason that you're prioritizing AI is because you want to keep up with what everyone else is doing.

And we see this in our research, there's a lot of FOMO that's happening. That's why AI is being prioritized.

Well, those organizations that are learning the best ways for the humans to interact with these models are going to be the ones that deliver the best results.

So if you bring in the AI and sack all of your software engineers, that's not going to lead to a really good outcome.

And I think it's also just kind of indicative of this challenge that we have as software engineers in demonstrating our value to the organization.

And I think that we as software engineers will do better as we get better at talking about the business value that the software we're generating creates, having those conversations, and moving away from this notion that what a software engineer does all day is sit down and write code.

That is a relatively small part of a software engineer's job.

Rachel: That's a perfect lead-in to my next question. Thank you.

How should managers think about assessing the quality of code, whether it's generated or written by a human?

Nathen: Yeah, this is a fascinating question.

On the one hand, my position is you shouldn't care how the code gets generated when you're assessing is it good or bad, right?

And I think that the best way to look at that is to listen to your developers. Are you getting value out of these tools that we've provided for you?

But then, you know, we also do have to worry about what is the quality of the code that's being generated regardless, right?

And I think that the software delivery performance metrics actually give you a pretty good gauge on what is the quality like of your software.

And in fact, they intentionally look at the "end" of the process, if you will, with scare quotes around "end" because of course software never really ends, right?

But it looks at that phase where you're delivering software to customers. And you know, there are plenty of studies out there that show AI will help you achieve 30% better efficiency as you're writing code. But if writing code isn't your bottleneck, that 30% efficiency gain is going to disappear, or even turn negative, as you look at the entire system.

So I do think using DORA's software delivery metrics is one of the good ways that you can measure that quality and build that trust in the code that's being generated.
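
A back-of-the-envelope illustration of that bottleneck point, with invented stage durations: if coding is only a small slice of total lead time, a 30% coding speedup barely moves the end-to-end number.

```python
# Invented stage durations for one change moving through a delivery pipeline.
stages_hours = {"coding": 8, "code review": 24, "testing": 40, "release approval": 48}

total = sum(stages_hours.values())               # 120 hours end to end
with_ai = total - 0.30 * stages_hours["coding"]  # coding gets 30% faster
print(f"End-to-end improvement: {1 - with_ai / total:.1%}")  # -> 2.0%
```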

Rachel: What's your personal view of some of the alternative metrics that have been suggested? Like SPACE?

Nathen: I really like SPACE. SPACE is more of a framework for creating metrics than it is any particular set of metrics; the letters stand for satisfaction and wellbeing, performance, activity, communication and collaboration, and efficiency and flow.

In fact, in many conversations I've had with Dr. Nicole Forsgren, who was also one of the authors of SPACE, we've talked about DORA as being an implementation of SPACE.

And I think that DORA also, when you look at the entire research program, it's certainly much more than these four metrics, right?

And sometimes we do get dinged on like, "Hey DORA looks at deployment frequency, but what about the people?"

And of course if you look beyond the four metrics of DORA, you see that it's foundationally about the people and the process and we're always investigating and sort of advocating for better practices there.

So I think that it's really great to see things like SPACE and even the DX Core 4. Like we're always trying to layer in more metrics and more frameworks.

At the end of the day though, I'm really honestly less concerned with how you measure and more concerned with how you improve.

And I think that it is taking those steps to help you improve that is really, really important.

Rachel: We talked about how software never ends. I was thinking about, you know, Unix, where the clock starts the year I was born.

Software that's 50 years old that we're still using because it's very reliable; general ledgers in banks that still run on IBM mainframes.

Is longevity going to turn out to be a way to test the utility of software?

Nathen: I think you're onto something there.

It is certainly one of those scaling factors that we don't really think about enough, I would say, right?

When you're writing a line of code, what is the lifespan of this line of code, right? Could it still be in production in 50 years?

I know for a fact that there was code that I wrote years ago that is still in production and if I looked at that code, I would be embarrassed by it.

I should be proud of it 'cause it's still running in production. So I do think that that's a really interesting factor to think about.

The challenge with that, though, is it's very difficult to make a decision about what to do tomorrow with a guess of whether or not this code is going to be running 50 years hence, right?

But I mean, I'm all for the long game, and that's what we have to play. So yeah, let's look at that longevity.

Rachel: What do you see are some of the biggest risks as we sort of wholeheartedly plug gen AI into everything? What keeps you up at night?

Nathen: Yeah, I have lots of concerns around how we build experts in our field over time.

The whole junior developer question: what happens to that junior developer?

And there's so much that we know as professionals who've been in the industry for a while, things we know almost through intuition because we've experienced these errors or these problems in the past, or something that feels like them.

And I do worry that when we sort of hand over the reins to the LLMs to generate all of the code for us, what happens when something goes wrong, right?

We call in the LLMs to fix the thing that's gone wrong and we lose more and more of an understanding of what's actually happening.

Now, some of that is just going to be natural. We've been abstracting higher and higher for eons, right?

So I think that worries me: how do you come up as a junior to become a senior engineer?

I think that what happens when things go wrong is always a thing that keeps me up at night.

Coming from a background of, you know, running operations, I'm always worried about what's that page going to be about? Does it matter? It probably does. So those are two things.

I, of course, worry about a lot of the inherent biases that have been built into the LLMs. Intentionally or not, they're there.

And so how do we protect against that and correct them when they come up?

Rachel: Yeah, to the extent that we've distilled the corpus of Western writing about all kinds of things, we've got concentrated essence of all of Western society's flaws, which is exciting.

Let's smear that all over everything we do.

Let's argue about open source.

Nathen: Right!

Rachel: Plenty of people think the so-called open source foundation models are no such thing. Where do you draw the line?

Nathen: You know, I think, and I'm really not trying to skirt the question, but for me that...

As I reflect on my own career, the thing about open source, for me, started the first time I really discovered open source, and that was when I shared some code and someone on the other side of the country, who I'd never met, reached out to me and helped me improve that code.

And so to me, open source has always been about the people, it's been about the community and this idea that we can have a disparate and diverse community that comes together and we have some level of alignment, right? We're all working on and towards the same goal. We all have the autonomy to approach that in the ways that we see best fit. To me, that's what's really beautiful.

It's the autonomy, it's the alignment, it's the coming together and really trying to lift each other up.

And sometimes, to be fair, like in open source, it doesn't always feel like we're trying to lift each other up.

Sometimes it feels like we're fighting because maybe we do have some strong disagreements.

But to me, the real spirit of open source is bringing together that community and working together.

Now, when it comes to licensing, I don't have super strong opinions.

But if you haven't seen it, I would go watch Adam Jacob's talk from KubeCon, which I'll summarize really quickly: "If you're thinking about an open source license, you should totally do that. Just make sure that you can sell that shit for money." That's his advice.

Rachel: An absolute classic of the genre. Highly, highly recommended. We've touched on this a couple of times, but I'd like you to put on your DevRel hat now.

Nathen: Yeah.

Rachel: What advice would you give to today's junior engineers? What are the most interesting career opportunities?

Nathen: Oh, this is fascinating as well.

So I think when it comes to junior engineers, the most important piece of advice is to remember that you became an engineer to solve problems and solve problems for people. And the best thing that you can do as an engineer, in my opinion, is make sure that you're in a role that allows you to experience what your users are experiencing, build that empathy with the users of your system.

In our research, we find that the teams that have high levels of user centricity are the teams that have the best performance, period.

And I think it's wonderful if you got into engineering because you love to code, but that is a small part of the job. You're here to help solve problems.

And sometimes the best way to solve a problem is not to write a line of code; maybe deleting some code is a good way to solve a problem.

So I think the most important thing to do is find a place where you have passion, because it doesn't matter what you do today, it's going to change tomorrow.

Whether that's organizationally because of org changes or it's technology that's changing because we're always disrupting ourselves, right?

So find something that you're passionate about, something where you care about the use case and the mission that you're on because those things can sort of transcend technology changes, they can transcend organizational changes, whether it's you leaving a company and going to another, or your management structure changing from above you.

So I think that's what's really important.

Certainly, in my opinion, having that passion is more important than whether you become a data analyst or a data scientist, or a software engineer, or run operations.

I think that this land of technology has so many different options for you. Really, the world is your oyster, but find something that you actually care about.

Rachel: Figure out a way to delight customers and make money for your employer, one that is a good fit for your particular character and interests.

Nathen: Absolutely. And on the whole idea of making money for your employer, I think it's really important that as a software engineer you remember you're a business person.

You work at a company to further the goals of that business.

And even if it's a nonprofit, like maybe you're not making money, but you're furthering the goals of the mission, right? You want to attain that.

Rachel: Nathen, what are some of your favorite sources for finding out what's going on in AI? It's such a fast changing and wild world.

Nathen: It is such a fast changing and wild world.

It's hard to not find references and learn new things. It doesn't matter where you look.

You mentioned seeing things on billboards, obviously you're finding them all the time posted on LinkedIn and throughout the internet.

Myself, I love listening to podcasts. I'm doing some commuting now, which I hadn't done for a few years, and I've found that a commute is kind of welcome again.

'Cause I'm catching up on podcasts and you have that time to disconnect from work, which I think is really, really positive.

So in addition to that, really just going to conferences when I can is always super important, because you find people that are so passionate about the work that they're doing, and that passion is contagious, right?

And so that's how I like to learn.

Rachel: I've been having to sort of push myself out the door. I got really comfortable being at home during the pandemic.

But whenever I do overcome that inertia and spend time around other people who are working on interesting things, I remember why I got excited about this in the first place.

Nathen: Yeah.

Rachel: All right. I'm going to make you god emperor of the solar system.

Nathen: Ooh.

Rachel: Everything goes your way for the next five years. What does the world look like?

Nathen: Oh boy. Well, it's a really interesting time to be asking that question, both from a technology landscape and from elsewhere.

Look, if all goes well over the next five years, I think that as a society we will get back to our roots of embracing the ideas and the humanity of everyone around us, right?

And really thinking about that and loving people for who they are and allowing them to be themselves.

I think that's probably the most important thing that we can do.

I think that that starts locally, it starts in your family and then, you know, sort of building out concentric circles as we go.

And in five years, the concentric circle won't yet be the entirety of the earth.

But if you can do something between now and over the next five years to make that a little bit better, I think that that's really important.

Rachel: I love that. I don't think we talk enough about love in software.

I think, you know, that creating customer delight is an act of interconnection and empathy.

And when I was working in an accelerator, I'd push people into doing customer discovery, saying, "No, trust me, by the time you've done the 20th interview, you'll get it."

The most wonderful moment was when they came back, wreathed in smiles and said, "I figured it out. I understood, I know what the customers need."

Nathen: Yes.

Rachel: And that's a very human connection. It obviously has great business value, but that's not the joy of it. The joy of it is that connection.

Nathen: Absolutely. Absolutely.

Rachel: Last question, my favorite question. A generation ship takes many human generations to fly between star systems. I'm giving you one. What are you going to call it?

Nathen: Oh, this is a really challenging one. I think a good ship should have a very simple name, a one-word name.

And I think I'm going to go with Together, because as we travel, whether to colonize or to discover, we're going to do that not as individuals but together, as a community.

And so I think Together sounds like a good name for a ship.

Rachel: I love it. I'll book my tickets.

Nathen: All right. Excellent.

Rachel: Nathen, thank you so much for coming on the show.

Nathen: Oh, thank you so much for having me, Rachel. It's been super fun.