Generationship
31 MIN

Ep. #27, It's About Happiness with Melody Meckfessel

about the episode

In episode 27 of Generationship, Rachel is joined by Melody Meckfessel, an industry veteran and CTO of Jasper AI, to discuss the rapidly changing landscape of AI-driven development. From the evolving role of developers to the challenges of building trust in AI systems, Melody shares her vision for a collaborative, human-centered future in tech.

Melody Meckfessel is the CTO of Jasper AI and co-founder of Observable, with over 25 years of experience in building large-scale distributed systems. As a former VP of Engineering at Google, she led DevOps for Google Cloud, driving innovation in software delivery and developer tools. Melody is passionate about creating human-centered technology and fostering collaboration in the AI space.

transcript

Rachel Chalmers: Today, I'm thrilled to welcome Melody Meckfessel to the show. Melody is the founder of Observable and a hands-on builder with more than 25 years building and maintaining large scale distributed systems and solving problems at scale.

Currently, she's the CTO of Jasper.AI. Before co-founding and leading Observable, she was the VP of engineering at Google, where she led DevOps for Google, including Google's cloud platforms, tools and systems.

Her team powered the world's most advanced, continuously delivered software enabling development teams to turn ideas into reliable, scalable production systems. Melody, thank you so much for coming on the show.

Melody Meckfessel: Oh, thank you for having me. I'm so happy to be here and speak with you.

Rachel: You have written that building technology is a team sport. I love that. How might that change in this age of generative AI?

Melody: It's a good question. I think we're kind of all figuring it out right now, to be honest. Like I think you and I can put our engineering hat on and we can say, "Well, we're going to get assist from tools that are evolving in the generative AI space." And the wish or the dream, right, is that we focus on the pieces that really matter.

So if we go back in our dev tools history, and we look at automation around testing, right? And you look at like things that we just don't even think about anymore because we have tools that we have a high confidence and trust in and we rely on. The same thing is happening right now in the generative AI space.

The same questions around trust are still top of mind for us as engineers, and the confidence that we have in the tools and what they'll do for us, and it's really messy right now.

I guess, I would just say, you know, I think it is going to change how we work in many of the same ways that we've experienced in the past around automation and assist and viewing AI as our pair programmer, as our teammate.

And we don't know exactly what that looks like, but I do think building technology as a team sport becomes, what's a good model? I love music, so it becomes more like composition: we're not playing all the instruments ourselves, we're composing.

Rachel: Yeah.

Melody: I think of building technology as like we're riffing, right? Like we're feeding off each other's ideas. We have the tools to build in the way that we need to right now. And now we have this assist that we're not quite sure how to use. But we are doing that real time.

And I would just say for our partners in product management, for our partners in user experience, in design, in visual design, I mean, think of the challenges also when we bring engineering product and design together around how humans are going to be interacting with AI and the products that we're building.

So we have this double whammy of like, we're figuring out how it helps us build, and then we're also figuring out how it's coming into the products that we're building and the humans that are on the other side, which is just like, it's like mind blowing for me.

Rachel: And I guess the weird thing, the new thing, the qualitative change is that, yeah, we've had automation in the past, but that automation was deterministic. We could trust it because we knew what algos went into it: same input, same output.

What we have here is stochastic. It's a fuzzy model of the world, in the case of LLMs one based on written language. It can't touch grass. It doesn't have any external reference. And so it doesn't really know things. It only knows associations between things.

And so those two differences, like same input, different output, and it's not actually talking about what it's talking about, it's just making associations between words.

Melody: Yeah.

Rachel: I think that's what makes people uneasy because it's too easy to think of this thing as an oracle, and that's just a really misleading mental model for it. It's more like a Formula 1 car.

Melody: Yeah.

Rachel: It's incredibly complex and it's capable of being super powerful, but it doesn't know where it's supposed to go, it doesn't know it's supposed to go around the track. You still need somebody really skilled in the driver's seat to make it work as designed.

Melody: That's right.

And that's why I don't buy into AI replacing all the functions of developers or how we build software for the world. Because I think that the shape of that in terms of what we do every day and fingers on keyboard and all of those things, like they are going to continue to evolve in very new and interesting ways because of what you just said.

Rachel: And, you know, the whole idea of replacing programmers speaks to a larger question about what is the point of what we're doing here.

Like there is one point of view which says, the point is accumulating more wealth for billionaires. That's not a point of view that interests me very much.

Melody: Yeah, I agree.

Rachel: The other point of view is like, are we doing meaningful work? And what does meaningful work even mean? You know?

Most of us like to do work that makes a difference to somebody, that makes a material difference to quality of life or helps us figure out stuff about the universe.

Melody: Yeah.

Rachel: It's meaningless to suggest that AI could do any of that when all it is is this fuzzy model of what we already know.

Melody: Yeah.

Rachel: It obviously helps us do a bunch of stuff that relies on us doing what we already know. It's not going to help us write PhD theses, which by definition need to be something that hasn't been written before.

Melody: Yeah. I connect with that, Rachel, on that. I have always been an engineer who's centered on the human.

Rachel: Right.

Melody: You mentioned team sport, right? Like we work together, we build better when we collaborate, we evolve our code, right? We build better quality, we have more fun.

And also that to me needs to be very tied to the people that we're writing systems for. So what is the benefit that they're getting concretely? It's not about moving a metric a couple of decimal points. I mean, for some systems it is.

But I think it is about the quality of work that's happening in the world and how we're helping to improve that and change that and improve the interactions that humans have with the systems that are in service of their work and their outcomes.

And I think if we make sure that in this next phase we continue to question and challenge ourselves around what we're building and how that is in service of our customers, our users, our community, all of those different dimensions, then I think we'll stay on that right path, that noble path of what we're building.

Rachel: I don't know that it's noble. I just know that a lot of the time I'm optimizing for fun or delight, or as many people as possible having a reasonably good time.

I think that's why we use the team sport and music and Formula 1 analogies, because most people actually organize their lives around fun.

Melody: Yeah. Seeking joy is my definition of noble.

Rachel: Fair enough.

Melody: So it's not more inflated than that. Like there's no morality. It's just like this is the time we have on the planet. We're building, we're building together, we're building, hopefully, for improvements in, you know, our users, our community, our customers. So to me that's, yeah, it's not more than that.

Rachel: It's so fleeting. There was the humbling stat going around Twitter last week that the span of human life in proportion to the age of the universe is the equivalent of like half a second to the span of a human life. It's like, we're just here for an instant. We might as well enjoy it.

Melody: Yes, yes.

Rachel: Before you joined Jasper, you and I were advising a ton of really interesting early stage companies around AI. I want to hear what you were talking to them about, what some of your favorites were, what are some exciting things that people are working on?

Melody: Yeah, I'm very excited, because, you know, my background is in developer systems, infrastructure, site reliability, like everything that goes in kind of the, we used to have a phrase, it's like we build the thing behind the thing that makes the thing work.

I am really excited and interested in companies that are pushing the boundaries around agents and the progression from that kind of inner loop interaction to inner and outer loop, and how we build trust and delegation and what is the signal that we convey back in the user experience of those sorts of projects.

So for me, I am really trying to keep an eye out for companies that are pushing the boundaries around agents and delegation. Because, again, like back to what we just talked about, I think being users of those tools and being able to give signal and understand the evaluation and the quality behind them, like we are those practitioners, right?

So if you think about outage situations for production systems, you still have an incident manager who kind of knows what's going on, but hopefully we have a lot of assist in restarting the cluster, right?

And doing things that are kind of familiar to us, but pushing the boundaries of what we can delegate and trust. So any company that's in the agent space I have been watching as it relates to DevOps, production ops, production, infrastructure, code quality, code generation, I think those companies are going to be doing really interesting things.

And I think to go back to your question of, or the point that you brought up earlier around quality, it's interesting to see the community kind of focus on what benchmarks can we use, what benchmarks do we need to create, right?

So SWE-bench, a lot of focus on how do you measure against SWE-bench, but it's kind of a, you know, if you look at the startups and the big cloud providers that are out there, like the quality numbers are disastrous right now, but it's the only thing we have, right?

That we can compare against with each other. And I think there's going to be some really interesting benchmarks that can emerge. So that's one thing that has come up in meeting with startups is everyone's trying to think about how do we prove, how do we show quality improvements?

How do we also get signal from users, you know, thumbs up, thumbs down, something that we can use in training. I think the other point that I would bring up, everyone's thinking about training everything, everyone's thinking about research, everyone's thinking about data.

I think then you get into questions of companies looking at open sourcing, open sourcing research, open sourcing what they learn, because, you know, the open source model helps all of us move forward in many ways. So I've really appreciated hearing those sorts of questions in the early phases of AI development, as it relates to infrastructure.

And the final two things that I've observed is that companies that are not thinking about some sort of freemium, kick-the-tires sandbox are not doing well.

They're really struggling. And I think the companies that are doing well have that, and they're also thinking deeply about embedding in the workflows of their users and their customers.

So to me, that agent, how we build trust, the benchmarks, how those develop beyond what we have now, this idea of training data and then also just open sourcing components, I think you have to have some sort of freemium, sandbox, kick the tires.

I need it. When I look at a current product, I always have that question and I try and figure out how I get my hands dirty with it. And then workflows. I think it is about digital transformation in many ways.

And we as builders are going to learn that with our customers even at the very, very early stages in startup life.

Rachel: That's fascinating, 'cause it reflects a lot of what I'm seeing. I think everyone's bootstrapping themselves. I think part of what's behind this idea to replace engineers with AI is just that junior devs take a lot of care and feeding.

It's hard to have someone straight out of college in your team and to like give them a lot of support and a lot of PRs and a lot of care and feeding, but it's worth it because that's the only way you get senior engineers. That's the only way you build people who are able to think in systems ways.

I think the progression you described from a freebie version to like explainable training data to thinking about standards replicates that progress from, you know, we need to kick the tires on stuff to we need to start using it in anger to we need to start to be able to reason about these complex systems in logical and sophisticated ways.

And, and that's the progression that we're all on all the time as we learn new things.

Melody: Yeah, I agree.

Rachel: What gaps in the market still need to be filled? Are there things that you're looking for that you haven't found yet?

Melody: I'd say it's a really good question because just logically, I know that there are gaps that people are probably working on in stealth that we're not even thinking about.

Having been working with and advising startups, so many people are trying to optimize and have to think about budget and cost. And it's really like that optimization around new projects, new areas of research that are then fed back into what you're building. That part of how we build in this environment, I think is evolving.

And I really wish there were... I know that there are some that are out there, Rachel, and I'm just probably not aware of them, but I think I notice people trying to figure out really, and maybe this is good, right?

Maybe this is a feature, not a bug. Really trying to figure out how to optimize their ideas and projects and exploration with this cost parameter at the early stage of startups.

Rachel: That's really interesting. Yeah.

Melody: Being lean and really trying to be creative, it can be challenging for teams.

So I really hope that there become more options out there for startups to do that more creative exploration at different price points. I think it is starting to shift, but I think it is consuming a lot of cognitive load of developers and teams because you really, you want to get your idea out into the market, you want customers and users. But to get there, I think, there is a lot of iteration and a lot of creative exploration that's happening. And that cost dimension is a challenging one.

So back to your question, the solutions out there aren't great right now. And I think if there can be some better solutions in that space, I think it's going to help the overall acceleration of startups.

There are a lot of products in the development space that are approaching what they're building now from new angles around AI, that are pushing, what I said before, more toward delegation and agents and proving that those are valuable and can be trusted.

So I think we're going to see a lot of, there's a lot of flowers that are blooming right now. I don't know, one's not really coming to mind right now that's obvious. I think, you know, as we were kind of talking about before, being patient, being patient with the practicality of staying in business and continuing to do development, yeah, it's a challenge, but it's an exciting time to build.

Rachel: I do have one, it's not nearly as nuanced as yours, but I have a theory that every platform shift needs a virtualization layer and a monitoring layer.

So with the early web, we had VMware and Splunk, with the rise of microservices, we had containerization and observability. I do think LLMs are going to need a security isolation layer and, I don't know, maybe it's explainability, something that lets you do very large scale ad hoc queries against your data store.

Melody: Yeah.

Rachel: On that note, you are one of the great thought leaders in observability. LLMs look so different from the last 10 years we've spent on microservices. They're giant, they're resource hogs, they're GPU-based.

Melody: I know.

Rachel: What is that going to do to the way we monitor and manage going forward?

Melody: I don't know. But my theories are that we're going to have to start doing some decoupling around purpose of usage. And I think it's going to come back to the ultimate business value, if that makes sense, right?

So like not all microservices are equal today. Some are higher business value than others. So I don't know how it changes what we monitor and manage.

I do think that we're going to look for more automation where we can, which is, I think, agents and infrastructure will come into play with that. I think there's a lot of uncertainty around quality and the intertwining of quality with management. I think that's where maybe benchmarks and new evaluation models will come into play that are actually like inputs into how we think about monitoring and managing these systems. And I think we're going to see more merging of data infrastructure and the infrastructure to manage LLMs continue to evolve.

And I think, like on one side, I can envision a world, and I'd love to hear your thoughts on this, like six months from now where we start to like decouple and piece apart and create abstractions that make it easier for us to monitor the things that are absolutely like mission-critical.

And then on the other side, I see like a merging of the infrastructure because of the dependencies between the function itself and the data that's underlying it. Whether it's training data or it's actually like customer data that is being used in smaller models.

And then I think there's this whole question of open source too. Like what's the role of people adopting an open source way of managing microservices with like proprietary tech that they're building in-house?

So yeah, I don't know. It's a good question. I think there's going to be some level of abstraction that's going to happen. I think it's going to be tied to the outcomes that you want to achieve, which, when I say outcomes, I just mean like cost, quality, and speed or latency, right?

I think that's going to come into play. And I'm already seeing that being prototyped, those outcomes being prototyped in some of the startups that I mentioned earlier.

Rachel: Yeah. I love a good abstraction layer. It's always a good way to solve a new problem. I think there's a lot in what you say about separating the business logic from the data, because those tend to be owned by different people on the customer side.

And so everybody's looking for data that lets them optimize their own activity. But yeah, I think, a lot is up for grabs. I think good ideas may happen here.

Melody: Yeah.

Rachel: Same question, but on the developer side, how are LLMs going to change the way we code?

Melody: I wish I had like a crystal ball, a magic wand. I mean, I think they already are, right? I think, you know, part of what I have been really curious about over the last year is a bit around the deployment side of the world.

So the ability for AI to kind of take the helm of managing our environments, managing our sandboxing, and then also assisting in how we're constructing and building systems. We're visualizing them.

Like if you just think about, just go back to the beginning of like, what is the PRD, right? Like do people write PRDs anymore? Do they write the one pagers with AI? Most people are writing those one pagers with AI, right? And then they're correcting.

Okay, so then keep going, right? Like, how do we prototype the first versions of what we're going to build? Like that's happening much faster with AI assist tools.

So developers, you know, small development teams, two to five people are writing what before would be massive software development teams, because they have the autonomy to do the things that they don't need specialized skills for anymore.

And I just want to go back to your point around junior developers, there's absolutely going to be a role and a progression of expertise from early stage career development for engineers through to very specialized principal engineers and fellows.

But I think the autonomy and the speed with which we can move on both the infrastructure, environment management, test management, but then all the aspects where we can kind of really be part product manager, part designer, part like fast prototyper around the system that we're building, I think if you look holistically at it, it's changing everything.

It's speeding it up and it's reducing toil, it's continuing to take away the things that, if you're a builder, that you don't want to care about, you have a lot more options.

Now the con of that is that there's a ton of churn right now, because there's so many different options. And we as developers, we simultaneously, like, we love new tools and we hate churn.

Rachel: Yes. Yeah.

Melody: Right? So if I just look within like Jasper or some of the startups, the number of tools that we're trying out and learning, it's so much more than it was like five years ago.

And you have to think like, not all of those are going to stay. So how do we get better at assessing? Like, are we going to keep using this for another six months and get value out of it and then we'll reassess?

So I think it's adding churn, but it's also like we're kind of used to it. So it's okay, Rachel, you know? Not that we like it. I see you shaking your head. But I guess I'm just being realistic.

Like I think it's changing the whole software development process and what we as developers bring. I think it's empowering smaller teams to move faster in certain spaces. And I also think that it is, yeah, it's changing the dynamic of how we spend our time pretty significantly.

Rachel: I think you're right. I just think from the individual dev's point of view, it's so challenging. Like you put in all of the effort to climb that steep learning curve and then you have a product and it's working for you, and the org's like, "Nope. Next." There's a lot of inefficiency there.

Melody: I agree. I agree. I am trying to bring an experimental mindset to how we design tools. We have an outcome, right? Like it's not about precision, it's about happiness.

Rachel: Yeah.

Melody: And we try and make a call sooner rather than later. And we try and be good design partners, right? So if we are trying something, we make sure that we give feedback, 'cause, you know, it's a good thing to do.

Rachel: What are some of your favorite sources for learning about AI?

Melody: I'm laughing because like, talk about churn! How do you keep up? I want to know from you. So here's the thing that I feel like is maybe a little bit strange.

I love to pay attention to all the free online resources that are out there, because I'm so curious how the language is continuing to evolve with what people are learning. And there are so many that are out there, and they've changed like even in three months, right?

Rachel: Yeah.

Melody: The framing, like it's moving so fast. So I love to like do kind of pulse checks and kind of go through like at high speed tutorials. There's some marketing now that I'm building for marketers, different types of marketers.

There's some conferences and podcasts and things in that space. MAICON is one of them. I try and listen to podcasts, you know, if it's "No Priors" or "Latent Space," I mean there's so many that are out there.

And, you know, I do experiments around newsletters, like to just get the latest like news and feeds, whether it's AlphaSignal or something else. And then I just like, I do read the Gartner and the Forrester and the analyst reports, right?

Rachel: Thank you. As a former analyst.

Melody: Yeah. I think they're important. Like, you know, I did a briefing with Gartner, gosh, it was probably a month ago now.

But I think, those patterns that they're accumulating of what's happening in the space, especially around the customer in the market, I think I mentioned this to you, this observation around POCs and POC purgatory, right?

Gartner's saying 30+ percent of POCs from now through next year are going to end up in this POC purgatory. Like it was just a waste of time. It's not going to go anywhere, either for trust or budget or the organization isn't ready.

And if you're not listening to the analysts, you're kind of not going to get it. I also like pay attention to the VCs, so the VCs in the space like, wink, wink, right? Generationship.

Rachel: It's funny you should say that, 'cause I was just thinking, I still do what I did when I was an analyst. Like I glean most of my knowledge about AI from talking to dozens of startups a week, you know, sitting in on demo days and listening to 30 startups with different takes on AI.

I'm still doing primary analyst research. It's just, I'm a VC now. Yeah.

Melody: I mean, and the competitive, depending on your space, the competitive research too is moving so fast.

Rachel: Yeah.

Melody: So I also do a ton of trials, which is why I brought up the freemium point.

Rachel: Yeah.

Melody: Like I'll sign up to do a trial just to get my hands dirty and learn.

And I think, you know, if you're not doing that and you're in this space, you're kind of missing it. You need to dedicate some time to seeing what else is out there, even if you're not going to adopt it and use it for the long term.

Rachel: Got to play.

Melody: Yeah.

Rachel: Playing is very important.

Melody: I agree. Joy, there's so much joy in playing around with new tools.

Rachel: Melody, I'm making you god emperor of the solar system for the next five years. You get to decide how everything turns out. What does the future look like?

Melody: Oh my goodness. That's such responsibility. I think the reason I got into engineering originally was just the amount of creativity and building and creativity and building with others. And I think that is still present and alive and flourishing in this next wave that we're in with AI.

And I want the future to look like humans, developers, whatever we call ourselves, AI engineers, I don't know, to have so much more creative freedom around what we build. So the Formula 1 analogy is a very good one.

I think about creating music and I think about composition and I think about art and I think about how other humans are going to experience what we build. And I see technologists and builders in this role of composition, of taking the pieces that are going to fit for the problem that we're solving, which is what we've always done, right? We're going to be doing it in new ways, which offer us so much more autonomy and independence and fast exploration, and also a lot of responsibility: being clear about the practices and policies and ethics we hold in terms of what we build. And we need to be thinking about that right now.

Rachel: Yeah, what data we're ingesting and how much fossil fuel we're burning and all of those things.

Melody: Yes, a hundred percent. Absolutely.

So I see the responsibility and the role of responsibility increasing as we become more composers and creators in this next phase. And I am incredibly excited about it.

I think like everything that's being built today, like I am inspired by, I am curious about, and I just see that just the tremendous amount of potential with responsibility that's going to come for our field and our industry in the future. And yeah, I'm excited about it. Voice, vision, virtual, all of that, I think, is going to be part of it.

Rachel: In recognition of your stellar five years of service, as god emperor of the solar system, we, the people have constructed a generation ship to take you to Alpha Centauri. What would you like to name it?

Melody: Edina. That's my daughter's name.

Rachel: That's a beautiful name. What does it mean?

Melody: The meaning, it was really selected so that she would have a good nickname. Her nickname is Eddie.

Rachel: Oh. So great.

Melody: If the ship had a nickname, it would be Eddie Max.

Rachel: I love it. I love it. And with that, we'll let you board the Starship Edina and send you on your way, Melody. It's been a joy having you on the show.

Melody: Thank you so much. It was so good to speak with you. Thank you, Rachel.