The Right Track
64 MIN

Ep. #6, Domain Expertise with Laurie Voss of Netlify

about the episode

In episode 6 of The Right Track, Stef Olafsdottir speaks with Laurie Voss of Netlify. They discuss the roles in modern data teams, how Netlify uses data, insights on early data hires, and whether or not self-serve analytics will ever work.

Laurie Voss is a Senior Data Analyst at Netlify, but a web developer at heart. He was previously the founding CTO of npm, and an application developer at Yahoo. Laurie started his first web development company 25 years ago.

transcript

Stefania Olafsdottir: All right. Welcome to The Right Track, Laurie Voss.

Laurie Voss: Hello, thank you for inviting me.

Stefania: Great to have you here.

A great honor. I've been following you on Twitter for a really long time, you have great commentary on various things, whether it's data, web development, or cultural, political things.

So I recommend it.

Laurie: I apologize for my legacy of oversharing.

Stefania: It gets me through the day.

Please, don't stop it.

But to kick things off for us, could you tell us a little bit about who you are, what you do and how you got there?

Laurie: Sure. I am a web developer.

That is the primary way that I identify myself.

I started web development as a teenager when I lived in Trinidad and Tobago in the Caribbean.

I started a web development company in 1996.

I had to get my mom to drive me to meetings because I didn't have a driving license yet.

I took a break to go to college in the UK.

I lived in the UK for seven years, where I got a job with Yahoo.

Yahoo moved me to San Francisco and I have been living in California for the last 14 years.

I have started a couple of companies.

In addition to working at Yahoo, I started an analytics company called awe.sm, where I got my first exposure to big data, although I usually refer to it as medium data.

And then I co-founded NPM, which had an enormous amount of fun data to play with about the world of JavaScript.

And now I am a senior data analyst at Netlify continuing my love of all things web and data.

Stefania: Nice. Thank you for that intro.

That is so great that your mom was driving you to business meetings. I love that.

To kick things off, I like to ask our guests to tell us inspiring and frustrating data stories so that we can all get on the same page about what we love and hate about data.

Can you share some fun examples or favorite examples of a successful data-driven product decision?

Laurie: The one that jumps to mind is when I was still at Yahoo, Yahoo has an enormously popular front page at yahoo.com and a couple of other enormously popular properties like Yahoo News and Yahoo Weather, which have just been chugging since the late '90s.

And one of the things that Yahoo had on their front page was a list of like this week's or today's top 10 news stories.

And these were carefully selected by a team of human editors who took their jobs very seriously and had many years of experience.

And the Yahoo data science team decided that they would run an experiment.

They were like, what if we got a robot to pick the stories instead?

Because Yahoo's revenue is driven by clicks on stories, right?

Like, when they show an ad, that's when they make money.

So the more clicks, the better. And the human editorial team were very much like, "Well, these robots are never going to be able to do what we do. We are incredibly intelligent editors."

But they ran the test and it was just no contest whatsoever.

The ML-selected stories, I can't remember the exact figures, but it was like an enormous multiple.

It was like 10 or maybe even 100 times more clicks went through.

The robots were way better than the editorial team at figuring out what people would actually click on.

The editorial team of course was like, "Well, now we have no job."

But they didn't completely can the editorial team, because one of the things that they discovered with the ML-based system was that selecting entirely for clicks, it tended to produce weird stories because that is what people would click on.

So, like, you'd get these 10 utterly bizarre, confusing headlines, and it would make Yahoo News look like this unserious news outlet where it was always like, "Here's a turtle with two heads."

So the editors were left with being allowed I think to select two out of the 10 stories to make it look like serious news and the other eight were selected by robots.

Stefania: That is a very, very good story.

One thing I'm concerned about with this story is that it sort of reinforces the theory, and the unrealistic expectation a lot of companies have, that the first thing a company should do in data science is something machine learning related.

Laurie: That's the thing.

There's so few companies that are really operating at the scale where running ML on your own data is going to be of any use at all.

Yahoo is enormous, right? Yahoo got 400 million unique visitors a day.

That was the size of the data set that they were playing with.

And ML is very good at that kind of scale.

When you have a hundred users a day, what is your model going to do?

Your model is just going to be a nightmare of overfit.

Stefania: Yeah. But we will touch on this, I assume, a little bit later because I know you have some good thoughts on what should the early hires of a data team be.

So we can bring on some opinions when we touch on that later in the episode.

Thank you for sharing this great story.

Do you want to also maybe put us on a track of some frustration around data?

What are some of your frustrating examples of data issues or data stories?

Laurie: Sure. I think I have a whole startup that is a frustrating example of data.

Like I said, there was this company I ran called awe.sm, and it was an analytics startup, specifically it was a social media analytics startup.

So what the product promised to do is you, as a social media manager, would be able to sign up for the product and we would instrument your social media campaigns.

So everything you shared on every social network would go out and we would tell you, not just how many people saw it, not just how many people clicked it, but how much money you made.

Like, these people came to your website, did they eventually convert?

How much money did you make?

The idea being that you could show a positive ROI for the investment that you were making into a social media management team. And we built some very good technology and some great real-time processing of all of this stuff. But the problem was that the answer was not the answer anybody wanted. The answer was that social media hardly ever converts directly to sales. It made almost no money.

So these social media managers who were making $90,000 a year would get this report that said that their entire contribution to the bottom line for the year had been $5,000.

Therefore, they were an $85,000 net loss to the company.

And the report basically said you should fire this person who is holding the report.

So the person who signed up for the product would immediately throw the report away and cancel their subscription.

Nobody wanted the answer. The technology was not the problem.

The problem was that the answer was bad.

It's kind of a silly story, but there is a core of like an actual learning there, which is like your job as a data person is not to answer the question somebody asked, it is to give them some information that is useful for the company.

So if they ask a question, you should answer the question.

But if the question isn't illuminating, if the question doesn't serve the company, you should think of other questions.

You should be like, well, the ROI number is bad.

What about brand awareness? What about sentiment? There's other ways of measuring.

There are other ways of determining what useful things social media is doing for your company.

And that is what the industry ended up doing.

The industry ended up measuring other things and people still have jobs as social media managers.

So the learning I took away from that was that you should be answering a question in the context of the business and not just answering the question that was asked.

Stefania: That is a really good takeaway.

The first response also that I had to the story is, yeah, people don't want to hear bad results.

And so obviously the person that heard this result is like, "Okay, cancel. I don't want to know that."

And that's also a really classic tale in data.

You search for the answers you want to hear and you ignore the answers that are really telling your story.

So I think that's another interesting sort of highlight that this story also sheds a light on.

Laurie: Yeah. There's definitely a way for it to become pathological, right?

There's definitely a way for the data teams to believe that its job is to come up with answers that support the existing strategy.

That's not the job. If your answers don't support the existing strategy, that's fine, but you're not done.

You should come up with answers that suggest an alternative strategy.

Stefania: Yeah. And like you said, ask further questions because all the time people come to you as a data person with a simple question, but there's so much depth behind the question, you know what I mean?

Laurie: Yeah, people are constantly asking very simple questions.

I think I have a very repetitive one at Netlify, which is that people will ask me, "Can you get me the email for all of our customers so that I can email them like this?"

And I'm like, every customer is a team.

A team has anywhere between one and a thousand members.

Do you want to email everyone on every team? Do you want to email only the admins on the team?

Do you want to email the billing contacts on the team? There's no such thing as the customer emails.

Stefania: That's a really good point. Exactly.

And that's exactly where your quality as a data person shines through, answering a question with a question.

That's the best way. Honestly, though, I say it as a joke, but that's what I really mean.

I also know that you have some thoughts about common ways that analytics is broken. Can you talk a little bit about that?

Laurie: I've gone off on this on Twitter a number of times.

I think it's the question in analytics that has driven me up the wall for the last 10 years, because it was one of the questions that I had to answer at awe.sm, in addition to every other job I've ever had. It is the very simple-seeming question: how many unique users do you have?

And I have been in the industry of web development for 25 years now and I promise you, nobody knows.

Absolutely nobody knows how many unique users anybody has.

Anybody who tells you that they do is lying, because it's an impossible question.

The same user using the same computer can delete their cookies and come back and appear to be a completely new, unique user.

If they swap browsers, they look like a new user.

If they go to their phone, they look like a new user.

If they have two laptops, they look like two different people.

In addition, many people can show up as one person: NAT translation software and office proxies can hide like a thousand users behind a single IP.

And that will change the user agent when they're coming in.

It is incredibly difficult to have any real idea how many unique users you have.

And one of the most frustrating aspects of this is that if you have a tool that is attempting to count unique users, your clients will come to you and say, "Your numbers are different from Google Analytics. Why is that?"

Obviously Google Analytics is correct and your number is wrong because Google Analytics is gigantic and Google has billions of dollars.

So obviously Google Analytics is correct.

But from when I ran an analytics company, because I was running tests to verify the accuracy of our own analytics, I know exactly how inaccurate Google Analytics is.

I know exactly the circumstances under which it fails.

And so I know that their number is also an approximation. It's pretty good these days.

They do, in fact, have a huge team and billions of dollars, but it's not correct all of the time.

There's lots of ways for it to go wrong.

But it's treated as gospel in the industry and it really drives me up the wall that everyone is just like, "Obviously Google Analytics is always correct."

Stefania: Yeah. And here, maybe to clarify for our listeners, we are talking about unique, unauthenticated users.

Laurie: Yes.

Stefania: Because when you have authenticated users, then you can start tying all of those distinct, like the cookies, the device IDs, all those things.

You can tie them to some sort of a database identifier, and then you can sort of stitch together a holistic user journey and create a unique user.

And then you can also backfill the unauthenticated sessions and tie them to this newly authenticated user, and all those things.

But when you don't have any authentication, then it's black magic.
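To make that identity stitching concrete, here is a minimal SQL sketch of the backfilling Stefania describes. The events table and its anonymous_id and user_id columns are hypothetical, and real identity resolution handles many more edge cases.

```sql
-- Map each anonymous_id to the user it eventually authenticated as,
-- then backfill that user_id onto the earlier anonymous events.
with identity_map as (
    select
        anonymous_id,
        min(user_id) as user_id    -- first user seen for this device/cookie
    from events
    where user_id is not null
    group by anonymous_id
)

select
    e.event_name,
    e.occurred_at,
    coalesce(e.user_id, m.user_id) as resolved_user_id
from events e
left join identity_map m
    on e.anonymous_id = m.anonymous_id;
```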

Laurie: Yes, exactly. You're just guessing with different degrees of fidelity.

Stefania: I remember a few cases where people were asking me why the sources were different between all of these different locations.

I mean, we were using Adjust and AppsFlyer and all those things for attribution management as well.

And then you have Google Analytics and then you have like Mixpanel.

Segment is doing something as well. And then you have Redshift raw data.

And so you have all of these different sources and you also have like server logs.

Server logs overcount distinct users.

What is your answer when people tell you, "This is not the same number as some other source of truth."

Laurie: Like I said, my answer is nobody knows.

My answer is that it is an unknowable thing.

I think the other way that the same question manifests is people are like, can you filter out bots?

And I'm like, no, no one can filter out bots.

Bots are indistinguishable from humans because that's the point of the bot.

If you could tell that it was a bot, then you'd be telling it to go away.

But the whole point of the bot is that it pretends to be a human so they can log in and scrape or do whatever.

And bots are getting increasingly and ever more sophisticated at impersonating humans.

They're supposed to look like humans.

They're not trying to be filtered out and therefore, no, I cannot filter out bots.

I cannot tell you what is bot traffic versus human traffic.

Stefania: It's a Catch-22, isn't it?

When we get smarter in detecting them, then they get smarter in being bots, and then we have to get smarter in detecting them.

Laurie: Exactly.

Stefania: What I also sometimes try to do in this situation, though, is to at least, as a team, agree on what the baseline is.

Like, what is the baseline solution for something or what is the baseline count?

So that most reports at least agree, and when you start looking at numbers from different sources, you know what you're comparing to ultimately.

Is that something that you do?

Laurie: Yeah. If you need to count unique users, at least count them the same way every time.

If you've decided to use Google Analytics, use Google Analytics numbers for everything.

And if you can't, if your instrumentation doesn't allow it, or you don't want to pay for the very expensive version and Google Analytics doesn't let you do everything, then don't use Google Analytics for those things.

Use something you control at every point in the process to measure it because otherwise you will be comparing apples to oranges and you'll be like, well, it looks like we get a 200% fall off from the top of the funnel to stage two.

No, you don't. You got a 10% fall off, but one of these sources is twice as big as the other.
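As a concrete illustration of "count them the same way every time," here is a minimal SQL sketch of one canonical uniques definition computed from a single source. The page_views table and anonymous_id column are hypothetical; the point is that every report pulls from this one definition rather than mixing tools.

```sql
-- One canonical definition of daily unique visitors, from one source.
select
    date_trunc('day', occurred_at) as day,
    count(distinct anonymous_id)   as unique_visitors
from page_views
group by 1
order by 1;
```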

Stefania: Exactly. Okay.

So the learning here is to have a single source for some of your key metrics and then normalize your other data sources against that. Something like that?

Laurie: Definitely. I agree with that statement.

Stefania: I wanted to maybe shift a little bit in terms of how the industry is changing before we move on to how you have seen data cultures being built and data trust being undermined and all those things.

Can you talk a little bit about how you see the industry has changed in the past few years?

Laurie: Yeah. I wrote a blog post about this recently.

I think it's probably the thing that spurred you to invite me to this podcast in the first place.

Stefania: Correct.

Laurie: Which is, about nine months ago, I was introduced to DBT. DBT has been around for a while now, I think five or six years, but it was new to me nine months ago.

And it definitely seems to be exponentially gaining in momentum at the moment.

I hear more and more people are using it and see more and more stuff built on top of it.

And the analogy that I made in the blog post is as a web developer, it felt kind of like Rails in 2006.

Ruby on Rails very fundamentally changed how web development was done, because web development prior to that was that everybody had sort of figured out some architecture for their website and it worked okay. But it means that every time you hire someone to a company, you have to teach them your architecture. And it would take them a couple of weeks, or if it was complicated, it would take them a couple of months to figure out your architecture and become productive. And Ruby on Rails changed that.

Ruby on Rails was you hire someone and you say, "Well, it's a Rails app."

And on day one, they're productive.

They know how to change Rails apps.

They know how to configure them.

They know how to write the HTML and CSS and every other thing.

And taking the time to productivity for a new hire from three months to one month, times a million developers, is a gigantic amount of productivity that you have unlocked.

The economic impact of that is huge. And DBT feels very similar.

It's not doing anything that we weren't doing before.

It's not doing anything that you couldn't do if you were rolling your own, but it is a standard and it works very well and it handles the edge cases and it's got all of the complexities accounted for.

So you can start with DBT and be pretty confident that you're not going to run into something that DBT can't do.

And it also means that you can hire people who already know DBT.

We've done it at Netlify. We've hired people with experience in DBT and they were productive on day one.

They were like, "Cool. I see that you've got this model. It's got a bug. I've committed a change. I've added some tests. We have fixed this data model."

What happens on day two? It's great.

The value of a framework is more that the framework exists than any specific technical advantage of that framework.
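For readers who haven't seen DBT, here is a minimal sketch of what a model looks like: it is just a SELECT statement, and {{ ref() }} wires up dependencies between models. The model and column names here are hypothetical; tests like not_null or unique would live in a schema.yml file alongside it.

```sql
-- models/marts/fct_daily_signups.sql  (hypothetical model name)
-- DBT materializes this SELECT as a table or view in the warehouse.
select
    date_trunc('day', created_at) as signup_day,
    count(*)                      as signups
from {{ ref('stg_users') }}       -- depends on a hypothetical staging model
group by 1
```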

Stefania: Yeah. I love that positioning of DBT.

Do you have any thoughts on why this has not happened in the data space before?

We have a lot of open source tools already built.

We had a huge rise in people using Spark and Hadoop and all those things for their data infrastructure a while ago, maybe 10 years ago, and that's still happening in some of the companies.

What are your thoughts on why this is happening now?

Laurie: I think it was inevitable.

I mean, the big data craze was 10 years ago.

I recently was reminded by somebody that I wrote a blog post.

It was literally 10 years ago. It was like July 15th 2011.

I was like, statisticians are going to be the growth career for the next 10 years, because all I see is people collecting data blindly.

They're just creating data warehouses and just pouring logs into them and then doing the most simple analyses on them.

They're just like counting them up.

They're not doing anything more complicated than counting them up.

A lot of companies in 2010 made these huge investments and then were like, "What now?"

And they were like, "Well, we've sort of figured we'd be able to do some kind of analysis, but we don't know how. This data is enormous. It's very difficult to do."

It was inevitable that people would be trying to solve this problem.

And lots of people rolled their own over and over.

Programmers are programmers, so when they find themselves rolling their own at the third job in a row, that's usually when they start writing a framework.

And that seems to be what DBT emerged from.

I think it's natural that it emerged now. I think this is how long it takes.

This is how much iteration the industry needed to land at this.

Stefania: Yeah. That's a good insight.

I maybe want to touch on then also another thing that a lot of people talk about.

And ultimately, I mean, I think what most companies want to strive for, although it remains to be defined what it literally means, is self-serve analytics.

What does that mean to you and how does that fit into the DBT world?

Laurie: I have what might be a controversial opinion about self-serve analytics, which is that I don't think it's really going to work.

There are a couple of problems that make self-serve analytics difficult.

What people are focusing on right now are like just the pure technical problems.

One of the problems with self-serve analytics is that it's just hard to do.

You have to have enormous amounts of data.

If people are going to be exploratory about the data, then the database needs to be extremely fast.

If queries take 10 minutes, then you can't do ad hoc data exploration.

Nobody but a data scientist is going to hang around for 10 minutes waiting for a query to finish.

Stefania: Finishing your query is the new-- It's compiling.

Laurie: But even when you solve that problem, and I feel like a lot of companies now solve that problem, you run into the next problem, which is, what question do I ask?

What is the sensible way to ask?

And also, where is it?

Discovery is another thing.

If you've instrumented properly, you're going to have enormous numbers of data sources, even if you're using DBT.

And even if they're all neatly arrayed in very nicely named tables and the tables have documentation, you're going to have 100, 200, 300 tables, right?

You have all sorts of forms of data.

And somebody has to go through every table by name and try to figure out: what's in that table, and does it answer my question?

The data team knows where the data is and it's very hard to make that data automatically discoverable.

I don't think people have solved that problem.

Even if you solved that problem, the chances are that somebody whose job isn't data is going to run into traps.

They're going to run into obvious data problems that a professional data person would avoid.

The simplest one is like people who are using an average instead of a median.

They're like, "The average is enormously high. So we don't have to care about this."

And I'm like, "No, no, no, no. The median is two."

And that's different from an average of 10.

You've just got a couple of outliers that are dragging your average up.
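To make the average-versus-median trap concrete, here is a minimal SQL sketch with a hypothetical sites_per_user table. Snowflake has a MEDIAN aggregate; in warehouses without it, PERCENTILE_CONT(0.5) does the same job.

```sql
-- A few power users drag the average up while the median stays small.
select
    avg(site_count)    as avg_sites,     -- e.g. comes back around 10
    median(site_count) as median_sites   -- e.g. comes back as 2
from sites_per_user;
```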

I solve that problem for stakeholders in our organization multiple times a week.

It's like correcting them just on that particular point.

And that's not even a particularly subtle question about data.

There's lots of ways that somebody who doesn't spend all of their time thinking about how to present and analyze and question data is going to mislead themselves if they are self-serve.

So that doesn't mean that I don't think self-serve should happen.

I think one of the most productive ways that I interact with my colleagues outside of the data department is we have self-serve analytics.

There's no barrier.

They can go in and write their own queries and build their own dashboard.

And they get like 80% of the way.

And then they come to me and they're like, "Is this right? Does this say what I think it says?"

And some of the time I'll be like, "Yes," some of the time I'll be like, "Nope, you're being misled by this. Sorry about that. You looked at the wrong table or you misunderstood what that problem was for."

And sometimes it will be, "You're almost there. I need to make a couple of tweaks to fix this source of error," that kind of stuff.

They can get a lot of the way there, but I think being a hundred percent self-serve is not practical, no.

Stefania: I think that's a really good way to put it.

Another way also I like to think about it is there are layers of self-serve and it depends on your audience, what that means.

Providing self-serve analytics to a very non-technical product manager means one thing, and providing self-serve analytics to a very technical backend engineer who wants to answer some question because they're deciding how to architect their API or something like that is a very different thing.

And this touches a little bit on sort of, who are your stakeholders as a data team? I think.

Laurie: I agree.

Stefania: But it sounds like you have already built some sort of self-serve analytics and it depends on people knowing SQL.

Is that right?

Laurie: We have a couple of tools. We have a bunch of dashboards.

We use Mode and we have a bunch of dashboards in Mode where, if you have one of the set of questions that the exploration tools for the visualizations we've already built can answer, you can completely self-serve using just point-and-click.

If that doesn't work for you, Mode will let you write your own SQL.

We have recently adopted a new tool called Transform, whose whole raison d'etre is to be a source of consistently defined metrics across the business.

So you give it a metric and then it gives you quite expressive ways of slicing and dicing that metric, filtering it and resorting it and stuff like that.

So our goal is to have most of our metrics be in Transform and have people be able to examine them there and be confident that that data is correct and that those metrics mean what they think they mean, which I think is going to lead us naturally to the next part of our conversation.

And Mode is going to become more about ad hoc analysis, one-off reports, very detailed explorations of specific questions, not everyday metrics.

Stefania: Yeah. Exciting, exciting times.

You're touching on something that I definitely want to ask a little bit more about, which is what is your stack over at Netlify?

So we'll touch on that later in the episode.

I think I want to know, because we've started talking a little bit about how you've solved some challenges, I guess, both at Netlify, but also how you view the industry.

Can you share a little bit just with us, how do you use data at Netlify?

Laurie: We use it all over the place. Every department is going to be using it differently.

Netlify is obviously a web host for other people's websites.

So one of the things I should make very clear from the outset is that we do not use our customer's data.

Our customers have visitors, our customers have users, and they're not our users, they're not our visitors.

That's not our data. We do not use that.

What we use is our own data, people using the Netlify app to do things, or the CLI. And it informs every part of the company.

We use data analysis that comes out of support tickets to inform engineering about looming problems or to inform product like this is a major pain point that we need to address.

Obviously there's lots of traditional uses of data within sales and marketing.

Marketing wants to find audiences who look like this, people who use this feature.

How do we decide what to say to our users and when in their user journey to make sure that they're having a good time and getting ramped into the products correctly?

Sales wants to know users who are very heavy users who are growing very fast.

They want to have a conversation with them about enterprise plans.

And obviously sort of at the core of everything is how the product team uses data, and the product team uses data the way that product managers always do.

We're looking at, okay, we've got this feature. How often is this feature used?

Who is using this feature? What do they look like?

Does this feature correlate with conversion? Does this feature frustrate people?

Do they churn if they rely on this feature?

One of the things that data can't do is tell you what to build next.

It can't say users would really love this feature.

It can only tell you stuff about things that you're already doing and the things that your users are already using.

I think there's a key distinction between what I would say is data-driven and data-led.

Data-driven is great. Data-led can send you down rabbit holes of like endless optimization when some creativity was what was being called for.

Stefania: That's a really good distinction.

It's so important to recognize the life stage of the company when it comes to being data-led; it matters so much which life stage your company is at, and even which life stage a particular feature or a team is on.

Is it a fundamental part of your product that's been around for a really long time and you know how it should perform, or is it something that you're experimenting with?

Laurie: Mm-hmm (affirmative).

Stefania: Thank you for sharing how you use data at Netlify.

I think this might be a good segue also into, what does your org structure look like?

How does the data team work with the product team and with the engineering team?

Are they integrated or are they a separate team, et cetera?

Laurie: So we have what I think is referred to as a hybrid model.

We have at the moment a data team of eight.

We are hiring so feel free to apply.

And three of those people are what we call core analysts.

So they can answer questions from any part of the business.

And three of those people are what we call embedded analysts.

So one of them lives in product.

They're about to be joined by a second one in product.

One of them lives in the growth team and one of them lives in the finance team.

And the difference between the two is basically like the depth of domain knowledge.

Finance is very complicated and it has very specific nouns and verbs that you really want to be absolutely clear on when you're answering questions.

And so we found it was more productive to have somebody very deeply embedded with that team so they didn't have to re-explain the technical meanings of accounting terms, which is a question that you get a lot when you're doing data things for finance in particular.

Stefania: What is the deferred revenue?

Laurie: Exactly. Like, what is COGS? What is expansion?

What is N plus zero conversions? All of those sorts of things.

And the remaining two members of the team are infrastructure engineers, whose primary tool is Airflow.

And they are dealing with managing the influx of data from all of our other systems into our Snowflake warehouse, where DBT then takes care of everything else.

Stefania: And which of these are you?

Laurie: I am one of the core analysts, although my role is a little bit unique in that my title is data evangelist, technically.

And one of the things that I'm supposed to do is take our data and share it with the outside world.

One of my big projects is the Netlify community survey that we are currently running, where we ask web developers what they're up to and what they're using, stuff that doesn't show up in our server logs.

They tell us what they're up to and we share that with the community.

We say, here is yourself. This is what you're using. This is what you say you're using.

And I found from doing this, I did a similar thing at NPM, that web developers find that kind of contextualization of their work incredibly useful.

If you've adopted a tool, you're often wondering, "This thing that I'm using, is it a best practice? Is it old fashioned? Is it I'm ahead of the curve and that's why I keep running into bugs?"

Just answering that question can be very reassuring for a lot of people.

And also there's all sorts of other insights that we can get about how various industries and sectors and levels of experience, experience web development differently.

So I'm very excited about wrapping up that survey and doing the results this year.

Stefania: That's exciting. I relate so heavily to that self validation that you want to have.

I remember when I was doing product analytics before it was called product analytics, and we were using tools like Mixpanel before anyone I knew was using Mixpanel, and we were trying to calculate things like retention before there was a single blog post about how to calculate retention on the interwebs.

I'd just get a request from an investor who'd be like, "What is the retention?"

And I'd be like, "Let's see."

And obviously the thing that you have to go through there is you have to go through layers of depth with the business side and the product side on defining, how should you define retention for this particular product and this particular organization?

And in what context is this question?

Laurie: Yeah. A very painful memory of mine is being in a conversation with investors pretty early in my career and one of the investors who was very finance minded asked us, "So what's your operating margin on this product?"

And I had no idea. I'd never calculated it and I wasn't 100% sure what operating margin was.

But I was a founder and trying to run a company and trying to appear like a grownup.

So I didn't say I don't know. Instead, I bluffed. I gave a number that made no sense.

Absolutely no sense. Couldn't possibly be an operating margin.

It wasn't even in the right units.

You could tell that the investor just flipped the bozo bit on me.

We lost that meeting then. That answer is what ended the meeting for us.

And I was like, "I should've just said I don't know."

Stefania: Yeah, that's a really good point.

Just getting some insights from other people is really helpful, but I think it also touches on what you were just describing, it also touches a little bit on that domain knowledge, domain experts, which I believe was learning number four in your recent blog post about what you've learned about data recently.

But data teams are nothing without domain experts.

And you also mentioned that just now, saying there are three specifically embedded data analysts.

Can you talk a little bit about that domain expertise that has to go alongside with the data science team?

Laurie: Absolutely. One of the reasons that I work at Netlify versus other companies that are doing interesting things with data is because Netlify is doing data that relates to web developers.

And as a web developer for 25 years, I know a lot about web development and that turns out to be important because when somebody asks a vague data question, having knowledge of the domain gets you very quickly to a more sensible answer than just answering the question naively.

I mean, if somebody asks you, "How many unique users do you have," for instance, a data scientist might try to answer that question, whereas a web developer will tell you, "Nobody knows."

But there's lots of other ways, specific uses of the product.

Somebody can say, how many active sites do we have? That is a question that I'd get. And what counts as an active site? Does an active site mean a site that someone has created, but not yet deleted? Does it mean a site that somebody has created and is still getting traffic? Does it mean a site that somebody has created and they are still frequently deploying updates to that website? Any of those is a reasonable answer. But as a web developer, you're going to say that an active site is probably the last one.

It's the one that someone's still working on because even a long dead site will still receive traffic.

Bots and random users will show up. And that's not really what you're asking.

You're asking, how many sites are people actively working on?

And that kind of question can get you much further, much faster.
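Here is a minimal SQL sketch of the definition Laurie lands on, an active site being one that is still being deployed to. The deploys table, site_id column, and 30-day window are hypothetical choices, not Netlify's actual definition.

```sql
-- "Active site" defined as: at least one deploy in the last 30 days.
select count(distinct site_id) as active_sites
from deploys
where created_at >= dateadd('day', -30, current_timestamp);
```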

So I'm a big believer that data people should become embedded in the domain of the team that they are working on and the company that they are in.

Our data team, I was just talking to them an hour ago, a bunch of our data team have started publishing websites on Netlify because they've found that being a user of Netlify gives them an insight into how to answer their questions.

They're like, "I know what that button does, and therefore I can tell you when I'm answering analysis about it, like what the instrumentation is really telling me about that button."

Stefania: That's hugely important.

And it's a good segue into, I wanted to ask you a little bit more about, how literally does the data team work with the product team?

So this is a really good example of how the data team works with the product team. I mean, they use the product. That's a good and important part.

And what are some other examples of sort of-- I mean, how many product teams do you have, for example?

How big is the product team? And are there any routines?

Do they sit with them? Do they have regular meetings with them? Things like that.

Laurie: Yes. Our current product embedded data hire is very new.

I was the one doing the product related questions until she was hired.

But generally, how it works is we have a team of-- Netlify grows so fast.

It's very hard for me to remember exactly how many people are in a team.

I think seven product managers, each overseeing different parts of the product.

The embedded data person sits in all of their meetings, listens to everything that they're talking about, is in their channel with them in Slack and absorbs--

She doesn't just answer specific questions when they are presented to her.

Her job is specifically to be aware of what the product team is thinking about and proactively provide insights.

Our data team manager, Emily, is brilliant on a lot of levels, but one of the things that I really appreciate about her management style is that she made clear very early on that proactively generating insights is part of your job.

She's like, you're going to block out this time on your calendar every week and you are going to think of a question that would be useful and interesting to the business.

You're going to answer that question and then you're going to present it, even though nobody asked.

And we get a lot of great stuff out of those insights. It's an incredibly valuable practice.

It turns the way that the rest of the company sees us from this is a cost center and a service department to this is a team that is adding value to the business.

Stefania: Well, that's amazing. I love that tradition. Great work, Emily.

Is that then shared via Slack? Where is that shared?

Laurie: Yeah, we have a stats and graphs channel that I created.

The moment I joined, I created it. And we will share it there.

So people who are just sort of generally interested in data hang out in that channel, and then we share it on a per-case basis.

One insight will be of particular interest to one particular team and we will share it with that team in their channel as well.

Stefania: Nice. So product teams, I mean, they release features every two weeks or something, I'm assuming.

Laurie: Not every product team works on that cadence, but yes, roughly along those lines.

Stefania: Yeah. How does analytics for those new product updates work?

Who is involved in that process? Planning the data, implementing it, key wording it, analyzing, all that stuff.

Laurie: I would say that we have a process and it is very clearly defined about when every stakeholder should be brought into the process and informed and all of that kind of stuff.

And the process is very good, which is when the product manager is thinking of the product, they should be thinking also, how do I measure the success of the product?

And that is when they should start the conversation with the data team.

What does success look like for this product? How would you measure that?

And that feeds into the product description.

When it gets written, it's like, okay, in addition to it having this feature and it doing this thing, it requires this instrumentation because that is how we will know whether it has succeeded or failed.

Startups move quickly and things are messy so that doesn't always happen, but we are getting much better at it as an organization.

Stefania: That sounds like a very classic process to me.

Laurie: Yeah. I don't think it's groundbreaking.

Stefania: And so what is the classic thing that fails in that or succeeds?

Laurie: I would say the most common way for it to fail is for somebody to assume that a question was simple and therefore didn't require specific instrumentation.

They'll say, "And obviously, we need to know how many times they click the button," and we'll be like, "We didn't do that, because you didn't mention that until now."

You wanted to know how many users you have and we instrumented how many users you have.

But now you're saying that you want to know how many times people click the button.

And they were like, are those not the same thing?

No, no. Those are two completely different things.

Not every click creates a user.

So you have to be very specific in advance about what exactly it is that you want to measure, which is one of the reasons that I lean towards over instrumentation.

I don't necessarily think that you should instrument every X, Y position of the user's mouse on the screen at all times and I don't think you necessarily need to make use of all of the data that you capture immediately.

Like, sure, capture 20 metrics and use one of them, but throw the other 19 somewhere that you can get to them.

And six months down the road, somebody is going to be like, "So has this gone up?"

And you'll be like, "Well, we've never asked that question before, but luckily we have six months of data about it. Here we go."

You can do it then. But over capturing the data I think is worthwhile.

Stefania: There is this really interesting balance, I feel like, in these two areas that you're touching.

One of the areas is try to overinstrument rather than underinstrument, because you want to try to capture the questions that you haven't already asked.

And then there is the question of, who knows what to overinstrument?

I have had a lot of "helpful developers" instrument things that just turned out to create chaos in the data.

And so I think there are a couple of challenges here that I'd like to hear you talk a little bit about.

So number one, if you plan around overinstrumenting, how do you make sure you still get the most important things, and how does that process work for you?

How have you seen that work, in terms of still getting it through?

That you need both the P0 or P1 instrumentation done, but then also here are some nice-to-haves for potential future questions that we might have.

How does that conversation go?

Laurie: I think at a technical level, there is some important foundational work that one needs to do early on to enable overinstrumentation as a practice.

The first is your data store should be enormous.

It should not be the case that deciding to capture an event that happens every 10 seconds is going to overwhelm your database and make all the other queries slow.

Technically, you must be able to store vast amounts of data.

Also at the technical level, it must be possible to very efficiently extract just the events that you care about.

If you've got some kind of a batch process going and it's going to take--

If you've started capturing 10 billion rows a day, and that means that the 200 rows a day you were capturing before now take five hours to get, that's a failure.

Your warehouse has to be capable of processing enormous amounts of data and filtering it out very quickly without sacrificing performance if you are going to habitually overinstrument.
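As a sketch of what "efficiently extract just the events that you care about" can look like in practice, here is a hypothetical query against a wide raw event table; the table name, event name, and the idea of clustering on event name and date are illustrative assumptions, not Netlify's setup.

```sql
-- Pull only the events of interest out of a very large raw table.
-- Filtering on event name and a date range keeps the scan cheap,
-- especially if the table is clustered or partitioned on those columns.
select *
from raw_events
where event_name = 'site_deployed'
  and occurred_at >= dateadd('day', -7, current_timestamp);
```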

If you get those things out of the way, then the remaining technical challenge is that instrumentation has to be very simple to do.

It should ideally be that at any point in the code base, the developer can just insert a one-liner that says I'm firing this event.

Here it goes. And I never have to think about it again. In real technical systems, that can't always be the case.

Sometimes instrumentation is going to be more complicated than that.

But if you can get it so that 80% or 90% of the time all they have to do to instrument something is figure out when that event is happening and fire off a one-liner, that's great.

Stefania: I think that's a really good context setting.

How have you architecturally solved the contextualization, or the desire to contextualize events?

Because you want so much more than just the event and when it's triggered; you want so much more information, the properties, the metadata, and all those things.

Laurie: That is a good question.

I would say that we are very liberal about capturing events and less greedy about capturing context.

We're not going to capture the full state of the universe every time an event fires, because one of the things that happens when you habitually overinstrument is that the context tends to be there.

If in a user's journey you're capturing data once every 10 seconds for everything that they're doing, then once a minute you can fire off an event that tells you, what browser were they using?

Where are they? What client? What language? All of that kind of stuff.

And the other 10 events in that period can be very, very simple.

They can be two bits of data each, because you already know the context, because you got it a minute ago and it's unlikely to have changed.

That might be hand-waving a little bit.

It's not always going to be that simple, but I believe if you are doing a reasonable job of like capturing data longitudinally, you can gather context longitudinally without having to capture like gigantic blobs of state every single time you capture an event.
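A minimal sketch of gathering context longitudinally at query time: only some events carry the browser, and the rest inherit the most recent value seen for that user. Table and column names are hypothetical, and the IGNORE NULLS syntax shown is Snowflake-style.

```sql
-- Carry the last non-null browser value forward onto the lean events.
select
    anonymous_id,
    occurred_at,
    event_name,
    last_value(browser) ignore nulls over (
        partition by anonymous_id
        order by occurred_at
        rows between unbounded preceding and current row
    ) as browser_context
from raw_events;
```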

Stefania: Yeah, exactly.

And then there's a question of whether people solve the contextualization on the client level, basically, or when you're triggering the event.

So maybe they store a global context in the client.

And so the developer only really has to add a one-liner, but they get the context and the payload is big maybe every time.

But at least you know that it's there and you know it's the right context.

And then there are cases where you solve it in sort of ingestion time, maybe you stitch some stuff together.

And then there are the cases where you actually have some downstream transformations to enrich the data.

Laurie: Yeah. I would say we ourselves tend to do the contextualization.

We do it at all the points, in different systems, but I'd say much more often we're doing it at the transformation stage than at capture or at ingestion.

Stefania: Yeah. So maintaining something like maybe a session, stitching together a bunch of events to create unique sessions and information about those sessions, and then stitching together events to create information about maybe a unique user in a particular day or something like that.

Laurie: Exactly.

We use Segment on both the front end and the backend to capture a lot of events and they do a pretty good job of making it possible to stitch together things like that.

There's a DBT plugin that does it, in fact. One of the many benefits of DBT is plugins for common integration cases.
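To show the kind of stitching being described, here is a minimal sessionization sketch in SQL: start a new session after 30 minutes of inactivity, then number sessions per user. The raw_events table, the 30-minute threshold, and the column names are hypothetical, and real packages handle more edge cases.

```sql
-- Mark a new session whenever the gap since the previous event
-- for the same anonymous_id exceeds 30 minutes, then number sessions.
with gaps as (
    select
        anonymous_id,
        occurred_at,
        case
            when lag(occurred_at) over (
                     partition by anonymous_id order by occurred_at
                 ) is null
              or datediff(
                     'minute',
                     lag(occurred_at) over (
                         partition by anonymous_id order by occurred_at
                     ),
                     occurred_at
                 ) > 30
            then 1 else 0
        end as is_new_session
    from raw_events
)

select
    anonymous_id,
    occurred_at,
    sum(is_new_session) over (
        partition by anonymous_id
        order by occurred_at
        rows between unbounded preceding and current row
    ) as session_number
from gaps;
```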

Stefania: Exactly. Critical. Talking about tools, can you share a little bit about the stack that you are currently working with?

Laurie: Sure. We recently, and by recently I mean this week, completed a migration to our new stack.

So our primary warehouse is Snowflake.

We use S3 as an intermediate store from a lot of systems.

So we have Kafka and various other message buses sending information to S3 a lot.

We have Airflow pulling stuff out of S3 and putting it into Snowflake.

Like I said, we have Segment. Segment writes directly to Snowflake.

And those are our main sources.

We have a couple of other ingestion sources like HubSpot and stuff like that.

Sitting on top of Snowflake, obviously we have DBT doing cleanup, transformation, aggregation, all of those good things.

And then on top of those, we have two systems.

We have Mode, which for a long time was our primary dashboarding and data visualization tool, and much more recently we have Transform, which is sort of carving out the chunk of stuff that we used to do in Mode that was displaying what I'd call everyday metrics.

You have a set of KPIs and financial metrics and stuff like that.

They have a single meaning. You refer to them all the time.

People want to slice and dice them in lots of various ways.

That is what Transform is for.

It's like take your core metrics and put them into a place where you can be sure you're always looking at the same number, and that will turn Mode into a place where we put more ad hoc analysis, more deep dives on specific questions, that kind of stuff, which is what it's really good at.

Stefania: Excellent. Thank you for sharing the stack.

I'm sure it's inspiring to a lot of people. And for something like schema management and data validation, what are you doing there?

Laurie: I would say that we are less mature there.

Schema validation in particular, you mean for like ingested events?

Stefania: Correct. For ingested events, sure, or anything downstream.

People do schema validation and data validation on so many different layers.

Laurie: Yeah.

For our schema validation, we are in a relatively constrained environment so we don't have problems of like poisoned data or gamed data because our data collection is mostly passive.

So we need to do light cleanup in terms of schema validation, but we don't have the kinds of validation problems that like an ad network would have.

It's just that that problem's pretty easy for us.

And in terms of data validation, it is largely a question of human review at the moment.

It is, we are comparing these two tables. Do these numbers make sense?

Does the stakeholder agree? That kind of stuff.
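As a sketch of what that human review can look like, here is a hypothetical reconciliation query that puts daily counts from two tables that should agree side by side so someone can eyeball the differences. Table names are made up for illustration.

```sql
-- Compare daily row counts between a raw source and the warehouse copy.
select
    coalesce(a.day, b.day)        as day,
    a.event_count                 as source_count,
    b.event_count                 as warehouse_count,
    a.event_count - b.event_count as diff
from (
    select date_trunc('day', occurred_at) as day, count(*) as event_count
    from raw_events
    group by 1
) a
full outer join (
    select date_trunc('day', occurred_at) as day, count(*) as event_count
    from warehouse_events
    group by 1
) b
    on a.day = b.day
order by 1;
```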

Stefania: Exactly.

So when you're planning a feature release, someone writes a spec for what should get instrumented somewhere in a spreadsheet.

And then someone uses their human eyes to compare the output of the event instrumentation. Is that right?

Laurie: That is the idea.

Stefania: Classic. Love it.

I wanted to talk about data trust also. "I don't trust this data," is such a common statement.

We have talked so much about social analytics and why that undermines data trust, but how do you think about why people don't trust data, and how do you solve it?

Laurie: I think the question of data trust is about getting the fundamentals right first.

If your data capture layer is flaky, if your pipelines break all the time, if you frequently have gaps or dropouts of data, then people will lose trust that the data is real.

And it's a common adage of technology that it takes months to build trust and hours to lose it.

Reliability there is absolutely key. You need to have a rock solid pipeline.

So people are always like, "I don't..."

You want it to be that your users never question whether or not the data is wrong because of a dropout.

You want that to be like a vanishingly rare event.

And then the other way that you build data trust I think is going back to what we were talking about earlier, if your analysts have domain knowledge, then they are going to avoid misinterpreting the data.

They're not going to make weird assumptions about the data and thus come up with strange answers that don't actually make sense because they will have very early on realized some implicit part of the question, like this person is asking about the growth in the last 10 months.

We should separate enterprise from self-serve revenue because those have very different patterns. What's implicit in that question is that the growth between those two businesses is very different. But if you weren't embedded in a domain, you wouldn't know that. If you hadn't thought about the problem, you wouldn't know that. So that's a big part of it.
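A minimal sketch of answering the growth question with that implicit split applied: report enterprise and self-serve revenue separately rather than as one blended number. The invoices table, plan_type column, and amount field are hypothetical.

```sql
-- Monthly revenue, split by business segment instead of blended.
select
    date_trunc('month', invoice_date) as month,
    case when plan_type = 'enterprise' then 'enterprise'
         else 'self_serve' end        as segment,
    sum(amount)                       as revenue
from invoices
group by 1, 2
order by 1, 2;
```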

And I think a very difficult and extremely important part about it is literally vocabulary.

You should have very clear definitions of the nouns and the verbs that you're using and what it is that they mean.

The word customer is a nightmare, because we have customers who are individual people.

We have customers who are corporations.

The websites that we host have customers that are separate from those.

Customer needs to have a very clear definition. "What is an account?" is one that we run into all the time.

It's like, is an account what sales means when it talks about an account? Is an account a company, or is an account the thing in the database that is called an account, or is an account a team?

Which of these do you mean? You have to really be very particular about what the nouns and verbs mean.

And when somebody uses a word, you should be like, "Do you mean [definition of term]?"

And there'll be like, "No, I was using that term very loosely. Here's what I actually wanted."

That's how you build trust, is by making sure that you give them the information they wanted, which is different from answering the question they asked.

Stefania: Yeah.

So we're talking about data quality, and we're talking about how data literacy is super important here, both among the data professionals, having literacy for their domain, and then for the rest of the company, having some literacy for what the data could potentially mean.

We are close to time here.

Laurie: I hadn't realized how many opinions about data I had until I started talking, I'll be honest.

I'm like, there's three or four blog posts in here that I should have written already.

Stefania: Exactly. Yeah. There's a lot to talk about here.

I know you have one more good one.

How to build a team, who to hire, and what are the common mistakes when people are building a data team?

Laurie: Yeah. I talked about this in that blog post that we mentioned at the beginning.

I interviewed at a bunch of companies when I was looking for my next role before I joined Netlify, and a pattern that I initially thought was just a strange decision that one company had made rapidly became clear to me as a thing that people do all the time: a startup that realizes it has a lot of questions about its data says, okay, let's start a data team.

And they go out and they spend a lot of money and they hire a data scientist, somebody whose title is data scientist.

Often that person is fresh out of their PhD in data science.

And it is almost universally a disaster because that is not the problem that they have.

A data scientist is an academic who goes away for three months and writes a white paper using toy data or very clean artificial data to like break new ground in how ML is used.

They're like pushing the boundaries of the subject.

But the problem that they have is like, well, all of our source logs are piling up in a bucket somewhere and we need them in our database.

And like, that's the actual problem you need to solve first.

And the data scientist is not in any way equipped to do that.

That doesn't mean that they're dumb.

And that's really where PhDs in particular get into trouble, because PhDs are very smart.

So they're like, "Well, I can figure that out."

But it's not their expertise. It's not the thing that they should be spending time doing.

So you have these PhDs figuring out, "What the hell is bash?" for the first time and like writing bash scripts because they don't have any other contexts and cobbling together data pipelines and getting it wrong and spending months getting it wrong, when what they wanted was they want a data team.

Like I was talking about, they want an infrastructure engineer.

They need a data warehouse in place. They need instrumentation done.

They need a bunch of core analysts to answer simple questions.

They need some metrics configured.

And then like when you've got five or six people on your data team, then maybe consider hiring a data scientist to like point some ML at a very specific question that you have, where you have enough data that ML makes sense.

But even then, you probably don't, most companies don't.

My firm conviction is that most companies just will never have the volume of data where using ML to answer the question is going to be worth it.

Stefania: Yeah. I will add to that, most companies have just so many other important problems to solve with their data before they start picking up on machine learning for anything.

Laurie: Exactly.

Stefania: Answer the basic question first and solve the basic things first.

I could not agree more with this. So what is the job title that a company should hire for their first role?

Laurie: Just a data engineer, or don't even hire one.

Take one of your engineers who cares about data and tell them, "Your job now is to get it into a database."

"You already know where the bodies are buried. You already know our code base. You know how to instrument."

That's your first hire.

Stefania: I recently heard this concept that was a purple person.

I don't remember which was which, but there was a blue person that knows the business and the domain.

And then there's a red person that knows infrastructure.

And we need purple people who speak business and tech and can translate between the two.

So I wonder whether that sort of falls into this new trending title called analytics engineer.

Laurie: We've had several of those.

Stefania: Yeah.

And that almost didn't exist just a few years ago, I would say, maybe occasionally here and there.

But this is now a thing I think that most companies are hiring for, and it's someone who munges raw data and gets it into something that's useful, something that is in a self-serve analytics state potentially, or at least a consumable state.

And that might mean different things for different audiences, but it's someone who speaks business and tech. I think that's interesting.

Laurie: Yeah. It's definitely a role that has emerged recently. I agree.

Stefania: Yeah, super interesting to see the shift.

So I guess to wrap things up, I would love to hear you talk about a couple of things first.

What is the first thing teams should do to get their analytics right?

Laurie: If I had to pick one thing, I would say that you should make it impossible to drop data on the floor.

If you are reliably capturing the data and storing it somewhere as fast as you possibly can, and you are making it impossible that that data can fail to be stored, then everything else you can fix later.

If your pipeline is bad, if your question was dumb, if your ingestion model was dropping every fifth row, like, whatever weird thing it is that you've got wrong, if your original source data is somewhere rock solid, then you can figure out everything else that you got wrong.

So that's the one thing.

Like we talked about in building trust, you should never have a hole in the data that makes it impossible to answer the question.

Stefania: Yeah, I like that. And that is a multi-layered thing that can fail.

It can fail on so many different places.

Laurie: It's very much easier said than done.

Stefania: Yeah. You're talking here about anywhere from the instrumentation layer to when things get stored and some database can't store them because they didn't fit some format or whatever.

Laurie: Absolutely.

I think one of the most common ways I see this is you've got some kind of streaming endpoint that's sending data from one system to another.

And the thing that does the ingestion falls down and the data backs up and it overflows a buffer somewhere and you've lost data until the ingestion gets back online.

Get those out of your system.

Whatever state the data was in, just throw it in S3 as fast as you can and then figure it out later.

Stefania: Yeah. Exactly.

And then what is one thing that you wish more people knew about data and product?

Laurie: About the interface between data and products in particular?

I mentioned it in passing already, I think the most important thing is that data can't tell you what to build.

Data cannot create product. Data can validate questions.

The people who can tell you what to build next are your customers.

As a data team, we have a very productive relationship with our user research team and they are the ones who go to our users and say, "So, what is it you're doing? And tell me about your day."

And have these very fluffy, qualitative conversations about a user's habits and preferences.

And then they'll come back and say, "Well, I've had five conversations this week and three people were doing this. Is everyone doing that?"

And then we can go into the data and we can answer that question.

We can say yes, 60% of our users are in fact doing this thing, and that's good or that's bad or that's indifferent.

But we would never have known to answer that question without the deep qualitative sample data that a user research person gets.

It can work the other way, right?

The data team can say, "We've seen this weird spike in this particular user set. Can you go and ask them what they're doing? Because we can't figure it out."

And then the user research team can go off on that and find one of those people that you found and say, "What were you doing when you did that?"

And that answers the question.

User research and data, the qualitative and quantitative teams, have to work hand-in-hand and are very productive when they do so.

Stefania: Love it. And I think those are great last words.

Thank you so much for joining us on The Right Track, Laurie.

Laurie: Thank you. This has been enormous fun.