Ep. #7, Data Translators with Benn Stancil of Mode
In episode 7 of The Right Track, Stef speaks with Benn Stancil of Mode. They discuss the value of analytics for the end user, the emerging role of analytics engineer, building and maintaining data trust, and tactics for optimizing data teams.
Benn Stancil is Co-Founder and Chief Analytics Officer at Mode.
Transcript
Stefania Olafsdottir: Hi, Benn. Welcome to The Right Track.
Benn Stancil: Thanks. Great to be here.
Stefania: Could you kick us off by telling us a little bit about who you are, what you do, and how you got there?
Benn: Yeah, for sure. So I'm Benn. I am one of the founders of a company called Mode.
So Mode builds a product for data analysts and data scientists to be able to create analysis and quickly share it out with their coworkers.
So I'm, as I said, one of Mode's founders, and my title is Mode's chief analyst, so I spend most of my time doing a handful of things.
One is working actually with our data team internally, so how we think about doing data ourselves.
So it's sort of a very meta job of doing the analysis to build a tool to help people do analysis.
But also do a lot of work with folks in the community to kind of understand what it is that analysts are trying to do, the problems they have, where the space is going, things like that.
We want to be receptive to the problems that our customers have, the problems that sort of the market has, where we think that's all going, and making sure we're building the right stuff for folks like that.
And it sort of goes beyond, I think, just the product itself.
Obviously, a lot of people who are in data, it's a very sort of community oriented industry, and credit to a number of folks who've really built that.
And so we want to make sure we're helpful in ways that go beyond just saying, "Hey, here's a product."
A lot of people are trying to figure out what do we do with data?
What's the right way of making it valuable? That kind of stuff.
And we want to try to be helpful in those ways as well.
Stefania: That is awesome.
So very diverse role, both internal data culture things, but also just giving back to the community, helping your customers and the community be better with data.
Benn: Yeah. Giving back is generous. I'm sure that some people would say that mostly--
Not sure how valuable the contributions are, but doing our best.
Stefania: I can at least speak for myself that I love your blog posts.
You are a great writer and you have a wonderful writing style, a mix of opinions with sort of poetic sprinkles on there.
And so I appreciate everything that you do.
Benn: Awesome. Well, thank you. I appreciate that.
Stefania: Benn, so I mean you are chief analyst. Did you say that?
Benn: Mm-hmm. Correct.
Stefania: And go by chief analyst at Mode. How did you get there?
Benn: How did I get to that title? Or how did we get to Mode?
Stefania: Both.
Benn: Well, the title's the easy one.
When you start a company, you can make up whatever title you want.
And chief analyst isn't really a title you'll find in many places.
But a little bit of a funny background for what I first started doing.
We started Mode; it was three of us.
So our CEO, who was the more personable version of Mode's founder set, and the one who was out in the early days talking to investors, talking to potential customers.
He was the person who made sense as kind of the face of Mode.
And then we had a founding engineer who started building the product and was the person who was basically responsible for making sure we actually had something to show people.
And me, as a person with a background in analysis, didn't have a whole lot to do.
So we made up a title and made up a job.
And my job prior to having customers was mostly kind of what you alluded to, which was writing blog posts.
Though at that time, it was mostly writing blog posts about anything that was related to data, not necessarily related to data technology.
So if you actually go back, the very first blog post on Mode's site, which is still there, is a blog post on Miley Cyrus and the VMAs.
So that was my first six months at Mode was writing blog posts about pop culture.
Eventually, it evolved into doing a lot more customer-facing work, and doing things like customer support, customer success, repping the product organization, doing our own internal analysis once we actually had data to analyze, a lot of things like that.
But in the early days, it was writing blog posts on the internet.
But for where Mode came from, so the background for Mode is the three of us all worked together at a company called Yammer, which was bought by Microsoft in 2012.
We were all in the data team.
And Yammer was sort of one of the early cloud enterprise companies.
It was early in this consumerization of IT type of wave.
The product itself was very similar to say something like Slack.
It was actually more or less a clone of Facebook for work.
This was prior to Facebook for Work existing, but it was kind of the idea of, like, not Facebook for Work™, but Facebook, for work.
But the way that the sort of software development happened, because it was a cloud product and things like that, was more oriented towards the way that consumer products got built.
And particularly, we tried to model a lot of what we did after things like gaming, social gaming, which was big at the time and very data-oriented: how are our users using products?
What are they doing? How can we make better product because of that? That kind of stuff.
So we worked on the data team that was responsible for helping people in the organization make decisions based on that.
And to do that, we actually had to build a number of internal tools to help ourselves do our jobs. The primary data tools at the time, there were a lot of sort of code-forward tools, but they were the old-school stuff like SAS, or Stata, or RStudio, which were good for us as analysts, but weren't particularly good for any of the business users we were actually trying to work with.
And if we wanted to do something that was designed for those business users, Tableau was kind of the state-of-the-art at that point.
But it was constraining to us, as analysts and data scientists who were trying to ask kind of richer questions than you could with something like Tableau.
And so we ended up building a set of internal tools to let us basically quickly work with data in the data warehouse, do analysis on top of it, build visualizations, and share it out to other people in ways that they could reuse it.
That internal tool looked very similar to a sort of basic, ugly version of Mode.
It was like, at its core, the kind of same idea of write a SQL query, get a chart, send it over a URL.
At the time, when we were building it, we were kind of like, "Hey, we're special. We're a data team that thinks about these things in different ways."
That kind of stuff. Over time, we started to realize that the problems we were solving for ourselves were actually problems that a lot of other people had, that a lot of companies around Silicon Valley had built similar tools.
So Facebook had a version, Airbnb had a version, Spotify had a version, Pinterest had a version, all these things, like these kind of online query tools with visualizations.
We also saw that the product was getting adopted across Microsoft once we got acquired, and people realized, "Hey, this is a really quick way to share data and insight."
So once we kind of started to realize that, we said, "Hey, if this is a problem that all these companies are building their own versions of, then maybe there's a market for us to build a product that really solves this problem."
And at its core, that problem was there's analysts and data scientists whose job isn't to just build basic reporting, and their job isn't to sit in a room and build complicated models all day.
Their job is to help business stakeholders make decisions by asking complicated questions and answering them with the kind of technical tools that are required to do that.
And so we said, "Okay let's build some tools that enable that workflow for analysts."
And so that was kind of the first version of Mode was analysts need to answer questions, share them quickly.
How do we help them do that? And over time, it's grown to be able to do a lot more than that, but that thesis is still kind of the core of what Mode is.
Stefania: I love that. Thank you for sharing that.
And it actually sparked this one concept that I was recently introduced to by a member of the dbt community.
She's a community manager at dbt. She calls it Purple People.
And it's this concept where it stitches-- So there are red people and there are blue people.
And they speak technology and business. And then you need the purple people that sort of translate between those two.
And it sounds like you're sort of describing that. It's like you need a bridge between those sort of two roles, I would say.
Benn: Yeah, for sure.
And I think if you read the Harvard Business Review stuff, they have all these sort of fantasy terms, like data translators or whatever.
But yeah, that is, in effect, where Mode sits.
And you sort of see this in the tool, where it kind of has two sides: a technical side and a non-technical, or sort of more consumable, side. Ultimately, that's what data, to me, is.
It's not really a technical thing, necessarily, but there are often some technical barriers around actually being able to access it and interpret it. You have to have an understanding of how to actually do that with data, plus an understanding of the business problem. And there are some people that can do both.
There are some of these purple people that can straddle that line.
But in reality, what you really need is people who understand the data, and then people who really understand the business.
The more you understand the business, the better questions you're going to ask, the better you're going to be able to see something and be like, "Hey, this doesn't look right."
And so you have to actually have a way to bring those people together, that it can't just be an analyst doing something and throwing it over a wall anymore than you can have a data scientist build something and say, "Hey, here it is. Here's my conclusions. We're done."
You need to have that collaboration because it's the marketer, it's the salesperson, it's the CEO, it's the whoever else that understands this is what this problem actually is.
"This is how I would interpret this data. This is what we would expect to see. This is what looked weird."
All of those sorts of things you're not going to get unless you live in that world. And analysts for the most part aren't able to live in that world.
Stefania: Yeah, exactly.
I have a section on The Right Track where I love talking about org structures and sort of where the data team should sit.
So I definitely look forward to diving a little bit deeper into that later in the episode.
And so basically, you started on this journey with a passion for data, with analytics.
You were working on analytics, and then you scratched your own itch building internally, and now we have Mode.
Benn: More or less. We built a tool to solve our own problems.
Very much so at Yammer was just like, "This is the way we want to do our jobs."
And then Mode came around as a way to say, "Hey, let's solve our own problems but for other people and see how it goes."
And that's more or less the path that we've been on for a number of years now at this point.
Stefania: Yeah, super exciting.
And a lot of people are strong Mode advocates. I am supposing that that is very rewarding.
Benn: We don't get tired of hearing that. That's always nice to hear.
Stefania: Excellent. Thank you for sharing your backstory.
To kick us off into sort of the data world, thinking a little bit in depth about how to use data, why would we use data, can you tell us an inspiring data story?
And a frustrating data story?
Benn: So this is like a relatively small story in the scheme of things for like an inspiring data story.
I think you could draw from a bunch of things where it's like, Instagram was about to fail, and they looked at their data and they figured out this thing, and they pivoted to being Instagram from what was originally, I think, some kind of sharing app or something like that.
And they were like sharing things.
Great, good for them. That's cool.
The story though for me, a Mode customer that I've always really liked, partly because it's just like a--
It's the sort of thing that's like this unexpected discovery that ends up leading to something that's pretty cool.
So there's a company called The Knot.
So The Knot is a website that a lot of people don't necessarily know about, but have probably used.
They host wedding websites. So if you have a friend who's getting married, they'd have a wedding website with like, "Here's pictures and directions and registries," and stuff like that.
It's like BeckyLovesJosh.com or whatever.
Those are often hosted on The Knot.
And so this guy was an analyst, or running a data team, at The Knot, and they had just launched a mobile app that was basically the same thing for sort of managing your wedding.
And when they first launched the app, it wasn't particularly popular.
They were struggling to get much traction with it.
It wasn't something that was taking off.
And so they had put a bunch of money into building it, and they were trying to figure out what was going on.
And at some point, this analyst started playing around with the product data to basically see, "Okay, let's just see what people are doing. Let's try to figure out what's going on in this thing."
And he discovered this pattern where most people weren't using the app very much.
But there was this kind of small segment of people that were using the app all the time.
But they were only using one page on it.
And so he ended up building this visualization that basically showed people's paths through the app, and you can see this one like spike on the side that's a kind of small sliver, but it's everybody just going like, "This page, this page, this page, this page, this page."
And it's the same page every single time.
And so he started digging into it to see what it was, and what they discovered was the page that they were going to was a page that was a countdown until the wedding.
So it would be like 27 days to the wedding, 26 days to the wedding, whatever.
And what they realized was that people were going to that page, taking a screenshot of it, and then posting it on Instagram or somewhere else.
And so this was like a kind of afterthought of a thing I think they added to the app.
They're like, "Sure, whatever. We'll add a countdown. It's easy to build. No big deal. We'll add all these other features that are much more important for managing your guest list, or updating pictures, or whatever."
But they realized this particular thing was the thing that was really sticky.
And so they ended up kind of adjusting the entire app to lean into that, where it was a much more prominent feature on that page where it was like it looked much nicer, where the screenshot would be much nicer, you can more easily post it like directly to Instagram, stuff like that.
It ended up like turning around the app in a lot of ways by orienting it around this kind of one latent behavior that they had never thought about, and certainly didn't build the app for.
But by actually following the way that people were using it, they could figure out a way to make this app much more successful.
And I think it's a cool story because it's obviously, again, it's not a huge thing.
It's not some world-changing thing, but it's the kind of thing that just by kind of being curious about how people are using it, by asking questions you may not necessarily have thought of initially, you can find all of these interesting discoveries, some of which may be just kind of interesting curiosities, but some of which can also be the kinds of things that turn around an entire project or an entire business.
Stefania: That is a really great story.
Do you know or have any insights, maybe you can't even share it, on like the results of the change?
Benn: So I can share a blog post about this actually.
So they wrote a blog post about this.
It was part of a bigger project where they had implemented Segment and Mode to look at this product usage data.
Prior to that, they hadn't really used a whole lot.
I think they had maybe something like Omniture before, which wouldn't have allowed them to track some of the stuff that they were finding.
And so they wrote a blog post about this on Segment's blog that I can try to find.
But I don't remember specifically the lift was X or whatever.
But it was the thing, I think, where the app went from being a failed project to a success because of this being able to lean into this behavior.
Stefania: And totally, I would have thought it, but this probably both impacted the stickiness and the retention of the app and conversion to using those parts of the app, and then the virality.
I can imagine making it so much easier to share stuff directly from this app.
That's amazing. It actually reminds me of a QuizUp story.
So QuizUp is a mobile game.
I used to be the founding analyst there, and people would compete in really specifically niche or very generic trivia categories.
And we had thousands of categories, all the way from general knowledge to Canadian leaves.
And people would be able to be best in the world in specific things.
And we saw people were-- We learned exactly by looking at the data that this is the thing that really helped virality.
And so we increased the number of titles that you could gain, and how frequently you could gain them, and that was really impactful in virality of the app.
Benn: Yeah. It's interesting on these products how much of this kind of stuff seems to emerge.
So many companies have stories like this.
Again, some of them are meaningful and some of them are less so, but about we built a product this way.
We thought it was going to be used this way. Turns out, everybody just wanted to do this.
And all these little things that as you're designing products, you can kind of never predict exactly how people will end up using it and kind of the patterns that end up emerging, regardless of how much you design it to be like, "Do this."
It's like people are going to kind of find things that they like to do and do it the way they want to do it.
Stefania: Exactly. And what I love about this type of story is that journey that you're describing, it's so important to have--
Like good product teams will have a mix of qualitative and quantitative to make these discoveries.
And what I love about this story is how quantitative led them to this path, which is really interesting.
Often, it's the other way around. But quantitative definitely can help you discover things like this.
Great story. Thank you for sharing, Benn.
Benn: For sure.
Stefania: Can you talk a little bit about the frustrations?
Some frustrating example of a data bug, or around data?
Benn: So I don't have like a specific story.
I think that, specifically with like the product data, the thing to me that is the most frustrating is when you're missing something. It's like you're missing 10% of your data, and it's probably random, but probably not quite.
Like the almost but not quite random errors are the things that are--
It ruins everything basically, because you're kind of tempted to say, "Well, okay, we could probably just use the data that we have. But it's possible that the reason that we don't have some of the stuff is because it was this particular type people."
So for instance, this happens a lot on web tracking data that we've seen, where say you have something like Segment instrumented.
You're doing a bunch of front end logging.
Okay, you're going to miss people who have like ad blockers.
You can kind of sort of assume that people who use ad blockers are random, but if you're a product like Mode, which is a kind of technical product that tilts towards people that are analysts and engineers, there's probably some bias in who actually uses an ad blocker versus who doesn't.
Like I suspect that sort of the more technical types of folks are a little bit more inclined to use ad blockers.
I don't know that, but they're the people who are probably a little bit more willing to fuss around with JavaScript consoles and say, "I don't want this thing. And I want this thing," and that sort of stuff.
So what do you do with that? Now you're tracking 90% of your usage. You're missing 10%.
There is some bias in it that you can't quite figure out, that you're tempted to just say, "Well, let's just treat it like a random sample."
But you can't really. It's a hard problem to solve, but it's also something where it's not bad enough that you have to throw everything out.
But it's not really good enough that you can always trust it.
And so I don't really know what you do.
You try to forget about it, or otherwise track things; you try to figure out other ways to confirm the things that you've seen and say, "Okay, let's try this from a different angle or whatever."
But it's this not quite bad enough to be really bad, but not quite good enough to sort of just trust without too much question.
Stefania: Yeah. That's a really good point.
And particularly when you have specific audiences that are more likely to have ad blockers. We have a couple of customers like that at Avo, and I also recently had an analyst from Netlify on, which is also very developer-focused, basically web developers, and they're going to be like, "Nope, don't track me. Nope."
But one thing around that I find really interesting, which is-- And so the discussion on like why do people not want to be tracked?
I feel like sort of the world splits into a few groups.
It's the people that just want Google to know everything about you so that they have faster search results and all those things.
Or they want to have relevant ads because they find good things that way.
And then there is the other extreme end, which is just like potentially people who grew up in countries where government monitoring was a real threat, and that's sort of intrinsic in the culture of the country.
And so that would be at the other end of the spectrum, where you might even have pushback against going cashless, because you always then have a paper trail of everything that you've been doing as a citizen.
And so I find it so interesting to think about, I think GDPR is a huge, huge step forward in this direction.
But what are thoughts on why do people not want to be tracked?
Benn: Mm-hmm (affirmative). I think there's a couple reasons for it.
I think that most people's reasons for it aren't the reasons that they should be worried about it.
I think what you're saying is probably right, that most people's reasons for it is some sense of this is invasive, people are going to do nefarious stuff with it, the government's going to put trackers in my vaccine, all that sort of stuff.
And I think, sure, in some places that is probably something to be worried about. And the NSA, for instance, is probably doing more tracking than they should.
But I think for the most part, the nefariousness of companies that are doing this sort of tracking is kind of overstated.
The bigger problem, and the problem I don't think that people are as aware of is just the clumsiness of it.
It's less to me about like, "Hey, somebody's going to do this and they're going to figure out all this stuff about me, and they're going to look up everywhere I've been, and someone's watching me."
It's like there is too much data being collected for you to be watched, or for anybody really to know what to do with it. But in practice, a lot of data's just getting logged and dumped into warehouses in ways that nobody can actually access it. Or everybody can access it, but nobody knows exactly what's in it.
It's not well stored and sort of precisely monitored.
I think there is some concern about like, "Oh, this is a bunch of really tight technologies that are trying to figure out exactly everything I'm doing."
In practice, it's sort of the opposite.
It's just like a giant mess, and the danger to me is stuff gets exposed because it's all a giant mess that people lose, not people are trying to do something that's particularly nefarious.
I've always wondered if you are an aerospace engineer, if when you're on an airplane, if you're like, "It is remarkable that this thing flies."
Or if you're like, "Oh, of course it flies. It makes total sense."
There's no way this thing could ever fall out of the sky.
But if you're an engineer or a data person, most of the time when you see data stuff, or technology, your response is kind of, "It's remarkable that this thing works."
This is glued together so much more with like rubber bands and Popsicle sticks than anybody realizes.
And I think that's a lot of the danger of the tracking infrastructure, sort of at a macro level, is the whole thing is pretty flimsy, and is not well-governed, in a macro sense, not in sort of the data governance at a company sense, in a way that is just kind of a mess.
And I think that potentially has real problems, but it's not a problem of yeah, somebody knows exactly how to do all this, and it's all so precisely managed that people are watching you.
It's like for the most part, people don't know how to watch you.
Somebody is watching you, and it's recorded on a machine somewhere.
But in general, nobody can actually figure out what it means because it's all tracked poorly, and they lose it all the time.
And that's one of the reasons that Avo exists, is companies are watching you, but they don't know what to do with it, or they can't interpret it.
So that is its own problem, but it's a different problem, I think, than the typical nefarious, someone spying on my type of problem.
Stefania: Yeah, I agree with that.
And I mean, the story that you just shared with The Knot I think sheds a light on the value of analytics for the end user.
And that's something that I am personally passionate about: at a certain maturity stage at a company, developers, product developers, product engineers, will see the sort of, quote/unquote, tracking instrumentation task as a task to be done that has no value for the end user.
But it is remarkable what happens to the data culture of a company when developers start seeing analytics as a part of their process to build a good product.
Benn: Yeah. And users too.
And I think that's something we'll eventually figure out a way to sort of do it safely.
But take TikTok for instance. I mean, if TikTok didn't track what you did, nobody would use it.
Because the whole way that TikTok works is basically by understanding what you do and just feeding you more of that.
Now, that is independent of sort of the China question, which I know very little about.
Obviously, that's somewhat of a different thing.
And it's also not to say that TikTok as a whole is good for society or whatever.
But as a user of it, if they're like, "We're actually going to turn off tracking."
People would be like, "Well, now this app is trash."
And so you find yourself, I think, in a position where it's like, "Well, don't track me, but I want the benefits of it."
I think there probably will be, at some point, somebody who will figure out ways to do that, such that they can stand on somewhat of a soapbox and say, "Hey, we are tracking you in a way where we don't actually know all these things about you."
Like this is the equivalent of selling the-- Like Jessica Alba had her household products that were made with clean stuff or whatever, that people sort of buy because they feel like it's morally better to do it.
I can see technology and data tracking having something of that sort where it's like, "Don't worry, you're not going to have to sacrifice anything in the product, but also you'll be able to feel good using this product because we have all these things in place that make sure you're safe."
I mean, the counterexample to that is like nobody seemed to want to use DuckDuckGo, but maybe we'll get there.
Stefania: Yeah, totally. Exactly.
Yeah, we jotted some things down on what are the most common ways that your analytics break.
Do you have any thoughts on that?
Benn: So it's usually not technology.
All these complaints about tracking aside, like that's actually the easier problem to solve.
And in a lot of cases, those things aren't actually broken so much as you have to accept a certain level of imperfection.
The ways that things more often break, for us and, it seems like, for the folks that we talk to, are kind of unexpected edge cases.
And a lot of times, unexpected edge cases that are legitimate, but things that you just don't know how to deal with.
So particularly on the business side, I think product is actually okay.
Like a lot of product analytics, because it's all sort of machine-generated.
Crazy stuff happens, but there's a certain level of like it's only going to be so crazy.
When you're dealing with stuff like contract data, or human-entered data, or data that comes from Salesforce, people do all sorts of things.
Like you'll sign a contract that looks unlike any other contract you've ever signed. Usually they're one-year contracts, and this is like a three-year contract with this extra kicker after a year-and-a-half, that has some built-in contract structure so that it's automatically going to go up, but then if they don't hit this certain usage threshold, it'll actually churn.
Like you could write all sorts of things into that, and now you've got to figure out how do we compute our revenue based on this thing, and you have to somehow translate a hard enough to understand English version of that contract into business logic that your analytic system is going to understand.
And that happens all the time where there's things like that where you're trying to translate confusing concepts and stuff into some sort of, "Okay, this is the way the computer can understand it."
And that translation process is often messy.
There are these YouTube videos of elementary school teachers teaching kids how to program a computer, and they'll say, "Okay, I'm going to make a peanut butter and jelly sandwich. How do I make it?"
And they'll say like, "Put the peanut butter on the sandwich."
And so they take the kind of jar of peanut butter and just put it directly on the bread.
And it's like, "Oh no. Open the jar of peanut butter and put the knife in."
And they put the knife in the wrong way.
And so it's like you have to be so expressive of exactly what these things are, and a lot of the kind of business logic that you're actually trying to express is hard to express that way.
It's hard to express that precisely.
And so when you're trying to actually build analytics on those sorts of things, they're very fragile to, "Oh, we're not actually eating peanut butter out of a jar. Now we bought peanut butter out of a--"
I don't know what else peanut butter comes in.
Out of like one of those Go-Gurt tubes, or whatever the trend is; someone can tell us what it is.
Or we don't have a knife anymore, we've now just got a spoon.
Suddenly, all of these things start to get weird. And the things that you thought covered it don't work.
And so that introduces all sorts of things that are just, like, fragile.
Not fragile because you didn't build a good system, or because your infrastructure's bad, but fragile because you're trying to translate complicated concepts into code effectively, and those complicated concepts are things that are kind of constantly shifting.
Stefania: Yeah. I think that's a really spot-on thing.
So it sort of reminds me of a discussion that I have very commonly, which is the concept of whether you should over-instrument everything, just log everything that happens in the app, and then try to figure out afterwards how you stitch those things into information.
Or whether you should try to be deliberate upfront and design your data structures, and design your metrics based on where the information is coming from, which system it's coming from, what's available.
So I think what you're sort of touching on is a personal passion of mine, which is data design is a really important thing to get right when you are, for example, instrumenting analytics that you're going to be sending into a database, and then building some insights on top of that.
Benn: I agree with that with a caveat, that it's important to design the system, but I think a lot of times that can get interpreted into designing everything.
So here is the way that we're going to design our event instrumentation.
Here is the system we're going to use for it, to then turning that into, "Okay, now let's write down everything we're going to track and let's have the perfect sort of tracking plan for everything, and then we'll start to do stuff with it."
I think you need to have a flexible system, but it's often better to say, "Okay, now let's start putting stuff in and seeing what happens."
You need a foundation to work off of, but you shouldn't build the entire thing before you start to live in the house.
This analogy makes no sense, but you get the idea.
Stefania: I totally, I agree with that. Yes. Start small is basically-
Benn: Yeah.
Stefania: Yeah.
Benn: And there's a habit, particularly in big companies that are doing this sort of digital transformation thing.
They're used to things taking a long time. This is going to be a three-year project.
The first year is us just doing research. The second year is us writing a plan. The third year is us implementing everything.
After year three, we're going to have everything perfect and everything's going to be super valuable.
And it never works there. You lose momentum on the project.
Executives get tired of trying to sponsor the thing because it hasn't delivered anything for two years.
The business changes over the course of that time, so your perfect tracking plan from two years ago now doesn't make any sense, because you actually deprecated the product and replaced it with something else.
And so it's just like run the thread end to end as fast as you can.
And ideally, think about how do we do this in a way that's not going to lock us into one very precise way of doing this.
But run the thread and then start to build on top of it instead of this let's build this very big kind of pyramid that takes us forever to get to.
Stefania: I could not agree more with this.
And I think I would love to maybe dive a little bit deeper into this later in the episode for recommendations for how teams should get started with their analytics.
Benn: Cool.
Stefania: But I couldn't agree more. Just start small is such an important one.
Run it end to end, absolutely. And thank you for sharing frustrating and inspiring data stories, set the stage a little bit.
I would love to move a little bit from there into thinking about how the industry has been changing.
In your opinion, how has the industry changed in the last two years, or if you want to go further back, whichever?
Benn: Yeah, I mean, I think the big macro shift is things moving to the cloud, that data tooling from 10 years ago, 15 years ago, was a bunch of desktop stuff, it was a bunch of on-prem stuff, it was you having to host your Oracle databases and things like that.
The shift to moving things to the cloud and cloud warehouses, cloud data tooling, cloud ETL tools, all that sort of stuff, had a number of consequences.
It has a consequence of making things way more accessible, both within organizations and across them.
So it used to be that the only people who really could do analytics beyond just putting stuff into Excel were people who could afford a million-dollar Teradata installation, and/or could afford Oracle, or SAP, or a whole bunch of infrastructure to do this, plus the people required to manage that infrastructure, plus the people required to actually make any use of the data.
And so the investment at that point is huge.
And so you're only trying to do really important stuff with it.
You have to justify that investment some way. If you're a small company, you can't really do anything with it.
The shift to the cloud made all that stuff way cheaper. It made it cheaper to store data.
It made it cheaper to actually do anything with it.
So now, companies can-- Like any company can spin this stuff up in really a matter of hours, and for almost free.
There's a lot of free tools out there, and tools like Redshift and BigQuery are relatively inexpensive at certain scales.
So the big shift, really I think the big fundamental thing is this is now way more accessible to a lot of people.
It's not just the very big companies that have whole armies of people who are responsible for building this.
It's like this is now a kind of way to run a business.
Stefania: That's the 10 year shift maybe.
Benn: Yeah. Yeah. It's very much over kind of the long term.
The shorter term thing, I think, is the modern data stack, and there's a lot of conversation about what that means, and everybody kind of has roughly the same ideas.
Part of it is cloud stuff. To me, the bigger shift, the kind of architectural change that this has brought about, is we have shifted from being a vertically oriented stack to a horizontally oriented one. The way that a lot of product teams, for instance, used to do product analytics was with tools like Mixpanel, or Google Analytics, or Amplitude.
Sales teams would do things in Salesforce or whatever CRM they had.
Marketing teams would use tools like HubSpot or Marketo.
There were BI tools, but BI tools were kind of monolithic in the sense that they would be responsible for ingestion, for data storage, for visualization.
If you were using a tool like Qlik, Qlik is kind of the full stack of everything.
We have turned that on its side.
So now, tools are basically solving a horizontal problem across the entire business, but solving just one part of it.
So there's ETL tools that are ingesting data from everywhere, but they're ingesting it across the entire business.
So it's like Fivetran, or Stitch, or whoever else is reading data from Salesforce, and from Marketo, and from Zendesk, and from Intercom, and from Asana, and from all of these different places, and GitHub, and like putting it into a centralized place.
You have one centralized warehouse, or data lake, or whatever you want to call it, that's sort of the centralized place of storing data.
You have single transformation tools that are kind of unified sources of governance.
Avo, in some ways, is a representation of that.
It's obviously more product oriented, but it represents a kind of, "This is your tracking plan governance," essentially, that can be not just across--
"We're not just tracking the mobile app, or we're not just tracking like this bigger product, we're tracking the whole thing."
Analytics tools tend to serve everything, so Mode is a part of that.
The other analytics and BI tools tend to be very horizontal.
There's now monitoring and observability tools that tend to be horizontal. So that does a number of things.
One of the big changes is it means that people can pick and choose best-in-class tools.
So instead of us having to say, "Okay, well we choose Amplitude, or we choose Qlik. And we have to use one or the other."
You can use a tool that's best-in-class for tracking.
You can use a tool that has the best-in-class storage. So it allows people to build much better stacks, basically.
The other part of that I think is a big change is it lets the business actually work together.
Like data becomes less departmentalized and more sort of cross-business.
So if we're trying to answer questions about how the sales team is doing, it doesn't just require sales data. It asks, like, "Okay, well how is the sales team doing? How are the prospects the sales team is talking to doing? How are they using the product?"
"What kind of marketing engagement do they have? Are they interacting with our support team?"
You try to answer questions that are these very sort of cross-departmental questions, rather than having them be siloed in their functional unit.
So I think it's a much sort of generally better way of working.
It's a little bit more complex because you don't have one tool to rule them all.
But I think that's a reflection of the complexity of the system that now exists in all the things we're trying to do with data.
Stefania: Yeah. I totally-- I think that's a really good summary, both the sort of longer term and the shorter term.
I think this is a really good segue into talking also about how org structures have changed in this aspect and what the role of the data person, or the data team, or the BI team or something, how that plays in there.
And I think along with the transition from vertical to horizontal, obviously dbt is on a fast rise right now, and also the role--
There's a new role in town called analytics engineer. Have you noticed that role?
Benn: I have. I have heard of it.
Stefania: It's really interesting.
It's something that I sort of maybe heard of here and there; maybe it'd be an alternative version of a data engineer or something.
But it's different from a data engineer, and it's just been popping up more and more.
It started popping up more and more in the last couple of years, and now I feel like every company is hiring an analytics engineer, which is probably--
You called it the data translator, or Harvard Business Review called it a data translator.
It's a person that speaks tech, but works particularly towards translating data to insights for different aspects of the company.
Benn: Mm-hmm (affirmative). This is a rant I was on a couple weeks ago.
Stefania: Right.
Benn: I think there is two ways that this goes to me, one of which is very good, and one of which is sort of dangerous.
So my take on the analytics engineer generally is it is a good development in that data engineers are often building applications, they're responsible for a lot of very technical infrastructure that is often complex and requires the kind of computer science foundations.
You don't have to be a computer scientist by any means, but you are building applications and doing development in the sense of a product developer.
Stefania: Indexing tables and distributed computing.
Benn: And even, yeah, stuff where it's like, "Okay, we need to build a system that can process data that does all these different things. We need to build all the tests and do all the sorts of things that is the same as what you would be doing if you were building a production system."
Like what a data engineer builds is a production system.
And for an analytics engineer, now especially with the rise of all these tools, there's a bunch of tool management that has to happen that is not just sort of being like an IT administrator.
It's building ways to connect these tools together and make them all work.
So say you want to ingest data from Fivetran, you want to format that data in your warehouse, you want to write that data to Salesforce again.
You're not building sort of full-on applications in that sense.
But you're managing a lot of very technical, very complicated processes that don't quite feel like what a lot of data engineers are going to want to do, because they want to be building applications, and don't quite feel like what a lot of analysts can do, analysts who are mostly trained in SQL and are sort of functional in a lowercase sense.
Like, functional Python to be able to write scripts, but they aren't sort of application builders.
And so analytics engineering fills this nice gap of maintaining a lot of the systems and building a lot of the systems that require this kind of translation.
The data engineer largely works with data as a machine generates it. An analyst largely works with data as a business understands it. And an analytics engineer's job is basically to take data that a machine generates and figure out a way to format it in a place where the analyst can actually understand it for business purposes. And I think that's a valuable translation.
The thing that I think is tricky here is there is some danger of it becoming a gradient, to me, where analysts see their job as a technical path: I go from an analyst, I learn some technical skills, I become an analytics engineer, I go further, I become a data engineer.
These things are all like a spectrum of the technicalness of my job.
And so to up level, I basically learn more technical skills.
You see this in some places, where people are sort of transitioning from analyst to analytics engineer, or analytics engineer to data engineer.
The problem with that, to me, is I think analytics, and the job of an analyst, is a fundamentally not technical job.
Your job is not to-- Technology can be helpful. Learning SQL can be helpful. Learning Python can be helpful.
But what your job actually is, is to answer questions with data, and you don't need technology to do that, or to do that well.
Like you need to be able to think about problems.
You need to be able to look at a business problem, understand kind of the nuances of that problem, come up with creative questions to figure out how do we think about this.
That's not a technical skillset.
And I think the more that an analytics engineer role makes technical skillsets bleed further into analytics, what that does is it attracts more technical people to the field, where it's like, "Oh, to be an analyst, you have to learn all these things."
And it distracts from the fact that being an analyst is actually just about thinking about problems in a creative way.
If it becomes sort of a barrier between the data engineer and the analyst, where these are the people who bridge that gap, and in a lot of ways analysts can focus just on the problem-solving part, I think that's a really valuable thing, because then we can hire a lot more people into analytics jobs who don't have technical backgrounds.
They can learn the SQL they need on the job, but you can hire social scientists, you can hire historians, you can hire political scientists, you can hire people with backgrounds who are like looking at problems and thinking about them creatively, not people who are coming from, "Well, I learned a bunch of Python in my physics PhD. The thing I want to be able to do is write Python."
Solving business problems is sort of secondary.
And so it's, I think, a useful thing to have happen, but there can be some question of which path it's actually going to go.
Stefania: Yeah. I really like that positioning.
And I think, so I've often talked about this, both on previous episodes of The Right Track and also just with anyone who cares, which is maybe the mistakes that people make when they are building out data teams.
And so what's your take on what should the early data team look like and how should it develop?
Benn: The classic data engineer or data analyst hire?
Stefania: Or data scientist, which often is like a hugely misleading term.
Benn: Yeah. That's a whole other can of worms really of like what is a data scientist?
Is it someone who's writing machine learning product code all the time?
Or someone who's just answering business questions?
Or someone who's answering business questions with machine learning?
My answer to who you hire first in the early days, it's frustrating, but it's basically: it depends.
Like everybody says, it's the chicken or egg problem. Can an analyst do anything without data? No.
But an analytics engineer or a data engineer, is it of any use to hire one of those people without an analyst to actually do something with the data?
Not really. I think that especially with the new set of tooling, where you don't have to actually have a data engineer to implement a lot of things.
Like you can get a full-on data stack implemented by somebody who has no real technical background, because it's a bunch of copy-and-pasting from docs and stuff like that.
It's not hard to set most of it up. But it's usually going to have some parts that are scary, frankly, and so you want someone who's comfortable doing it.
But because of that, you don't actually have to have like a full-on data engineer.
I think it depends on what other skillsets you have.
So if you are a company that tends to be fairly analytically oriented, like I imagine both of our companies are.
There are probably people who come from data backgrounds who are comfortable thinking about data.
I think in those cases, it's better to hire someone like an analytics engineer initially to own the stack so that you're building the right infrastructure from go that can say, "Okay, look, everybody knows how to kind of consume this thing."
"We just need to get the pieces in place so that the two PMs we'd have who are already pretty capable of this can get the data they need. Our CEO, who actually is a former data scientist, can get the data they need. Our salesperson, who used to sell these data products, can get the data they need."
If those people are capable of that, then great.
Just have someone who's responsible for managing the stack and making sure all the data's like where it needs to be.
If you don't have that, I think it makes more sense to hire an analyst who can do a little bit of the analytics engineering stuff because then you have a consumer who can basically guide the product development of your analytics stack.
They're able to figure a lot of those things out.
It's like throwing them a little bit into the deep end of doing this stuff, but I think there's enough resources out there. The thing that is more valuable, that they can't learn on the job, is knowing what they want as a consumer of this tool.
It was like, "Okay, I'm an analyst. I know I need to answer these questions. I understand the business stakeholder problems. There's a bunch of tools and resources out there to help me do it. I can probably figure that out. And if I have a little bit of technical help from engineers to implement Segment or whatever, great. I can do that."
The point I think is really you need, in either case, someone to own it.
And if you have people who are already able to be the consumers, I think the thing you need to own is the stack.
If you have nobody who's actually a consumer, the consumer is better positioned to be the owner.
And you have to hire that consumer to actually be the owner.
Stefania: Yeah. I think that's a good framing.
And then maybe a good sort of just point to drop in here, because you sort of touched on it, if you are that person, what is the first thing you should do?
Get something from end to end.
Benn: Yeah. I think it's basically spin up the basic thing end to end.
And so really, to me, this is spin up a warehouse. Redshift is probably the easiest.
If you're using AWS, just use Redshift. If you're using GCP, use BigQuery.
Snowflake, I think, is in a lot of cases a better tool long term, frankly, but all of them are good.
But Snowflake requires you to actually go talk to Snowflake salespeople and stuff like that. Not that they're bad to talk to, but you just can't do it by yourself.
With Redshift, you can set up an AWS account and set up Redshift in 20 minutes. So set up a database.
Start writing data to that database.
Stitch has a free version for small amounts of data. Fivetran you can do I think without talking to people, maybe.
They may require you to go through a sales process.
But these tools will let you start writing the data quickly.
Segment has free versions. You can instrument Segment.
There are free tools that let you actually start analyzing it.
So Mode will connect to Redshift. We have a free product, like you can do that all from go.
I think it's better to basically say, "All right, the thing I want to be able to do by the end of the week is have a dashboard of something, and it doesn't really matter what."
Stefania: Yeah. A specific question, ideally.
Benn: Yeah. And something super simple.
Just make it like, "I want to look at how many people are coming to our site every day."
Or, "How many opportunities do we have in Salesforce?"
Something that's not some critical business question, just saying, "Okay, we now have this thing."
And you can do that. I've done a presentation, actually, where I've done the whole thing live from end to end.
And you can actually have a dashboard set up in like 45 minutes.
Stefania: Nice.
Benn: Obviously, I knew the way to get there.
But the thing that to me is frustrating is we talk to people who are like, "Okay, we're going to do this, and it's going to be a three-month project to be able to get this stuff running."
And it's like just get it up. And you will figure out plenty of problems along the way.
It's not going to be perfect. But all these tools scale reasonably well.
You're not building something that you're going to have to tear down.
You're building something you're going to have to fortify. But you don't actually need to change any of that.
But you're not going to know what you need to change, you're not going to know what works for your business until you actually start doing it.
And you also can actually kind of answer questions already.
Like if you don't know how many people are logging into your site every day, you can now say that. And that's actually pretty meaningful.
Stefania: Yeah. That's amazing.
And then as a follow-up of this, particularly because you were talking about BI tools. BI is almost a stigmatized word for me, because when I was hiring for the data team at QuizUp, I would get applications from people who would consider themselves to be BI experts, or analysts, but they would come from the old world of BI.
And that would be something like a centralized BI team.
They would report to the CFO, and they have potentially no product data, for example, and they sort of speak an entirely different language.
So I would love to use this opportunity, since we've talked about the industry change: how do you see the modern org structure?
I typically ask this as like what's your org structure. And I'd definitely be curious to hear that.
But because you have insights into a lot of different org structures, I am curious to hear how you sort of see the org structure and how it's changed over the last 10 years?
Benn: Mm-hmm (affirmative).
Our org structure's a little bit funny because of some of the particularities of Mode, which I can get into.
I generally am a believer, until you get to a certain size, in a centralized data org, where you have data teams that are responsible for basically all of the data infrastructure, so data engineers or analytics engineers report through them.
And then analysts are a part of that centralized team, potentially specializing. It isn't that they sit in the departments.
They don't report up through marketing or sales.
There may be an analyst on the data team that is primarily working with marketing or sales, but they're still centralized in that way.
I have a preference for that for a couple of reasons.
I think it's one that a lot of analytics becomes cross-departmental, where as a marketing analyst, you actually need to understand how people use the product.
You need to understand what the sales funnel looks like and how it's performing.
And you're not going to get that if you're siloed in a department.
And I don't think the kind of guild or center of excellence approach that the McKinsey folks talk about really works, at least until you get to a certain size.
Stefania: Can you elaborate on what that means to you?
Benn: So the way I've seen people do it is they will have analysts that work in each department.
There's a marketing analyst that reports to the marketing people.
There is a sales analyst that reports to the sales team.
There's a product analyst that reports to the product team. And they have some kind of-
Stefania: This is the Spotify model.
Benn: Yeah. They have some way to bring those folks together to say, "Okay, let's learn from one another," and things like that.
You can get that right, and Spotify, for example, has a very successful data team, so they seem to have done it.
But I think until you get to a certain scale, that's very hard to do.
What ends up happening is analysts mostly feel kind of isolated, and they don't get the professional development they want.
They don't actually get the exposure.
Like, you're not going to get that from having a 30-minute meeting once a week with other analysts, where you kind of tell each other what you're working on.
Stefania: And you're not going to be amplifying each other in your tool stacks, and your knowledge, and all those things.
Benn: Yeah. And so there's-- Well, once you're potentially Spotify-sized, if you are a collection of six analysts who are all on the marketing team but can work with each other, okay, I think you get a lot of that.
And a lot of this is also, it's professional development.
It's like things for the analysts themselves, where it's very isolating to be in a role by yourself on a team that doesn't sort of do the things you do.
I also think it prevents you from feeling like your job is to promote the marketing team.
Like as an analyst, your job should be to be a fairly neutral observer to the thing to help people make decisions.
I think analysts can go too far with that and can become kind of smarmy jerks about, "I have the data and I'm going to tell you exactly what to do."
We aren't the best at bedside manner, but I think the ideal is someone who isn't part of the product or marketing or sales or whatever team, and doesn't feel like that is their tribe.
Like they should feel like they're able to sort of speak candidly about how things are going.
So as for Mode, our structure is more or less that today.
I am more sort of open to some of like embedded analyst structure at Mode, in part because a lot of the people who work at Mode are very data-inclined.
That's the nature of what we do. People who join Mode are people who are interested in data.
And so I think we want to be able to support the things that they're able to do. There are a lot of people at Mode who aren't necessarily analysts, but who are very capable of being analysts part time, and who would be capable of being analysts full time if they didn't have other jobs.
So like most of Mode's PMs are people who have backgrounds as analysts.
And so in that environment, there is a little bit of a need to say, "Hey, you're not a part of the team, because you have other responsibilities and other teams you report to. But we should be serving you as one of these satellite analysts, because you have the capacity to do that.
You want to do that. You have the domain expertise such that you can answer questions better than we can about some things, because you know the product really well."
Or in this case, we have sales ops people like that, stuff like that. I think in that case, it would be foolish for us to say, "No, everything has to be centralized. If you're not on the team, you're not an analyst."
It's like these people very much can be, we just have to give them sort of the tooling and the infrastructure to do it.
And so there are reasons why I think different structures make sense for different teams, but for the most part, my bias is towards the centralized one.
In our case, we just happen to have a lot of people who are capable of doing this.
And so it looks a little bit different.
I suspect at a company like Figma, which is building design software, they can probably think about their design team somewhat differently, because I imagine a lot of people who work at Figma are very design-oriented, and they have a lot of design talent across the board.
Mode's sort of similar on the data side.
Stefania: Yeah. That's a good point.
But it sounds like the short version, centralized with domain spokes, is how you framed it.
Benn: Yeah, yeah.
Stefania: Yeah, I like that. And I think that is sort of how it is trending.
You have a centralized team that supports professional development and almost acts the way a staff engineer would.
So they support other analysts in doing their jobs.
And then you might have analysts that are specialized in specific domains in the company, be it like finance or product or marketing or sales.
I definitely also see a development where, when you have the hybrid model, or the integrated model, and maybe particularly focusing on that hybrid model, it's interesting how the analysts actually work with the teams.
And for example, with the product team, because they are changing the product, and thus changing the data behind the product, freaking every day, or at least every two weeks.
And so I'd be curious to hear how do you recommend analysts work with product teams to stay in the loop on what is going to happen next and support the product team properly?
Benn: In any model, regardless if it's centralized or not, you have to still be investing in building relationships with other teams.
You can't be a centralized ivory tower.
And, to the smarmy jerk point, you certainly can't expect to show up and be like, "I have the data, let me tell you how to do this," and do sort of drive-by analytics without investing in understanding their problems, or, even more, the social side of just building the relationship and the trust of the people you're working with.
So even if you're centralized, you report to someone else and that sort of thing, you still have to have a personal relationship with folks.
You still have to be able to work with them on a regular basis.
It just doesn't work in a consulting style.
Like an actual consulting-style engagement where you show up and say, "Here's our recommendations. We're going to peace out."
So part of it is just like being involved.
That said, I think there is a corollary to that (I was actually just having a conversation with someone about this today) that a lot of teams basically under-invest in.
Sometimes that just means you've got to have more people.
There is a sense to me of analytics teams saying, "Hey, we want our company to be more data-driven.
We want to be thinking about the things you're describing, like being on top of these changes in product.
We want to be making product, like data-driven product decisions, all that kind of stuff.
The way that we'll do that is we can keep our team small, and then we'll invest in building self-serve tools and things like that.
And other people will be able to do the stuff they need."
I think that's kind of a failed notion.
Self-serve is good for providing people the sort of monitoring and dashboards and stuff they might need.
But to actually be helping people solve problems, or to be making sure everything is up to date while the landscape is changing underneath your feet, you have to have people in the room.
Like analysts have to be involved.
And so I think that requires just you have to have enough people to do it.
You have to have enough people that analysts aren't spread so thin that all they have is a 30-minute check-in with the product team once every two weeks.
It's not going to work that way.
Stefania: Exactly.
Benn: And I think there is this kind of pipe dream that we can buy or build self-serve tools and get away with that.
And I think that part is wrong.
We don't have to go out and hire like a bunch of super expensive data scientists.
But if you want to be like really involved in product decision making, you've got to invest the time such that people are really involved in product decision making.
We don't do this in other fields.
If we want to have a very design-driven product, we don't say, "Well, let's buy the best design tools and we'll be done."
It's, "Let's hire a bunch of designers so that designers can be involved in every project."
And if we want to have a really data-driven product, then you don't say let's go buy a self-serve tool.
You say, "Let's go hire data people so that data people can be involved in every product."
And if they don't have the time for it, you don't have enough.
There's not like this crazy silver bullet solution to this. It's just like the solution is the obvious one, which is you invest in it.
If you want to be data-driven, you spend the money on being data-driven, and then you get it.
If you don't, then you can't really complain about not being able to do the thing that you're not willing to spend money on.
Stefania: Yeah, totally.
And so in a good team, where analysts work with product teams on planning and releasing analytics around features, how does that process--
What does that process look like?
Who is involved in analytics for feature releases: planning it, implementing it, fueling it, analyzing it, prioritizing features based on data?
Benn: So I don't have a strong opinion on who on the team that would be.
I think it's just someone who has that view and is thinking about the problem from that way.
It's similar to how do you plan a product release? How do you plan the messaging around that?
How do you plan the marketing around that? How do you plan the release strategy?
How do you plan the way we talk about it internally and externally? You just have a product marketer involved.
What kind of product marketer? Doesn't really matter.
The point is there is somebody there who's responsible for thinking about all those things.
And when you're having a weekly meeting about this product, they're the one who's thinking about it from this perspective.
And if they see a thing that it's like, "Wait, this doesn't make any sense. We were talking about it this way and now we're building something that doesn't fit," or whatever, they can raise their hand and say, "Hey, we need to do this differently."
And they have the same ability to pull the Toyota stop-the-line cord that everybody else does, to say, "Hey, we need to solve this."
And they are a part of the team just like everybody else.
Stefania: Yeah. That's a cross-functional thing.
Benn: Yeah, from the data perspective.
Like someone who's saying, "Hey, we're not tracking this." Or, "We don't know how we're going to measure it."
Or, "We're measuring it, but it's not actually measuring the way we thought it would."
Or all those sorts of things, there's just a person in the room who's thinking about it from this perspective.
And so is it an analytics engineer? Is it an analyst? Is it a data engineer? Probably depends on the project.
I think typically someone who understands how to consume the data is usually more helpful because a data engineer is just like, "Yep, we're tracking it all according to the spec. That's good."
The spec may not actually be useful. But really, there just needs to be someone in the meeting thinking about it from this angle, as opposed to it being a PM's kind of secondary job to periodically think about it, and then tell the analyst when they're done and hope for the best.
Stefania: Yes, I couldn't agree more.
So I think a really important aspect of this, similar to product marketing and design, is that it needs involvement from all of the key stakeholders of the product.
It needs an analyst. It needs a product manager. Typically needs an engineer as well.
And so I think that's spot on. Side note, I love those New York City sirens that just passed by your window.
Benn: Yeah. Can only do so much in a New York City recording studio.
Stefania: It's really good. Well, thank you so much for sharing those insights.
I appreciate that. Maybe quickly on that culture building as well, data trust is such a huge, huge, huge issue.
And I know that you've written about social analytics.
And I think that perspective touches a little bit on that.
It's very difficult to build data trust if you don't really want to understand both the domain and the data itself.
But I would just love to hear you talk a little bit about like why do people say I don't trust this data?
Benn: I mean, there's a cynical answer to that, and there's a less cynical answer to that.
I think the cynical answer is, and this is the answer that like an analyst will give you when they're bitter about someone not listening to them, is because it tells them something they don't want to believe.
That it's like, "I don't trust it because we researched this product and I'm pretty sure it'll still be useful, and I know that people aren't using it the way we wanted it to, but I don't trust that it's telling us it's not valuable. Like we'll get there, something will change, whatever."
And there is motivated reasoning behind that. And I think, okay, yeah, to some extent I think that's true.
I think in product development, that's probably more true than anything else.
It's less about not trusting the data to tell them what is actually happening, and more that you see a lot of mousetrap features, which is a term that I've completely made up: features that people will say won't work until the whole thing is built.
This is like an intelligent design thing.
There are people who, as an anti-evolution argument, basically say there are certain parts of the body that are like mousetraps, where you have to build the whole thing for it to work, so it couldn't possibly have evolved, because there's no iterative way to get there.
And I think that there is some product features that people will say like, "Well, okay, I know we built the 50% version. Of course nobody liked it. We didn't do this other thing for it. Once we have that, it'll work."
And it's sort of this nothing works until the whole thing works.
And it's really, I think, a little bit of a way to dismiss the early signals that you get from data that something isn't being used as much, or the way you thought it would be, by saying, "Well, of course it's not. It'll only be there once the whole thing is done."
And I don't think that's necessarily wrong.
But every bit of evidence you get from the data is some evidence saying you might be wrong, and so over time, your faith in whatever initial hypothesis you had should erode.
And I think people, for reasons, because they believe in it, because they've heard it from users, because they're personally invested in building it, whatever, these are all valid human reasons, they may not necessarily trust it to tell them what other people may think it's telling.
The bigger problem, and the one I think is the more fundamental thing, is that data is going to tell you different things, and it's going to look different from different perspectives.
And I think you have to work really hard for people not to look at something and say, "This is telling me two different things, therefore I don't trust anything."
And people are very attuned to these small differences that make them not trust it.
So the analogy I've used with this before is say you look out a window and I look out a window, and we see 95% the same thing, but I see a tree that you don't see.
Suddenly, we both start completely questioning the entire nature of our reality.
We're like, "What in the world is going on? We should be looking at the same thing. And yes, it's mostly the same, but how come I can see one thing that you can't? Something is way off. My entire world is upside down now."
And I think data is kind of like that, where even if it's pretty consistent, this one thing is off.
These two numbers don't look the same. This one release wasn't tracked the same way as this other thing.
The way I track the win rate as a sales rep is different than the way you track the win rate.
And I have my numbers, and you have your numbers, so I don't trust these things because something's off.
All those little discrepancies, I think, make it hard for people to just kind of take it as reality.
And that's a really hard thing to overcome, because data is nuanced in a way that there is no reality, there is no real win rate necessarily.
There's my definition. There's your definition.
There are all these different definitions.
There isn't one that's real. Does win rate count non-profits?
I don't know. It could, it couldn't.
And I could count it, and you could not, and we have a different number, and they're both correct, but they're both different.
And to me, reality is different, and I don't trust anything.
And so I think, as a data team, you have to work really hard to overcome the fact that there is basically no objective truth here, and that people are going to kind of look for one.
And when they see something that doesn't look like that, they're going to become really distrustful of the thing that you've done.
And so you have to do a lot of work to kind of counter that and to get people to trust it inherently, even though it's like sort of a difficult thing to trust.
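(The "no real win rate" point is easy to make concrete in SQL. A minimal sketch, assuming a hypothetical opportunities table with status and customer_type columns; both queries are defensible definitions of win rate, and they can return different numbers.)

```sql
-- Two defensible "win rate" definitions over a hypothetical
-- "opportunities" table (status is 'won' or 'lost').

-- Definition A: every closed opportunity counts.
SELECT AVG(CASE WHEN status = 'won' THEN 1.0 ELSE 0.0 END) AS win_rate
FROM opportunities
WHERE status IN ('won', 'lost');

-- Definition B: the same metric, excluding non-profits.
-- Both results are "correct"; they just answer different questions.
SELECT AVG(CASE WHEN status = 'won' THEN 1.0 ELSE 0.0 END) AS win_rate
FROM opportunities
WHERE status IN ('won', 'lost')
  AND customer_type <> 'non-profit';
```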
Stefania: Yeah. I relate to those two key things. I think data literacy is basically one aspect of that.
Understanding that there are nuances, and that what you'll actually find depends on how you look at the data.
And then sort of the discoverability, and having some sort of a definition for how different teams should be viewing the data.
I think that's a really good way of framing it.
Benn: Yeah, and actually, I've never quite liked the usual spin on that.
Data literacy is one of those things we talk about a lot. I think people tend to talk about it a lot as like you understand kind of where data comes from, or how to analyze it, or you understand outliers, or you understand bias, or things like that.
Probably an almost more important skill in data literacy is understanding that there is no sort of ground truth in it.
That you need to accept that there will be differences, there will be wiggles, there will be things that are immeasurable and stuff like that.
It's knowing what data can and can't do. I think people look at data and think, "It's a number, so either it's exactly trustworthy or it's not," and it's really not like that.
Stefania: Mm-hmm (affirmative).
Yeah, I mean maybe here we're touching on your arguments for why self-serve analytics is a pipe dream.
For listeners, Benn has at least two great blog posts on self-serve analytics that I really recommend reading.
My favorite one, or at least one of my favorite quotes from it, is when you painted this picture of an analyst who just really, really wants to sit around and answer questions all day.
Which I sort of relate to a little bit.
Benn: Some people enjoy it.
Stefania: It's interesting.
But the point being people have different capabilities to be able to do that, and they need context, they need to understand data, and they need to understand business context as well.
Benn: Yeah. And the root problem I have with self-serve analytics is-- Well, I guess there are kind of two.
One is that we're often trying to provide the thing people don't actually want: when business users ask for self-serve, I think most of the time what they're actually asking for is to look up a bunch of metrics.
They're like, "I want to know what our win rate is. I want to know what our revenue is. I want to know how many product people are using this product, and I want to filter it by some dimensions and things like that."
That's not analytics in the way that an analyst would think of it.
An analyst would think of, "Hey, here's a question. Why is our win rate down this quarter? Okay, we're going to go do a bunch of research to figure that out."
That's not really what most self-serve things are doing.
Like if a CEO has that question of why is our win rate down, they're not like, "Oh great, I'm going to poke around my BI tool to try to figure this out myself. And then I'm going to go make a decision without talking to anybody." They mostly just want to see that the win rate is down, and then maybe do like a very cursory, "Well, is it down across all the regions? Okay, yeah. Is it down across segments? It's just SMB. All right, I'm going to go talk to my SMB sales manager and be like, 'What's up?'" That's not really analysis so much in the way that analysts do it.
And so I think self-serve should really be a more narrowly scoped thing for helping people just extract the data they need.
The other part gets back to the point that analytics isn't a fundamentally technical job.
There is a skillset that analysts have, which is figuring out why our win rate is down this month, and that is hard.
It is hard to do that. It's not that only certain people can do it. You're not born with it.
But you just practice solving those problems for a long time, and you get better at it.
And the problem is not that people need to learn how to write SQL, such that anybody with a tool that abstracts away the SQL would suddenly be able to do this.
The problem is like answering this kind of confusing, ambiguous business question with a bunch of data that doesn't really directly address it is a really hard thing to do.
And so just giving people a self-serve tool does not enable them to do that.
Kind of going back to the design thing, if you want to be good at design, you hire a bunch of designers. Why?
Because giving me a tool to design a website means I can put pixels on a page, but it doesn't mean I'm going to make something that looks good or is functional.
My skillset I am missing to be a designer is not that I don't have the technical ability to drag and drop stuff around a page.
I'm sure I can find a tool that helps me do that. My skillset is I don't understand how to be a designer.
And analytics is no different. But with self-serve, we kind of treat it as though all I need to give you is the technology, and then you'll magically be able to do this.
And I don't think that's right.
Stefania: Yeah, totally.
Both skillset, and also just time. Those are things that take time.
It takes time to design good products, and it takes time to really understand answers to questions.
Benn: And most people don't want to do it.
What they don't want is to have to bother somebody.
That doesn't mean they want to do the job.
There's a difference between saying like, "I don't want to have to go through the frustration of asking somebody else and waiting forever," to saying, "I want to do this."
If I'm a salesperson, what I want to be doing is I want to be talking to customers and selling.
I don't want to be spending three days doing analysis.
Stefania: Yeah, totally.
Benn: If I wanted to do that, I'd probably have a different job.
Stefania: Exactly. Amazing. We had very deep conversations on a wide variety of subjects.
We spent a good bit over an hour talking. I love it.
I really appreciate your time.
Would love to wrap this up with you talking about the one thing you wish more people knew about data and product, even though we've already covered many, many things you wish people knew.
Benn: I think some of the stuff is the stuff that we've already covered.
Like if you're thinking about building data teams, and product teams oriented around data, I think it's recognizing which skills are data skills and which aren't.
The skill isn't technical skill. That's helpful, but it's not what makes a data team great, and it's not what makes someone a great partner to a product team.
What makes a PM a great data person isn't that they need to go learn Python or go through some tutorial on data science fundamentals or whatever.
A lot of it just comes from asking questions, really being willing to dig into those questions, being really curious, having this relentless curiosity of, "I need to understand what this is. And if I see something that looks weird, I'm going to try to understand why it looks weird."
That's really, to me, where you get the value in all of this.
And so I think if people are thinking about building products where they want to be data-oriented, it's not going to come from, "Okay, I have a bunch of dashboards."
That's helpful. And that's going to give you a much better sense of kind of the world around you, and that's very useful.
And you absolutely should do it.
But this example from The Knot, or the example you described from your own experience, about the banners people had, and the awards.
That doesn't come from a dashboard.
That comes from someone who's curious, who sees something that looks a little bit weird, who wants to understand why that is, and keeps asking the next question, and the next question, until they get to, "Oh, wait a minute. Now I'm starting to connect these dots." That doesn't happen overnight.
It's something that just requires a lot of digging.
And I think for some people, that's very much the fun part. Like I enjoy that.
There are a lot of people who enjoy just seeing something they're curious about and kind of figuring it out. But that's what it takes.
And so I think, again, you can't sort of buy the right tools to be data driven or whatever.
They can help you, but really, it's about having that kind of mindset about asking questions, about being curious about the answers, about trying to connect dots, and being willing to sort of see if you're right or wrong on that stuff.
Stefania: I love that. I think we could even add that: a fundamental trait of a good analyst is that they're curious.
Benn: One big challenge in all of this is that it's easy for me to say what an analyst isn't.
Like, the skills that are required: yes, you sometimes need to know SQL and Python, but that's not the defining skill.
It seems very hard to define what actually makes a good analyst.
It's some sense of curiosity, some sense of analytical reasoning, which is kind of begging the question of what an analyst is; some kind of inductive reasoning, an ability to connect dots, I don't know, to see patterns.
All of that's pretty fuzzy, but I don't have a better answer than that.
But "people who are kind of detectives" is the best I can come up with.
Stefania: I love that. 100%. Thank you so much for your time on The Right Track, Benn.
It was really informative. It was great to listen to you expand on some of the things I know you've already written about and talked about before.
So for anyone listening, I really recommend you go check out Benn's blog.
And it was an honor having you, Benn. I hope we can have a part two.
I know we could probably talk for many, many, many more hours. So look forward to next time.
Benn: Awesome. Well, thank you so much for having me.
This has been great and I really appreciate it.
Stefania: Thank you, Benn.