In episode 9 of The Right Track, Stef speaks with Erik Bernhardsson, founder of Modal Labs and an early engineer at Spotify. This conversation covers cohort analysis, data team recruiting, and Erik's experiences building and refining data cultures.
Erik Bernhardsson is the founder of Modal Labs, specializing in tools for data teams. He spent 2015-2021 running the tech team at Better, and took it from 1 engineer to about 300. Before Better, Erik spent six years at Spotify, mostly building the core of the music recommendation system, eventually hiring a team of 20 people.
Transcript
Stefania Olafsdottir: Hello, Erik, and welcome to The Right Track.
Erik Bernhardsson: Thank you.
Stefania: To give us all context, could you tell us a little bit about who you are, what you do, and how you got there?
Erik: Do you want the long story or the short story?
Stefania: I would love a long five-hour story, but maybe we can find some medium in between.
Erik: Okay. I'll try to condense it. So I'm Erik Bernhardsson.
I'm currently working on some random projects in data.
But to give a little bit longer context on how I ended up here: I'm from Sweden.
I was doing a lot of programming competitions when I was young, and then when I was thinking about what to do after school, a lot of my friends from those programming competitions had started at this obscure music streaming company called Spotify.
And I decided like, I don't know what's going on, but clearly they're hiring these smart people so I want to go work with them and see what--
And hopefully I'll enjoy it because there was a bunch of people that I really looked up to and respected.
So I was able to convince Spotify to hire me to do music recommendations, which was a little bit of a joke because I didn't even know anything about machine learning at that time.
But I think early on at Spotify, no one really knew anything, and it was a good place to be because the fact that I didn't know anything still meant I was the best person to do it in a certain way, the least bad person.
So I joined Spotify and started hacking on that, and then relatively quickly realized that there were far more important things to do than focusing on music recommendation.
So I actually stopped doing that for a while, and instead I spent about a year and a half building up a data team.
So we did a lot of business intelligence type stuff, working with a lot of different stakeholders across the business, and fundraising, and investors, and also just product analytics.
I left briefly and went to a hedge fund, and then actually came back to Spotify, but in New York, where I ended up spending another three and a half years focusing this time on music recommendations.
So I ended up building out and launching a number of music recommendation features, built up a team of about 20 engineers, and open sourced a thing called Luigi, which is one of the first major open source workflow schedulers.
Stefania: Which we used at QuizUp. Thank you.
Erik: Nice.
Stefania: Yeah.
Erik: Yeah. A lot of people used it back then.
And I think now these days like Airflow and other ones have taken over.
And then there's another open source project called Annoy that a lot of people used.
But then I ended up leaving Spotify in 2015. I felt like I wanted to go through the same journey again.
I missed the early days of Spotify and ended up joining a very early stage company as a CTO, and the company was called Better. I was number eight or something like that.
And I spent six years there building up a tech team of about 300 people.
In the end, I realized I keep postponing the things I always wanted to do, which is to start my own company.
And so I parted ways with Better on good terms, and helped them find a new CTO and all that stuff.
And then since earlier this year I've been starting to play around with different data things, and starting to look at this space, and trying to figure out, "Okay. What are the opportunities? What should I be working on?"
And I just started doing it a little bit more seriously.
I finally have an office and I started to hire people, and we have some projects we're working on, but it's still pretty early stage.
Stefania: Very exciting. Would you say stealth mode?
Erik: I would say stealth mode, but I don't really like that term because I feel like some companies try to create a weird hype out of being in stealth.
At least I didn't feel like that a few years ago. To me it's just a necessary evil.
We just haven't found a good pitch, and we're also building something that's complex, and it's going to take a long time working with various types of closed beta design partners to really figure out, how's it going to work?
Stefania: Very exciting. I look forward to part two of this conversation later when we can all learn a little bit more about this.
Erik: Yeah. It'd be fun.
Stefania: I think everyone is dying to know what Erik does next.
Erik: I want it to be something no one expects like Uber for dog walking, because everyone was like, "What is he working on?"
Stefania: Exactly. Well, thank you so much for that intro, Erik.
I hope this sheds light for everyone listening on how excited I am to be having this conversation, because there's a lot to dig in there, obviously.
And you have great experience, and you've been in the space for a really long time.
And you've shaped a lot of discussions, and you've also built huge teams.
And so I'm excited to dive deeper into that backstory.
Before we do, would you mind kicking us off with some real life traumatizing data stories, maybe some inspiring ones as well, but something frustrating.
Erik: Yeah. I mean, I think so much of frustration has really been on the people side.
I think if something on the technical side breaks, it's actually in a way less stressful. I don't know.
There's been so many cases where we've done the wrong things, or in retrospect messed things up.
And I mean, some of the things that come to mind.
Like at a company I worked at, we used the wrong metric to optimize ads.
Like, basically when you're spending millions and millions of dollars on ads, and you're optimizing for account signups, but then you realize the relationship to actual transaction volume is completely backwards.
There's a lot of users who create accounts, but never actually do anything.
And so yeah, we discovered at some point there's this almost inverse correlation between cost per account and the ROI.
And so that was like one example where in looking back, we should have built up those data pipelines much, much earlier.
We probably wasted $10 million just throwing ad dollars into completely wrong channels because we didn't realize it.
So I don't know. I have a lot of stories like that where I think in retrospect, if we had just thought a little bit smarter about how we think about data, how we measure things, and what the main metrics are, clearly we wouldn't have done all this.
And I think a lot of companies go through this. It's probably a pretty normal story.
Stefania: Yeah. I think that's probably right. At least when you're at the stage of spending millions and millions of dollars on ads.
And then, I guess, a really compelling follow-up question for me here is, how did you then finally discover that you could probably be using a different metric?
What was the trigger, and what was your learning process for that?
Erik: It was just a lot of plumbing.
I mean, this was very early, and a lot of the reason we hadn't done it was the data just didn't quite exist.
There was just a lot of plumbing we had to do first.
And so I think, I don't know if there's a general learning experience here, but I think almost every time when I've solved these plumbing things, I wish I would've just solved them a year earlier. And I'm a very data driven person.
I'm like, "We should get all the data into the data warehouse as early as possible."
But even for me, I'm regretting all these times when I didn't do that.
So to me, that maybe goes a long way to say that already when you're building the features, and starting, and just getting off the ground, is a good time to start thinking about tracking, and logging, and how you store events, and all that stuff.
Because you're going to realize very soon that it's actually very useful to have that data.
Stefania: That's a very good point. It's a really interesting problem though.
And it's like that scale-up period, when you're scaling up an organization.
Just like with over-engineering, it's such a delicate balance of investing in something that is future-proof, or helps you in the future.
And you know that your future self will thank you for it. But what are you going to sacrifice instead? I guess is the question.
Erik: Yeah. It's super hard.
Because if you read Twitter, there are 500 million thought leaders saying things like, "Oh, when you're 10 people, that's the time to invest in a chief happiness officer."
And if you follow every single piece of advice like that, when are you actually going to build your startup?
Stefania: Exactly.
Erik: And so I think to me the most important thing is to learn which of those you can't ignore, because 99% of that advice is probably wrong.
Because if you follow all of that advice, you're never going to actually do the work.
But some of the things are actually very important to do. And so I don't know.
I don't want to make any sweeping generalizations, but I find that's something that's so hard to do: defer everything except what's absolutely necessary.
But which things are absolutely necessary right now? I don't know. It's hard, right?
Stefania: Yeah. And this is something that, I guess, probably... This is the thing that experience gives you.
Erik: Yeah. Hope so.
Stefania: Exactly. But like, I mean on that note, I mean, what is the right time to do it?
I mean because it sounds like you've gone through a bunch of periods where like, "Ah, I wish I had invested in that a year ago."
And you mentioned logging events consistently, or starting to think about your plumbing, as you call it.
What is realistically the time that you should have done it, for example?
Erik: I think the best answer on the spot would be like, I think it's so dependent on to what extent data drives your revenue and your costs, for a lot of types of companies.
If you're a B2B SaaS company, it's actually very hard to A/B test or to even know anything that way.
So maybe in that case, it doesn't matter.
But if you're doing consumer stuff where you have a lot of scale at what you're doing, if you're spending millions and millions on ad dollars, my guess is there's probably a set of factors like that to determine like, yeah, in that case, you better really work on your data stack because that has such a massive impact on the bottom line.
Stefania: Yeah. That's a good on the spot answer to a very difficult question.
Erik: Thank you.
Stefania: And I'll probably want to touch on that a little bit later on your experience with building a data team, and wanting to hire the right folks for that, and where they fall into the organization and things like that.
But thank you for sharing that frustrating story.
Can you inspire us as well? A little bit, a little bit.
Erik: Basically, this is about one of the first things I did at Spotify, when I was straight out of school.
Early on at Spotify there was always this debate: is the freemium model going to work?
We had all these ad funded users, and we weren't really getting that much money from the ads, and then we were praying that they would convert to premium, and we'd make the money back from them.
And there was always this argument, very few people actually use premium, so does it actually make any sense?
And so I remember this was one of the first things I ever did in my career working on data was just like, let's look at a cohort plot.
And I didn't know the name back then.
I was just like, "Okay. I'm just going to plot the percentage of users that convert to premium, but I'm going to pick a cohort of people that joined a certain month, and I'm just going to track them over time." And then we saw this beautiful linear curve where every month these people, the premium conversion just kept going up and up.
And I mean at the time, Spotify was raising I think their B round or whatever.
And this went straight into the board deck. It was a big part of investor presentations.
And I like to think it did make a huge difference at that time. I mean I don't know.
But I think, to me, that was one of the first gratifying things I've done in the sense like I felt like actually what I do has some meaningful impact.
And I got hooked. Because up until then I considered myself to be this very algo driven, I like to solve technical problems type of person.
But I think that was the first time I realized like, actually this is cool.
I found something in data that has a meaningful impact on building a company.
And then I just kept going after that, like, how can I find more things? How can I discover insights? And it was fun.
Stefania: That's a really beautiful story.
That's exactly-- You're the hero we didn't deserve, but the hero we needed by bringing us that story.
But I totally relate to the cohort thing. I remember--
And I guess that touches a little bit on, I remember thinking when you were delivering the intro, it's interesting to think of people's different paths into going into data.
And so in your case, it sounds like you went from the developer world into becoming a data specialist, data scientist, data whatever.
Whatever we want to call it. Data-something. Data person.
And so obviously we don't have all of the words and the vocabularies.
And I remember an exact same experience with like, what cohort analysis? What?
Even though you've heard about cohorts before, you probably just never put it in context with product analytics, or user experience analytics, or things like that. I don't know.
Erik: Cohort analytics has been a consistent theme throughout my life where I've plotted something as a cohort, and blown someone's mind.
They're like, "Wait a minute, you can look at it that way."
And I still feel like it's-- I mean now a lot of people do it.
But still, I feel like sometimes people are really like, "Wait a minute. That's pretty cool."
And survival analysis, which is a related type of analysis.
I think it's such an underappreciated aspect of data analytics these days.
It's not even rocket science, I mean it's pretty straightforward, just sorting events by time and grouping them, and--
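As a rough sketch of that sorting-and-grouping (all user IDs and numbers here are invented for illustration), a cohort conversion table in pandas can be little more than a groupby over signup periods:

```python
import pandas as pd

# Hypothetical event data: when each user signed up (months since launch)
# and when they converted to premium (None = not converted yet).
events = pd.DataFrame({
    "user_id":       [1, 2, 3, 4, 5, 6],
    "signup_month":  [0, 0, 0, 1, 1, 2],
    "convert_month": [2, None, 1, 1, None, None],
})

max_month = 3  # last month we have data for
rows = []
for cohort, grp in events.groupby("signup_month"):
    months_to_convert = grp["convert_month"] - grp["signup_month"]
    for m in range(max_month - cohort + 1):
        # Fraction of this cohort that converted within m months of signup;
        # NaN (not yet converted) never satisfies the comparison.
        rate = (months_to_convert <= m).sum() / len(grp)
        rows.append({"cohort": cohort, "months_since_signup": m,
                     "cum_conversion": rate})

curves = pd.DataFrame(rows)
print(curves.pivot(index="months_since_signup",
                   columns="cohort", values="cum_conversion"))
```

Plotting one line per cohort column gives exactly the kind of per-cohort conversion curve described here.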
Stefania: Exactly. And it's really powerful. For whoever's listening, can you tell us what survival analysis is since you mentioned it?
Erik: Yeah. I mean survival analysis, as the name implies it's about the inevitability of death.
And comes from medical research and looking at if you... I mean, I don't know.
Maybe this is me making up a story.
But if you administer a drug, then looking at, okay, what percentage of people survive?
And maybe look at a control group and a test group.
And the problem is, some of these people have not died yet. So what can we infer?
If they haven't died for a really long time, maybe that's a good thing.
So in tech businesses, I tend to think it's the other way around.
It's not like deaths, it's like conversions and stuff.
We flip it upside down. And instead you look at conversions after a certain time.
And what ends up being always challenging with these types of analysis is, you often end up forcing...
The crude way to analyze a lot of these things is, you have some complicated model you're trying to understand, like, who converts and who doesn't convert?
And what is the outcome variable you're trying to track?
And it's both conversion, but it's also time to conversion.
Another crude way would be like, okay, actually, we're going to look at conversion rate at 30 days out.
But if you're doing that, you're throwing away a lot of information, because you might actually have people that convert earlier, or people who haven't converted at all after 29 days, or whatever.
And you're throwing out a lot of that ability to learn quicker, which is what's called censoring in survival analysis.
You have this left censoring and right censoring.
This idea that you've only observed a certain person up to a certain point, but they might still convert in the future.
You don't know. And so survival analysis lets you deal with this in a nice, consistent way where you can build cohort curves incorporating the censored data.
The data from people who haven't converted yet, but maybe will convert in the future. And you can look at two different cohorts, and say something much, much quicker than you otherwise would have within those 30 days, or whatever.
And then there's all this science where you can get into more complicated models. Starting with Kaplan-Meier, but then there's Cox proportional hazards, and you can start to fit...
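For reference, the Kaplan-Meier estimator is only a few lines. Here is a minimal numpy sketch on made-up data, flipped the way described here so that 1 − S(t) reads as cumulative conversion; right-censored users count as "at risk" until we lose sight of them, but never as events:

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Product-limit estimate of S(t) = P(not yet converted by time t).

    durations: time from signup until conversion, or until the last
               moment we observed the user (if not converted yet).
    observed:  True if the user converted, False if right-censored.
    """
    durations = np.asarray(durations)
    observed = np.asarray(observed, dtype=bool)
    curve, s = [], 1.0
    for t in np.unique(durations[observed]):
        at_risk = (durations >= t).sum()            # still being followed
        events = ((durations == t) & observed).sum()
        s *= 1.0 - events / at_risk                 # product-limit step
        curve.append((t, s))
    return curve

# Toy data: days to conversion, with three users still unconverted.
durations = [2, 3, 3, 5, 8, 8, 10]
observed  = [True, True, False, True, False, True, False]
for t, s in kaplan_meier(durations, observed):
    print(f"day {t}: cumulative conversion {1 - s:.3f}")
```

The censored users keep contributing to the denominator until their last observed day, which is precisely the extra information a naive 30-day cut throws away.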
At Better, I spent a lot of time building a thing that fit a Weibull distribution, because it turned out that actually very beautifully approximated the conversion curves.
And that actually ended up improving our ad performance in a way that saved us millions and millions of dollars every month.
And so these things can have a very large impact if you do it right.
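As an illustration of fitting a parametric distribution such as a Weibull to right-censored conversion times (all data here is invented, and the crude grid search just stands in for a real optimizer; this is not Better's actual model), the key point is that converted users contribute a density term to the likelihood while still-unconverted users contribute a survival term:

```python
import numpy as np

def weibull_loglik(k, lam, t, observed):
    """Weibull log-likelihood with right censoring: converted users use
    the log-pdf, unconverted users the log of the survival function."""
    logpdf = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    logsurv = -((t / lam) ** k)
    return np.where(observed, logpdf, logsurv).sum()

# Invented conversion times (days); False means not converted yet.
t = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 8.0, 10.0])
obs = np.array([True, True, False, True, False, True, False])

# Crude grid search over shape k and scale lam; the censoring-aware
# likelihood above is the important part, not the optimizer.
grid = [(weibull_loglik(k, lam, t, obs), k, lam)
        for k in np.linspace(0.3, 3.0, 28)
        for lam in np.linspace(2.0, 30.0, 57)]
best_ll, k_hat, lam_hat = max(grid)
print(f"fitted shape k = {k_hat:.2f}, scale lam = {lam_hat:.2f} days")
```

With fitted parameters, the implied conversion curve 1 − exp(−(t/λ)^k) can then be compared against the empirical cohort curves.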
Stefania: Exactly.
And I think when you're bringing this up, and your magic moment around your first cohort analysis.
I mean, I think the birth of these for each individual is probably when you just think of segmenting by a specific dimension, you're like, "Interesting. What if I filter out these people? Or what if I filter out by that?"
And then you reach the stage where you start to develop the skill, or take time to do survival analysis, because you've learned some things about how that works, and how you can treat different cohorts in a different manner.
How can we help people reach this discovery sooner?
Erik: Yeah. I don't know. I've tried to blog a few times.
So I have some blog posts on the topic. But, yeah. I mean I don't know.
I think just probably better tools could be needed.
I think there's probably an opportunity to build a cohort analysis tool that could be its own startup.
And through more blogging, and teaching, and knowledge sharing, hopefully these things permeate through as tools.
Stefania: The all-encompassing data community of the internet.
Erik: Correct. Yep.
Stefania: Okay. Awesome. Maybe a quick follow-up question on this.
What was your role when you did this?
What were you working on when you stumbled upon doing this?
Erik: I mean I'm mixing up a lot of different experiences.
As I said, it's been my whole career. So part of it was doing it at Spotify in the early days.
Later it was me as the CTO just looking at data at Better.
I have a math background, so I started realizing I could use Weibull distributions pretty soon.
So it's been something that's recurred throughout my entire career, in different roles.
Stefania: Yeah. Exactly.
And you already mentioned it a little bit like when you joined Spotify, you thought you wanted to work on, or you would work on the recommendation algorithm that first time around, but then you had more pressing issues.
And then you started building a data team, right?
Erik: Yeah. I mean looking back then, Spotify's main breakthrough, and the main utility people use it for, is you click play on a track in the cloud and then you get the music.
And that is the magic moment of Spotify.
And so what we realized was, Spotify had more urgent issues making that work, and also just raising money, because money is the oxygen of any startup.
And so I think it made sense to not do recommendations for a while.
And I think what's important to remember is advanced machine learning is often something that just makes something that already exists a little bit better.
It's not something that enables you to do that thing in the first place.
And so I'm very happy in the end.
I do think that the music recommendations at Spotify makes Spotify X percent better.
But it's not the thing in itself that is the magic of Spotify, I think.
It's the sort of thing you built on top of it that makes it extra magic.
Stefania: Discover Weekly. Oh my God.
Erik: That's right. Yeah.
Stefania: That's very good stuff. And when you realized that, was there a data team at place at Spotify?
Or were you the person that went like, "I think-"
Erik: No, no, no. I built it up. I built it up.
I proposed it, and initially there was a push to build a team focused on reporting.
And my argument was like, "Why don't we broaden the mandate, and make it about product analytics, and business intelligence, and all kinds of stuff."
And so I started managing that team. I was pretty young, like 25 or something like that.
And it wasn't a particularly big team. I think we were four people in the team in the end when I moved to New York.
But I mean now it's like, I think two or three hundred people probably at Spotify, or something like that, probably even more.
So it was fun. And this was back in the days. No one was really doing data.
There wasn't any patterns, there wasn't really best practices. We just did whatever we needed to.
And the tools were terrible. It was all like Hadoop, and awful things. Running even basic queries took hours.
But I guess we did what we could with the tools we had, and I'm glad to see that these days, it's not quite as bad.
Stefania: Yeah. That's a really good point. I relate heavily to this.
I know we talked about it a little bit earlier.
There's a lot of people, everyone in their own corner slightly inventing the wheel a little bit.
Erik: Yeah. Absolutely. And I wish they didn't.
I mean it would be nice if someone could just do it for them, right?
Stefania: Yeah. And we're approaching that.
I mean everyone is knowledge sharing so heavily about this right now.
And this is maybe a good segue into, how do you see that the industry has changed?
I like to think about it from even just a small period of two years, because we are moving super rapidly.
But also from this perspective of like, you now have, what? 13 years going on right now.
Erik: Yeah. I've been coding for almost 30 years.
But yeah, 13, 14 years, something like that working as an engineer. Yeah.
And in the data world, of course, many things... I mean when I started working, the cloud didn't exist or anything like that, and for data warehouses the only one that existed was Oracle.
And if you wanted to do anything as a startup, most people did Hadoop and it was terrible.
I think we've come a long way. It's so much easier to work with data.
I mean, particularly zooming in on the last two years, I guess the big transformation has probably been the Snowflake plus dbt combo. It feels like it's so dominant these days.
And I think to me it's almost in hindsight, maybe an obvious thing. Why didn't we get that sooner?
And maybe part of it was that we didn't quite have SQL data warehouses this way.
But to me, that's been the most interesting transformation in the last two years.
This wildfire spread of dbt, and how clearly it solves a problem that a lot of people had.
Stefania: I wonder how big also a part of that growth has been their strategy of really focusing on the community.
Erik: I think it's huge.
And I think-- I mean, in general, I feel like it's less about the community, but more like-- I feel like it's almost like creating a narrative of a dominant platform and a movement.
And I almost feel that way when I look at the positioning of dbt as a tool for data, or for analytics engineer.
They almost created this role out of nowhere.
I think there were a lot of people throughout many companies who weren't really sure, who am I? What am I working on?
And dbt is like, "You're an analytics engineer. This is how you do it."
And there's a certain movement aspect to it where now all these people are like, "Yeah. I'm an analytics engineer."
And then all the other companies are now latching onto that and like, "Yeah. This is the new data stack. We're part of it too."
And you get this interesting follow on effect where everyone is trying to cruise on that trend.
So I don't know where it's going to end, where we're going, but I think it's been really interesting to observe.
Stefania: Yeah. That's a really good positioning of it. It's a sense of belonging almost.
If we look at the... What is it? Like the Maslow pyramid of needs, the first thing is don't die. No. Yeah.
Safety is the second layer.
The first layer is have enough food so you don't die. And the third one is a sense of belonging, so-
Erik: Yeah. And then I think the highest one is self-actualization right, or something like that.
Stefania: Yeah. Something like that. Exactly. So they're building us on the journey of that.
Erik: Yeah.
Stefania: So that's a great identification. I would love to maybe move a little bit into data cultures now.
This is a good segue into that. What are the roles in companies, and how people build their teams and things like that?
You also talked about, when we were talking about how analytics are broken, you mentioned that that also is a people problem.
Erik: Yeah.
Stefania: I'll just open with this. I don't trust this data.
It's such a common statement, and I'd love to hear your thoughts on, why is that, and how can people solve that?
Erik: I like it. Because in a way it's like I'd rather have... It's like the scientific discourse.
The whole point is someone proposes a thesis, and then other people go and scrutinize it.
Is that right? Is it not right? I don't think it's necessarily a bad thing if people don't trust the data.
And I'd rather have too many people making too many conclusions about data that may be wrong than too few people even looking at data.
So I'm pro having a constructive disagreement, and arguments, and trying to deduce.
And where I've seen that really work is when you have that mutual respect and admiration, and this platform of trust between people.
Then if those people argue about something, it's like, you're not trying to put down a person, you're trying to find the truth together.
And that's actually a good thing. You're arguing about, "I don't trust what you're saying."
But I'm arguing because I want to figure out, I'm also interested, what is the truth there?
And I think that's the, to me, the aspiration that everyone should try to get to, right?
Stefania: Yeah. I love that perspective. That's a fresh perspective.
So when people say I don't trust this data, that's a good thing.
Erik: Yeah. But to answer your question, why is that? I don't know.
I've seen so many mistakes, people take correlational things and present it as if it's causal.
Or making assumptions, like, you rolled out a feature. Okay. Look at it pre/post.
There's a lot of just sloppy analytics that I think is being done.
And I think I've actually seen a lot more of that than technical issues.
Yeah. Sometimes of course you have technical data gathering problems, but more so I've seen issues where people draw implications from purely correlational data, or other sloppy conclusions.
Stefania: Interesting.
So you're saying, when people don't trust the data, it's more often when someone is actually trying to make a statement about some truth, rather than when they are literally just trying to dig into the data, and the data is broken.
Erik: Yeah. I mean, the times when I've had these good arguments about data are when someone finds a really strong relationship.
And then the argument is like, "Is this actually causal, or is this just some random, weird thing that you can explain because there's some confounding thing?" I mean, this is one example.
So it's the starting point. I think it's good to be a little skeptical when you see very impressive results somewhere.
Stefania: Yes. And to take it a step back.
I think typically when I see very impressive results, or very terrible results, my first assumption is like, "There's something wrong with the data behind this."
Erik: Yeah. You should.
Stefania: Exactly.
Erik: You should doubt it.
Stefania: Yeah. Because it's very difficult to get your data right.
Erik: Yeah. Absolutely.
Stefania: Awesome. So maybe moving that into also, who is then working with the data?
You've now touched on the org structures a little bit at Spotify, and that's a famous example, obviously the org structures at Spotify.
And then of course at Better, I'm curious to hear how you scaled up the team at Better, and how the data team was involved there, and what the org structure was.
Can you maybe just share a little bit, those two examples?
What was the org structure data wise?
How data work with product and engineering in those two companies?
Were they integrated with the product teams, or were they separate teams, and things like that?
Erik: No. I mean first of all Spotify, I left Spotify six years ago.
So I think this is somewhat dated. But at that time it was just fully centralized data team.
And I think that was in retrospect bad, because everyone had to come to me and ask for help.
And I made these discretionary decisions, like, "Who am I going to help or not?"
And I think in retrospect, people didn't feel like they got the support they needed.
And so I think that created a lot of unnecessary pain. I wish I would've spent a lot more time building up more of like, here's your backlog, and here's your resources, and here's some software tools.
I think I did a better job at Better doing that.
What I ended up with instead was still a centralized data team, but I tried as much as possible to decentralize the backlog management.
So I had embedded data scientists or data engineers working together with other teams, like in particular product managers.
But also in many other cases, even teams like finance or whatever; marketing is a good example too.
Where that team would be their main point of contact in terms of what they're actually working on.
Whereas you still have the mothership that helps supervise, are we doing analysis in the right ways?
And are we contributing back to the data platform in the right way?
But day-to-day work was more like, your identity is more like, "No. You're working with the marketing team. Your goal is to help marketing, or whatever. But you are a data engineer or data scientist, and don't forget that."
And I think that's as good as it gets.
I never love hybrid approaches, but I do think data is a weird case where clearly there is a set of very specialized knowledge you need in order to be good at data, but also to some extent that skill is needed by completely different types of teams.
Stefania: Yeah. Exactly.
And so having a centralized team, but also embedded the hybrid model, it empowers both the professional growth of the data folks, but also empowers and makes sure that the communication paths are super strong, and that each team is empowered to make good data decisions.
And even furthermore, the fact that you specialize in finance, or marketing, or something, it just makes you a better ally for that team.
Erik: Yeah. I think that's right. And I think it's worth to be aware of the drawbacks.
It is very hard as a manager to know what your direct reports are doing if they're spread out over 10 teams.
And that's something I saw on Spotify too, in this matrix model.
It's very hard to have good accountability, and also good recognition of good work.
And so managers do need to spend even more time thinking about, how am I making myself available?
How am I keeping my direct reports accountable? And how am I recognizing the good work?
It's very hard as a manager when you have this embedding model.
Stefania: So that's a good input. And I want to ask a follow-up question of that.
So it sounds like... And I assume you're talking about Better here.
They did report to... Ultimately they reported to the CTO.
And was there a data person, a data leader also that reported to you that they reported directly to, or something like that?
Erik: In the end there was a data leader in between me and the team.
I managed the data team directly for quite some time, given that I have a lot of background in data, and the team was quite small for a long time.
But then in the end, in my last year or so we had data leaders.
Stefania: So is it right then that you were managing a bunch of different product teams as the CTO, set of engineering teams basically that had...
Did they also have product managers and things like that reporting to you? Or did they report to like-
Erik: No. They reported into product. They had a product org.
Stefania: Okay. And so the data people on the teams, they were embedded in those teams, but reported directly to you.
Erik: Correct.
Stefania: And even when you hired a data leader in between those teams would report to you, or some sort of an engineering leader, maybe, that reported to you. But the data folks would still report to the data leader.
Erik: Yeah. That's right.
And I strongly think that data teams should be a part of tech. I mean what they're building is writing code.
And in the last few years, this has shifted a little bit... Less Python and more SQL. But still, to me, it's production pipelines.
It's version control. It's pull requests.
It's code to me, and it's the same engineering mindset.
And I think it sets them up for success to report into tech.
Stefania: That is a really great observation. And it sparks two follow-up questions from me.
First one, tapping into what I was mentioning from your intro.
You entered data as a developer, you were a developer by training, had been doing a lot of software engineering, things like that.
How did you build your data team? Did you recruit from that angle?
Did you recruit someone who had, maybe, like a different type of experience? What are the roles, backgrounds?
Erik: Yeah. Actually my background is a bit weird. I studied physics.
But I grew up coding all the time so like I have a little bit of weird background.
I think, yeah, physics actually did help me a lot going into data because I obviously know a little bit more about math than someone from the computer science background.
But yeah, I think in terms of building up the team, my feeling... I mean this goes for any team.
Like software engineers or a data team. Early on you need full-stack people as much as possible because you don't know what you're going to need from day-to-day, and you're going to need people who are flexible, who can jump around as much as possible.
So in the earliest days of building up a data team, I very intentionally put the role description as data engineer.
Because my feeling was like, if I hire a data engineer who's also entrepreneurial and cares about the business, they can build a platform while doing the analytics, and let the analytics work inform the platform they're building.
And they can be this full-stack person who can jump around.
They can take, "Today I'm going to look at this thing. And oh, by the way, actually in order to do that I need to ingest this new data set, and index it, and build a pipeline, and then go back and do the analytics."
And so I did that for almost two years, where we had a combined team doing both data engineering and data science.
And I called everyone a data engineer, even though they were probably more like data scientists in what they did day-to-day.
But data scientist these days has become such an overloaded term that I found it was, in a way, a little bit easier.
If you want to hire a full-stack data scientist, it's actually easier to put data engineer on the role description, and just look at the people that apply for your role.
And I mean, of course, we did a lot of sourcing as well.
Stefania: Yeah. But you are touching on a really interesting subject there, which is like, I feel like there's a lot of discussion lately about bringing in people that have soft skills, and then they can learn the hard skills.
But you talked also about, okay, you put data engineering in the role description, and then you filtered and recruited for also entrepreneurial mindsets, and things like that.
How do you find those people? How do you filter for those people?
Erik: Yeah. I mean first of all, somewhat controversial opinion maybe.
But I think soft skills are in abundance and hard skills are scarce.
And so I think it's much better to hire for hard skills, and then filter out on those people having a base level of soft skills, than teaching people hard skills which can be extremely hard especially if it takes a lot of experience like learning about how to work with data, or learning statistics, or machine learning, whatever.
That's hard. So the best people to hire early stage in a tech team, in a data team, I think, are people who understand data.
They're okay at the numbers. They understand software engineering. They don't have to be the world's best at distributed systems or databases or whatever, but they understand it, and they know how to use Git, or whatever. But most importantly, they're excited about the business. They're excited about the outcome, and how their analysis is driving business outcomes. They want to find the truth in the data. They're like data journalists.
They're like, what's lurking in the data?
What can we find today that's going to have a huge impact?
And there's no easy way to filter for that.
But I mean, I think there's a lot of ways to filter out people on the other side of the spectrum.
Sometimes you start talking to people and they're like... I think there's a spectrum of goal oriented people versus tool oriented people.
And it's pretty easy to determine that somebody is tool oriented.
If they're really, really like, super obsessed about functional programming.
Maybe they're really smart and that's fine.
But I think there's always going to be a conflict of interest if today the type of work we need as a business to be successful doesn't have anything to do with functional programming.
That might be a problem. We might have conflicts of interest here.
So for those reasons, I think it's very dangerous to hire these tool oriented people.
Same goes for something like machine learning.
If you make your career pursuit to improve your knowledge in machine learning, that's fine.
There's a lot of large established companies where they need people like that.
At a startup it might be very, very dangerous to hire a person like that, because they might think everything needs machine learning.
That's not true. 10% of things need machine learning.
So who I want to hire is the person who maybe knows machine learning, but for them it's just a tool in the toolbox, and the real goal is to figure out what the business needs and build those things.
And if that involves going through an Excel file, and manually classifying text messages.
Whatever it is. And they have to do that for three days.
If it drives a really important business outcome, they're happy to do that.
And so I like to go deeper and ask people, "How do you feel about working on boring things, unsexy things? What's a good story of something you found in data that was surprising? Or, and how did you find it?"
Trying to go deeper into like, what is motivating this person?
Stefania: I love that identification.
I also want to say just data journalists, I relate so heavily to it.
It's a person that is extremely curious to learn some new things and find some value.
And I think what you just said, are you okay with working on boring things?
It's a filter for people who are okay sometimes not over-engineering.
It's an indicator that they won't over-engineer everything, basically.
Erik: Yeah. And it's so clear that certain cultures are actually not that way.
Certain startups have cultures that very much emphasize solving hard technical problems, and promoting the people that do that.
And I think it's actually quite destructive to do that. And there's nothing wrong with hard problems.
I love a hard technical problem. But it's a means to an end, right?
Stefania: Exactly. Really great point.
So I mean I can't have a conversation with Erik Bernhardsson without talking about the incredible blog post, or short story, or what should we call it?
That you released earlier this year.
Erik: Terrible.
Stefania: And in that story you were covering all of those journeys, the journey from discovering all of these different problems, and putting out the right fires where needed.
And you're also now, just now when we're talking about this, you're talking about who are the right people to do that?
If you would put out a recommendation for someone who is about to take on a role like that, and vice versa for someone who's trying to hire for a role like that, what's your recommendation?
Erik: Yeah. I mean I don't know.
I think, again, it's maybe the data journalist role we talked about.
I think those are the people that I have seen be most successful. The people who, yeah, are good with numbers.
They're okay with stats. They're okay with software engineering.
But beyond everything, they're just driven by this pursuit of the truth.
They're like, "I'm going to win the Pulitzer Prize. I'm going to find some smoking gun in the data."
Stefania: I'm going to get my stats into the board deck.
I mean the interesting thing also about this is you wrote the story, and it triggers many people with their PTSD.
Erik: I know.
Stefania: And the interesting thing about it is like, what is so fun about it still? Why do we really love doing this?
For me, when I look back for example on my QuizUp time, probably every week I was like, "Why am I doing this? Why don't I just go build a house somewhere, or be a farmer, or whatever."
But there's something that pulls us back in into this super...
Especially then when the data role of a company was just so much still being shaped. So what do you think like-
Erik: I mean, I don't know. I think there's a part of everyone in tech whose whole career purpose is that they see incompetence around them, and they want to fix that, and show how it could be done better, right?
Stefania: Yeah.
Erik: Yeah.
Even though no one would admit it, that's, I think, a big part of what drives people to advance their careers: they get to a certain level, and they're like, "What's actually going on here? That's not very good. I bet I can do better."
And I think with data it's been especially painful, I think, as a lot of people go into that industry and look at what's actually going on.
There's so many weird things that are evidently not good.
And so I think, I don't know, maybe that's what pulls people back is the feeling that they want to show how it could be done better, right?
Stefania: Yeah. I like that. So something about leaving a legacy.
Erik: Yeah.
Stefania: If I talk about like, for myself, it was always, ultimately...
Because you mentioned earlier that, okay, you joined Spotify the first time around before you made a brief stop at a hedge fund.
Erik: Yeah.
Stefania: You were there to build a recommendation engine, which is super exciting.
And it's one of the things that, for example, when I joined QuizUp, I was like, "Yeah. I'm hired as a data analyst, but I'm going to be working on also super cool stuff that they don't even know about yet."
But obviously I had the same experience as you, which is there are more pressing issues to be solved here.
So I think ultimately, potentially what was also driving me was like, I'm building a better world for myself, where in the future I can work on even more cool stuff, or some of the people I'm working with can work on all the cool stuff, once we've figured out the plumbing.
Erik: Yeah. Totally.
Stefania: So it was like a carrot.
Erik: Yeah.
Stefania: Beautiful world carrot.
Erik: Yeah. You passed the marshmallow test. You held off, and then you get your marshmallows.
Stefania: Exactly. Awesome.
On this note of the data teams and where do they report to, and what are their roles?
And you touched on the fact that you typically always framed the role as a data engineer.
You also had a great recent article about what is the right level of specialization.
Erik: Yeah.
Stefania: With great analysis of some kitchen cooks, and cutting onions, and stuff like that.
Erik: Yeah. It's funny because, yeah, I mean I just said I think data engineer is one way to frame what I want.
But on the other hand, I also think data engineers should not exist.
I mean in the sense that, if I look at it, so much of the work data engineers do across companies is building platforms that they should not be in the business of building.
And I think the sad truth about data these days is that it's almost become a superset of software engineering.
It's like, in order to be a full-stack data person, and I'm just going to say data person for now, you need to know all the data science, and all the software engineering, and maybe a bunch of other stuff around it.
So that's a bad thing, because it's full. The stack is full. You can't fit more stuff in there.
And I think there's a lot of reasons to ask: why do we normalize things around what software engineers are doing?
Why do we think of this as an extension of software engineering?
There's a lot of things that maybe are very different in the data world, that we shouldn't be doing the same way.
And I think one of the things... I mean this is broken even in the software engineering world, but the infrastructure tooling stack right now, I think, has very poor abstractions.
It's like, there's so much garbage: you need to learn about Kubernetes, and Docker, and Terraform, and all these other tools to actually build infrastructure.
And to me the vision has always been like, "How can we get people to spend 100% of their time thinking about business logic?"
And I mean, I don't know. This is maybe a terrible, crude version of history.
But I'd imagine when electricity came, maybe it was the same thing.
Let's say you were, like, a garment factory, and you're like, "Wow, this new electricity thing. I bet it's really cool. We could use it, hook it up. We don't have to have windmills powering our whatever."
I don't even know what the term is.
But in order to generate your own electricity, I bet you had to hire a bunch of mechanics, a bunch of people who understood how generators work, and a bunch of people to build...
But over time, those things just become commoditized.
So you're just buying a power line from the power company, you just plug it in.
And all these engines are built in China and you just buy them on Alibaba and they get shipped to you.
And those things are now just things we take for granted, but all these weird professions that have to deal with every factory building their own power supplies went away.
And I think there's always this thing where new technologies come.
First everyone has to hire all these specialists that do it themselves.
But over time those things tend to factor out into platforms and separate companies, just like a power company.
And then you just buy it from there.
And then people can go back to what they're good at, which is like, I don't know, managing factories, or designing clothes, or whatever it is.
And then do that. And I don't know if that was the best analogy, but like that's a little bit how I think about it in the historical context.
Stefania: Yeah. I like that analogy. I talk about it sometimes, literally, with Avo, when I'm explaining it to a person that's not very much in the data space, or even just the software space.
It's like, we're building tools for developers so that...
Imagine if you're a chocolate maker, and the first thing you would have to do is build the whole machine to mix chocolate before you can even start thinking about the chocolate.
Erik: Totally.
Stefania: But you made a case for specialization, and having everyone on the team be able to do basically anything that needs to be done on the data team, sort of.
Obviously, I think I agree with you, there is some level of specialization that you want to have on the team.
You want to allow people to grow in the trajectory that they want to grow.
But would you literally recommend against, for example, teams having special roles, data teams having special roles?
Erik: All of what I'm saying is like somewhat aspirational.
What I'm saying is like, I would love a future where there's less specialization.
I think people should always strive for less specialization.
When I started building up my tech team at Better, I didn't hire frontend and backend people, I only hired full-stack people.
Stefania: Exactly.
Erik: And then over time we're like, "Oh yeah, but someone really needs to think about CSS and whatever."
Then we started hiring frontend and backend separately. And I think that goes for everything.
And I think what a lot of this comes down to is the latency of human communication, and the coordination costs of assigning different people tasks, and bouncing it back and forth.
So the specialization argument also applies within teams, but also across teams.
I think, everything else equal, it's much better to have full-stack people that can do everything themselves.
But let's say you don't have that. And let's say you have a bunch of specialized people doing a lot of different things.
Then still, everything else equal, it's better to have them on the same team.
Because then at least, roughly, the resource management is still easier.
Coordinating things between two teams, the latency is like weeks.
Coordinating things between people within the same team, the latency is like, I don't know, a day at most, right?
And when the same person is doing the thing themselves, the latency is like minutes, or seconds, or whatever. It's just...
And I think when you're building a startup or any company like that, for that matter, these coordination costs, they add up, and they cause all these conflicts, and make it harder to get things done.
And then you add all these project managers on top of that.
And then they end up needing coordination.
And so to me it comes down to the ability to get things done in an autonomous way that doesn't require all these coordination costs.
And then to some extent also flexibility.
You don't know from day-to-day what you need, and so having people who can do a little bit of everything makes sense.
Stefania: Exactly.
Erik: I guess the third thing would be that, to some extent, at many companies you don't want people who are driven by a particular tool; you want people who are driven by outcomes.
And I think that goes a little bit hand in hand with hiring generalists.
But I'm not an absolutist here. I'm making more of a relativistic argument.
Everything else equal, less specialization is good. That's my argument.
Stefania: Exactly. Yeah.
I remember you explicitly added that disclaimer in the blog post after you got some backlash to the proclamation. Which is funny.
Erik: Yeah.
Stefania: But yeah, I mean I totally agree with you. And just as a data point in this conversation, I recently had a conversation with a friend about recruiting on an engineering team.
And the premise of the conversation was also a little bit around how companies can structure their recruiting and interview processes to find the right people for the team.
Because we don't want to exclude people that don't yet have the experience that prove that they are right for the role.
First of all, because it doesn't scale. There's such immense growth in all of the different roles in software.
So you really need to be hiring young people coming out of college, or whatever.
And obviously they won't have the experience that you need for the team.
And then the other thing is, it won't ever allow people to break into the specialization.
So I totally agree with the mindset that it works better, particularly for early stage teams if they try to hire a generalist.
I mean there might be a time in your life as a development team, or a software organization that you need a really specialized infrastructure engineer to solve a specific problem, and like-
Erik: For sure.
Stefania: But that is far from the first people on your team, for example.
Erik: Yeah. Yeah, absolutely. And it goes beyond team building too.
I think a lot about it now that I'm thinking about the tools that I can build for other companies to buy.
My goal to a large extent is also to build the tools that enable less specialization.
Like, can I take the things that currently require specialization and do them, so that people don't have to think about it?
Because all those reasons why people specialize are, to me, kind of bad and should be avoided.
And if I can give the tools so people don't have to think about infrastructure, don't have to think about all these things.
I think they can go back and focus on business logic 100% of their time. Aspirationally, I think that would be good.
The somewhat naive vision that I have in my head is that that's where we should always try to get to.
If people are spending a big proportion of their time not doing business logic, that's a bad thing.
Stefania: Yeah. Exactly. It brings me to wanting to ask you about any hot takes about titles in the data space.
We've already aligned in this conversation that we're not going to call them data scientists from here on.
We're calling them data persons, at least in this conversation.
But I mean, okay, you hire data engineers.
I find it interesting to hear your take on when people advertise the role data scientist, how helpful is that?
Erik: I mean I actually don't really think it's that important.
I think long term, as roles hopefully consolidate and de-specialize, if we end up with fewer titles overall, that would be a good thing.
I don't care which title that role ends up with in the end.
At Better we ended up with both data scientists, and data engineers and that was fine.
And I think that's fine. I don't know.
I really don't have a strong point of view of which title is the best one in the long run.
Stefania: The one thing is, I think we might be approaching it soon, but for a really long time we were not there.
When people were advertising for the role data scientists, I think a lot of the time you would get someone that really did want to just do research, and work on their machine learning or something like that.
And so I think that's one of the downsides of like... But we might actually be approaching a reality in the world where when you see the role data scientist in a software company, then you can understand what that means.
Five years ago, I don't think that was the case.
Erik: I agree with that. Five years ago if you put up a role description for data scientists, you would get about 1,000 applicants who all they wanted to do was to train machine learning models.
Stefania: Exactly.
Erik: And so I agree with that. And so that's, again, why you should call it data engineer.
But I think you're right. If you put up data scientists now, there's better expectation of the fact that you might do some machine learning, but there's also many other things you will do.
Stefania: Yeah. So expectations versus reality is closer today for the role data scientist than it was five years ago, maybe.
Erik: Yeah. I think and hope so.
Stefania: I want to maybe talk a little bit about like...
Because we're on The Right Track, and it is an all-encompassing data culture building, knowledge sharing podcast.
So I want to go into practical things about... Because you already mentioned one of the things that you sometimes wish you had done even better, and that you're recommending that teams do earlier: literally talking about analytics logging, and releasing analytics with feature releases.
I think that's a thing that's like a bastard.
It's a bastard baby where the data person is responsible for making sure there is analytics in place for something, but they have absolutely no control over whether the analytics was actually in play when something was released.
And then the software engineers, and I'm talking very generally here, at an early maturity stage of a company.
Software engineers might not have interest in data, or they see it as something that's not relevant for the end user, or something like that.
And then you reach a different maturity stage where it becomes a little bit more intertwined.
Can you talk about analytics for product releases, and what that process looked like, for example, for Better, when you already had the Spotify experience?
Who was involved in planning analytics, implementing it, QA-ing it, analyzing it, prioritizing feature work based on data?
And I'll set one more stage for the question.
The reason why I think this is particularly interesting for product analytics is for many data sets, they don't change much over time.
But product analytics data sets are ever-changing, because you're always changing your product.
So can you shed some light on that?
Erik: Yeah. I guess I don't have too much to say other than the observation that here's really where the embedding model makes the most sense.
If you have a fully centralized data team, you're always going to have this misalignment where some software engineers are releasing whatever they want to, and then it's like this centralized data team's job to make sense out of it.
And they might not have anything to go by, because there's an open loop.
There's no closed loop. The software engineers have... It's a principal-agent problem, or something like that.
I don't know. The incentives are misaligned.
So that's where I think, if you have an embedded model instead... I don't necessarily think you solve 100% of it.
But I think at least it becomes the product manager's job: the onus is on them to show the impact and the metrics of whatever they're building.
And so then they're going to make the software engineers talk to the data person who's working with the product manager, to make sure that whatever they're building is something they'll be able to show had some measurable impact on a metric later.
So I think that works, and it's probably as good as it gets.
That's the thing I've seen in reality that can work.
Stefania: Yeah. I could not agree more with that. It's about bringing those stakeholders together.
In practical terms, what did that process look like for Better or for Spotify?
Erik: I mean, I think it's more like just making the own...
If you align incentives, things usually fall out.
So let's say you have an embedded team, where data engineers are embedded, and they work with feature teams.
Then let's say you have a data driven culture where it's the job of product managers to show the impact they're making through data.
That's all you need. Because now the product managers know what they're going to be measured by.
They know people are going to ask about the data, so they're going to go talk to the data engineer, or the data analyst, or the data scientist, whatever.
And the software engineer is going to go talk to them too. And so I think those are the tenets.
That's it. I think that's enough. Those should be the only two things you need.
Stefania: You mean set that stage, and then people will figure out the process.
Erik: Yeah. I mean I tend to think that incentives matter, and if you solve the incentives, you automatically solve a lot of other downstream things.
Stefania: That's a beautiful way of seeing it.
I wish the entire world agreed with that, or saw it that way.
Erik: Yeah. I mean it's the same thing. I always think about this as: people problems are the root cause of almost everything.
Almost every dysfunction is some function of misaligned incentives, or information asymmetry, or whatever it is.
And if you go beyond the obvious things, you often find some sort of people problem like that behind the scenes, and that's actually the real thing you should fix.
Stefania: Yes.
Erik: Which is very hard. Yeah. And then the organization, and accountability mechanisms, and all kinds of stuff.
Stefania: Culture.
Erik: Yeah. Culture is a big one.
Stefania: Yeah. I guess we've already touched a little bit on this, but can you list out some of the things that you wish existed, or that you wish you had back at Better, for example?
Erik: Yeah. I mean it's still crazy to me how much time we spend, like any engineer spends on infrastructure.
And I think we're still so early in that journey of getting way out of the business of thinking about resource management, and provisioning, and configuration, and all that stuff.
Thinking about 100% business logic. And I know that's abstract.
But the amount of time now... I mean Kubernetes is probably a little bit better than what we had before, but I look at it and it's still complicated.
People spend a lot of time debugging weird things, and it's like, that's hard. I don't know.
I think on the data side we've come a long way.
There's like Snowflake, there's dbt, there's all these ETL tools.
And we're starting to have more real-time stuff like Materialize, which is pretty cool.
But I still feel like we're early, and there's still so much time spent duct-taping together a lot of different tools.
Thinking about deployments, thinking about productionizing things, thinking about how do we get these things working together?
Every single data team that I talk to at different companies is building the same internal platforms for self-service, for metrics, for dashboarding.
Some of them are starting to factor out into startups and businesses offering these as a service.
But to the extent that they are factored out, they tend to be what I think of as widget-type companies, where over time the burden becomes, "Okay. Now we need to integrate these 35 vendors, and that's a lot of work."
And so, I don't know. I wish for a world where there's a smaller set of vendors that offer more holistic things that snap together like beautiful building blocks.
And people don't have to think about the things that they shouldn't think about, and they can focus on what is it that the business needs, and then just use platforms for everything else.
Stefania: Yeah. That's a good point.
I guess it's like we're in the fragmentation stage right now, but we might be approaching a stage where things start to get consolidated again.
Erik: I think so. And I think fragmentation, to me, is often a sign of massive demand but poor supply, in a way.
It means no one has really cracked this yet, the, "How do we actually build something that scales well, and has good economics?"
But on the other hand, it means there's a lot of demand.
There's a lot of people paying for a lot of these small vendors.
And that to me is a very exciting market to be in, now that I'm thinking about being in that market as a tools provider.
I think there's a huge opportunity, I think, clearly with the amount of demand for these tools.
Stefania: Exactly. Yeah. Ultimately what we're trying to do is like, okay, can we consolidate the role of a data scientist so that they don't have to be super specialists, and don't have to build all of the infrastructure as well?
Erik: That's right. Yeah.
Stefania: And it reminds me, I have to plug this in here.
Our last guest was actually Josh Wills, who I know for a fact you once said was "kind of interesting." Words I find hilarious.
Erik: I was referring to one of his tweets.
And then he put it on his Twitter profile, as if I was making a statement about him. So that's the back story.
Stefania: That's a very good additional backstory.
But I mean he had this definition of a data scientist as a person who is better at statistics than any software engineer, and better at software engineering than any statistician.
Does that make sense to you?
Erik: Yeah. That seems good. It's sort of a Pareto frontier of those two axes.
If you're slightly better in some way, you're on the efficient frontier.
Stefania: Exactly. And I mean you mentioned that your background is actually in physics, it's not in computer science and math.
And that's the same background as I have. Like I have mathematics-
Erik: Nice.
Stefania: ... and a philosophy background actually as well. So maybe that's a good combination.
Erik: I do not have that, so I have nothing to say about philosophy.
Stefania: Let's pivot this show.
Erik: Yeah.
Stefania: But maybe to start wrapping things up, I'd like to talk a little bit about people's misconception about data, and what you wish more people knew about data, and maybe data and product.
Specifically focusing on data and product analytics.
And then some things that people can do right.
So why don't we start with, what are people's biggest misconceptions about how data and product analytics works?
Erik: I mean one misconception, we sort of touched on this, and I think this is pretty well understood at startups, is the extent to which simple things are usually far more impactful than more complicated things.
And I think the average startup understands this really well.
But I don't think that's necessarily clear in the broader world.
Like if you go talk to the CIO or CTO of a major financial institution, I think there's often a desire to feel like they've done everything they could, and now it's time to get the AI to sprinkle the magic dust on top of it.
But the truth is, they probably haven't at all done everything they could.
They have no idea, where are people dropping off in their conversion funnel?
And where are people struggling on their website? People can't log in, whatever.
And so to me, one of the misconceptions in the broad world is, I think about how my parents think about us data people.
I think they think of us as these people devising new mathematical relationships to model super-advanced whatever, using fluid mechanics to understand things.
But in reality, I'm like, "No. I'm doing a scatter plot of this dimension versus this dimension. And look at it, it's a little weird, so that's not good. Let's show this to someone."
And that has such incredible value.
Again, I think this is well understood at startups these days, but maybe less so among the general population, the extent to which data work is really about finding crude things.
Stefania: Exactly. Wow. I love that. First of all, just your framing of it.
Simple things are usually more impactful than complicated things.
And obviously it reminds me of the meme that's going on everywhere for all things right now.
Which is that bell curve of intelligence meme, where the person on the low end does something simple, the person in the middle does something super, super complicated, and then the very intelligent person does the same simple thing as the first person.
Erik: Yeah. I think the data equivalent of that one would be the two ends of the spectrum saying, "Look at a scatter plot. I don't know."
Stefania: Exactly.
Erik: Like let's look at the chart. I don't know.
And then the middle one would cry out, and talk about p-values and whatever, random forests. I don't know.
Stefania: Exactly. I could not agree more with that.
To wrap things up, some advice for the people who are listening, what is the first thing teams should do to get their analytics right?
Erik: I think they should think about the org structure, and the business goals first of all, and then solve backwards from that.
What is your purpose? What are you getting hired for? How do you generate the most value?
I think aspirationally everyone should ask that every morning. Just come into work and say, "How can I increase the value of this business as much as possible?"
And that's a very hard question to ask yourself, and not realistic at all.
But it's a theoretical model that I like: everyone just thinking about, how can I act as a shareholder?
How can I do what's right for the business?
And I think if you solve for that backwards, you can usually think through, "Okay. What are the problems we should prioritize? Who are the stakeholders we should talk to? What are things that are clearly suboptimal right now that we should focus on?"
And then you can ask the question, "Okay. What data do we need for that? And how do we gather that? And then where do we put it? And how do we make it easier to query it?" And then you're done.
Stefania: I could not agree more.
And case in point, we literally developed a meeting at QuizUp called the purpose meeting. You're talking about the general purpose of the organization and your role, but ultimately we wanted to apply the same thing to every single feature release.
And literally having a sit-down like that, talking about the purpose of a release, mainly for the goal of aligning the stakeholders.
Like getting the product manager, the developer, and the data scientist to think about, what data do we actually need?
And thinking about it top-down like that often actually sparked a change in the roadmap.
Erik: Yeah.
Stefania: Just doing that train of thought in advance. So I couldn't agree more.
Erik: Yeah. It's super important.
Stefania: Yeah. Those are really good words to end this podcast episode with: simple things are usually more impactful than complicated things.
Erik: Yeah.
Stefania: And think about your purpose.
Erik: Yeah. I think that's what it's going to say on my grave one day.
Stefania: I love it. Awesome. Well, Erik, I want to thank you so much for taking the time.
Erik: Yeah. It was fun. I enjoyed it. It was a lot of fun.
Stefania: It was very enjoyable. Look forward to part two, when we know a little bit more about Burnco.
Erik: Sure.
Stefania: Very exciting. Very exciting. Yeah.
Erik: Sounds good. The Uber for dog walking.
Stefania: Exactly. Thank you so much for joining us on The Right Track, Erik.
Erik: Thanks. It was fun to be here.