Ep. #25, Replacing Yourself featuring Melinda Byerley
In episode 25 of Generationship, Rachel Chalmers speaks with Melinda Byerley, founder and CEO of Fiddlehead, about the transformative role of AI in marketing. From practical applications to ethical considerations, Melinda shares how AI enhances productivity, democratizes analytics, and addresses the challenges of building trust in marketing. Discover actionable insights and the human side of AI innovation in this thought-provoking conversation.
Melinda Byerley is the founder and CEO of Fiddlehead, a leading digital marketing agency specializing in data-driven strategies and AI applications. With over two decades of experience in marketing analytics, AI strategy, and growth hacking, Melinda has worked with iconic brands like Netflix, GitHub, and Dropbox, as well as pioneering initiatives at PayPal and eBay. She is a passionate advocate for ethical AI and shares her insights through her Substack, Let’s Get Real.
Transcript
Rachel Chalmers: Today, I'm so pleased to welcome my friend Melinda Byerley.
She is the founder and CEO of Fiddlehead, a top woman-led digital marketing agency that helps clients like Netflix, Impossible Foods, GitHub, and Dropbox grow their businesses with data-driven strategies and AI applications.
Melinda has over 20 years of experience in marketing analytics, AI strategy, growth hacking, and data ethics. She was a key member of PayPal's early growth team, pioneered Omniture integration in 2003, and drove marketing and product initiatives at tech companies from eBay to Linden Lab.
Her 2022 cancer journey inspired her recent move into marketing AI, and her Let's Get Real Substack, launched in February 2023, has been successful because she draws on hands-on experience and an analytical mindset to pinpoint immediately practical applications of AI for marketing.
Melinda, it's so great to have you on the show.
Melinda Byerley: Always great to see you, Rachel.
Rachel: You have written that you want to replace yourself with AI. That is an unusual desire. Can you expand on it?
Melinda: Well, there's extensive research that shows that the more decisions a company makes, the better the outcome. And that is independent of the quality of the decisions. Because the fact is that the faster you make decisions, the better you get at them.
And when it comes to marketing data, I also like to say that I'm an alchemist. I take all of what I call the lead, the raw sort of, you know, unstructured data that we deal with all the time in marketing analytics, and I turn it into gold. And that gold is what do we do with it? Helping companies figure out what to do with all the data and make sense of it and take action that moves the business.
But I spend the majority of my time actually doing the mining of the lead. And most of us who are good at marketing analytics do the same thing.
We have to extract all of that data, we have to mine it all up, we have to clean it, we have to decide what to use. And it turns out that that is a very slow process.
And so the reason I want to replace myself with AI is I believe that we can both extract more gold from the lead we already have, as well as find new ways of making gold. And we can do that faster.
So it's a very slow process for me and my colleagues to do this. And we want to speed it up. And the other problem I see is that only the most well-capitalized companies can afford people like me. And I'm not saying that to brag. I'm saying there just aren't enough of us.
And I truly believe that every company deserves this. We're all spending money on marketing. We all deserve to know what's successful, and we need to know that in an accurate and a timely fashion.
So if we're able to make AI replace me at scale, I think there's a huge benefit. And in fact, I'd go so far as to say it's a generational opportunity to unlock business value. And that's why I'm passionate about it.
I want to replace myself so that everybody can sort of have access to this stuff. And so that I can spend more of my time working on things like the cultural impact of data and more on what to do with it.
Rachel: I love this ore-refining model for what large language models are capable of doing. Because at heart, they're transformers, they're translators.
What they're really good at is taking unstructured data and finding patterns in it. They're not generating net new insights, but they are really good at taking all of this data exhaust that we've generated over the years and giving us something to work with, that feels like a really good application.
Melinda: I hope so, and we need it.
Rachel: You and I have both been heads of marketing at startups and we've talked a lot about how that position is a revolving door and always the first to be punished when something goes down.
How, if at all, might CMOs harness gen AI to get even just a little bit of job security?
Melinda: Well, no matter what, whether you use AI or not, it comes down to trust. The more that people trust you, the longer you should be theoretically in the role.
And so it starts with not overpromising. And I think AI is ripe for overpromising. And it's understandable. We're very optimistic creatures, we marketers. We really believe we can have an impact on things.
And we want to please, we want to make people happy. And the boards, right now, all they're talking about is AI, AI. And it's very easy to get into this trap of saying, "Yeah, we're going to just do all the things with AI."
And I would go so far as to say that I actually think most companies should not be doing a lot with AI right now. I think it's about doing what you've always done, but finding ways to do it better, to do it faster, maybe a little bit less expensively, because unless you're right at the epicenter of this, unless the company that you work for is right at the center of it, the actual change at the moment is small.
And I think it's about putting the brakes on that, educating exec staff, building trust slowly, piloting it. You know, you're not going to ignore it, you're not going to say it's not a thing, but be thoughtful.
Build a plan, be strategic, figure out what the business problem is, because AI is no different than any other marketing tool. And the fact is that tools don't solve business problems. So what is the business problem?
And if we're not clear on what that is, it doesn't matter what tool we use. And I do think it's important to be honest with our fellow executives and boards before we make those kinds of commitments.
Rachel: What are some anti-patterns that you see? Are you seeing banks and airlines just slapping ChatGPT onto their website as a chat bot?
Melinda: Well, I would never comment on that per se, only because I am really impressed at how many big companies have been working with AI for a long time. It's not new.
For many of us, it sort of burst onto our consciousness in November of 2022, but for many large companies, this has been a multi-year effort. So I think that there are definitely anti-patterns, and there are certainly, you know, those sorts of thin wrappers on GPT being sold or dropped into Chrome or what have you.
I would also say there's been what I call the descent into madness with regards to content. Everybody's producing all the things, and as we'd expect, some of the quality is going down.
There's just this thing about where everybody's presenting themselves as an expert in AI when very few people truly are. And so that's driving me mad.
When I started my own blog, it was really from a learning out loud, I never intended to present myself as an expert. It was like, "Hey, I don't know anything about this and I'm trying to learn."
And it blew my mind when people with even less knowledge than I have were out there saying, "I can teach you how to use AI." I think it's really important to understand that it's still very early days.
I remember when I asked friends of mine about how to approach this (this was in January 2023), I said, "Aren't I late?"
Rachel: Ha!
Melinda: And they laughed at me and said, "Byerley, what were you going to do? You're not a developer. What were you literally going to do before ChatGPT?" Which is an excellent point.
Like, so if you're thinking to yourself, "Oh, I'm behind," and "Oh gosh, I need to go hire so and so to do this for me," I'd say think twice. If you're smart and you've adopted tools before, there's a lot you can do on your own.
Specifically, though, to get tactical, I would say that there's a lot of magical thinking around SEO. People think it's going away. Nonsense, like that is simply not true.
So you still have to build something people like, and you still have to find a way to talk about it. And you still have to find it and help yourself be found in ways that are not necessarily about buying traffic. And search engine optimization, building your email lists, that's just not going away.
Rachel: But Melinda, SEO is really hard and I don't want to put the work in.
Melinda: Yes, indeed, yes, that's exactly right. I mean, like I said, I'd like to replace myself. I wish I could snap my fingers and have most of the work I do go away so I could do only the fun parts, but I also want a million dollars and a pony.
Rachel: I have the pony.
Melinda: You have a pony.
Rachel: You're welcome to come and feed him carrots at any time.
Melinda: Yes. So, you know, and they won't solve the business problem. I keep coming back to what is the business problem.
Rachel: No, the pony actually makes business problems. He creates them.
Melinda: Yes.
Rachel: On that note, what are some of the ways... I know you've become enormously productive partnering with ChatGPT, what are some of your techniques, your tips and tricks?
Melinda: Well, we only have so much time, but I like to say that it has been the most important enhancement to my own personal productivity since remote work.
And I have ADHD. Since you know me well, you know, that my mind is sort of a vast sort of collection of stuff, and it's often, like the marketing data, unstructured.
And the biggest challenge I've often seen in business for myself is to put my thoughts into a format that makes sense to other people, to put them in a logical structure. And that type of work is very cognitively taxing for me.
It's just hard for me, and that doesn't mean that there's anything wrong with the other person or with me, but it's just a fact that it's hard for me to structure my thoughts.
And AI makes it possible for me to organize myself in such a way that I can communicate with other people.
And I liken it to what it must be like when people who have been deaf learn to hear, or anybody who's felt trapped inside their own mind. I have been able to connect with people because I can speak their language. I can just dump all my thoughts out into a pile.
But before I go deeper into that, I also like to mention two things. One is I never ever copy and paste from AI. I think it's morally wrong. I also think it's lazy, and it's very clear if I spend any time looking at the output of an AI that it doesn't reflect exactly what I think.
I think of it as like an ugly first draft, and then I want to spend time shaping it and picking the right word. I find it easier to edit than I do to face a blank page on a lot of things.
The other tip I'll share on this, and I sort of tell everybody when they ask me, I say, "Think about it as a human being, but never forget that it is not a human being."
So Ethan Mollick was talking about this, I think it was on Ezra Klein's podcast, and he was talking about how people who have jobs where they interact with other humans, like teachers, are actually finding much more success with AI than many programmers, because it's not a program, you're not talking to it like a piece of technology.
You are talking to it like a human, but you can't forget that it's not human. So it's not thinking in that way, in the way that we think of thinking. It's not critically analyzing, it's a mathematical construct.
But by talking to it as though it were human, the results are better. I mean, it can do the things we know about: it gets rid of my silly spelling errors and silly proofreading errors.
It helps me unstick, as an executive, as a leader, if I am stuck on something, if something's really bothering me and I can't get past it. I can vent to the AI. I can barf out the profanity-laden email I would prefer to send and have it help me find the Bain-consulting way of saying the same thing.
You know, and this is before we get into like systematizing proposals, it's a thought partner for sticky political situations. It's just an incredible tool. And it doesn't solve business problems, but I think it's an incredible personal productivity solution.
Rachel: I love that notion that the way your brain works is unstructured and what you're using is the transformer in its essential mode as taking unstructured data and making it more structured, giving you a more structured output. It's literally translating from neurodivergent to neurotypical. That's a beautiful model for how to use it.
Melinda: I can get tears in my eyes talking about it. It's been profound.
Rachel: I also love what you said about teachers and educators being very proficient users because they're used to being patient and gradually leading something along. Can you expand on that a little bit? How do you see that as the core skill?
Melinda: Well, if you think about training someone who's new at their job, another metaphor (I'm sure I've heard this elsewhere, I don't think it's mine) is that it's like a fresh college intern.
It's eager to please, it's got a lot of knowledge, but it doesn't have a lot of discipline or structure, it doesn't know all the rules, it's going to make dumb mistakes, and it might even be overconfident, as we all were when we were fresh college interns.
And so there's a process of teaching it and there's a process of... You have to give it more detail so it's not just, you know, give X output Y, it's X within constraints and boundaries.
And I think it was in a very interesting discussion with some other folks recently, we were talking about one of the challenges is many people are not able to stop and think about how they think.
You have to slow down and say, "What does it mean to do this? I need this with these constraints and boundaries." And when you do that, then the AI is able to give you what you need.
And you'll find this out if you just start playing with it because you'll give it a broad thing and it'll give you bad crap. And you go, "Oh, I forgot to ask it that."
And then you refine it and then I'll have it actually give me the prompt. When I'm done sort of like wrangling with it, I'll say, "Now, next time, how would I ask for this," and have it tell me.
And then I will save that so that I have examples of things that work. But it's human language. I've done a small amount of programming, and it's not for-next loops and if-then. It is not like that at all.
Rachel: It's non-deterministic, yeah.
Melinda: Yeah, I think that's the point of why programmers often struggle with it, because they think there's a key that if they can just learn the language and unlock it, then it will just do what you ask it to do, like a robot.
Rachel: Prompt engineering.
Melinda: No, no. It's more human than that without being human.
Rachel: Yeah, that's a very subtle point. My partner is a very good mentor to younger engineers and his go-to question is, what is the real problem that you're trying to solve here?
And you're describing a similar metacognition around how to interact with these things.
Melinda: Yeah, and it's hard won, meaning it's... And it's hard for people to see this. So I often just will sit with my friends and say, "Let me just show you. Let's just sit with me on a Zoom and let me just, you know, log into ChatGPT and show you."
And it doesn't take more than 10 or 15 minutes for most sentient humans to go, "Oh, okay, I get it now." And then go fart around with it.
Go ask it for how to become better at playing bridge or ask it for duck recipes or a training program and just start, you know, 10, 15 minutes a day just getting comfortable with it.
Like any other discipline, most executives have something, they run, they write, they meditate. This is no different.
Rachel: Do you worry about the risks of widespread use of commercial LLMs? These things are black boxes and we know a lot of their inputs were iffy. Are we just baking algorithmic bias into like critical infrastructure systems?
Melinda: Well, I mean, you know, my answer's going to be yes.
I mean, the internet is a cesspool and we don't know what they're trained on and how they're trained. We don't know really what data we're giving them.
I mean, OpenAI's Teams license notwithstanding, there's a part of me that just shivers about it. I would not want to put client data into that. I know people are doing it, but it just makes me nervous.
And also when I look at, you know, kind of the background of some of the people that are leading some of these efforts, there are some I trust more than others. Some have a history of sort of, even if they get it wrong, they're trying to get it right.
And there are some that just don't seem to care what the impact is. That said, at this time, I feel that a lot of the AGI and the larger discussion are outside of my own control.
I am not big enough in Silicon Valley to have, you know, influence over those things. I try to be a responsible user and to tell people how to be responsible users and try to let go of some of the... I don't believe in AGI, I'll just say that straight up.
I don't believe it. I think it's a distraction from what it can do for us right now and what the challenges are right now. I just, I mean, I was promised flying cars. I'm 54 years old, it's not happening in my lifetime, so-
Rachel: Like going back to the moon.
Melinda: Yeah, I mean, that would be great. I'm here for that. I'm here for all space travel, but I just don't think that we're headed towards AGI.
My friend Adam Nash, who's an investor in AI companies, used the phrase humans-first, or partnered-with-humans, when he was talking to me about this, and that's his investment thesis.
And it lines up with my sort of ethics and morals, which is human-assisted. Like how are these tools improving human being's life and improving the world that we live in versus replacing us? I don't understand. When I joke about replacing myself, I don't mean that robots are going to live here in the world and humans have no reason to exist.
Rachel: I mean, the end goal is like four hours work a day, and the rest of our time reading library books or riding our ponies. It's not being batteries for the matrix.
Melinda: Oh God.
Yeah, I mean the finding of purpose is so important in human life, that I do believe in work, in some form of work broadly defined, as necessary to mental health. But again, I believe in sort of the middle ground, and I don't believe in the extremes.
And so it's like finding that nuanced, subtle way. The internet's not very good at nuance or subtlety.
Rachel: And I think our idea of what productive work is will start to change.
I think one of the benefits of this whole movement is that we are starting to appreciate the incredibly important work that early childhood educators and primary school teachers do.
We're starting to notice in our own lives how important senior engineers mentoring junior engineers is. And I think those of us in knowledge work joke about how we spend all of our time in meetings.
But in fact, for you and me, especially, I know that connecting with much younger, especially women, in this industry and sharing our burn scars and our insights is some of the most generative and rewarding work that we do.
Melinda: My biggest worry is actually not for myself. I mean, when I was a kid, my parents' biggest fear was being replaced by a robot. My dad worked on the line at Chrysler.
But now I'm senior enough that I almost think I'm not the problem. I actually worry for our junior teams. How does somebody become a great writer, a truly great writer without an editor who can look at the work and help them see?
How does someone become a truly great coder without that? For that matter, a car mechanic or a hairstylist. These are things that we need to talk to each other about. Oral tradition predates written tradition by many thousands of years and is baked into our cognitive DNA.
So how do we preserve that is, you know, a big part of it. So I worry for the juniors, I really do because it's too easy now to write something... And I don't want to sound like our parents when we got calculators.
So kids, like, just know that Gen X understands, because when we were your age, they brought out calculators, and we were told, "Oh, you can't use calculators in math class because then your brain will atrophy and you won't really know how to do math." And it's like, "Great, so let's use an abacus."
I mean, so I think it's more about finding how to use it, and use it ethically, and not cheating ourselves as we grow and learn so that we develop those cognitive skills. But the temptation has got to be great.
Rachel: I do worry about it, but I also, this feels like the same magnitude of platform shift as the early web.
The early web, let me, a double humanities graduate, find a little niche in technology, and was the shakeup and the opportunity that I needed to find my niche in this industry.
I do think we're already seeing an influx of founders from machine learning and data science, which are slightly less demographically slanted than computer science classes.
I do think there's an opportunity for a lot of humanities graduates to come in and help shape this tide.
Melinda: Oh, that would be so great, because we need people to think of development as a skill, like reading or writing. It's not an end unto itself.
It's something that we use to communicate and build things with. And having subject matter experts building the tools, that would be a nice change.
Rachel: It would, it would. How might we mitigate some of the risks of depending on these mysterious black boxes?
Melinda: It's hard work. I think you have to do the homework to read the terms and services and understand what you're using it for.
And I think this is where running in too fast is just as bad as being too slow. It's typical of technology. We tend to, what is it, overestimate in the short term and underestimate in the long term, and that is never more true than it is here.
I actually like to say that I think we and gen X have a very important role to play right now because we have enough gray hair. There's gray hair under here, apparently. I promise you there's plenty of gray hair under here.
And it doesn't mean we know everything. It's not a license to be in charge. But we have lived through technology revolutions. I think I've counted at least four or five in my own lifetime.
And so we can kind of see not exactly how it will be played out, but there are things that we understand about technology. It's like we're not going to get there all at once, everybody, slow down for a second.
Like, let's take a moment, let's be thoughtful about how we integrate it. So I think it's about being thoughtful and taking our time and thinking through.
The faster we go, the more mistakes we can make. And for some companies, there is no option. If you are at an existential risk, you don't have any choice, you got to go fast. But for most of us, it won't be that fast. And so it gives us time to stop and sort of be thoughtful about who we work with and choose where to spend our money.
Rachel: And what kinds of work we want to do and what kinds of impact that we want to make in the world.
Melinda: Amen. Amen.
Rachel: Melinda, what are some of your favorite sources for learning about AI?
Melinda: Well, there's no substitute for personal experience. I think the moment we start developing beliefs based on things we've read from other people where... And again, I'm going to date myself, but it's like making a copy of a cassette tape. The fidelity is lower.
I don't know if there's a modern equivalent of degradation of signal when we make copies of things. Copies of copies. So there's no substitute for going to the source.
But that said, there are a few that I do rely on. I just love Ethan Mollick. He's a Wharton professor who specializes in the impact of technology on business. He is actively integrating AI into his classrooms and talking about that experience.
He wrote a great book that's just been released. It's very timely, but it's also the kind of book you can hand to anybody. It's approachable. It explains what's going on, and it's small.
Most people, I think, could read this in an evening or two, and I think it's a great place to get started. And I think his approach is both humorous and very down-to-earth pragmatic.
I do stay very in touch with the Marketing AI Institute and MAICON, you know, Paul Roetzer and Cathy McPhillips and the team over there. I think it's amazing. It's a fire hose of information about AI.
I can't follow it all, but I like keeping an eye on it and then picking out a few things that I think are worth digging into.
From the beginning, I have wanted to go back in time and be Emily Bender's student at the University of Washington. When I first read about stochastic parrots and the New York magazine piece on her, I mean, she is the heroine we all need in AI.
If you're out there, Emily, like, and a 54-year-old lady can do some work with you, I would do it in a heartbeat.
I just think she's got a great way of poking holes, saying that the emperor has no clothes, and doing it in an approachable way.
Rachel: We are all Emily Bender fan girls here. Melinda, I'm going to make you god emperor of the galaxy just for funsies.
Melinda: Oh.
Rachel: Everything for the next five years is going to fall out exactly the way that you think it should. What does the future look like?
Melinda: I'd like to see AI baked into what we already do versus standalone. I would like to see that the people who are experts in what they do are taking AI to make it better.
And so my favorite example right now of this is Google in the sense of, I use Okay Google in my house. You could ask me why I don't use Alexa, and that's who gave me the hardware, a friend of mine gave it to me.
And it drives me crazy because it doesn't talk to me like AI. It's just, they're getting there, and they're starting to move towards it. But I still have to say, "Okay Google, what is the temperature outside?"
And instead what I really want to know is a whole bunch of other things. And it should know that and give me a little bit more information. And I shouldn't have to preface everything with "Okay Google," and I want it to sort of be more sort of responsive.
Most other voice tools, I feel, are still very beep-borp right now. I do think Microsoft Copilot is showing promise. It's not there yet, but oh gosh, I can see it, I want it there.
Like I can feel that it's going to be right. Grammarly, like when Grammarly really gets it right with AI, that's going to be amazing.
So that's what I'd prefer to see. And if I could wave my magic wand, it would be that we would have a safe and accurate way to layer an LLM on top of marketing analytics data.
That is my dream, that's what I mean by replacing myself. There's no safe or accurate way to do it right now, no matter what all these startups are saying.
And if I could ask for one thing, it would be that because we could make so many businesses more successful.
Rachel: Finally, my favorite question, if you had a generation ship flying to the stars, many generations of human lives taking place on board, what would you name it?
Melinda: Well, I hope you'll indulge me for a moment because I would name it after my favorite... My favorite poet is Mary Oliver.
And the reason I love Mary Oliver is because she talks about nature and humanity, sort of very real things.
And I would name the ship The Messenger, after Mary Oliver's poem. And may I read it, I hope that that's okay.
Rachel: I was hoping that you would.
Melinda: My work is loving the world.
Here are the sunflowers, there the hummingbird, equal seekers of sweetness.
Here the quickening yeast; there the blue plums.
Here the clam deep in the speckled sand.
Are my boots old? Is my coat torn?
Am I no longer young, and still half-perfect?
Let me keep my mind on what matters, which is my work, which is mostly standing still and learning to be astonished.
The phoebe, the delphinium, the sheep in the pasture, and the pasture,
which is mostly rejoicing since all the ingredients are here,
which is gratitude to be given a mind and a heart and these body-clothes,
a mouth with which to give shouts of joy to the moth and the wren,
to the sleepy dug-up clam, telling them over and over how it is that we live forever.
Rachel: Melinda, the "Messenger," thank you so much.
Melinda: Thank you, Rachel, appreciate it.