Ep. #2, A Standard Metric Tomato with Dr. Aleks Krotoski of BBC Radio 4
In episode 2 of Unintended Consequences, Yoz and Kim continue their conversation with Dr. Aleks Krotoski of BBC Radio 4. They discuss the nature of ethical guidelines, accountability in the modern world, and the standard metric tomato.
Dr. Aleks Krotoski is an award-winning journalist, broadcaster, social psychologist, and roller coaster enthusiast. She has a PhD in the social psychology of relationships in online communities and has worked extensively with BBC Radio 4 and The Guardian.
Transcript
Kim Harrison: This is a great kind of segue to like the ethics of the thing.
I mean, some people are thinking about the ethics of it.
Some people are not, not because they don't care but because like you said it's usually just people who have to get a thing out because there's a meeting that's going to happen in a day or an hour.
There are unintended consequences.
You build a thing and don't realize, holy crap, there's something I should have considered.
How do you think about the ethics of the thing without slowing yourself down?
Or do you need to accept that you have to slow yourself down and be considerate of that because these systems are now so fricking huge?
Dr. Aleks Krotoski: Well--
It goes back to being humble and even recognizing that you have to think about the ethics. And I think that's so important. And I think also being really explicit about how you conceptualize humans and human behavior.
Really, really explicit. I think human beings do this.
This is why I put this in like this, discuss. If it works for you, great.
If it doesn't work for you, let's have a conversation.
I am not going to answer every single person's question.
I'm not going to solve for every single person's problem. This is how I think of people.
I think that's, first of all, the most important thing.
I do still think that there's a responsibility on people to be a little bit critical of what it is that they're using rather than just thinking, ooh, here's a thing I'm just going to eat it.
Digital citizenship and all of that media literacy, digital literacy is a really important thing.
But I think that, first of all, being humble, secondly, being really explicit in your sort of reflectivity moment, in your reflectivity problem is a good way of being.
And define, recognize when things are getting so big that you need to start being more humble.
So, the classic example, do no evil, but what does evil actually mean in this context?
And how do you define evil and how do you define what you're doing?
And now that you've gone beyond the moment when you can control for that because so many people are using your technology or whatever, how can you be less evil, maybe less evil and be very clear about the fact that you're being less evil.
And I think that those are some of the ways of being more ethical.
It also depends, Kim, I think, on what you're thinking of, what you mean when you talk about ethics.
Kim: I mean, I know that governments are now thinking about this and getting involved.
They're saying this is a standard set of rules and behaviors to be compliant within.
And we expect you to uphold that.
Don't have a data leak and share everybody's credit card information.
Dr. Aleks: Yeah, but I mean, that's hard.
That's really hard because you're always going to get actors in the outside who are going to try and get into a closed system, aren't you?
Yoz Grahame: We now have a situation.
And I think that this is something that the pandemic has done where governments are not only being asked this more specifically.
This was like, I think something in the UK about a new set of guidelines around children and privacy.
And how children's data is handled online.
And especially because now children's lives are happening almost entirely through the computer in terms of education.
And so much socialization is happening through the computer because so little of in-person interaction is happening right now.
This is when suddenly all these tools pop up like Proctorio and other things which are basically saying, all kids are going to cheat the whole time.
So we need to be watching them and analyzing their every movement and then reporting on anything that looks like cheating.
And so knocking a huge percentage of them out of school or out of their degrees because of something that our AI thinks is cheating or suspicious.
But is there a way, is there a framework for making these kinds of ethical decisions or guidelines in a hurry?
Because that's what we've had to do this year.
Dr. Aleks: No.
Yoz: Right.
Dr. Aleks: Because ultimately what we're trying to do is we're trying to create solutions for questions that we don't actually know the answers to, aren't we, before we even have those questions.
So there are a lot of ethical guidelines already out there.
I've helped to contribute to many of them. Most of them are within the academe, within the ivory tower.
Those are good places to start. Because often they're guidelines rather than regulations.
They pose questions more often than they actually provide the answers.
Is this really what you want to do?
Recognizing that tools are not necessarily the best solution, right? That's a big thing.
Tools are not necessarily the best solution for human problems, and often one of the reasons why people get really pissed off with technology is because they forget that what the technology is doing is connecting humans to humans. And the humans are very happy to scapegoat the technology when, in fact, they don't really want to have the conversation amongst themselves.
So, I mean, that's something I don't talk about because we want to avert any kind of discomfort, don't we as human beings?
Yoz: So, I realize that in posing that question,
I am doing exactly the kind of thoughtless, short-termist thinking that is the problem here.
Dr. Aleks: Well, yeah, exactly. This is a short-term issue, we hope.
What is good is when things come along that break our boundaries.
So that we can then have those conversations as long as people do not come to harm.
What people are afraid of is that the Overton window pushes the acceptability of, for example, an AI making a decision about whether something is plagiarism, or a technology just simply constantly tracking you.
Think about, would you want that to happen to your mom?
Or would you want that to happen to your kids?
Would you want that to happen to you?
How would you feel about that?
Then of course we have other issues of psychology which is like, oh, well, I'd be fine with it.
I've got nothing to hide, whatever.
Then we get into politics and then you get into a whole different thing which is why coming up with an ethical guideline is very, very difficult to do because there are many, many different opinions in this world.
So how to come up with an ethical approach? Perhaps it's not about solutions-based thinking.
Kim: Being flexible is key and understanding that we all come from different reference points.
We all have different perspectives and being open to that.
Dr. Aleks: That's right, get rid of the hubris.
Stop thinking that you have the solution, that you're able to do it.
Don't think that. You've looked at what?
Five academic articles and you've played around with a couple of online quizzes and therefore you know how human beings are.
I know that's incredibly rude and I'm going to get completely flamed for that.
And I recognize that that's not how these things work but you know what I mean?
Like it's magic, it's magic what's happening.
And we got to figure out what's inside that black box.
Otherwise we will end up with mad ethical breaches.
Yoz: Right, and I found myself questioning something you were saying earlier, about how humans aren't that simple, that we don't have these very simple switches that you can hit, which is mostly true.
Dr. Aleks: Exactly, it's mostly true. But then there are the times when it isn't.
Yoz: Right, I'm thinking of, like, Bay Area companies, Zynga, or King, which is a British company.
Which have made literally billions off the predictability of humans when it comes to game stimuli and gambling stimuli.
And I think they've even had huge research departments internally looking at exactly that kind of thing.
Dr. Aleks: Absolutely, so it depends, right?
It totally depends on the situation and whether you want to, well, it's not evil.
It's just, what your desire is.
Human beings are incredibly messy when you are trying to create solutions that are trying to replicate or examine or operate on the back of something that is messy and human.
But then we also have our kind of twitch mechanisms.
Malcolm Gladwell writes a lot about these things.
I have a problem with Malcolm Gladwell.
He likes to reduce the human, and he likes to reduce my field of research, to something very simple.
Yoz: And here's a load of stories. And then something to think about.
Dr. Aleks: Exactly, super, super, super well-written.
I mean truly what a fantastic writer.
But again he's telling stories, he's telling a particular story and he's pulling in the things that he wants to pull in, so anyway.
Yoz: So, one of the questions I want to ask for our podcast: we're coming to you asking for advice for how we go and talk to these people, especially since we'll be talking to a lot of entrepreneurs.
But hopefully also people who aren't mainly in it for the money, or aren't currently trying to build a business around this kind of thing.
When we are talking to people with a vested interest, what are the best approaches for talking to them about it?
When you're doing this kind of research.
This is both for finding out the truth of what they think and for getting them to think in a more human way.
Kim: Part of why we want to talk to you is we exist in a bubble.
We live in Silicon Valley. And so we want your outsider's perspective to hold us accountable, to keep us honest and say like don't forget.
There are other people that see this in a very different way than you do 'cause you're so entrenched in it all the time.
Dr. Aleks: Oh man, seriously.
Funnily enough, going back to Second Life, I remember having the opportunity to join the company full-time.
And I had been in on a consultancy basis for the work that I was doing.
But at the same time, I've been a journalist.
At that time I was a tech journalist and I'm still a tech journalist but I was more of a, sort of a systems and products tech journalist.
And I was told that I would have to stop doing that.
And I remember thinking I can't, because that's the only thing that's giving me the outside world.
'Cause I know what would happen is that if I stopped doing the journalism and being required to have a really broad view of what was going on just for my job, I just wouldn't, it's so easy to just not do that.
So what are the ways to get people to think about these things?
Yoz: Before getting into that, before we've even gotten them to think, what are the best ways to approach these conversations?
What tones do you use, what kinds of questions do you ask to just get them to even be honest, let alone get them to actually think?
Dr. Aleks: OK, so if I was asking somebody about serendipity, for example, right?
We'll just go back to that.
And actually I ended up doing-- The serendipity engine ended up being supported by Google.
I presented it at their Big Tent event and at Zeitgeist as well.
So they were very, very supportive of the research that I was doing.
That was kind of contradicting what it was that they were doing.
Or not contradicting, but sort of inquiring about. I think the first thing is to find out what it is.
Get the pitch, find out what the wow is that's going to get people to put money into it which usually has a, we are solving for human problem X by doing Y.
Find out what they define human problem X as, what is human problem X?
What is serendipity? What is it to you? What do you think it is?
OK, and you know, what have you found about it? OK, that's interesting.
And who says that and why do you think it is this?
Why do you think serendipity is something that you can program?
When in fact it's also to do with somebody's background and their ability to interpret the accident that's fallen in front of them as happy or sad.
How are you going to make sure that you capture that moment and create that emotional response that is going to get these people ready and willing, your consumers ready and willing, to respond to your solution to human problem X, blah, blah, blah.
Who is it that you think you're actually talking to? Not just like, this is my possible audience.
Like tell me, define for me, draw me a picture of, tell me a story about this person and who they are.
What happens when your aunt who's a technophobe picks up this thing because suddenly, that's the only way that they can communicate with you?
How is she going to navigate that? Yeah, but she can't see, she's blind.
Oh, OK, well she's also, she's hard of hearing and so how do we make sure that--
And just sort of start to pose different, start to be that really irritating person that everybody hates at a party which is why I don't go to many parties.
And the only reason my partner is still talking to me is because we are married and we eat dinner together.
And sometimes we talk about things that aren't this.
And the reason why I still work, I still make the radio programs that I make is because we don't script all the time.
And I don't keep asking them, why did you put this in here?
And what exactly are you trying to say at this moment in time?
But those are really fun questions because the thing is, the people that you're going to be speaking with will have done a lot of work on this. They will have done a lot of work to come up with a solution. They don't like to be questioned about it because it's hard.
It's achy to have somebody questioning you about these things.
But if you ask them in a way that shows actually you're really bloody interested.
You're like, no, I want to know about all this.
I'm not trying to catch you out, I'm just trying to find out a little bit more about why you're talking about this and why you think this is a problem.
And how you think the way that you've come up with solving this problem is going to be the best way to solve it for this person.
Well, what about for this person? That's what I love about talking with people.
People may not like talking with me but I'll tell you what, almost after every interview that I do, truly, and I'm so proud of this, almost after every interview that I do I have people coming back to me.
It's happened enough times that I know they're not blowing smoke up my ass.
They say, I've never thought about this in this way. I've had such a lovely conversation.
And all I'm doing is just asking them questions because I'm interested.
And I think that's the best way to approach it: I'm not trying to catch you out.
I'm genuinely truly interested.
Yoz: That's great.
Dr. Aleks: It doesn't work when you're in a big party.
It works when you're sitting down with somebody one-to-one.
It works when you are at a dinner party and the two of you were sitting next to one another and you can just get lost in a conversation.
It doesn't work in public, it works in private.
Yoz: I'm thinking of like, I have a terrible memory.
And yet one of the earliest party conversations I can truly like seriously remember because it shook me and made me reevaluate things.
In my early twenties, somebody asked, well I'd taken it for granted in my life that I didn't seem to like travel very much.
And somebody was almost offended and she was continually quizzing me about it.
But it made me look at it in a way that goes, oh, well, hang on.
And open up things about myself, it's amazing.
But especially because it's something that clearly, as an engineering exercise, you have to do, because you have to take a human thing and then find a way to represent it in ones and zeros.
You have to break it down.
The trouble that we see is that quite often they do this initial breakdown and then they don't question it again.
Or they only question it when it comes to what extra database columns do we need?
Dr. Aleks: Yes, absolutely.
And the problem is, is that like when you start to ask these questions, it's like an onion and you start to peel it back and you peel it back.
Or it's like a giant Lotus flower, whatever it is.
It just starts to kind of tree diagram and expand in all kinds of different directions.
And at some point you do have to set boundaries.
And I think that the way to do this is to say I know you've had to set boundaries at some point.
Why did you set them there? I really want to know why you set them there.
Why does this feel good to you? Why don't you like travel? What is it about travel?
Is it because you're on an airplane? Is it because you've got jet lag?
Is it because you don't have the food that you like? Is it because you don't have your bed?
Do you really like your pillow? Oh, I love my pillow, I love my pillow so much.
I know people who actually take their pillows with them traveling.
And you know, just those types of ways of just like really trying to understand what's driving somebody.
Somebody built this. They've spent hours on it, usually not at their day job.
If it is at their day job, it started out as a nugget of something else. They've thrown their heart into it.
Wow, it's amazing what you've done. It's amazing what you've done.
How did you get there? What happened when you hit your crisis of confidence?
When you lost the moment, when you had a moment where you're like what I'm doing is bullshit.
What brought you back? What kept you going?
Because those are the human things that you can get at.
And by asking them about those things, then you leave it up to them to make the decision about whether they want to change it within their system or not.
Yoz: And that ties into as well like what are the driving forces?
What are your markers for success here?
And specifically, what are the kinds of success or failure that aren't to do with money, right?
This is like values, what are your values?
Dr. Aleks: Yeah, totally, but that's what you get through talking with people.
When you really hear them, when you really just stop and sort of shine a light on what it is that is magical to them about what it is that they're doing.
And then that opens up the most fantastic conversations.
Because then you can start talking about values in a way that's not direct what are your values.
Because if you say, what are your values to somebody then they're just going to trot out whatever's in the press release.
I value human beings and I value this. No, it's through example. Oh, isn't that nice?
What does that mean, what is that? I just came off a conversation about monsters.
I'm making a radio program. This is something I've wanted to do for a really long time.
And we finally found a way in, in the series that I do.
The idea behind it is what we talk about, what we really talk about when we're talking about monsters.
We're not talking about a werewolf or a vampire or a zombie.
We're talking about fears and anxieties.
We're talking about things that are within us that we create a metaphor of that other people understand.
And if you can get to that and then you can start breaking down what that metaphor means.
Because these beautiful creations that people are making, that we're all using and thinking are magic, they're metaphors for what people think that human beings need.
Kim: Now I'm feeling very optimistic again.
Yoz: I'm feeling very small.
Dr. Aleks: I'm happy you feel that.
Kim: I can't help it, it's in my nature to be a bit of an optimist.
I feel like it's so easy to look back and say, wow, they made a big mistake, they should have known.
And sometimes, yeah, we should have, but sometimes we didn't and sometimes it truly was a passion piece and somebody felt like they had a wonderful thing to share.
And it's a tool that's helpful.
Or I don't know an app like Instagram that's just a fun way to share photos with your friends.
And yeah, it could go sideways, but I don't know for me I feel like most of the time people have good intentions.
Dr. Aleks: I like to think that.
I also think though that there are opportunities that we have to make decisions in our lives. And sometimes we go so far down one that we realize that there's no way of going back.
I mean, that's what midlife crises are all about, aren't they?
Oh my God, I've gone this far and now I can never go back and become an astronaut or whatever it is.
And I think, there are inflection points in our lives where we make decisions about what it is that we want to do about projects that we want to take on.
And sometimes our intentions are unethical and we can correct for them if they're caught early, if we wish to.
Or we could just keep going down that path and turn into a total annihilating, horrifying monster.
But then that's the decision that we make. So I don't know.
There's also a piece in there about what are the big decisions that you've made.
Yoz, you shared something with me that I didn't know about you which is that I didn't know you don't like travel.
And that moment, like that conversation, ironically given where you are now, but that conversation meant something.
Why did that mean something to you?
I want to go down that rabbit hole and explore that to find out even more about Yoz.
Yoz: The thing is, I like travel now more than I did, but this is the thing that it ultimately opened up, so I'll just tell you.
Because you'll see how it opened up a whole bunch of other stuff in my life.
Realizing that I didn't like travel because I have a bad memory for experience.
And so what happened was that I would come away from travel just mainly remembering all the hassle of travel, all the pain of getting packed up.
And not remembering enough of the good times, enough unique experiences that make travel worthwhile.
And so focusing more on those experiences and doing it in a way where I came away with not just memories but artifacts and photographs and things like that, made it much more enjoyable for me.
Dr. Aleks: What was it about that conversation that kind of stopped you in your tracks?
She kept pestering you, why don't you like travel, you don't like travel?
Was that it? And you were like, shut up already.
Yoz: It was the way that she seemed almost offended by it, which I can understand.
Because I can see how if you really value the rest of the world and the viewpoint that traveling the world gives you and exposure to other cultures gives you.
Then seeing somebody who is willfully refusing those experiences, it feels like an obnoxious kind of obstinacy.
It's like somebody saying, I refuse to be educated. I refuse to open my eyes.
But in doing so the thing is I was trying to answer a question every time. I wasn't just kind of stalling.
And I was going into it and going, OK, well, this is what I don't like.
This is what I don't like, why don't I like these things?
And then she was like, well, but what about these things, getting the positive experiences?
I'm just going, well, apparently I'm not. And that itself is a bigger issue.
Dr. Aleks: Just that moment of self-realization.
That moment of kind of self-reflection, that is the magic moment.
That's the magic moment in a conversation where you're like, oh, I really have not thought about that.
Not I haven't thought about that and I'm going to change my ways.
It's just cool, that's another data point that I could add to who I am.
And I now have a choice about how I progress.
What are those choices? What are those moments?
Yoz: And there's a way in which I think the industry, the tech industry has some of that humility.
I think that engineering, for example, this is what the agile movement is about in engineering.
It's to say, look, you don't necessarily have the answer.
You can't plan ahead for months because things may change.
You may suddenly realize that something is radically different.
One of your assumptions, one of your premises is radically different and you have to change things but it's still doing that in a very technical way.
And being able to do that in a way that is more human.
This comes to something vital in our definition of this podcast.
Which is that we talk about technology a lot and about scaling technology, but ultimately I'm interested in what happens when you scale systems.
As we know, the difference there is that many systems are human systems, the corporation being the most applicable one here. And this was something that we've already talked about in one of the other conversations we've had: how do you grow a company in an ethical way?
There's the saying that capitalism, unfortunately, its philosophy is just growth for the sake of growth, which is the same ethos as the cancer cell, right?
Growth for the sake of growth is not an actual thoughtful philosophy.
It is propelled by outside forces.
It is trying to do some things that you don't necessarily actually believe in unless you really are a megalomaniac.
And so how do we deal with human systems scaling?
Like, one of the main obvious downfalls of the big social networks is their inability to moderate.
Their inability to apply a human view to all of the content that is flowing through them and treat all of it in a human way is a giant scale problem, right?
And you have these people who are dealing with engineers saying, well, here's how we set up servers to deal with 10 million tweets a minute.
How do you set up people to deal with 10 million tweets a minute is suddenly something that they don't want to think about.
Dr. Aleks: How do you protect the people who are dealing with that as well?
Yoz: That's the other side as well.
Dr. Aleks: Yeah, we did a big episode on that for The Digital Human. It was wonderful, it was called "Sin Eaters."
And it was all about sort of imagining the people, the blessed moderators.
Yoz: Did you talk to any of them?
Dr. Aleks: Yeah, we got one. He was very forthcoming as well.
But you know, I mean, part of it is, I mean, you're basically asking me to create a national political system.
New political theory, with Dr. Aleks Krotoski.
I don't know because you know, everything does seem to operate within some kind of ideological framework.
These ideological frameworks have existed for millennia or have emerged and evolved over a very long period of time.
For that, you might wish to talk with a political theorist actually.
It might be really interesting to talk with somebody who has observed, or like watched, the Caucasus become their own entities.
Or watched South Sudan and North Sudan become their own countries.
Speak with somebody in Catalonia about how to ensure a single identity when you have so many different people that you're trying to please, how do you create that identity?
There were a lot of people who were talking about that when the EU began, when the single currency began.
I remember fascinating discussions within my psychology department about how do individual countries still maintain their identities when underneath the auspices or underneath the umbrella of a larger nation.
And as we've seen, that has been more and less successful.
So that might be an interesting person to answer that question.
I don't feel qualified to say, this is the way you scale a company.
I mean, I'd be like, oh, make sure everybody has a voice.
I don't know, I genuinely have no idea.
Yoz: But no, you've actually answered the question because the thing is what we're asking you is not so much what do you do, but how do you ask what you do?
Who do you talk to? What are the kinds of questions we should be asking?
Who are the people we should be talking to?
Dr. Aleks: Yeah, I'd ask somebody who's been in--
Not necessarily somebody who's been involved in growing a company. I'd be interested in, I mean, basically growing a country: how do you do that?
How do you make sure that everybody is bought in?
And how do you make sure that even those people who aren't bought in don't feel like they're being ignored?
Because if they're ignored then you're going to end up dead, literally dead. And like, how do you do that?
How do you support people through that new age of identity?
But yeah, I personally would want to hear from them rather than somebody who's grown their company.
I would be more interested in finding out what they've done.
But then again, remember when I was thinking about how we can get more humanity into digital technology, I was interviewing sound designers and I was interviewing scent makers.
I was interviewing a taste computer, which is a panel of people that exists.
There are three different ones around the world.
There's one in New Jersey, there's one running in the UK. And then there's one in Singapore.
And it's, how do you manufacture a standardized tomato?
How do you create that thing that tastes the same to everybody?
And it's like, it's a fascinating panel.
So that's what I was interested in. That's how I do these things.
So I look at like other people from other environments and ask them how they do that.
Kim: You mean what the tomato should taste like?
Dr. Aleks: 14 people around a table.
But they don't, because that's why they have three different ones: there's one in the US, there's one in the UK, or was it Europe.
And then there's one in Singapore to try and capture what that means.
Yoz: You've completely blown my mind.
Because I was already fascinated by the concept of, you know the standard metric kilogram living in Paris or wherever.
Dr. Aleks: Oh, yeah, I know what they did with it.
Yoz: But there's the standard metric tomato.
Kim: Yeah, I swear this is why diversity is important.
Everybody thinks they have the answer and we all have very different life experiences.
Dr. Aleks: Absolutely, if you asked me how to grow a company I'm going to be like, don't talk to somebody who's grown a company, talk to somebody who's grown a country.
Yeah, that's what I would say.
Yoz: That's amazing. 'Cause I'm thinking about countries like Botswana and Estonia that have gone through total reinvention within a couple of decades.
Dr. Aleks: Yeah, and the president of Estonia is very forthcoming.
He's really nice. He's super, super nice. I think he's from Philly, weirdly.
Yeah, and his mother was Estonian or something.
And he like, he went over and he's like, I'll take the job.
And he like helped to reinvent and helped to get everybody online.
Yeah, it was really cool. I did it for The Virtual Revolution TV series that I did.
Yoz: This has been astonishingly helpful in giving us things to think about for our podcasts and things that we want to ask people and investigate and people we want to talk to.
Dr. Aleks: Oh, I'm so glad.
Kim: It was great to have you here today.
Dr. Aleks: Anytime guys, thank you so much.