Ep. #11, Ghost Workers with Adio Dinika of DAIR Institute
In episode 11 of Generationship, Rachel is joined by Adio Dinika of The DAIR Institute to discuss ghost workers. This talk examines the challenges faced by platform laborers around the world, including unfair compensation, job insecurity, and data rights violations. Additionally, they explore the community-rooted AI research that's being done at The DAIR Institute.
Adio-Adet Dinika is a researcher at DAIR. As a social scientist, Adio delves into the intricacies of digital transformation, platform governance and platform labor, specializing in the often overlooked yet crucial working conditions of data workers involved in AI development. He is currently a doctoral candidate at the Bremen International Graduate School of Social Sciences, where he studies platform labor dynamics in Sub-Saharan Africa.
Transcript
Rachel Chalmers: Today I am thrilled to welcome Adio Dinika to the show. As a social scientist, Adio delves into the intricacies of digital transformation, platform governance and platform labor, specializing in the often overlooked yet crucial working conditions of data workers involved in AI development.
This pursuit leads to his current research as a doctoral candidate at the Bremen International Graduate School of Social Sciences, where he studies platform labor dynamics in Sub-Saharan Africa. He further extends this exploration as a research fellow at the DAIR Institute, bringing to light the working realities of the data workers and annotators powering AI technologies.
Adio recently completed a research fellowship at the Center for Technology, Culture and Society at New York University, probing into the implications of AI on human resources management, particularly through the lenses of diversity, equity, and inclusion. He's an active participant in the Digital Constitutionalism Network, championing digital sovereignty and advocating for rigorous platform regulations to safeguard human rights.
Adio also teaches consumer culture and society at Constructor University. He has a wealth of experience in policy formulation, analysis, and advocacy across Sub-Saharan Africa and Europe, including work with various not-for-profit organizations and consultancies.
His contributions to significant projects include the Zimbabwe National Constitution, the Zimbabwe National Youth Policy, and the Zimbabwe National Tourism Policy. Adio was also a visiting research fellow at the Weizenbaum Institute for the Networked Society in Berlin, Germany in the summer of 2023, where he collaborated with the research group Data, Algorithmic Systems and Ethics.
Adio, welcome to the show. I hope you build in some time for self-care, because that is a punishing schedule you're on.
Adio Dinika: Thank you very much, Rachel. Well, I try, I try.
Rachel: You know, we joke about our punishing schedules, but as citizens of the West, we're pretty privileged. What are some of the challenges faced by the people who perform platform labor around the world?
Adio: Well, I don't even know where to start because it's a lot. But of course, especially when I'm doing a comparative analysis between the situation in the West and the developing world, or the majority world as I prefer to call it, the first issue is the issue of job insecurity.
Rachel: Yes.
Adio: So when you talk about these platform workers, these are people who basically don't have a job contract, or in cases where they have them, like a project we are currently working on with my colleague from DAIR, Milagros Miceli, we actually found out that some of these workers get work contracts of two weeks' length.
And when this contract ends, they have no idea what's going to happen. So they will actually wait for the company to call them back. So now you can imagine living in this kind of situation where if your contract is two weeks long or a month long, you have no idea what happens after this. Then the other thing also is the issue of low wages and benefits.
So I think, I'm not sure if you're familiar with the TIME article which came out saying that data workers in Kenya are earning $2 an hour. So when I traveled to Kenya on this project, I actually found out from some of the workers that... And by find out, I don't mean just their anecdotal evidence, but I mean being shown payslips, which actually show that it's not $2 an hour, it's even less than that.
Rachel: Wow.
Adio: And along with that, again talking of things that I've seen with my own eyes, is the working hours. So you have a person whose contract says they're supposed to work nine hours a day, and in these nine hours they have one hour of break. And by break, I mean this is what you take for your tea break, for your lunch, and if you are to visit the bathroom, all of that should be within that one hour.
Rachel: Wow.
Adio: And then also, so this was from the contract, right? But in reality I had one worker in Kenya showing me evidence that they actually would work 11 hours, and only eight of those 11 hours are compensated. The rest, well, you need the job, right? So they're just told, "Hey, the client needs us to deliver so we have to work today."
And you cannot say no, because remember you don't have the guarantee of a job contract or a long contract. And also because these workers are already operating in what I would call the gray zone in terms of legislation. If you are fired, you really have very little in terms of redress. So you're on your own.
So another thing, of course, which also comes along with this, given what you were saying earlier about our punishing schedules, is the health risks associated with that. So if you are working 11 hours a day, then what does that mean for you as a human being? And some of the content these people are seeing is horrific content.
Rachel: So these are the people who are subcontracting to some of our biggest AI vendors and reviewing the content that goes through those platforms.
Adio: Precisely.
Rachel: They are not directly employed by the western companies. So there's very little oversight of their working conditions from the West. Those conditions are set locally.
Adio: I would say there are two groups of people when we talk of platform workers, of course, maybe even more than two. But when we talk of data workers specifically, because I keep referring back to the DAIR project that we are running, these workers are working for a San Francisco-based company which has an office in Kenya.
So they are recruited by this company in Kenya, but the company is really a US-based company, and this is the company that sets these rules. Then there are also other workers, for example, who are freelancers on different platforms, for example Remotasks, but their working conditions are pretty much the same. Maybe the only difference is that these ones who work for this company actually have some form of job contract, even though, like I already explained, it's just a piece of paper to be honest.
Rachel: Yeah, and for us here in the West, those services are so inexpensive to use and the people working for them are so skilled that it's hard not to use them. And yet we must know at the back of our minds that those workers are not being treated the way workers in the West would be.
What are some of the approaches to mitigate these abuses? How might we here in the West improve conditions for platform workers elsewhere?
Adio: So I think there are a bunch of things, but of course this is a very difficult question to be honest, because, well, you know, sometimes when you are confronted with a very big problem, coming up with a solution is very difficult, because we know tech bros don't find a problem with this.
Recently I was speaking to a colleague, and she was sharing about how there is a company which is employing refugees to do this kind of work. And for them they were like, "Come on, we are paying them more than the average person in their country. So they're better off."
So already when there is this white-saviorist mentality, we have a problem there, because they feel like, hey, come on. But I'm saying you can't claim that if you're giving someone $3 when everyone else is getting $2, you are doing them a favor, when the minimum wage in the U.S. or in Germany, for instance, is $12 an hour. So what is $3? Yes, it's better than the terrible conditions they're already in, but you're not doing them any favors.
But in terms of what we can do, number one, I think first is to shine a light on this, because these workers are so-called ghost workers, because no one knows this stuff. So I think the first thing we can do is to show that this is happening, so that when we are using whatever tool we're using, ChatGPT, Facebook or whatever, we know that behind the scenes there are people who are slaving and suffering and being mistreated to do this.
The second thing, of course, is the issue of regulation. There is a need for clear regulations to make sure that these companies are held responsible for the harms that they are perpetrating against these people.
So for example, if you are an American company, why are you hiring workers in Kenya, in Uganda, in South Africa and paying them less than 10% of what you would pay an American person? So clearly that is a problematic situation. I mean, I understand that countries differ in terms of pay structure and stuff, but at least we all know what's decent.
Rachel: Yes. How might the further spread of generative AI complicate this situation even more? What if some of these jobs start to disappear because they can be automated? Is that likely, or is this ghost labor essential to the way the platforms operate?
Adio: Maybe at some point there might be that situation of automation and displacement of jobs. I say maybe at some point because, as we speak today, that is not the case. Yes, of course there are tasks which are increasingly being automated, but I feel like many of the people who are hyping AI and its capabilities to do most of these human tasks are overselling their product.
I'm not sure if you're aware of this situation, but maybe later I can share with you the article which actually exposed that there was a particular AI tool which was supposedly being used, with cameras and stuff, to monitor for theft in a grocery shop. And at the back of it, there were workers in Bangladesh who were actually visually observing through the cameras in the shop. And when they saw someone stealing, they would alert the manager. But this was being sold as an AI tool.
Rachel: Yeah, so much of it is a Mechanical Turk, there's somebody sitting behind the curtain.
Adio: Exactly, you see. So going back to your question on what could happen when we have this increased proliferation of AI, of course there's that issue which I mentioned. Then there's also the issue of intensification of algorithmic management, which is one of the big problems in the platform labor landscape.
Rachel: Yeah, even here in the West, the conditions of the delivery drivers and the Uber drivers are terrible and complicated by algorithmic management.
Adio: Exactly, so the more AI is used, the more that happens. And this is a very big problem because, for example, in Kenya, I spoke to a Bolt driver who was telling me what happens whenever there are issues. So for example, they are required to accept a ride no matter how far it is.
And if the ride is, let's say, five kilometers away, and then the person wants to move like 500 meters or a kilometer, if you are the nearest driver, you are mandated to go and pick them up. And if you reject a certain number of rides, you will then be banned from the platform. And when you are banned from the platform, you cannot appeal, because you are told, hey, it's the algorithm.
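As a purely hypothetical illustration of how such a policy could be encoded, here is a minimal sketch. The platform's real logic is not public, so every name and threshold below is invented; the point is how little code an opaque, no-appeal ban rule requires:

```python
# Hypothetical sketch of an opaque dispatch-and-ban rule. The real
# platform's logic is not public; REJECTION_LIMIT is an invented threshold.
REJECTION_LIMIT = 3  # assumed cutoff: rejecting this many rides bans you

class Driver:
    def __init__(self, name: str):
        self.name = name
        self.rejections = 0
        self.banned = False

def assign_ride(driver: Driver, accepted: bool) -> str:
    """Nearest driver is mandated to accept; rejections accumulate silently."""
    if accepted:
        driver.rejections = 0  # assumption: accepting resets the counter
        return "ride assigned"
    driver.rejections += 1
    if driver.rejections >= REJECTION_LIMIT:
        driver.banned = True  # note: no appeal path exists anywhere here
        return "banned: it's the algorithm"
    return "rejection recorded"
```

What matters in the sketch is what is missing: there is no appeal function, and the driver never sees the counter.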
Rachel: Yep, it provides a layer of deniability to the human managers.
Adio: Exactly. And another issue also, which is very key, is the issue of bias and discrimination. So the more we use these AI tools, the more we will have these issues.
So my dear colleague Timnit also co-authored a paper with other colleagues where they were talking about how most of these AI tools have a problem recognizing darker skin tones. So we can only imagine what will happen. I think there was a case, two cases actually, in the U.S. a few months ago.
One was a pregnant lady who was arrested because a guy had reported that someone had stolen from him, and then this lady was arrested. Turns out this lady was eight months pregnant. And in that situation, they'd relied on an AI tool to make this judgment. But if they'd asked this guy, hey, was the woman who robbed you pregnant, he would've said no.
And another one is another guy who was also arrested because facial recognition software had placed him at the scene of a burglary, a jewelry burglary. And the guy was at the time in prison. And in both cases these were Black people, a Black man and a Black woman. And these are just the high-profile cases.
But what about in places like the developing world, in Africa, for example, where these tools are created not within Africa but outside Africa, and then imported into Africa or exported to Africa? You can only imagine how this situation plays out there.
So I think the other issue which we also see is the issue of concentration of power. We already see who the people mainly pushing these AI tools are: the tech bros. So this issue that they develop these things in Silicon Valley and then export them to the rest of the world is already a very big problem.
Rachel: And the way the language models work, all of these problems were foreseeable because they predict the most likely next word based on a corpus of words that have already been spoken. So they're literally baking our existing assumptions into the models themselves.
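To make that mechanism concrete, here is a minimal sketch of next-word prediction over a toy corpus. This is purely illustrative, not any vendor's actual model: the corpus text, its skewed associations, and the predict_next helper are all invented for this example.

```python
# Minimal next-word prediction sketch: the "model" is just counts of which
# word follows each two-word context in the training corpus. Whatever
# associations dominate the corpus become the model's predictions.
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on billions of words.
corpus = ("the nurse said she was tired . "
          "the nurse said she was busy . "
          "the engineer said he was late .").split()

# Count how often each word follows each two-word context.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict_next(a: str, b: str) -> str:
    """Return the most frequent word seen after the context (a, b)."""
    return follows[(a, b)].most_common(1)[0][0]

# The corpus's gendered associations come straight back out:
print(predict_next("nurse", "said"))     # -> "she"
print(predict_next("engineer", "said"))  # -> "he"
```

Production models are vastly more sophisticated, but the principle is the same: the statistics of the training text, biases included, are what the model reproduces.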
So you've made a really strong case for platform regulation. Can you talk about some of the ways regulation might be able to mitigate or manage some of these harms?
Adio: So of course, just like we need a highway code before driving cars, I think we also need these regulations before we can begin to operate these platform companies or these AI tools. And following up with the problems that I had raised earlier, I would say the very basic thing is ensuring labor protections.
So if there are very clear rules on what a working contract is, how long it should be, and how much an employee should be paid, then I think that is one way where regulation can really help. And also, I hinted at the issue of how most of this labor happens in the dark, and this algorithmic management and all these practices are what has been publicly referred to as a black box.
So no one knows what's going on there. So regulation can not only mandate transparency, fairness and accountability with these models, but also give the employees a platform to contest whatever algorithmic decisions are made, because then they have that platform and they know how the decision was reached.
And then there's also the issue of protecting the workers' data rights. So most of these workers have their data collected, but they have no idea what kind of data is collected, where that data is stored, or what it is used for.
So we need very clear regulations which allow the workers, number one, to have access to the data that is collected about them, and also to have the prerogative to decide if they want to keep that data or delete it. Whatever has been collected about them, they need to know what has been collected.
And they also need to have the power to remove whatever they feel uncomfortable with. And then also I think if there is regulation, it can incentivize the development of responsible AI.
Rachel: In what way?
Adio: So for example, right now what's happening is it's a free-for-all, right? You do whatever you want as long as you're getting to the top. But if there are very clear regulations with regards to the issues we raised, for example the issue of discrimination and also inclusion.
So if we're very clear on certain, let's say, parameters on how a certain AI tool is supposed to behave, or how a certain platform is supposed to operate, then I think that will lead to the development of more or less equitable tools and platforms.
So if we have mandates, for example, to say, okay, before you develop, just like how many products nowadays have to address issues of access. So if you develop your tool, are you catering to all the different groups of people that we have? The visually impaired, are they able to access those tools?
So if there are regulations which speak to this, then we can have a sort of more or less responsible development of AI. Because right now I feel like there are no clear don'ts, which means anyone who wants to can wake up and develop the next whatever AI-powered missile system, and there are no very clear rules on whether that is straddling the line of being wrong or being right. So that's where I think we need regulation.
Rachel: Yeah, so you've already talked about your colleague, Timnit. Can you tell us what drew you to the Distributed AI Research Institute, and can you tell us more about the kinds of work that you're doing there?
Adio: So I'm African, clearly, and as an African researcher, finding my positionality in the West was not exactly a very easy thing. But I felt like there are things that need to be said, because if we're to be honest, the tech landscape or the tech industry is extremely male and extremely white.
Rachel: Is it? I hadn't noticed.
Adio: So because of this, when I was thinking about my own positionality and about talking about the things that I feel truly matter, things of inclusion, things of community involvement, things of responsible development of AI systems and platforms, and when I looked at the work that DAIR does, I felt like, okay, this is exactly the kind of work that I would imagine spending the next batch of years talking about.
And it's a platform also, when you are familiar with the whole founding of the organization, and Timnit's previous role at Google, and how she raised issues of discrimination, censorship, et cetera. Then this for me was... Because at the bottom of it, I am a researcher, but I'm also an activist.
So I was looking for such a situation. So I feel like DAIR for me was the right place to be, because it's a place where I can both be a researcher and do proper science, but also be an activist and make sure that my research work leads to actual change. So at DAIR, pretty much the work that we do revolves around two main themes.
So the first one is the issue of how can we minimize or disrupt or even slow down all the harms that are perpetrated by AI systems. And the second thing is we are not just talking about disrupting these harms. So when you then, for example, go on our website, you see the different projects that we're working on in trying to mitigate these harms.
So be it working with refugees and making sure that AI tools are not being deployed to abuse refugees, which is happening, by the way. How can we minimize that? Also, things like spatial apartheid, things like how the spatial arrangement of cities is discriminatory. How can we minimize those harms? Then the second aspect we are focusing on is, so we have seen these problems, but then what can we do? What can be done about it?
So for example, one of our DAIR colleagues, Asmelash, is developing a translation system, which for now is focusing on Tigrinya and Amharic, which are languages from Ethiopia, and which according to independent verifiers is actually way better than Google's translation. And the focus there is, okay, we've seen that there are certain languages which are not well represented on Google in terms of translation and everything by these big companies.
What can we do about it? So we develop our own system that solves this problem, and we make sure that we are not then exploiting the people that we work with. So for example, in these different projects, one of them I'm working on together with Milagros Miceli, the one I explained earlier, where we are investigating the labor conditions of workers in Kenya, for example.
We are not just being helicopter researchers who go to Kenya, ask questions, fly back to the West and publish results. We actually involve this community in participatory action research, where these workers are not just informants for us, but are actually co-researchers. As you'll see very soon in June when we launch this project, they were fully involved in telling their own stories, and it's not us coming to them and saying, hey, tell us your stories, and then we shine and we are stars with your stories.
Rachel: So moving away from the extractive settler colonialist model to something that's much more community based.
Adio: Precisely. So basically, defining the agenda of what needs to be researched also has to be a very key component for these people that we work with. So if we're working with researchers in Syria, in Uganda, in Brazil, then they have to have an equal say in what it is that we put on the agenda, and not us coming to them and saying, hey, we know you have a problem and this is how you're going to tackle that problem.
Rachel: For those of us who support DAIR's mission and want to see it succeed in these aims, what are some of the best ways to get involved?
Adio: One of the best ways to get involved, definitely, is that the work we do requires money. So that is one very clear way of supporting our work. If you go on our website, there are very clear ways in which you can support us financially, and that would be great.
And also amplifying our work, because as I mentioned earlier, the main issue is that most of this stuff, most of these harms, happen because they're in the dark. So if you help amplify our work so that other people know what's going on, then that will be amazing.
Rachel: And it's certainly uncomfortable to look at the conditions that underpin our comfortable lives in the West, but I think it's worse not to look and to allow these practices to continue.
Adio, what are some of your favorite sources for learning about AI?
Adio: Okay, so, I mean, like I said earlier, I am a researcher. So basically journals are definitely one of my favorite ways to read and learn. And also, surprisingly, even though it's broken, Twitter is actually a very good source of learning.
Rachel: You're not on Bluesky yet?
Adio: Not yet, but I'm still on Twitter. So definitely on Twitter, following certain individuals who are in the AI space and the work they produce, which is either journal articles or podcasts like yours, is also one very easy and accessible way of learning about AI.
But then there are also other platforms like Coursera, where you can actually learn practical skills about AI. Also, for me as a researcher, conferences are a wonderful way of learning about AI.
Rachel: Yeah, no, the conference scene is hopping at the moment. It's a fascinating time. Adio, I'm going to make you king of the world.
For the next five years everything goes exactly the way that you would like it to go. What does the world look like in five years?
Adio: Well, if I'm to be made king of the world in the next five years and everything goes the way I want, number one, workers will be compensated fairly, they'll be in charge of their data, and the AI systems which are developed will actually be AI systems that benefit humanity, not AI systems that exploit humanity.
And what else? Of course, there'll be an end to all forms of war and violence, everyone will be living in peace.
Rachel: I would like to apply for a passport in your kingdom, please. It sounds really great.
Adio: You'll be very welcome to this kingdom.
Rachel: Thank you. And the last one, my favorite: if you had a colony ship to go to the stars and inhabit Alpha Centauri, what would you name your ship?
Adio: Well, this kind of moves us away from talking about AI harms, right?
Rachel: Well, hopefully we'll leave AI harms behind us. We'll have you as our king on the colony ship.
Adio: So, I have to make a point that the whole idea of commanding or boarding a ship, a colony ship going to the stars, is not actually the most appealing thing for me, because I'm like, yeah, why don't we leave the people on the stars alone if they're there. Let's leave them alone and do our things here. But because I like thought experiments, I'll indulge your question.
Rachel: Thank you.
Adio: So I think I would name it the Octavia Butler ship.
Rachel: Oh, my favorite writer.
Adio: So I don't need to explain why then.
Rachel: In case any of our listeners haven't read Octavia Butler, fix your lives, go and read her now. There are two series that mean a great deal to me.
One is the one that starts with Parable of the Sower, where a fascist dictator appears in the near future with the slogan "Make America Great Again." And it's a wonderful saga of how to survive the ensuing dystopia.
The other is the Xenogenesis series, which is about a woman who ends up having children with an alien race who have come to colonize Earth. And it's one of the deepest and most insightful explorations of complicity that I have ever read, about negotiating power and subjectivity and how to construct a family life in a profoundly compromised world. Those are my favorites anyway. Adio, you might have different ones.
Adio: Actually, when you asked me to send that, when I gave you Octavia's name, the last one was actually what I was thinking about, because I feel that it in a way kind of reflects the world we're already living in, because sometimes when I hear certain tech bros talk and comment, I definitely feel like we're living with the aliens.
Because some of the thoughts that they come up with, I'm like, "Okay, if you're human, if you go outside and touch grass, you figure out that what you're talking about does not make sense." So I feel like that whole story is one of resilience, of trying to navigate a very compromised world and trying to rebuild humanity again.
Because I feel like Octavia was really a pioneering writer, and I feel like if I'm to be a captain of this ship going to a new world somewhere, these are some of the things that I would definitely need to focus on to make sure that whatever world we inhabit is a world where there is equality, a world where we are able to balance these conflicting needs or conflicting situations or cultures, as it were, because already now, I spoke about how we are already operating in this cultural minefield.
So pretty much that was my favorite as well. And that's what I was thinking about as I was responding to this question.
Rachel: When I took my first job in venture, my good friend Sumana Harihareswara said to me, "Are you going to reread Octavia's Xenogenesis series?" And it took me a minute and I'm like, "You're exactly right. That's exactly the guide I need to take with me." Adio, this was a delight. Thank you so much for coming on the show. It's wonderful to connect. I wish you the very best in your work. I hope that you do become king of the world. Let's stay in touch.
Adio: Thank you very much.