Ep. #12, No Countermeasures with Lilly Ryan
In episode 12 of Generationship, Rachel Chalmers sits down with Lilly Ryan, an Information Security specialist based in Australia, to delve into the intricate world of generative AI and its potential misuses. Together they unpack the nuances of prompt injection and other adversarial attacks, shedding light on the dark side of AI technology. They discuss countermeasures to technological colonialism, strategies for maintaining a human-centered web, and the evolving role of AI in our workplaces. Tune in to explore the fascinating intersections of security, technology, and human interaction in the age of AI.
Lilly Ryan is a recovering historian and current Information Security specialist based in Australia. Over the last decade, Lilly has worked as a Python developer, Linux Wrangler and penetration tester specializing in web application and cloud security. She currently provides security assurance and advice to Thoughtworks delivery teams with a focus on secure development practices, threat modeling, and the attacker mindset.
As part of the information security team, Lilly helps Thoughtworks to grow their security capabilities. Her strongest expertise and main interest is in offensive security testing. She also works with Thoughtworks globally to develop security policies, education initiatives, and strategies for the business. Lilly is a fierce advocate for consumer privacy rights, a human centered web, and making tech knowledge accessible to all.
Outside of her work duties, Lilly enjoys interviewing people of many backgrounds about their perspectives on technology and security, and creating platforms for discussing the issues. She co-hosts Byte Into IT on Melbourne radio station 3RRR, interviews cybersecurity specialists on the OWASP DevSlop Show, co-founded the Technically Games conference, and is Co-Director of PyCon AU 2023. She also serves on the board of Digital Rights Watch Australia.
In episode 12 of Generationship, Rachel Chalmers sits down with Lilly Ryan, an Information Security specialist based in Australia, to delve into the intricate world of generative AI and its potential misuses. Together they unpack the nuances of prompt injection and other adversarial attacks, shedding light on the dark side of AI technology. They discuss countermeasures to technological colonialism, strategies for maintaining a human-centered web, and the evolving role of AI in our workplaces. Tune in to explore the fascinating intersections of security, technology, and human interaction in the age of AI.
transcript
Rachel Chalmers: Today I am incredibly honored to welcome to the show Lilly Ryan, a recovering historian and current information security specialist based in Australia. Over the last decade, Lilly has worked as a Python developer, Linux wrangler, and penetration tester specializing in web application and cloud security.
She currently provides security assurance and advice to Thoughtworks delivery teams with a focus on secure development practices, threat modeling, and the attacker mindset. As part of the information security team, Lilly helps Thoughtworkers to grow their security capabilities.
Her strongest expertise and main interest is in offensive security testing. She also works with Thoughtworks globally to develop security policies, education initiatives, and strategies for the business.
Lilly is a fierce advocate for consumer privacy rights, a human-centered web, and making tech knowledge accessible to all. Outside of her work duties, Lilly enjoys interviewing people of many backgrounds about their perspectives on technology and security and creating platforms for discussing the issues.
She co-hosts Byte Into IT on Melbourne radio station 3RRR, has interviewed cybersecurity specialists on the "OWASP DevSlop Show," co-founded the Technically Games conference, and is co-director of PyCon AU 2023 and 2024. She also serves on the board of Digital Rights Watch Australia.
Lilly, thanks so much for coming on the show. It's great to have you.
Lilly Ryan: Oh, thank you for having me here.
Rachel: Can we start by talking about prompt injection and other adversarial attacks on generative AI platforms? How are shadowy state actors and other black hats misusing the latest tech?
Lilly: I enjoy questions about tech misuse so much because, you know, I spend my life in the edge cases. This is where I'm at. And there's kind of two main things that are going on at the moment in terms of the actual technical uses of AI for, as in hacking AI, generative AI systems. Then there are the misuses that occur because of, you know, using the tool potentially as intended, as in generating deep fakes, misinformation, and so on.
But to start with, the ways that you would break generative AI systems themselves, and that's mostly down to prompt injection and jailbreaking. Prompt injection, I'll get to that in a moment. Jailbreaking's a bit easier. It's circumventing the safeguards in prompts to perform actions that aren't appropriate for the context that the model is running in.
It has mostly reputational impacts, where you get lots of gotcha screenshots of, oh my gosh, I made this thing say something that it shouldn't have said. You know, it's inappropriate for the context. It's going to look really bad from a PR point of view, quite a lot of the time. And, you know, that shouldn't be discounted.
Rachel: So a prehistoric example of that might be Microsoft's Tay, the voice app that was released online and very rapidly became an extremist Nazi.
Lilly: You see a lot of this with chatbots of lots of different kinds. People getting a corporate chatbot to make very uncorporate statements, to make product's guarantees about things that they would not otherwise do, providing information about a company that's false.
We have seen court judgements that would say, "Okay, look, if the chatbot has published this stuff, there's no reason that it should be any less accurate than what you have published on a static page." And that can itself be pretty damaging for a company. But that is, I think, one of the easier things to do. And a lot of what people have been putting time and effort into is to make that kind of circumvention of those safeguards harder.
What I'm more interested in is prompt injection, where you are concatenating untrusted input from a user with a trusted prompt from an app developer. It's a bit like SQL injection, that kind of thing where you would try and get it to list out the contents of the directory on the system that the model itself, the process is running in.
With a lot of the movement to retrieval augmented generation, for example, this is something that allows people to pull out the documents that have been added to augment that generative response. And because that has become increasingly popular as one of the dominant use cases for corporate knowledge bases and many of those kinds of environments, it can get fed some pretty tasty information.
So there have been stories of people pulling back legal contracts, spreadsheets full of people's salary information, all of these kinds of things that you might be able to pull back out of that system, the actual machine that the process is running on or is able to access, by misusing the prompt and getting your prompt to run inside of that context, or your command even to run inside of that context.
So those kinds of things are the main ways that I see generative AI systems being misused. And in speaking to a lot of software developers who are experimenting with deploying this kind of thing, in a variety of different contexts, for many clients in many domains, there's still a lot of that bread and butter security stuff that really matters. You know, it's still a process running on a machine somewhere a lot of the time.
And particularly when you're not going with using somebody's API to get the prompts running, when you're trying to run something local for yourself. And you run it on a server, and then you forget that the server shouldn't be open to the internet directly. You know, you should be putting security rules in front of that. You should be trying to serve it differently, blocking off ports. When people forget to do that, then your system goes down. But it doesn't matter how smart the program that you're running in it is.
So that I think has been one of the really interesting things, particularly as folks are rushing to get applications out there and skipping a lot of these checks and balances. And I think that that can be a pretty difficult place to be when you know you need to put something out there quickly. And it's combined with perhaps a little bit with this cognitive bias that many of us have around generative models, particularly when they sound so intelligent and so human. It's still a process running on a machine somewhere. And this is the thing that I see a lot of people forgetting a lot of the time.
But when you're talking about, you know, state actors, black hats, honestly, a lot of the misuse is coming from using the products for things that they can be used for generally anyway, making phishing emails, honestly, deep fakes, misinformation. And misinformation at scale I think is really the thing that has been unlocked here.
It used to be very difficult or require a lot of intensive human effort to phish somebody with specific details related to that individual so that you could point it at somebody and say, "I want to write a phishing email, and I know that this person..." You have to do the reconnaissance. " I know that this person is the director of company X, and they used to work for company Y. So I'm going to pretend to be from company Y, reach out to them, make that connection."
You can do that by asking LinkedIn to that information and getting a generative model to craft the information for you. There will be safeguards around a lot of this as well, but you can train your own models. So you can run your own models, and it doesn't have to be that sophisticated. Phishing never needs to be, but now you have all of this kind of junk happening at scale, which is something that you never used to be able to automate reliably or in a way that was variable enough that it felt like something that people might actually believe in and click on.
So there's a variety of different things, but honestly the harms are really just about what harms it enables humans to perpetrate that were maybe already going on, but now can be scaled in such a way that they become less discriminating about who they impact. And that I think can be quite damaging for society.
Rachel: I'm working with a group of developers who are pushing the amplified movement. They're saying that these tools don't replace developers; they amplify them. It occurs to me now that they also have the potential to amplify harms. So on a very closely related note, how completely does gen AI mess with our digital rights?
Lilly: That depends on what you think our rights should be. In the digital space, I think a lot of people have different views of this. But in almost every way that you want to slice it, there's going to be some kind of impact. I think that the amplification is for the amplification of any natural human tendency. And when that intersects with the internet and with digital spaces, it is accelerating a lot of the things that were already issues.
When we talk about, you know, say that you feel that we should have a right to, a digital right to access reliable information. That's a real problem. It always has been a problem. It's become an increasing problem, not just with the amount of content that's being generated thanks to generative models, and either misinformation or disinformation, like, deliberately generated or just because it amplifies biases that were already there, but also the way that it has affected discovery online--
Of the way that it has corrupted search algorithms to change the way that SEO is working to game all of that, such that in combination with a variety of other incentives, it's just really hard to find the thing you're actually looking for. We see that with the growth in alternative search at the moment, that people really want to be able to find information. And they can't anymore because it has enabled a lot of these search engine optimization tactics to be deployed at scale.
And so most of the internet, it feels like, is written by bots, for bots, not by humans, for humans. So to pivot to a different kind of search engine system, where we're seeing quite a few of them spring up now, people either aggregating the best of a variety of different indices and providing an interface on the top that does the priority ranking differently, or allows you to customize that so that you are only seeing stuff from a subset of websites, or you are able to exclude certain websites that you know are predominantly spammy, that's one thing.
People are building new indices of the internet that don't have the same kinds of priorities that things like Google and Bing do, which is fascinating and wonderful, watching that occur, because it's clear that this is something that people really need, people really value. It was one of the main things about the internet, was distribution of information. That was kind of the point of it.
And if we can't do that, then, you know, we kind of just have this system of recirculating garbage. And on the other hand, we're talking about digital rights in terms of privacy, autonomy, access and control over our own information.
And I'm not a copyright lawyer or any kind of lawyer, so I can't talk from the legal perspective, but I know that there have been lots of cases of people trying to figure their way through, as either creative individuals or just as private individuals who might keep a blog, who want to publish stuff, who have put themselves out there in some way. Even Reddit comments. All these kinds of things contribute and go into this corpus of information that is used to train a lot of the models that are now commercially available.
And from an intellectual property perspective, the people whose work has gone into that, they're not seeing any kind of compensation for any of that work. But on a more personal level, and the thing that I think worries me more, it's more about, you know, the intentions with which we put things out there into the world, and the use cases that they are now being put to.
And when you're thinking about how large language models are kind of made of people, whether those people have consented or not, they're in there. And their thoughts, their ideas, their contributions over time are being regurgitated in these forms that we would never have envisaged as being some kind of use case that we may have consented to at the time.
So there's a really big question out there about how you would go about building a corpus to enable something that functions the way that today's large language models do, with the act of participation and consent of everybody who has ever been involved in putting that information in there in the first place.
I'm not certain that that's possible, the amount of information that you have. You know, it's large. That's one of the L's in LLM. But it does mean that we need to bear that in mind when we are actually using tools like this, and thinking about the use cases that we're putting them to as well.
If we're using it to generate creative output, that output was built on the backs of a lot of output that was not consensually contributed at the time. So there's a whole bundle of things in there. Like a lot of things about the way that the world works, it was built on the back of the efforts of people who will never be acknowledged or compensated or even asked if they wanted to be part of this. And this is the world we live in.
You know, I am speaking to you from unceded indigenous Australian territory. And it's the same for a lot of us. We build our entire societies on the backs of these things.
Rachel: And when people have confronted the people who are making a lot of money off the LLMs with the fact that the contributions were uncompensated, an answer I've heard is, well, if we had to compensate all of those contributors, our business models wouldn't have been viable. To which I kind of wonder, so what?
Lilly: It does make you think, well, it should make you think about the way that we construct not only software systems, but our entire method of communication and interaction with each other, and what it would mean to put something out in the world that respected those rights.
There's a lot of work that is currently being done, and has been being done for generations, to try and repair some of the injustices of the past. And it's not something that can be reversed easily or at all. It's all about how we move forward with what we've got.
Rachel: What countermeasures are available to us? And what are the best among them?
Lilly: You can't really countermeasure colonialism. It's about acknowledging harm. It's about listening to the voices of the people that have been trampled on and working out a better way forward. And when it comes to the way that the internet specifically has been used to create the large language models that we are now building an entirely new industry on top of, it got there by breaking a lot of the old trust models of the internet: by scraping the information and not just using it for indexing purposes, but by using it to build a corpus that then trains this stuff.
There are countermeasures that are technical that personally I have little faith in anymore. Used to have robots.txt files that you could use to tell a bot not to crawl something. There are a variety of proposed alternatives for generative models. And there are certainly lots of systems that will poison data sets for you.
We've seen things like Nightshade and Glaze and Kudurru for artists that will either poison the information that's going into a corpus when it's being gathered for the new stuff, or will try and refuse to serve that information back to somebody when they notice that it's being requested by something that appears to be building a training corpus, which is one form of resistance.
I think that, in many ways, there's also going to be countermeasures that come out, and we're sort of already seeing a bit of this. I know this is a problem a lot of folks are thinking about very deeply and working on. That AI systems will be being built on top of their own output and feeding back in on itself, and that in the end, the quality of the things that we had, say, prior to 2022, it's going to be the best that we'll ever get.
So I wonder if some of those countermeasures have already been put in place in those ways. I've also seen a lot of people approach the problem by retiring from the public internet. By moving to private communities, to closed communities. People do this for a variety of reasons, especially if you're in a marginalized demographic.
But to move some of that stuff that is seen as that goldmine of human-to-human interaction to a community that is not easily scrapable by third parties is certainly a response, and I think a pretty valid response, if you are concerned about your everyday correspondence being used to feed into a model. But it is still something that we need to think about, because many of the businesses that host these things are probably also thinking about the goldmine that they're sitting on.
So the rise of, and particularly off the back of things like Twitter kind of disintegrating in a lot of meaningful senses from the way that it used to function, and people moving to services like Mastodon to self-hosted services like Matrix and exploring more about the small web and human-to-human interaction. That kind of stuff I think is really interesting, really promising.
It doesn't scale very well, and I think that's kind of the point. It's not supposed to scale. Communities don't scale. That's not how they work. And so the countermeasures I think are fundamentally, frankly, anti-capitalist in a lot of cases and anti-colonialist in a lot of other cases, and boiled down to the way that we want to connect with each other and how that works.
Rachel: Shifting our attention from the technical countermeasures to the humans in the loop, what are some strategies for keeping the web as human-centered as we can and coexisting with AI in workplaces in the future?
Lilly: My sense is that if you're going to automate the things that people actually enjoy doing, you're going to remove a lot of the utility of AI.
Because, tools are supposed to help us. These are tools that we're using. They're supposed to help us, not supplant us. And when we forget that, that's when we end up with people suffering, people not enjoying what they're doing.
My most enjoyable online experiences these days are in spaces like Mastodon that tend to focus on that human-to-human interaction. And I think in a workplace there's a lot that could be done to foster people talking to people, especially when it comes to the way that we engage with each other, if we're engaging remotely, even in person.
Anything that enables people to make those really human connections I think is important. I think generative AI can have a role to play in that, particularly when it comes to folks who don't have the facility of written expression in a particular language that other folks do.
AI can augment a lot of this kind of stuff. It can even the playing field. It can help people who don't understand or don't have an innate understanding of how tone might be conveyed in a written piece, to change that or to build that understanding. But also, I think that we don't want to use that to get rid of those layers in front of people either, but to help people understand each other as they are as well.
If there's anything that could sit in that space, where it is providing human beings with the ability to do things that they really enjoy and making that easier to do, and thinking about the use of tools as tools, getting rid of the drudgery and automating the things that nobody enjoys doing, that's really where I think we can keep things human centered, that we can coexist easily with AI in a workplace context.
Anything besides that, where you are displacing the things that human beings find valuable, is not going to be a place that focuses on what human beings can do. Not all places do that, but the ones that will I think will understand that you need to use the right tools for the right job, and that you need to use the tools in a way that acknowledges that they are tools, that they themselves are not workers in that way.
Rachel: Yeah, the old joke, I don't want AI to create art and write and counsel people for me. I want it to file my taxes and renew my driver's license and other operations that I dread.
Lilly, you and I share an interest in death, and I wanted to ask you about digital death in the context of AI. When I die, will you make a chatbot of me, as Laurie Anderson has done for Lou Reed? In the event of your death, would you like me to make one of you?
Lilly: I've been thinking about this a lot, even prior to the advent of generative models, because it's something that I really, really do not want for myself. Personally, I find it very creepy, but I also know that a lot of what people do for themselves posthumously, when they themselves are the ones that are grieving, doesn't have a lot to do with the person who's been lost. It has a lot to do with the people who remain.
And if the people who remain want that kind of chatbot set up, and evidently, many do, there are so many startups that focus on these kinds of things, then I also don't really get much say in whether or not that will happen. Having given talks publicly about my intention that I would prefer that this not occur I think is certainly one thing, but most people are probably not going to do that.
And it's going to become easier and easier to do this. You know, the uncanny valley is already something that is maybe being circumvented or bridged in a lot of contexts, perhaps not the ones with video involved yet, but certainly text chatbots. And that can be kind of healing I think for some folks in that grieving process.
So personally, no, this isn't really something that I want. It's not something that I want to do for the people in my life either. It's also something where I think people grieve in different ways. And some of those things that may be generative AI mediated are things that people will find helpful and useful and can be used healthily.
But the market for them does suggest that this is a thing that people want, that people think is good. And I think then the line has to become about what the purpose of those bots is for. Is it about assisting somebody in the grieving process, or is it about getting enough of a simulacrum of somebody that they can continue to perform their work?
We have a lot of unfinished works by famous authors, for example. If we were able to have the amount of data that we do about the way that they work, the way that they browse the web, the way that they write all of these other things, and we were able to use that as a chatbot, I've seen proposals for people to build bots of authors with unfinished works so that they could posthumously finish them.
I think that finishing works posthumously is something that we have done for a long time. You only have to look at things like the quilting community, for example, finishing off a quilt that somebody has been working on over time. There's something really communal I think about coming together to finish something that somebody was working on, and I'd hate to see us lose that.
But I know that this is also the direction in which some people's minds roll. And so what really matters is what purpose those bots are serving, both in terms of the wishes of the deceased and also for the needs predominantly of the bereaved.
Rachel: How do we honor people's legacies without taking advantage of people at an extremely vulnerable time of their lives, when they're mourning a loved one?
Lilly: Well, that's it. Taking advantage of people is something that I think is, regrettably, not unheard of in a lot of death care industry work. There's been an enormous amount of work in the last couple of decades trying to reframe that, to look at it as something positive, to look at it as something that centers mourners and really helps people to move through that, again, in a really human-mediated way.
But you also get folks running funeral businesses who will say, "Well, you should get the really expensive coffin. It's what Mum would've wanted, you know?" And people often don't make the best decisions when they are in a heightened emotional state.
So I can also see this becoming something, that I think it already is, honestly, playing into a lot of the social engineering manipulation that we do get as a result of, amplifying human behaviors at scale through these technologies.
But it does also mean that we have to think about the things that we're building, and not just the use cases, but the abuse cases, particularly if our service is intended to help the bereaved, but could be turned to other purposes that are less benevolent.
Rachel: Let me turn the question around again. What might it mean for a language model die? Should language models that do cause harms face the death penalty?
Lilly: Personally, I don't believe in the death penalty, but it does raise the question of what it means for a language model to die, as you've asked. And death implies life. So that's an interesting question in itself. This podcast has a finite amount of time, so we probably need to put that one to the side.
But what we've seen in a couple of different cases has been people who've formed really strong emotional attachments, especially to chatbots, to text chatbots, in a variety of contexts. And when those bots are discontinued or the model weights are tinkered with and they behave differently, or they forget something, which is something that we, you know, that problem of the context and memory is something that many people have been spending a lot of time working on because it's one of the major limitations of a lot of the current AI models. They just don't remember things for long enough.
If we look at literary analogies, the Wayfarers Series by Becky Chambers was one that I really enjoyed. And there's a fairly poignant example of this in one of the books where there is a ship's AI that the crew has been on many adventures with, had a lot of experiences with. And there's a part of the story where the AI is factory reset and the crew's just devastated.
Like, they end up having to install an entirely different model because having this crew member, this person that they've come to see as part of their crew just forget everything that they've been through. You know, it has access to the data. You know, we traveled from point A to point B. We, you know, picked up cargo X. But not any of the interpersonal stuff.
None of those actual active things that we would as human beings consider to be memories. That's a kind of death. And you see that play out in the real world as well in many of these contexts. So I think that memory has an enormous amount to do with it. Really.
And we talk about dead languages too. Whether something falls out of use due to obsolescence is another thing. And there are certainly those that will fall out of use because they may be less helpful. I personally prefer to use GPT-3.5 for a lot of stuff than GPT-4, even though it's supposed to be simpler, because I think that it gives less florid answers, I guess.
And I think that eventually, over time, the models that people prefer to use will, you know, will rise up, and the ones that people are not using will eventually fade away. There's that kind of death from collectively being forgotten, as well as the model itself losing that context history.
Rachel: Yeah, when the last person who knows your name dies and the last person who remembers a particular software platform dies, that software platform is dead. Lilly, what are some of your favorite sources for learning about AI?
Lilly: I get a lot from talking to my friends, from talking to my colleagues. At work I have a lot of people on all sides of the conversation. People who are AI optimists, who are really keen to try and implement things and try all of the new stuff and see how it could be really helpful, or see how it might be able to change what they're doing in a really positive way.
As part of the security community, I deal a lot in edge cases, and so I also see a lot of that pessimism thinking as well. And I think that hearing about a lot of these different kinds of takes is one of the best things that I have done for deepening my understanding of the way that the field is moving. Because you're able to plot this course between becoming too effusive about it versus too cynical.
I tend towards the too cynical, but I do see that there are a lot of advantages as well. So I am very fortunate to have access to a lot of people who know what they're talking about to engage in these conversations one-on-one. But apart from that, I think, and it's been mentioned on this show before, the Distributed AI Research Network is something that I really enjoy seeing the output of.
They put out a lot of really useful bits and pieces of analysis from a very critical point of view that caused me to think a lot about the way that we're moving. And particularly listening to computational linguists and also to computational neuroscientists as well, who have been involved in building a lot of these models over the course of the last decade, or to get us to the point where we are now, to see their takes on the way that these things are being used now is very instructive and informative, because they definitely have a particular view of what large language models were initially developed for and how they see them used now.
Those points of comparison are really interesting. I'm also a really big fan, and have been for a long time, of the work that Janelle Shane has been doing. The AI awareness blog and a lot of that kind of stuff that also predates generative models. It feels like a lot of the world is catching up with the work that she was pioneering about a decade ago in pushing those edges of AI and machine learning.
I really enjoy, as soon as I am playing with a new model, to find out where those edges are, whether that's the edges of the prompt that it's been given, or where it's the edges of its limits in some other way. What kind of languages it's been trained on versus what kind of languages it doesn't do so well in, for example.
And Janelle Shane has been just doing such good work understanding and exploring, in a very funny and interesting way, the limits of a lot of models over time and why those things really matter, even now when they do have a veneer of, like, intellectual presence, I suppose.
I think also 404 Media has been doing a really good work probing the ways that AI has been having an impact on businesses and communities.
I don't spend an awful lot of time on places like Facebook or Instagram, but they've done a lot of research and reporting on the way that viral content has been enabled by generative AI technologies and how that impacts groups on Facebook. The weird LinkedIn posts that are going on. Uber Eats spam with AI-generated images of food that doesn't exist.
All of this kind of stuff I think is really interesting in having a look at the way that this is impacting people outside of the tech sector, and what that means for the way the world is going. There are plenty of things. But honestly, keeping my inputs fairly broad has helped me to understand the breadth of the views that are out there and use those to inform my own.
Rachel: Yeah, I think that's incredibly important, even if it's just as simple as varying your diet between the optimists and the pessimists, as you were saying.
Lilly, I'm going to make you god emperor of the Solar System. If everything goes the way you'd like it to for the next five years, what does the future look like?
Lilly: I would love to see people using the right tools for the right job. Using them thoughtfully, understanding what the tools are for, understanding that the tools are tools. My sense is that many of the issues that we have are down to people trying to fit square pegs into round holes and all of that kind of thing.
It can be really interesting to see what happens, but when you have people doing that en masse and not thinking about the impacts, I think that's where we get a lot of the issues. So right tools for the right job is one thing.
And if I could magically increase media literacy, I would. This is such a difficult problem. It doesn't have a great solution. So many people who are way smarter than I am have worked on it for so long. But if I were, you know, god emperor of the universe, then sure.
This I think would also really help, especially when it comes to the deluge of information that we have that just doesn't really make sense. And for people who have not had the privilege of developing media literacy skills in some other context, it's going to become increasingly difficult to do this in any other way. So look, if I had a magic wand, whatever it is, this is what I would want.
Rachel: Yeah, hard agree. Make everybody do a literature degree.
Lilly: Yes, please.
Rachel: Becky Chambers question for the last one. If you had your own colony shipped to the stars, if you were the pilot of your own generation ship, what would you name it?
Lilly: I considered ubuntu, but that word has too many associations in the software world for me to really seriously want to go with it. But if you talk about it in the sense of what it means, I am because we are, I couldn't think of a better name for a colony ship, to think about the collective over the individual and how we share what we have to make the best of our own futures.
But if I'm going to choose a name and I can't have that one, I'd use another science fiction reference. I'd probably go with perihelion, which is, first off, the point of the orbit where, you know, a planetary body is closest to the sun. And, you know, the sun represents warmth, growth, new beginnings, I think, but also because it's a, you know, it's a Murderbot Diaries reference. And I think that ship in the Murderbot Diaries is a great symbol of what it means for us all to take care of each other.
Even when we may not agree with all of the things that everybody does, we get on each other's nerves all of the time, that's humanity. That's how it goes. But that's the best name I could think of.
Rachel: Not one, but two fabulous names. Perihelion is a personal favorite AI character of mine. And ubuntu, I think we've talked about it on the show before, I love it. I love what it really means.
Lilly, I am because you are. Thank you so much for coming on the show. I am really grateful to have you as a friend and as a guest.
Lilly: Well, thank you so much for having me along.
Content from the Library
Generationship Ep. #25, Replacing Yourself featuring Melinda Byerley
In episode 25 of Generationship, Rachel Chalmers speaks with Melinda Byerley, founder and CEO of Fiddlehead, about the...
Generationship Ep. #24, Nudge with Jacqueline-Amadea Pely and Desiree-Jessica Pely, PhD
In episode 24 of Generationship, Rachel Chalmers speaks with Dr. Desiree-Jessica Pely and Jacqueline-Amadea Pely, co-founders of...
Open Source Ready Ep. #1, Introducing Open Source Ready
In this inaugural episode of Open Source Ready, Brian Douglas and John McBride embark on a technical and philosophical...