Unintended Consequences
36 MIN

Ep. #3, The Emergent Consequences of Feedback Loops with Paul Biggar of Dark

about the episode

Heidi Waterhouse speaks with Paul Biggar, founder of CircleCI and Dark, about what makes coding live surprising, and how that guides what we think of continuous delivery. They also discuss the definition of “done” for software and how that has changed over time.

Paul Biggar is the CTO and Co-Founder of Dark. He previously founded CircleCI, and co-hosts the popular Heavybit podcast To Be Continuous with Edith Harbaugh.

transcript

Heidi Waterhouse: I have a lot of questions about how we ended up in the place that we're at.

Both with computer science generally, and the way we think of software in companies that never thought of themselves as software companies.

Paul Biggar: Oh yeah. Software is eating the world and all that shit.

Heidi: Exactly. So it ate the world and your animal feed manufacturer needs hundreds of thousands of dollars worth of software, but it's really interesting to me how we describe to somebody who's used to making things why software is useful to them.

Paul: The entire history of our industry has piled on top of itself to the point where, even when you're making software and describing to that feed manufacturer why they might need software, try describing to them why they need NPM, or what Docker is and why that's useful.

The work that we do day to day is just so far removed from the value that we create. Yeah, this industry is kind of fucked.

Heidi: Yeah. DHH said we are alienated from the products of our labor, and I'm like, "You don't hear a lot of millionaires quote Marx," but he's not wrong.

We are alienated from the products of our labor because we don't understand so often how people are using us out in the world.

Paul: Mm-hmm.

Heidi: I think that's one of the interesting things that this podcast can do is, not just what are we thinking, but why do we think it matters?

Paul: Well.

It's funny because software is one of the few places where we can actually gather feedback in an automated way about what people actually do.

If you're producing a vaccine, let's say, you have to send out armies of people to track all the people who are taking these vaccines to see the results.

Whereas in software, we have all these feedback mechanisms, and the feedback mechanisms can be used as inputs to more things, which create more feedback.

There's a lot of ability to see the fruits of our labors and the metrics of it, and I think that's something that's still incredibly young in our industry and still hasn't really pervaded how we think about building systems, apart from operationally.

Heidi: Yeah. I really like that idea.

To think about not just the product managers getting to see things, but people with their hands on the keyboard making the code, getting to see the results of what they're doing.

Paul: Yeah. The feedback loop is the most important thing; it's why I created CircleCI.

Part of the reason for that is that the faster you can get your code from developer into production, the faster you can have a feedback loop and the faster you can see "does my thing actually work?"

Heidi: Yeah. I think that continuous integration and continuous delivery is super exciting that way because we are giving developers this chance to get an immediate response.

I'm old enough that I remember when I was working at Microsoft and we had this giant whiskey bash because we had released a version of Windows and it was three years in the making and everybody just sort of went, "Okay, I'm done now."

No. No, you're not done. It was really interesting.

I was talking to Microsoft last year and they were like, "Yeah, code isn't finished until it's in production and returning metrics."

Paul: Mm-hmm.

Heidi: I'm like, "Returning metrics?" It's such an interesting way to say finished, to talk about what the actual state of software needs to be.

Paul: Mm-hmm. Yeah. I remember this talk by Noah Zoschke from Heroku.

His definition of done was that we have sunsetted the old system.

Heidi: Oh.

Paul: It's like whatever we're building, presumably it replaces an old thing or is meant to fix the problem with an old thing or it might at least.

I think they specifically talked about how at one point they had five different implementations of SSL, or of how people would get certificates or attachments or something at Heroku.

And it was like, at some point you've got to say, if you're building a new thing, you have to turn off all of those old things. So that became part of their release process and their definition of done.

Heidi: That's really interesting given that you are working on making this new thing, Darklang. How do you know if it's done if there isn't an old version?

Paul: Yeah. Yeah. So Dark is this idea that how we make software sucks and that we can make software development a hundred times easier.

The way that we're doing that is we've systematically looked at all the things that make up software development and tried to categorize them: is this essential complexity, a thing that we obviously need, or is it accidental complexity, and if so, can we remove that accidental complexity?

One of the things we realized was: what is the lowest possible delivery time? Well, it's zero seconds, right? Or 50 milliseconds, however long it takes for a keystroke to make its way from your keyboard to a client somewhere.

We designed a system where you're coding live in production and where there is, as you said, there's no old version.

Well, there's sort of an old version.

We use feature flags to have old versions and to be able to switch from the old version to the new version obviously, but yeah.

Heidi: It's super interesting to think about the idea that it really could be instant.

What are the things that keep us from being instant?

Paul: I mean, it literally isn't.

I remember I was trying to launch some sort of event and we realized that we didn't have a code of conduct up on the website.

What I did was I went into Dark, added a new URL at /COC, and literally just pasted the text in, and then we had a working URL on darklang.com that had our code of conduct. It took a minute.

You could also have done it the other way: get the website code, make a React component, and all that sort of thing. But there was a material difference between that being something which goes through this long process and this being a thing which is just done, so that I can carry on and continue with the other thing I was actually trying to do.

Heidi: Yeah. I think about when I was learning HTML and there was this tool called HotDog Pro and, yeah, you've used it, you can-- I know that chuckle.

You would type a tag in one-half of the split-screen and you would see how it was going to look in the other half of the split screen.

It wasn't live, but it was certainly the closest thing that we had at the time.

Paul: Yeah.

Heidi: It was so much faster to learn that rather than typing a tag, save it, FTP it to a site, reload your page, see if it was going to work.

Paul: Well, the thing that I remember from back in the old days, and people have mixed responses to this, because for some people it was the worst thing ever and for some people it was the best thing ever, and in truth it's probably a mix, was live editing the websites--

But you SSH'ed into the server and you opened up VI and you changed, I think this predominantly happened with PHP, you just changed the PHP code, and that kind of worked.

If someone wasn't hitting the server at that exact moment, then you could actually make a change pretty quickly.

It caused all sorts of problems, which I think are why we have feature flags these days to have a little more control over it, but the instantaneous nature of it, people-

Heidi: Yeah. I love the idea that we might be able to get back to that.

I always feel like technology is this oscillation where we go, the one that I'm finding hilarious right now is everybody was like "Cloud everything, everything in the cloud all the time."

And now we're like, "Hmm, sometimes the cloud doesn't work. Maybe we should have thin clients."

And I'm like, "Oh, we're on our way back to a lot of local computing."

Paul: Mm-hmm. I mean, a React app is a local app running in your browser that occasionally talks to a server.

Heidi: Right. It's interesting to me that technology does that by the nature of humans, we go back and forth.

Paul: Well, there's advantages to everything and there's disadvantages to everything and once you're experiencing how great it is to make an instant change to a PHP file, then you're also experiencing the disadvantages of, "Well, actually this isn't really safe and wouldn't it be great if we had a process that we automatically tested things?"

"Yeah, let's switch to that. The whole world switches to that."

"Oh remember when things were instant. Wouldn't that be great?"

Heidi: Yeah. I think that's a really great point.

One of the things that I was thinking about when I was prepping for this was the problem of scaling, especially with CircleCI.

How do you deal with the fact that you are in people's essential workflow?

If you scale badly, you take them down and they can't push code.

Paul: Yeah, I remember the first time that I realized that, because at the start we were mostly continuous integration.

It was mostly get an email when the thing fails or it became a green check box on GitHub or a red X, and yeah--

At some point it's like, "Oh, actually people are not able to get code into production except through CircleCI."

I think it was about two years into Circle when that really started happening.

Yeah, it created a lot of problems whenever we had downtime.

There was one year where GitHub was down all the time and that ended up taking us down or at least causing huge backlogs for our customers, and it's like, "We..."

Well, I won't name any names, but I remember specifically that there was one of our big customers and they were working all weekend to have this release, and it's like, "We can't ship any of the software," and it's like, "Yeah, that's bad."

Heidi: Yeah. What do you think that you as a company or us as an industry can do to understand scaling problems before they happen?

We know that it's hard to scale up.

We know that Black Friday is a big deal. What do we do about it?

Paul: Yeah. I mean, I wish I had a nice quip or a cute little answer to the entire problems of scale of our industry.

Unfortunately, I don't. I think the main thing that probably separates organizations that do it well is being reactive versus being proactive.

You can often get quite far by looking at the problems that you just had and fixing them, papering over them until they go away.

But I think there's a substantial amount of stuff where you can only really level up how your team responds, and how your scaling is going, by sitting down up front and saying this actually isn't working and we need a new solution to this.

A very obvious one, I remember when I was working in the same building as Airbrake, which was one of the exception tracking services that were all the rage in 2011, and they rewrote their ingestion engine in Go, basically, and that made them scale 20 times.

Heidi: Wow.

Paul: It was written in Ruby before and Ruby is slow and Go is not, and that's not a thing that you can get by like optimizing or profiling or whatever.

At some point someone has to say, "Here, look. We can do this rewrite, which is expensive, which takes many person months, but if we do that we will get this benefit that there is no other way to get by making small reactive changes."

Heidi: That's a great story. When you think about making that bet or changing a level, what kind of things do you think people need to consider?

Paul: The thing that I always think about is how do you know that this is going to work or how can you prove to yourself that it's going to work in some sense?

It's a matter of like writing up all the risks and prioritizing which risks matter.

The thing where I spend a lot of time nowadays is trying to get Dark to product market fit.

The process of getting to product market fit is sort of an example of this. You're betting on something new, and in order to do that you need to say, "Which of these things is the most unbelievable?" or, "Which one is the most likely to fail, and what is the smallest, tiniest little thing that we can do to prove that this is possible?"

I'm thinking about that Airbrake/Go/Ruby thing that I was just talking about.

You could turn on a server on your own machine and just see how many HTTP requests it responds to in a second or whatever, and just benchmark the concept: is Go really faster than Ruby? You sort of assume that's true, but perhaps you're wrong for any number of reasons.
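That de-risking step, benchmark the concept before committing to the rewrite, can be sketched quickly. This is a hypothetical Python stand-in (the real comparison in the story was Ruby vs. Go): spin up a trivial server and count how many requests it serves in a short window.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class TrivialHandler(BaseHTTPRequestHandler):
    """Stand-in for the ingestion endpoint you'd actually be evaluating."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging so stderr isn't the bottleneck

def benchmark(window_seconds: float = 0.5) -> int:
    """Count sequential requests served within the window."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), TrivialHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/"
    count = 0
    deadline = time.monotonic() + window_seconds
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()
        count += 1
    server.shutdown()
    return count

if __name__ == "__main__":
    print(f"served {benchmark()} requests in 0.5s")
```

Running the same loop against a Ruby and a Go implementation of the same handler is the cheapest way to test the "Go is faster" assumption before committing person-months to a rewrite.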

Then you progressively go through the risks, and at some point you're going to get to, "Yeah, I believe strongly in this." Obviously you need a little bit of conviction as well in what you're working on, and a bit of knowledge to know that your conviction is correct.

Heidi: That's a great point. It's interesting because as we speak RubyConf is going on and-

Paul: Oh, sorry Ruby, didn't mean to be mean to you.

Heidi: It's not mean. I think Ruby as a community is perfectly aware of their strengths and weaknesses, and it's a great first platform.

A lot of the apps that we use today started on Ruby and then had to scale in different directions depending on what they needed.

That thing that you're talking about, how do I prove out this bet is a really important thing for companies to think about because whoever you talk to, they're like, "Yeah, when we started, this is how we started and here's how we figured out we needed to change."

It's almost like technical debt, but it's like platform technical debt.

How do we write things, how do we understand things and how does our company work?

I think that's the part that is easy to miss is to say, "How does our organization reflect our software or vice versa, and is that part of what we need to level up?"

Paul: That's funny, everything you're saying is reminding me of why we ended up with microservices.

We're going to change this small part of our system from Ruby to Go, so we make a new service. But there's also the organizational thing.

This part of the organization is, let's call it, architected in this way, so we end up with a system that is architected in that way. And then microservices are all these tools to allow the separate teams, the separate systems, the separate parts of the organization to have their own software interfaces that reflect the organizational interfaces they have.

Heidi: Yeah, absolutely. What is that? Conway's law. The product will end up resembling the org chart.

Paul: Right, right, right.

Heidi: It's really interesting to me how Liz Fong-Jones and Charity Majors keep talking about socio-technical systems.

Our software is such a clear representation of that, but we very seldom have the distance to see it because we're in the socio-technical system and we're stuck in it.

Paul: Mm-hmm. I haven't worked very much on the social software, but I use it a lot and it's very obvious when you're looking at something like Twitter, how the technology and the social side of it overlap.

It's so easy to edit a tweet, right?

You can just go in the database and change the text of it, but there are technical reasons why that's extremely challenging and there are social reasons why you would never want your product to do something, I mean, maybe, maybe not, but why you would not want your product to be able to do that.

Heidi: Yeah. When you think about your companies, what do you do to make sure that you aren't just reaching product market fit, but are reaching some kind of--

An advance of what the market needs, because you were talking about how being reactive isn't sufficient, so how do you work on being proactive about market need?

Paul: I'm going to try to answer this without sounding too conceited.

I think the thing is to know what you're building, like have a vision of where you're going.

I think it was very easy for CircleCI. We, and 50 other companies, had the same vision at roughly the same time, and everyone was super aware that once you've got Heroku and it's in the cloud, and you've got GitHub and it's in the cloud, there's a gap in between, and it makes no sense to be running a server to do the process in between, which is CI.

So that one wasn't too difficult.

I've said many times before that product market fit was just there waiting for us. I think I've said there was a CI-shaped hole in the market and we just came along and filled it.

It still required a little bit of recognition of what is it that we're building.

The big one for us with CircleCI was recognizing that only web apps mattered.

Everyone who was ready to use a new cloud CI thing was building a web app, mostly in Ruby, but often in Python and Node.

But also that all other software would be subsumed in some way by web apps and would change to the technology that the web apps were using.

So I think it's about having some sort of direction, or a documented strategy, that tells you exactly where you're going.

With Dark, it was a similar sort of thing.

I described earlier what Dark aimed to do, this removing of accidental complexity, and the solution we came to is not obvious: we're building a programming language.

The two of them are not an obvious connection.

The place where we came in with this vision of the future is to say: the cause of the problem is all these intersections between the different tools, so we have to build a holistic, unified, integrated tool, and that tool is going to incorporate infrastructure, deployment, code editing, and the programming language.

So we were building this one tool, which has all of those things.

That's not a thing that you can reactively find your way to.

It's not like you're going to-- let's say Glitch, for example, is one of our, I don't know if it's exactly a competitor, but it's in the space, and Glitch is a real-time code editor for Node.

Glitch isn't going to build the thing and be like one day, "You know what we really need? We need a programming language as well."

It's not how you get there.

Heidi: That's a really interesting comparison. I hadn't thought about it.

But yeah, you are sort of in the same space. It's not the no-code space, it's the no-friction space.

Paul: Right, right. I have so much to say about that. But yeah, the phrase we're using for this is "just code."

But yeah, I think that the reason that the whole no-code space exists is because of how we, as sort of the dev tool makers, kind of fucked up and just left so much complexity in the software development thing.

I think almost everyone who's making no-code could just as easily make code.

They can write formulas in Excel, for example, which is coding, but they're over in no-code because no-code actually removed the friction, and that's what's actually important for most people to be able to make code.

Heidi: That totally makes sense.

I am not a heavy coder, and every time I have to try to set up an environment and fork and commit properly, I see the hole in the world: it is time for us to talk about new source control.

Paul: Mm-hmm. Well, unsurprisingly, source control is part of Dark.

Heidi: Yes.

Paul: Yeah, there is no setting up environments.

There's no figuring out that your version of Node is too old and that you need to update every package under the sun to get this thing to work again.

All of that is just built-in, it's magical.

Heidi: It is.

I thought it was really a cool project when I first looked at it and I'm like, "I have this thing where I've been talking about how we upgraded how we do code and we upgraded testing and we upgraded deployment, and we forgot everything about upgrading once we got to source control."

Paul: Right. We have to keep our source in text files.

Heidi: Yeah.

Paul: I actually think that's the fundamental problem, it's everything is text.

Heidi: That might be the fundamental problem.

I thought the fundamental problem was that we were assuming that there could only be one valid change at a time, that we think of it as being very sequential.

Paul: Tell me more about that.

Heidi: When you think about how branch-based source control works, every branch is a commitment to have a merge conflict later.

Paul: Okay.

Heidi: It means that at its base, I feel like source control is thinking extremely sequentially.

Sometimes it interleaves the sequences, but it's always like a thing came first and then another thing came.

Paul: Right.

Heidi: And the second thing is always correct.

Paul: Mm-hmm.

Heidi: That may be true, but doing this really sequential thinking--

While it works for open source, which is really what Git was created for because it's distributed teams that don't communicate with each other, distributed people, I don't think it's very practical for people who are trying to figure out how to do something extremely complex with the kind of real-time communication that we have now--

Where we could just test something live as we're working on it.

Paul: Yeah. I think there's a lot of overlapping concerns there.

There's the mono repo versus microservices. There's the team thing.

There's how does an individual developer on a team get their code into production while someone else is working on, well, I guess you could say working on the same thing or working on different things, and those are different use cases.

Heidi: Mm-hmm.

Paul: Have you done trunk-based development much? I've never tried this, but I see people talking about it.

Heidi: It's certainly something that LaunchDarkly advocates, where we're just like let's not have branches, let's branch by abstraction using feature flags.

Paul: Do people just push to the main branch?

Heidi: Yep. There's only one branch.

Paul: There's only- Oh.

Heidi: Oh, there's no branch, there's only one code.

Paul: How do you do code reviews?

Heidi: You do them before push.

Paul: Okay. Okay. There is a pre-push system, let's call it.

Heidi: Yes.

Paul: Yeah, pull requests.

Heidi: Hmm?

Paul: No?

Heidi: Sometimes it's a pull request, like frequently, but I think that's an artifact.

Paul: Yeah, I think that's an artifact as well.

Heidi: Yeah, so you share a snippet somehow, and I've seen some cool tools for sharing a code snippet with somebody for review before you save it, really.

Paul: Mm-hmm. GitHub had this thing a while back--

They wrote about their continuous delivery process, and one of the things they did is that before you merge code into the main branch, you have it in production for some subset of users.

Which is obviously the opposite way that almost the entire industry does it.

Everyone else tests and gets it into main and then maybe enables it for some subset of users, but they deployed it to some subset of machines or users or whatever.

But yeah, I'm really liking what you're saying about how--

I mean, it's really going deep on the concept of feature flags and on dogfooding your system and your services.

I kind of love it.

Heidi: It feels very strange, I think, is part of the problem because if you talk to anybody who's learned to code in the last 10 years, which is a lot of people, it's axiomatic that you sit down and you start a branch, and I'm like, "But why?"

Paul: With Dark we're straddling both worlds because the way that we've developed Dark itself is using fairly standard tools.

It's using GitHub and pull requests and CI and Docker and Kubernetes and the cloud and all that sort of thing.

But the way that you write code in Dark is a lot closer to what you said about how LaunchDarkly does it: you're creating, let's call it a sandbox, and we use feature flags to create those sandboxes.

It's sort of equivalent to creating a branch, but it's not an actual branch.

Then you can edit your code there, and we haven't built code review tools, but once you've got the thing over there you can have another human look at it, and it can be in production behind a header or for specific users or something along those lines.

But it's sort of that trunk-based development. I think the concept of a branch and the concept of a feature flag are inherently the same, except that Git branches specifically have a lot less flexibility in being deployed: they're all in, or at least they're all in at the level of a bundle, a container, a machine, or something like that.

But they fundamentally do the same thing as a feature flag, except that a feature flag you can enable for one person, or roll back instantly, that kind of thing. I think they're the same concept.
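That "a branch is a feature flag" equivalence can be illustrated with a tiny sketch. All names here are hypothetical (this is not LaunchDarkly's or Dark's actual API): both code paths live in trunk, the flag decides per user which one runs, and rollback is one state change rather than a redeploy.

```python
class FlagStore:
    """Toy per-user feature flag store."""

    def __init__(self):
        self._rules = {}  # flag name -> set of enabled user ids, or "all"

    def enable_for(self, flag: str, user_id: str):
        self._rules.setdefault(flag, set()).add(user_id)

    def enable_for_all(self, flag: str):
        self._rules[flag] = "all"

    def rollback(self, flag: str):
        # Instant rollback: one state change, no redeploy, no revert commit.
        self._rules.pop(flag, None)

    def is_enabled(self, flag: str, user_id: str) -> bool:
        rule = self._rules.get(flag)
        return rule == "all" or (isinstance(rule, set) and user_id in rule)

def render_checkout(user_id: str, flags: FlagStore) -> str:
    # Both "branches" live side by side in trunk.
    if flags.is_enabled("new-checkout", user_id):
        return "new checkout"   # the path under development
    return "old checkout"       # the path everyone else still runs

flags = FlagStore()
flags.enable_for("new-checkout", "alice")
print(render_checkout("alice", flags))  # new checkout
print(render_checkout("bob", flags))    # old checkout
flags.rollback("new-checkout")
print(render_checkout("alice", flags))  # old checkout
```

The granularity is the point: a Git branch deploys all-or-nothing, while this kind of flag can be enabled for one user and withdrawn in one call.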

Heidi: Yeah, and it's that granular control over what you're building and how fast you can get feedback.

I think we keep coming back around to that because it's so key to say the way that you learn to build a better product is to get faster feedback on it.

Paul: Yeah, actually, this was one of the insights of Dark and it's--

You describing the LaunchDarkly software development process is really interesting because what is that code review for?

Heidi: By the way, I wouldn't say that this is a hundred percent how LaunchDarkly works. This is more like the idealized version.

Paul: Sure, sure, sure. It's a bit of a rhetorical question, but what is that code review for?

Well, it's to make sure that you don't break the system, right?

It's no longer a case of trying to figure out is this the right feature, is this how code should be written, does this adhere to our style guidelines?

Maybe it does a little bit of that, but for the most part if it's going to be controlled by a feature flag once it's in production anyway, you're just saying what is the risk of me actually pressing deploy on this thing?

The premise with Dark is that if you can make it so that there is no risk, then that step is unnecessary.

Heidi: Yes, exactly. I think a lot of our testing comes down to a misunderstanding of what we're scared of.

Testing is essentially a fear-based reaction to failure. If we say it's tested, then we can say I did my best.

Paul: I'm not sure I agree with you.

Heidi: All right. Take it apart.

Paul: I think that there's some validity to your point.

I think we often do that.

In the way that I code, for the most part, I don't write tests because I use static types, and they fulfill the same function and give you that, "Does it mostly work?"

But I do find, especially with anything finicky, like our code editor at Dark, which has maybe a thousand unit tests, that those tests allow us to keep it working, because they prevent regressions and they save our brains from having to internalize the entire state of the product to do the mental gymnastics to determine it.

Heidi: That's fair.

Paul: I think there's a little more to it than fear, but fear certainly is a component of it.

Heidi: I stated this poorly, so thank you for taking that apart.

What I'm really thinking about when I say that is not the unit test and not the test driven development, but the idea that test coverage will save you.

Paul: Oh, for sure. Yeah, test coverage is-

Heidi: It's a blankie.

Paul: Yeah. I've never really used test coverage all that much.

Often you have a system which has no test coverage at all and it works perfectly and no one touches it because it works perfectly.

You could argue that no one touches it because they're afraid because there's no test coverage.

Heidi: Chicken and the egg.

Paul: Yeah.

Heidi: But if it works perfectly, that is the test. Isn't it?

Paul: So long as no one touches it.

Heidi: Right. I guess the thing is that I think about a lot is who are we testing for?

Are we testing to preserve the system, which is sort of the argument you're making, or are we testing to preserve the user experience?

Paul: I tend to think of testing as preserving my time.

Heidi: Okay.

Paul: Where it is faster to not test often I will prefer to not test.

That system I described that had the thousand unit tests?

The reason that I had a thousand unit tests was because I just couldn't keep it working when I was building it.

Heidi: Mm-hmm.

Paul: It started as 50 unit tests and that gave me a lot of confidence.

Then, because we had built a system for super easily adding unit tests, it was able to grow to that number, and we were able to keep it all working.

I think that if we didn't have those, we would constantly be going back and being like, "Oh, backspace doesn't work when you're on a curly brace. Someone go fix backspace on curly brace."

It's faster to have a test for everything that you break basically.

Heidi: It makes me think, my daughter makes cookies all the time, but the containers that we keep the powdered sugar and the flour in are identical, and I finally labeled them and I'm like, "Why are you still tasting it before you put it in the cookies?"

And she's like, "Because otherwise the cookies come out badly. Why would I trust the labels if this two second test will keep me from making bad cookies?"

I think that's sort of what you're saying is that the testing needs to serve a function for you where it saves you time.

Paul: Right. If it's flour and sugar, then it's probably fine, but if you're in a medical lab and there's a thousand beakers, you probably want to label them.

Heidi: Thank you for expanding my thinking on that because so many times I have seen people use tests as a talisman, as if their test passing meant that their software was delivering value.

Paul: Mm-hmm.

I think it's related to what you were saying earlier about everyone coming up in this open source world, because you can sort of think of open source as free labor, or at least you can think of it as a place where you can make requests on people's time that costs you nothing.

Someone comes in and they say, "Here's some code."

A very easy thing to say is, "Oh, it doesn't have a test," and you could write the test yourself.

You could download it and you could manually test it, but even if that takes you 10 seconds, it's cheaper to make someone else do it and it takes them five minutes or 30 minutes or whatever.

I think when you work on a team that really considers the speed of output, you're much more likely to make trade offs that say, "I think this test is going to take you half a day to write and it's not worth writing that test."

Heidi: That's interesting. It's almost like a seriousness quality check.

Paul: I'm not sure what you mean.

Heidi: If you say, "Why should I take your software seriously? You haven't even written a test for it."

Paul: Yeah. Maybe it's trust related because when you're on a team, or at least you're on a high trust team, you know that your correctness values have been taken into account.

You value the same thing as the other people on the team to the same degree as the other people on the team.

If you're on, let's say, a move fast and break things team, then you know that the other people on your team are also prioritizing those values and that they made the decisions that they made using the same rubrics that you made.

Whereas if you're taking contributions from outside, you don't know that, so I think a lot of--

Another thing about a lot of the open source process: when you make a pull request there's an automatic checklist that's created, because these people outside don't think about building software the way that you do.

And if we force them to write a test, then they have been forced to think about it the way that we did.

Heidi: That's a really interesting point and ties into something else I've been thinking about.

Heidi: Not so much with Dark, but with other software, we're incorporating a ton of things from other people, open source, proprietary; something like two-thirds of most people's software is other people's software.

Paul: Oh yeah.

Heidi: How do we do a credit check on our dependencies?

This is sort of like the left pad problem, right?

How do we deal with the fact that there are so many dependencies all the way down that we didn't get to contribute to and don't have that correctness feeling about?

I've thought it would be really interesting to do like a package credit rating.

Paul: I feel like there are companies that do this in a sense, maybe not credit rating.

They do dependency checking and-- But yeah, credit rating, like how many people work on the software, when was it last--

Yeah, actually we have proxies for this. Right?

We look at the number of stars that the package has on GitHub.

That's an- is it effective?

It's not a terrible proxy for how maintained it is or how many other people are depending on it.

Heidi: Right. I just think it's interesting.

The thing I was thinking about, because I'm always thinking about feature flags, is what if you could have a gating function that says, "I'm not going to ingest anything that's under a B."

This is maintained by one person in Romania.

That's not stable enough for my enterprise software.

Paul: The problem is that then you can't install Express.

Heidi: Yes.

Paul: Or you couldn't a year ago. It's probably been longer.

Heidi: Well, no, it's a giant problem. This is my pipe dream, right?

But I think it's interesting to think about, especially as you're building a language.

You don't necessarily want, maybe you want package signing, but maybe what you want is crowdsourcing.
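Heidi's gating function is, as she says, a pipe dream, but a sketch makes it concrete. The rating rubric, thresholds, and packages below are entirely invented for illustration:

```python
from dataclasses import dataclass

RATINGS = ["A", "B", "C", "D"]  # A is best

@dataclass
class Package:
    name: str
    maintainers: int
    stars: int
    days_since_release: int

def rate(pkg: Package) -> str:
    # Crude proxies, as discussed in the episode: maintainer count,
    # GitHub stars, and release recency. The weights are made up.
    score = 0
    score += 2 if pkg.maintainers >= 3 else 0
    score += 1 if pkg.stars >= 1000 else 0
    score += 1 if pkg.days_since_release <= 180 else 0
    return {4: "A", 3: "B", 2: "C"}.get(score, "D")

def gate(deps: list[Package], minimum: str = "B") -> list[Package]:
    """Refuse to ingest any dependency rated below the minimum."""
    allowed = RATINGS[: RATINGS.index(minimum) + 1]
    return [p for p in deps if rate(p) in allowed]

deps = [
    Package("left-pad", maintainers=1, stars=50, days_since_release=900),
    Package("requests", maintainers=10, stars=50000, days_since_release=30),
]
print([p.name for p in gate(deps)])  # ['requests']
```

The single-maintainer package fails the gate, which is exactly the trade-off Paul raises: a strict threshold would also have blocked packages like Express in its single-maintainer years.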

Paul: There's a couple of thoughts that I've had about this.

A lot of what I think about is what can be done statically?

What can you tell automatically from the system?

Let's say you're building a Stripe package on Dark, and I intend to implement this, though it might not be in the first version.

There's going to be something that says, "This thing makes HTTP requests to stripe.com."

That thing is also going to say, "This thing does not make HTTP requests to anywhere else."

It doesn't make database calls.

The only thing that this can do is send information to stripe.com, or receive it, or whatever.

If it also says, "This sends HTTP requests to mymalwareserver.com," then you know not to trust it.

Heidi: Sort of like the little kid phones where you can only call five numbers.

Paul: Yeah, exactly. Your Stripe package should only be able to call stripe.com and your Twitter package should only be able to call twitter.com.
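The capability idea Paul describes could look roughly like this. A speculative sketch, not Dark's implementation; every name here is invented. The package only ever receives a client scoped to its declared hosts, so a call to anywhere else fails before it leaves the process.

```python
from urllib.parse import urlparse

class CapabilityError(Exception):
    pass

class ScopedHTTPClient:
    """An HTTP client handed to a package, restricted to declared hosts."""

    def __init__(self, allowed_hosts: set[str]):
        self.allowed_hosts = allowed_hosts

    def request(self, url: str) -> str:
        host = urlparse(url).hostname
        if host not in self.allowed_hosts:
            raise CapabilityError(f"{host} is not in this package's allowlist")
        return f"GET {url}"  # stand-in for actually sending the request

# The "Stripe package" can only talk to Stripe:
stripe_client = ScopedHTTPClient({"api.stripe.com"})
print(stripe_client.request("https://api.stripe.com/v1/charges"))

try:
    stripe_client.request("https://mymalwareserver.com/exfiltrate")
except CapabilityError as e:
    print("blocked:", e)
```

This is the kid-phone model: the declaration doubles as documentation ("this thing makes HTTP requests to stripe.com and nowhere else"), so trust becomes something you can check statically rather than audit by hand.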