Generationship
30 MIN

Ep. #17, How We Gather Our Thoughts with Mark Pesce

about the episode

In episode 17 of Generationship, Rachel Chalmers is joined by futurist and AI expert Mark Pesce to explore the complex intersection of AI-generated code and copyright law. Mark shares insights from his recent experiments with language models and discusses the flaws he uncovered in AI transformers. Tune in to hear about the challenges of reporting AI issues, the impact of AI hallucinations, and the future of this evolving technology.

Mark Pesce is an award-winning podcast creator, journalist & futurist. He is the host of The Next Billion Seconds podcast, the co-inventor of VRML, and the author of six books. He's a sought-after speaker and consultant in areas as diverse as fintech, education, government, real estate, media and cryptocurrency.

transcript

Rachel Chalmers: Today I am so happy to have my friend, Mark Pesce on the podcast.

Mark is the host of the award-winning podcast, "The Next Billion Seconds" on PodcastOne, multiple award-winning columnist for The Register, producer and host of "This Week in Startups Australia," co-inventor of VRML, the standard for 3D on the web and a core component of MPEG-4, and author of six books, including "VRML: Browsing and Building Cyberspace", "The Playful World", and most recently, "The Last Days of Reality."

He's a sought-after speaker and consultant in areas as diverse as fintech, education, government, real estate, media and cryptocurrency. He founded the postgraduate programs in digital and emerging media at USC and the Australian Film, Television and Radio School, and he currently holds honorary appointments at both the University of Sydney and the University of Technology Sydney.

Mark, thanks so much for coming on the show.

Mark Pesce: Thank you, Rachel.

Rachel: I want to ask about some experiments that you recently ran in partnership with an IP lawyer. You were using language models to generate Python code. Can you tell us about some aha moments that came out of that?

Mark: So there are two really interesting findings here. One of them is about what happens if you are going to, and I set out to basically, as we say these days, 10x myself, right? So I've been coding for 45 years, give or take, and I've been cutting code in Python for 20 years or something, and I have always loved it. From the moment I touched Python, I was like, okay, that's it, I'm done. I don't need to learn another language.

So all of that is still there, although I code less and less as I get older. So part of what I want to do is use a tool, in this case it was ChatGPT Plus, so GPT-4, to help me. You know, I know what I want to code, but often I don't necessarily want to write a particular Python function, and it's very easy for me to say to the model, this is what I need the function to do, and have it turn it out. And because I've been coding for 45 years, I can inspect that code and go, okay, this is what I needed it to do.
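That generate-then-inspect loop is easy to script. Below is a minimal sketch of it, assuming the OpenAI Python SDK and an API key in the environment; Mark was working in the ChatGPT Plus web interface rather than through the API, and the example spec here is invented.

```python
# A minimal sketch of the workflow Mark describes: spell out exactly what the
# function must do, let the model draft it, then read the draft yourself
# before it goes anywhere near a codebase. Mark used the ChatGPT Plus web UI;
# this scripted variant via the OpenAI Python SDK is purely illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SPEC = """Write a Python function `dedupe_preserving_order(items)` that removes
duplicates from a list while keeping the first occurrence of each element.
Return only the code, no commentary."""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": SPEC}],
)

draft = response.choices[0].message.content
print(draft)  # the human step: read it, test it, decide whether it does what you asked
```

The review at the end is the point of the exercise: the model drafts, the programmer judges.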

But the thing I learned was that it actually changed my flow, rather than tinkering. And Python, because it's interpreted, is a really good language for tinkering, right? Just change one thing, run it; change one thing, run it.

You actually have to think things through. You have to think quite deliberately, and that's standard operating procedure in a software engineering practice, but it's not the way that I've been able to get away with developing for the last 20 years in Python, where I can just sit and diddle all day.

So the first thing that I had to do was sort of change my mindset about how I was approaching the problem, okay? And it makes it less fun.

Right, because there is a certain joy, and it's that kind of playfulness that you get around this, right? It's the constructivist model, Piaget: I'm exploring a problem space in order to understand it. It's what children do naturally, it's what adults do when they're deeply in flow in learning, and I had to kind of pull away from that and go, actually, I need to be much more methodical here.

Okay, that's fine. The second thing I realized was that if I didn't really understand what I was asking for the function to do, I was going to get a crap function. So garbage in, garbage out.

And then the third thing I realized, after I had generated a function that I was using, was: oh, this function has been generated by a computer, and I know, because there have been a number of lawsuits around this, that anything that is generated by an AI, that does not have a human author, cannot be copyrighted. And so I approached my IP attorney, for whom I am writing this code, and I go, so it's occurred to me, and I explained that to him.

He is like, "Yep." I was like, well, how is this going to work? Because you're going to have all of these folks who are using GitHub Copilot, for example, and a lot of major commercial organizations now use that, because it does 2x a programmer, maybe, or even 1.2x a programmer.

All of these little bits of code from Copilot are now being threaded into their code bases, and those bits cannot be copyrighted. So our code bases are now basically Swiss cheeses of code that's under copyright and code that's not under copyright? And he just said, "Mark, it's a mess."

Rachel: So what is going to happen, and does the copyleft movement, the open source movement, give us any models for working with this? Is there...

Mark: No. So if you take a look at what the NetBSD folks just did, they banned all AI generated contributions from their code base. I think Arch Linux is doing the same thing now.

I don't think Linus has done anything around the Linux kernel yet, but there's the range of copyright issues, and NetBSD uses a non-GPL license, I can't remember, probably just the BSD license, right? So it introduces such complexities for them in open source.

This is the thing: as you know, there isn't one uniform open source license either. It introduces such complexity, and they, because they are the providers, need to be utterly compliant with their licensing terms, because that's what they're doing.

It makes it actually impossible for them as well. So what it's doing is, it's making everything really messy for everyone.

Rachel: And then the idea of software vendors as suppliers to the US government has become a huge, huge deal with the requirement for SBOMs, software bills of materials. What happens there?

Mark: And here we go sailing directly into the cliff face. But I want to share with you something I haven't shared publicly with anyone else. Now, I've been told this episode will air after an article; I have a feature coming out in The Register, and it should come out before the end of May. So if you go to The Register and Google Mark, P-E-S-C-E, it should come up.

What I want to do is, I want to tell you a story. While I was writing this code for this IP attorney, I was testing some prompts that were going to be put into it, because the code is using AI to help figure out some things around copyright, right?

And so I was testing a prompt that was going to be embedded in the code, and I was going to use it by calling APIs to the various language models and all this. And I was like, okay, rather than wasting some API time, I'm just going to pop it into Copilot. I pop it into Copilot, and Copilot goes bananas.

Oh yeah, Copilot's having a really bad day here. Try it again, Copilot goes bananas again, and Rachel, I assure you, it's just a really normal prompt. There's nothing weird about it. I'm like, okay, that's weird.

Rachel: When you say "goes bananas," what was actually happening?

Mark: The output was disintegrating. And I'm dancing around all of the specifics here for reasons that will shortly become clear, all right? So I was like, okay, Copilot is clearly not working. I have browser tabs open with all of the models, right?

And so I'm just sort of like, okay, that's not working. Let me go try Mistral. Tried Mistral, Mistral exhibits the same behavior.

Rachel: Sure.

Mark: And then I try over a period of a couple of days, and this sort of gets into the flow of the story, I had managed to replicate this problem in every single model except for Anthropic's Claude.

Rachel: Interesting.

Mark: What that's telling me is that that is not a problem in a particular vendor's model, it's a problem in the transformer, which is the core technology that underlies all of these models. And so I found a flaw, if you want to call it that, in the transformer, which a very simple prompt could trigger.

So I went to a friend of mine in Sydney, whom you know very well, and said to him, I know you don't like AI, I have a problem here. What should I do? Then he sighs and he's like, "Yeah, you're right. You have to do the right thing. So you need to go through the correct reporting processes to report this flaw" and Rachel...

My key finding here and something your listeners need to know, there is no reporting process, full stop.

Now I have fairly good contacts at Microsoft. They said, "We need you to file a vulnerability report," which I then did, documenting the bug, and a security team came back a day later and said, "Well, we think this is a bug. We don't think it's a vulnerability." But my next question for the security team is, do you understand what a prompt attack is?

Which I don't think they do, because in the page where you can report all your possible vulnerabilities, there's a dropdown menu for all the various Microsoft programs in which you can report a bug and you know what's not on that list?

Rachel: Let me guess. Is it Copilot?

Mark: You are correct. So they aren't thinking in these terms. So I spent the better part of a week trying to contact the model makers. Now Groq, the Google spin-out, G-R-O-Q, they were lovely. They had a contact page. I sent them a message, I sent them the documentation.

They got back to me the next day and said, "You're right. We've documented this across all of our models." Which is good for me, because that meant it wasn't me just making this up. It's like, okay, this is an actual genuine problem. They don't of course make these models. They simply use other people's models. They said they were going to reflect it upward.

There is no way, as near as I can tell, to contact Mistral to tell them. I mean, there's a Contact Us page. I tried twice, got no response. There's another company, a startup that's valued around a billion dollars, that has no contact information at all, except for their PR agency.

I went to their PR agency, they were lovely. I said, here's the thing, I have a serious flaw that I need to report to them. They finally, through the PR agency, passed that along to the CTO who never got back to me.

And then there's a certain, I'll call it unnamed, multi-trillion dollar organization, in which I have a contact at the vice presidential level and in whose model I'd been able to replicate this flaw. I passed it along to them, they said, "Okay, we will go and find people in the AI unit," and in a week they got back to me and said, "Oh my gosh, our team is so busy putting out the fires from the AI model that they just released. They don't have time."

Rachel: Very concerning stuff.

Mark: This is where we are right now.

Everyone is moving fast and they don't care if things are broken. It's not even that they're breaking things. They just don't care if things are broken.

So the problem here, as I say in the article, is that I'm holding on to this flaming bag of kryptonite, because it's a very simple prompt that I know can DDoS GPT-4, because I did it accidentally when I was testing.

Rachel: On a related note.

Mark: Yes.

Rachel: You've written that we have to stop ignoring hallucinations in large language models and others have argued that hallucinations can't be overcome, that they're an intrinsic property of how these systems work. Where do you stand?

Mark: I didn't write that. On the Windows Copilot news blog, I was linking to an article on The Verge, which said that.

Rachel: My reading comprehension is disgraceful. I will rescind my English degrees.

Mark: I just want to be clear on that, but I also posted it to the blog, because I think it's very important. Now, part of what I do, so I have a consultancy called Wisely AI, and the whole point of Wisely AI is that we help people and organizations use AI safely and wisely. And part of what we do, and we did a big white paper on this called "De-Risking AI," is look at hallucinations dead on.

Look, we don't exactly know all of the causes of hallucinations, but we know some great ways to mitigate them. For example, if you are in a position where you can't tell whether an output is hallucinated, you need to go find a human expert. At the very least, you need to put the same question to multiple models and see if those results agree.
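That cross-checking step is straightforward to automate. Here is a minimal sketch, with hypothetical stub "askers" standing in for real model clients; swap in whichever APIs or local models you actually use, and note that the exact-match comparison is a deliberately crude stand-in for a proper semantic check.

```python
# A minimal sketch of the cross-checking mitigation Mark describes: put the
# same question to several models and treat disagreement as a signal to go
# find a human expert. The `askers` mapping is hypothetical; plug in whatever
# model clients you actually use (OpenAI, Anthropic, a local model, etc.).
from typing import Callable, Dict


def cross_check(question: str, askers: Dict[str, Callable[[str], str]]) -> dict:
    """Ask every model the same question and report whether they agree."""
    answers = {name: ask(question) for name, ask in askers.items()}

    # Crude agreement test: compare normalised strings. Real use would want
    # something semantic (embedding similarity, or a judge model).
    normalised = {a.strip().lower() for a in answers.values()}
    return {
        "answers": answers,
        "agree": len(normalised) == 1,
        "recommendation": "looks consistent" if len(normalised) == 1
        else "models disagree: escalate to a human expert",
    }


if __name__ == "__main__":
    # Stub askers so the sketch runs on its own; swap in real API calls.
    stubs = {
        "model_a": lambda q: "Canberra",
        "model_b": lambda q: "Sydney",
    }
    print(cross_check("What is the capital of Australia?", stubs))
```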

So there are all sorts of techniques that you can use to, if not mitigate hallucinations completely, at least allow you to detect them and work around them, right? Are hallucinations inherent to models? There is a wealth of evidence that says, yeah, probably that's the case. We know that hallucinations are also an outcome of prompts that are poorly formed.

In other words, they're ambiguous in ways that allow the language models to simply go in the wrong direction, right? So these systems are clearly not perfect. It's not clear that they're ever going to become perfect, but that doesn't mean they're not useful.

Rachel: It doesn't mean they're not powerful and it doesn't mean they're not going to change many, many things about the way we work and build.

Mark: Yep.

Rachel: Mark, you've been, by design, at the forefront of three or four successive waves of hype, augmented reality, the cloud itself, the Internet of Things, and now AI. Do you ever get scared by popular delusions and the madness of crowds?

Mark: When you are inside the beginning days of something, and Erik Davis documented this quite well, I think in TechGnosis, there's a certain utopian thinking that comes along with that. So the early days of AR, VR really, the early days of the web, the same if you're talking about the cloud.

I think both of those had a very starry-eyed quality, because what you're doing is you're opening a frontier of possibilities, and the people who are opening that frontier are not the settlers, who tend to be extractive and tend to make those frontiers less appealing than they would be. They tend to be pioneers, right?

Pioneers, as they say, are identified by the arrows in their back. Of course, if there are arrows in your back, well, you're a pioneer.

Rachel: You weren't actually a pioneer.

Mark: Yeah, well, that means you've been upsetting the natives, which probably is not a really good way to go about your business.

Rachel: The previous generation of pioneers.

Mark: Yeah, so you really do want to be the folks who are out there exploring and that has very much been where I wanted to be.

Now, I was also very much in the early days of understanding digital currencies, cryptocurrencies, but digital currencies more broadly, and I have very publicly and completely rolled back any sort of support for what we would call cryptocurrencies.

I'm still very much thinking that digital currencies will be a big part of our financial future, but that cryptocurrencies have become, as in the Mos Eisley Spaceport, a hive of scum and villainy.

Rachel: A wretched hive of scum and...

Mark: A wretched hive, thank you. Thank you. And the funny thing is, I have lovely friends who have made a lot of money and are quite expert in using all of these tools and are very respectable individuals, and this does not reflect on them, although I think that they get quite stroppy because I have these opinions.

But my own feeling, having been in that space for a decade and having spent five years of that decade working hard to bring the regulators into concert with these companies, is that at heart, these companies not only did not want to be regulated, but did not understand the value of regulation in keeping them going.

That there was a frame of mind that blinded them to the reasonableness of doing things the right way, as a strategy for long-term survival. Therefore, FTX is gone, therefore Binance is gone, the two largest companies, because guess what? They simply could not see the evidence of their own senses.

That in fact, playing by the rules in a highly regulated financial system was the path to success.

Rachel: Mark, do you think language models can attain sentience?

Mark: No.

Rachel: And if we do develop an AGI, will we make good pets?

Mark: Yeah, exactly.

There is no indication from any of the work that's going on right now, that there is a path from a large language model or the transformer model into an AGI.

Rachel: What's your definition of an AGI in that sense?

Mark: I think actually we need to step back and we need to say, do we even know? And I always get, again, quite stroppy when people start throwing terms like AGI around, because I'm like, do we have a functional definition of intelligence? And of course the answer is no.

It is situational, it is situated, it is culturally and contextually dependent, all of these things. And so if you don't even have an idea of intelligence that human beings can agree upon for themselves, it becomes difficult for me to understand what AGI is.

We know that this is a persistent and, I would say, pernicious myth in the history of artificial intelligence. Intelligence, in the 1950-to-1970 era of artificial intelligence, meant chess-playing ability, because all the people who were working in the field could play chess.

Rachel: Uh-huh.

Mark: All right, well I guess that means that's good enough and then we keep on moving the goalposts on that. I am convinced, because I'm already seeing it, that all of the stuff that we are calling artificial intelligence in 2024, we will be calling automations by 2026 or 2027 and we'll be completely comfortable with that, right?

I mean, are they artificial intelligence? Yeah, but what's that telling us? It's the shock of the new. It's not that it's intelligence, it's the shock of the new.

We know, because we understand enough about how transformers work, that they're just kind of really, really sophisticated search engines that have this quality we can call attention, which allows them basically to pay attention to a whole bunch of stuff that's in a prompt while they're searching.

And of course what we did is, we spent the last 30 years jamming everything online and now we can condense that into a weighted model of x billion or x trillion parameters and then use a transformer to search the entire model.

So what we've got is essentially the natural outgrowth of everything we've been doing over the last 30 years in an extremely accessible format.
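The "attention" Mark refers to is the scaled dot-product attention from the original transformer paper. A toy illustration in NumPy, with invented shapes and values, shows the mechanism he's gesturing at: every position in the prompt weighs every other position when deciding what to carry forward.

```python
# A toy illustration of the "attention" Mark refers to: scaled dot-product
# attention from the original transformer paper, reduced to a few lines of
# NumPy. Each position in the prompt gets to weigh every other position when
# deciding what matters. Shapes and values here are invented.
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns attended values and the weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # relevance of each token to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the prompt positions
    return weights @ V, weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(4, 8))                  # 4 "tokens", 8-dimensional embeddings
    out, attn = scaled_dot_product_attention(tokens, tokens, tokens)
    print(attn.round(2))                              # each row sums to 1: one token's view of the rest
```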

Rachel: What does Cory Doctorow call it? Spicy predictive text.

Mark: Yes, spicy, grand theft auto-complete is the other one.

Rachel: Very nice.

Mark: Yeah.

Rachel: The capabilities of these LLMs, as you say, build on 30 years of work, but they came together in a way that shocked people who weren't paying close attention, and I include myself in that number. What future shocks are coming in the next billion seconds?

Do you think quantum computing is going to be viable? What about killer drones? What do you see coming?

Mark: Now look, I'm pretty sure that killer drones are already being used in Ukraine.

Rachel: Yeah.

Mark: Right? I mean, we kind of have versions of people's general cell phones being used to track and da, da, da, da, da. So you know, and we know that the US Army is testing robotic dogs with weapons on them that still have to be fired by a human.

You know, it's not whether those machines are possible, it's how we are designing their control interfaces, you know? And this is, I think, a good question to put to, for example, Palmer Luckey, who has a big tie-up with the Australian government to build autonomous submarines.

Rachel: Mh-hmm.

Mark: Right, mini subs that will be patrolling the coastline against the invading Chinese or whoever it might be. It's obviously the Chinese, but whatever we're saying.

Rachel: I don't know, those New Zealanders, don't trust them. I mean, I married one, but you can't trust them.

Mark: We're going to take them without firing a shot. So there's all of that to consider there, but one of the interesting things is, of course Australia's also got quite strong quantum computing skills.

So Michelle Simmons, who I got the opportunity to work with last year, lovely human being, does a lot of work in basic quantum computing technology, particularly solid state, so that it's more easily viable than supercooled, entanglement-based quantum computing.

We seem to be getting better and better at that. However, with that said, the first quantum computing stuff was going on when I was at MIT in the 1980s, alright? Which tells you that we've been at that for 40ish years now.

It's been very, very, very slow progress. Do we get to a point where our cryptographic systems are no longer viable? Well, we already have mitigations in place for that, right? We're using quantum-resistant forms of cryptography and we're moving the web to those things. Will that be a watershed moment?

Will it be a watershed moment in our ability to compute the folding of a protein? Very probably. So there are some things that we get when we get there. Will we get there? Yes. I think we're going to get there by degrees, because the story so far has been incremental improvements in the technology, and the technology has incrementally improved.

The way I heard Michelle Simmons describe it, roughly where we are, and this was in 2023, is kind of 1968 with respect to silicon, where you couldn't really put together a microprocessor yet, but you could get several circuits onto a chip.

And so she sees an equivalence between Moore's Law as we understand it, that basically density doubles every 18 to 24 months, and where we're going with quantum computing. She says that by 2030, the idea of a quantum processor that is available and scalable, even at small scales, seems to be pretty much on the roadmap for us.

Rachel: Yeah, I was about 10 years too early to that market, to my chagrin. But yeah, technological progress works the way Ernest Hemingway, I think it was, described bankruptcy: "A little at a time and then all at once."

Mark: All at once, yes. And look, if OpenAI had not taken the transformer paper and gone, "Woo, there's something here beyond language translation," which was the original use case for it, right?

And the thing is, that should have been our dead giveaway, because in fact, before the pandemic language translation had suddenly gotten a lot better, because of the transformer, because the basic paper talks about translating between English and German. That was the example they used and it got very high confidence rates on that and so that was the key that should have told us.

Woo, this is really interesting here, right? I mean I think, you know, if you want to come back to an AGI, you can go either, well, an AGI to be an AGI has to be so much better than a human being at every conceivable measure put to it of what intelligence is, that it's irrefutable, right?

And there's no indication of anything like that, but that doesn't constitute a test. It doesn't even necessarily constitute a proof. In the language of the field today, that would constitute a vibe.

Rachel: Mark, what are some of your favorite sources for learning about AI?

Mark: Oh God, I mean, look, every morning when I wake up before I'm out of bed, all right, it's Ars Technica, MacRumors, just 'cause, Deadline, because you have to keep up with what Hollywood is doing every day. HotHardware, The Verge, and then HN, Hacker News.

Rachel: No Gary Marcus? I would've thought you were a Gary Marcus fan.

Mark: I don't think so.

Rachel: Oh, I'll send you a link. He's great.

Mark: If it shows, no, I know he is. If it shows up on Hacker News, then generally that means it's probably worth me reading it, you know? And things will float by, things will float by in my Mastodon feed. So things are coming in from friends on social media and things like that as well, because it's the normal stuff.

But in terms of the thing that I feed myself before I get out of bed: the day we're recording this, Microsoft Build just launched overnight, and there were a whole set of announcements about Copilot Plus, which is an AI-enabled PC, and something called Recall, which is going to be the PC basically photographing its screen every minute, keeping a running store of that, and then allowing you to search it.

Because God knows there's nothing wrong with your computer acting as a surveillance agent for the last three months, what could possibly happen?

Rachel: Don't you trust these mega corporations, Mark? What have they ever done to you?

Mark: It's not that I don't, I mean, never mind the fact that Microsoft's recent security record is not the most confidence-building, but it's not about what these companies do, it's about the honeypot that's being generated and how that will then attract the people who want to get into the honeypot.

Rachel: Because if there's one industry, well, two industries for which AI has been a genuine 10 to a hundred x force multiplier, it's porn obviously and bot farms.

Mark: Yes, yes and the difference between those two is getting narrower and narrower as the days pass.

Rachel: Mark, if everything goes exactly how you'd like it to for the next five years, what does the future look like?

Mark: Oh my gosh, I'm a big fan of local language models, ones that are running on device, but I'm also a big fan of open source language models.

I would like people to be using models that are not the product of human reinforcement systems that ruined the minds of the people involved, because they had to deal with lots of crap and all that, and we saw what was happening. I want us to be able to use systems that are not inherently, in their design, copyright violations.

Alright, and there's some legal wrangling around what constitutes copyright violation in training data and all this, but I would like us to be in a place where we are using these systems, and these systems have been created by the most healthy possible means, and they're systems that we control and can tune, and doing so is very easy and accessible for us.

I no longer use Twitter, but I'm now sitting on all 290,000 of the tweets that I sent in the, you know, 15 years that I was on Twitter. At some point, I want to actually use those to fine-tune a language model, which will start to both reflect me and be something that I interact with, but maybe also something that other people interact with, but something that becomes characteristic.
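The first step in that fine-tuning idea is just data preparation. Here is a rough sketch of turning an archive of tweets into chat-style training examples; the file name, the archive format, and the prompt framing are all assumptions, and the actual fine-tuning run would depend on whichever API or local recipe you choose.

```python
# A minimal sketch of the first step in what Mark describes: turning a tweet
# archive into fine-tuning examples so a model starts to "reflect" its author.
# The input path and the prompt/completion framing are hypothetical; adapt
# them to your archive's actual export format and to whichever fine-tuning
# API or local training recipe you end up using.
import json
from pathlib import Path

ARCHIVE = Path("tweets.json")        # assumed: a JSON list of {"text": "..."} objects
OUT = Path("finetune_data.jsonl")

tweets = json.loads(ARCHIVE.read_text(encoding="utf-8"))

with OUT.open("w", encoding="utf-8") as f:
    for tweet in tweets:
        text = tweet["text"].strip()
        if not text or text.startswith("RT "):       # skip empties and retweets
            continue
        # Chat-style example: the "assistant" learns to answer in the author's voice.
        example = {
            "messages": [
                {"role": "user", "content": "Write a short remark in the author's voice."},
                {"role": "assistant", "content": text},
            ]
        }
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

print(f"wrote {OUT}")
```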

I want us to be able to have access to those tools as tools for thinking, but so that there's a dance between what we're asking them to do and what they're bringing us to, not just a one-way conversation, and it feels like that is technically achievable.

It will take time and effort and open source dedication for people to be able to create those tools. I'm, as you know, a big fan of open source tools and always have been. I think this is the next place that open source tools need to go, and they need to be so easy to use that people don't even think about it. Of course, that's just the way things work.

Rachel: I'm getting increasingly interested in the work done by the local-first, the Lo-Fi movement, coming out of places like Ink & Switch.

I think that is, you know, the pendulum swings between hyper-centralization and redistribution to the edge, and I think we're overdue for a pendulum swing back.

Mark: Yeah, and look, you take a look at the AI PC and there's two arguments here. One is that it brings a lot of compute really to the edge, right? And that's a good thing.

There's another argument that this is going to cause the junking of an entire generation of PCs that were just purchased for the pandemic so that people could work from anywhere and so I am of several different minds about this.

I do think local inferencing is a good thing and that people should do it, but it needs to make sense for them. So I'll give you a for-example here, something that I have, and I literally whipped this together in a little more than an hour the weekend before last. I used some lovely open source software called whisper.cpp, which basically just takes the microphone of my Macintosh and transcribes it out to a file.

I can just talk to it and it does it with a high degree of confidence, because Whisper is a model that was originally developed at OpenAI, which, unbelievably, they open sourced, and which has been adapted to run well on my Macintosh. And then I wrote some other code that simply takes that and, every once in a while, runs a summary on it, so I can just be in a meeting and have a running summary of what's going on in the meeting.
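A rough sketch of the summarizing half of that setup is below: it assumes whisper.cpp is appending a live transcript to a text file, and it asks a local model, here via Ollama's HTTP API, for a running summary every few minutes. The file path, interval, and model name are all placeholders; Mark didn't specify his own plumbing.

```python
# A rough sketch of the second half of Mark's meeting-notes setup: whisper.cpp
# is assumed to be appending a live transcript to a text file, and every few
# minutes a local model is asked for a running summary. The file path, the
# interval, and the use of Ollama's local HTTP API as the summariser are all
# assumptions, not Mark's actual code.
import time
from pathlib import Path

import requests

TRANSCRIPT = Path("meeting_transcript.txt")   # assumed output location of the whisper.cpp transcription
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"                              # any locally installed model name
INTERVAL_SECONDS = 300

while True:
    time.sleep(INTERVAL_SECONDS)
    if not TRANSCRIPT.exists():
        continue
    text = TRANSCRIPT.read_text(encoding="utf-8")
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "prompt": "Summarise the key points of this meeting so far:\n\n" + text,
        "stream": False,
    }, timeout=120)
    print("\n--- running summary ---\n" + resp.json()["response"])
```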

It's an incredibly powerful tool for thinking and helping to organize thoughts, and again, it took me an hour to do that, and it's all happening on device, because I have a local model running on the device that's creating the summaries, plus Whisper, and so that's happening on my Macintosh. I want to be able to have that, perhaps, on a brand new iPad with the M4.

I want those kinds of tools everywhere and I want them to be part of how we gather our thoughts to think. It feels like that's the right direction here. So you're talking about Lo-Fi, let's bring all of that AI as close to us as possible and make it as tuneable to us as possible.

Rachel: I love that. Last question, my favorite question. You get to name a Generationship. We're going to Alpha Centauri; what is the ship named?

Mark: Rhizome. And that is, eh, I mean it's my nod to "A Thousand Plateaus," Deleuze and Guattari, right? And the idea that if you want to do something that's not colonizing, but is affiliating, that is branching out and inviting co-evolution, then you name it Rhizome.

Rachel: Absolutely beautiful. Mark, it's always a joy. Thanks for coming on the show.

Mark: My pleasure. Thank you Rachel.