Back to Index

E124: AutoGPT's massive potential and risk, AI regulation, Bob Lee/SF update


Chapters

0:00 Bestie intros!
1:49 Understanding AutoGPTs
23:57 Generative AI's rapid impact on art, images, video, and eventually Hollywood
37:38 How to regulate AI?
72:35 Bob Lee update, recent SF chaos

Transcript

Welcome to Episode 124 of the All-In Podcast. My understanding is there's going to be a bunch of global fan meetups for Episode 125. If you go to Twitter and you search for All-In fan meetups, you might be able to find the link. But just to be clear, these are not official All-In events.

They're fans. It's self-organized, which is pretty mind-blowing. But we can't vouch for any particular organization, right? Nobody knows what's going to happen at these things. You could get robbed. It could be a setup. I don't know. But I retweeted it anyway, because there are 31 cities where you lunatics are getting together to celebrate the world's number one business and technology podcast.

It is pretty crazy. You know what this reminds me of? In the early '90s, when Rush Limbaugh became a phenomenon, there used to be these things called Rush Rooms, where restaurants and bars would literally broadcast Rush over their speakers during, I don't know, the morning-through-lunch broadcast, and people would go to these Rush Rooms and listen together.

What was it like, Sacks, when you were about 16, 17 years old at the time? What was it like when you hosted these? It was a phenomenon. But I mean, it's kind of crazy. We've got like a phenomenon going here where people are organizing. You've said phenomenon three times instead of phenomena.

He said phenomenon. Phenomenal. Why is Sacks in a good mood? What's going on? There's a specific secret toe tap that you do under the bathroom stalls when you go to a Rush Room. I think you're getting confused about a different event you went to. There's a lot of actual news in the world, and generative AI is taking over the dialogue, and it's moving at a pace that none of us have ever seen in the technology industry.

I think we'd all agree the number of companies releasing product and the compounding effect of this technology is phenomenal. A product came out this week called AutoGPT, and people are losing their minds over it. Basically, what this does is it lets different GPTs talk to each other.

And so you can have agents working in the background, and we've talked about this on previous podcasts, but they could be talking to each other, essentially, and then completing tasks without much intervention. So let's say you had a sales team and you said to the sales team, hey, look for leads that have these characteristics for our sales software, put them into our database, find out if they're already in the database, alert a salesperson to it, compose a message based on that person's profile on LinkedIn or Twitter or wherever.

And then compose an email, send it to them, and if they reply, offer to do a demo and then put that demo on the calendar of the salesperson, thus eliminating a bunch of jobs. And you could run these, what would essentially be cron jobs, in the background forever, and they can interact with other LLMs in real time. Sacks,

I just gave one example here, but when you see this happening, give us your perspective on what this tipping point means. Let me take a shot at explaining it in a slightly different way. Sure. Not that your explanation was wrong, but I just think that maybe I can explain it in terms of something more tangible.

So I had a friend who's a developer who's been playing with AutoGPT. By the way, you can see it on GitHub. It's kind of an open-source project, sort of a hobby project, it looks like, that somebody put up there. It's been out for about two weeks.

It's already got 45,000 stars on GitHub, which is a huge number. Explain what GitHub is for the audience. It's just a code repository. And you can create, you know, repos of code for open source projects. That's where all the developers check in their code. So you know, for open source projects like this, anyone can go see it and play with it.

It's like Pornhub, but for developers. It would be more like amateur Pornhub, because you're contributing your scenes, as it were, your code. Yes, continue. But this thing has a ton of stars. And apparently just last night, it got another 10,000 stars overnight. This thing is exploding in terms of popularity.

But in any event, what you do is you give it an assignment. And what AutoGPT can do that's different is it can string together prompts. So if you go to ChatGPT, you prompt it one at a time. And what the human does is you get your answer.

And then you think of your next prompt, and then you kind of go from there and you end up in a long conversation that gets you to where you want to go. So the question is, what if the AI could basically prompt itself, then you've got the basis for autonomy.

And that's what this project is designed to do. So what you'll do is when my friend did it, he said, Okay, you're an event planner, AI. And what I would like you to do is plan a trip for me for a wine tasting in Healdsburg this weekend. And I want you to find like the best place I should go and it's got to be kid friendly, not everyone's going to drink, we're gonna have kids there.

And I'd like to be able to have other people there. And so I'd like you to plan this for me. And so what AutoGPT did is it broke that down into a task list. And every time it completed a task, it would add a new task to the bottom of that list.

And so the output of this is that it searched a bunch of different wine tasting venues, it found a venue that had a bocce ball and lawn area for kids, it came up with a schedule, it created a budget, it created a checklist for an event planner. It did all these things.

And my friend says he's actually going to book the venue this weekend and use it. So we're going beyond the ability for a human to just prompt the AI; now the AI can take on complicated tasks. And again, it can recursively update its task list based on what it learns from its own previous prompts.

So what you're seeing now is the basis for a personal digital assistant. This is really where it's all headed: you can just tell the AI to do something for you, something pretty complicated, and it will be able to do it. It will be able to create its own task list and get quite complicated jobs done.
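
To make the loop Sacks is describing concrete, here is a minimal Python sketch of the self-prompting, task-list pattern. The llm() helper is a hypothetical stand-in for a chat-completion API call, and the prompts are illustrative; this is not AutoGPT's actual code, just the shape of the idea.

    from collections import deque

    def llm(prompt: str) -> str:
        """Placeholder for a call to a language model (e.g. a chat-completion API)."""
        raise NotImplementedError

    def run_agent(objective: str, max_steps: int = 20) -> list[str]:
        # The agent keeps its own task list and feeds results back into itself.
        tasks = deque([f"Break this objective into a first task: {objective}"])
        results: list[str] = []
        for _ in range(max_steps):
            if not tasks:
                break
            task = tasks.popleft()
            # Execute the current task in the context of the overall objective.
            result = llm(f"Objective: {objective}\nTask: {task}\nDo the task and report the result.")
            results.append(result)
            # Self-prompt: given the result, what new tasks belong on the list?
            new_tasks = llm(
                f"Objective: {objective}\nLast result: {result}\n"
                "List any new tasks needed, one per line, or reply DONE."
            )
            if new_tasks.strip() == "DONE":
                break
            tasks.extend(t for t in new_tasks.splitlines() if t.strip())
        return results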

So that's why everyone's losing their shit over this. Friedberg, thoughts on automating these tasks and having them run and add tasks to the list? This does seem like a sort of seminal moment in time, that this is actually working. I think we've been seeing seminal moments over the last couple of weeks and months, kind of continuously. Every time we chat about stuff, or every day, there are new releases that are paradigm-shifting and kind of reveal new applications, and perhaps concepts structurally that we didn't really have a good grasp of before some demonstration came across. ChatGPT was kind of the seed of that.

And then all of this evolution since has really, I think, changed the landscape for how we think about our interaction with the digital world, and where the digital world can go, and how it can interact with the physical world. It's just really profound. One of the interesting aspects that I saw with some of the applications of AutoGPT were these almost autonomous characters in, like, a game simulation that could interact with each other, these autonomous characters that would speak back and forth to one another, where each instance has its own kind of predefined role.

And then it explores some set of discovery or application or prompts back and forth with the other agent, and the kind of recursive outcomes with this agent-to-agent interaction model, and perhaps multi-agent interaction model, again reveal an entirely new paradigm for, you know, how things can be done, simulation-wise, discovery-wise, engagement-wise, where, you know, each agent can be a different character in a room.

And you can almost see how a team might resolve to create a new product collaboratively, by telling each of those agents to have a different character background, or a different set of data, or a different set of experiences, or a different set of personality traits. And the evolution of that multi-agent system outputs something that's very novel, that perhaps any of the agents operating independently were not able to reveal themselves.
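
As a toy illustration of the role-conditioned, agent-to-agent pattern Friedberg is describing, here is a short Python sketch. It reuses the same hypothetical llm() chat-completion placeholder as the earlier sketch; the roles, prompts, and example topic are all made up for illustration.

    def llm(prompt: str) -> str:
        """Placeholder for a call to a language model, as in the earlier sketch."""
        raise NotImplementedError

    def multi_agent_chat(roles: list[str], topic: str, turns: int = 6) -> list[tuple[str, str]]:
        # Two or more role-conditioned model instances take turns,
        # each seeing the running transcript so far.
        transcript: list[tuple[str, str]] = []
        for i in range(turns):
            role = roles[i % len(roles)]
            history = "\n".join(f"{speaker}: {text}" for speaker, text in transcript)
            reply = llm(
                f"You are {role}, collaborating on: {topic}.\n"
                f"Conversation so far:\n{history}\n"
                "Respond in character with your next contribution."
            )
            transcript.append((role, reply))
        return transcript

    # e.g. multi_agent_chat(["a skeptical engineer", "an optimistic product manager"],
    #                       "designing a new note-taking app")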

So again, it's another kind of dimension of interaction with these models. And again, every week, it's a whole other layer to the onion. It's super exciting and compelling. And the rate of change and the pace of new paths being defined here really, I think, makes it difficult to catch up.

And particularly, it highlights why it's gonna be so difficult, I think, for regulators to come in and try and set a set of standards and a set of rules at this stage, because we don't even know what we have here yet. And it's going to be very hard to kind of put the genie back in the box.

Yeah. And you're also referring, I think, to the Stanford and Google paper that was published this week. They did a research paper where they created essentially The Sims, if you remember that video game: they took a bunch of what you might consider NPCs, non-playable characters, you know, the merchant or whoever in a video game, and they said each of these agents should talk to each other. They put them in a simulation, one of them decided to have a birthday party, they decided to invite other people, and they have memories.

And so then, over time, they would generate responses like, I can't go to your birthday party, but happy birthday. And then they would follow up with each player, and seemingly emergent behaviors came out of this sort of simulation, which of course now has everybody thinking, well, of course we as humans are living in a simulation, this is simulation theory, we've all just been put into this.
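
To give a flavor of the "agents with memories" idea from that paper, here is a rough, self-contained Python sketch: each simulated character stores timestamped observations and retrieves the most relevant ones before its next response. The scoring below is deliberately naive and illustrative; the actual paper weighs recency, importance, and relevance, and the character name here is made up.

    from dataclasses import dataclass, field
    import time

    @dataclass
    class Memory:
        text: str
        timestamp: float

    @dataclass
    class Character:
        name: str
        memories: list[Memory] = field(default_factory=list)

        def observe(self, text: str) -> None:
            # Everything the character sees or does gets appended to its memory stream.
            self.memories.append(Memory(text, time.time()))

        def recall(self, query: str, k: int = 3) -> list[str]:
            # Naive relevance: count shared words with the query, tie-break by recency.
            def score(m: Memory) -> tuple[int, float]:
                overlap = len(set(query.lower().split()) & set(m.text.lower().split()))
                return (overlap, m.timestamp)
            return [m.text for m in sorted(self.memories, key=score, reverse=True)[:k]]

    # isabella = Character("Isabella")
    # isabella.observe("I am planning a birthday party on Friday.")
    # isabella.recall("party invitation")  # these memories get fed into the next prompt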

Chamath, is what we're experiencing right now how impressive this technology is, or is it, oh wow, human cognition, which maybe we thought was incredibly special, can actually be simulated, a significant portion of what we do as humans, so we're kind of taking the shine off of consciousness? I'm not sure it's that, but I would make two comments.

I think this is a really important week, because it starts to show how fast the recursion is with AI. In other technologies, and in other breakthroughs, the recursive iterations took years, right? If you think about how long we waited from iPhone 1 to iPhone 2, it was a year, right?

We waited two years for the app store. Everything was measured in years, maybe things when they were really, really aggressive, and really disruptive were measured in months. Except now, these incredibly innovative breakthroughs are being measured in days and weeks. That's incredibly profound. And I think it has some really important implications to like the three big actors in this play, right?

So it has, I think, huge implications to these companies, it's not clear to me how you start a company anymore. I don't understand why you would have a 40 or 50 person company to try to get to an MVP. I think you can do that with three or four people.

And that has huge implications then for the second actor in this play, which is the investors and venture capitalists that typically fund this stuff, because all of our capital allocation models were always around writing $10 and $15 and $20 million checks, and $100 million checks, then $500 million checks into these businesses that absorb tons of money.

But the reality is, like, you know, you're looking at things like Midjourney and others that can scale to enormous size with very little capital, many of which can now be bootstrapped. So it takes really, really small amounts of money. And so I think that's a huge implication. So for me, personally, I am looking at company formation being done in a totally different way.

And our capital allocation model is totally the wrong size. Look, a fund, for me, was $1 billion. Does that make sense? No. For the next three or four years, no; the right number may actually be $50 million invested over the next four years. I think the VC job is changing.

I think company startups are changing. I want to remind you guys of one quick thing as a tangent. I had this meeting with Andrej Karpathy, I talked about this on the pod, where I challenged him. I said, listen, the real goal should be to go and disrupt existing businesses using these tools, cutting out all the sales and marketing, right, and just delivering something. And I used the example of Stripe: disrupting Stripe by going to market with an equivalent product with one-tenth the number of employees at one-tenth the cost.

What's incredible is that this AutoGPT is the answer to that exact problem. Why? Because now, if you are a young, industrious entrepreneur, and you look at any bloated organization that's building enterprise-class software, you can string together a bunch of agents that will auto-construct everything you need to build a much, much cheaper product that you can then deploy for other agents to consume.

So you don't even need a sales team anymore. This is what I mean by this crazy recursion that's possible. Yeah. So I'm really curious to see how this actually affects all of these, you know, singular companies. I mean, it's a continuation, Chamath, of... And then the last thing I just want to say is related to my tweet.

I think this is exactly the moment where we now have to have a real conversation about regulation. And I think it has to happen; otherwise, it's going to be a shit show. Let's put a pin in that for a second, but I want to get Sacks's response to some of this.

So, Sacks, we saw this before: it used to take $2 or $3 million to commercialize a web-based software product or app, then it went down to $500K, then $250K. I don't know if you saw this story, but if you remember the hit game on your iPhone, Flappy Bird. Flappy Bird, you know, was a phenomenon.

You know, hundreds of millions of people played this game over some period of time. Somebody made it by talking to ChatGPT-4 and Midjourney in an hour. So it's a perfect example, and listen, it's a game, so it's something silly. But I was talking to two developers this weekend.

And one of them was an okay developer, and the other one was an actual 10x developer who's built, you know, very significant companies. And they were coding together last week. And because of how fast ChatGPT and other services were writing code for them, he looked over at her and said, you know, you're basically a 10x developer now; my superpower is gone.

So where does this lead you to believe company formation is going to go? Is this going to be, you know, massively deflationary, where companies like Stripe are going to have 100 competitors in a very short period of time? Or are we just going to go down the long tail of ideas and solve everything with software?

How's this going to play out in the startup space, David Sacks? Well, I think it's true that developers, and especially junior developers, get a lot more leverage on their time. And so it is going to be easier for small teams to get to an MVP, which is something they always should have done anyway with their seed round. You shouldn't have needed, you know, 50 developers to build your v1; it should be, you know, just the founders, really.

So that, I think, is already happening, and that trend will continue. I think we're still a ways away from startups being able to replace entire teams of people. A ways away meaning months, years, decades? It's in the years, I think, for sure.

We don't know how many years. And the reason I say that is it's just very hard to replace 100% of what any of these particular job functions do: 100% of what a sales rep does, 100% of what a marketing rep does, or even what a coder does. So right now, I think we're still at the phase of this where it's a tool that gives a human leverage.

And I think we're still a ways away from the, you know, human being completely out of the loop. I think right now I see it mostly as a force for good, as opposed to something that's creating a ton of dislocation. Okay, Friedberg, your thoughts? If we follow the trend line: to make that video game that you shared probably took a few hundred human years, then a few dozen human years, then, you know, with other toolkits coming out, maybe a few human months, and now this person did it in one human day using this tooling.

So if you think about the implication of that: I mentioned this probably last year, I really do believe that at some point the whole concept of publishers and publishing maybe goes away. Much like we saw so much of the content on the internet today being user-generated, you know, most of the content is made by individuals and posted on YouTube or Twitter; that's most of what we consume nowadays, or Instagram or TikTok, in terms of video content.

We could see the same in terms of software itself, where you no longer need a software startup or a software company to render or generate a set of tools for a particular user, but that the user may be able to define to their agent, their AI agent, the set of tools that they would individually like to use or to create for them to do something interesting.

And so the idea of buying or subscribing to software, or even buying or subscribing to a video game, or to a movie or to some other form of content starts to diminish. As the leverage goes up with these tools, the accessibility goes up, you no longer need a computer engineering degree or computer science degree, to be able to harness them or use them.

And individuals may be able to say, in simple and plain English, that they would like a book or a movie that looks and feels like the following, or a video game that feels like the following. And so when I open up my iPhone, maybe it's not a screen with dozens of video games, but it's one interface.

And the interface says, what do you feel like playing today? And then I can very clearly and succinctly state what I feel like playing, and it can render that game, render the code, render the engine, render the graphics and everything on the fly for me. And I can use that.

And so, you know, I kind of think about this as being a bit of a leveling up: the idea that all technology, again, starts central and moves to the edge of the network over time. That may be what's going on with computer programming itself now, where the toolkit to actually use computers to generate stuff for us is no longer a toolkit that's harnessed and controlled and utilized by a set of centralized publishers, but becomes distributed and used at the edge of the network by anyone.

And then the edge-of-the-network technology can render the software for you. And it really creates a profound change in the entire business landscape of software and the internet. And I think, you know, we're just starting to wrap our heads around this notion.

And we're sort of trying to link it to the old paradigm, which is all startups are gonna get cheaper, smaller teams. But it may be that you don't even need startups for a lot of stuff anymore. You don't even need teams. And you don't even need companies to generate and render software to do stuff for you anymore.

Chamath, when we look at this, it's kind of a pattern of augmentation, as we've been talking about here: we're augmenting human intelligence, then replacing it, this replication, or automation, I guess, might be a nicer way to say it. So it's augmentation, then automation, and then perhaps deprecation. Where do you sit on this?

It seems like Sacks feels it's going to take years, and Friedberg thinks, hey, maybe startups and content are over. Where do you sit on this augmentation, automation, deprecation journey we're on? I think that humans have judgment, and I think it's going to take decades for agents to replace good judgment.

And I think that's where we have some defensible ground. And I'm going to say something controversial: I don't think developers have good judgment anymore. Developers get to the answer, or they don't get to the answer, and that's what agents have done. Because the 10x engineer had better judgment than the 1x engineer.

But by making everybody a 10x engineer, you're taking judgment away; you're taking code paths that are now obvious and making them available to everybody. It's effectively like what happened in chess, where AI created a solver, so everybody understood the most efficient path in every single spot to do the most EV-positive thing, the most expected-value-positive thing.

Coding is very similar in that way; you can reduce it and view it very, very reductively. So there is no differentiation in code. And so I think Friedberg is right. So for example, let's say you're going to start a company today: why do you even care what database you use?

Why do you even care which cloud you're built on? To Friedberg's point, why do any of these things matter? They don't matter. They were decisions that used to matter when people had a job to do, and you paid them for their judgment. Oh, well, we think GCP is better for this specific workload.

And we think that this database architecture is better for that specific workload, and we're going to run this on AWS, but that on Azure. And do you think an agent cares? You tell an agent, find me the cheapest way to execute this thing, and if it ever gets cheaper to go someplace else, do that for me as well.

And, you know, ETL all the data and put it in the other thing, and I don't really care. So you're saying it will swap out Stripe for Adyen, or swap out, you know, another cloud for Amazon Web Services. It's going to be ruthless. It's going to be ruthless. And that's the exact perfect word, Jason: AI is ruthless, because it's emotionless.

It was not taken to a steak dinner. It was not brought to a basketball game. It was not sold to by a CEO. It's an agent that looked at a bunch of API endpoints and figured out how to write code against them to get the job at hand done, within the budget that was passed to it, right?

The other thing that's important is that these agents execute within budgets. So another good example, and this is a much simpler one: a guy said, I would like seven days' worth of meals. Here are my constraints from a dietary perspective. Here are also my budgetary constraints. And then what this agent did was figure out how to go and use the Instacart plugin at the time, and then these other things, and execute within the budget.
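
As a toy illustration of that "ruthless, within a budget" selection, here is a tiny Python sketch: the agent compares quoted costs and picks whatever fits the budget, with no loyalty to any vendor. The provider names and prices are invented for the example.

    def pick_provider(quotes: dict[str, float], budget: float) -> str | None:
        # Keep only the options that fit the budget, then take the cheapest one.
        affordable = {name: cost for name, cost in quotes.items() if cost <= budget}
        if not affordable:
            return None  # nothing fits; the agent would re-plan or renegotiate
        return min(affordable, key=affordable.get)

    quotes = {"provider_a": 0.029, "provider_b": 0.026, "provider_c": 0.031}  # cost per transaction
    print(pick_provider(quotes, budget=0.030))  # -> "provider_b"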

How is that different when you're a person that raises $500,000 and says, I need a full stack solution that does x, y and z for $200,000. It's the exact same problem. So I think it's just a matter of time until we start to cannibalize these extremely expensive, ossified large organizations that have relied on a very complicated go to market and sales and marketing motion.

I don't think you need it anymore in a world of agents and AutoGPT. And I think that, to me, is quite interesting, because, A, it creates an obvious set of public company shorts, and then, B, you actually want to arm the rebels. And arming the rebels, to use the Tobi Lütke analogy here, would mean to seed hundreds of one-person teams and just say, go and build this entire stack all over again using a bunch of agents.

Yeah, recursively, you'll get to that answer in less than a year. Interestingly, when you talk about the emotion of making these decisions, if you look at Hollywood: I just interviewed, on my other podcast, the founder of... You have another podcast? I do. It's called This Week in Startups. Thank you. You've been on it four times, please.

Don't give him an excuse to plug it. Listen, I'm not going to plug This Week in Startups, available on Spotify and iTunes and youtube.com/thisweekin. Runway is the name of the company I interviewed. And what's fascinating about this is he told me that on Everything Everywhere All at Once, the award-winning film, they had seven visual effects people on it, and they were using his software.

The late-night shows, like Colbert and stuff like that, are using it. They are ruthless in terms of creating crazy visual effects now. And you can do a text prompt and get video output, and what's coming out of it is quite reasonable. But you can also train it on existing data sets.

So they're going to be able to take something, Sacks, like The Simpsons, or South Park, or Star Wars, or Marvel, take the entire corpus of the comic books and the movies and the TV shows, and then have people type in: have Iron Man do this, have Luke Skywalker do that.

And it's going to output stuff. And I said, hey, when would this reach the level that The Mandalorian TV show is at? And he said, within two years. Now, he's talking his own book, but it's quite possible that all these visual effects people, from Industrial Light & Magic on down, are going to be replaced. Directors, Sacks, are currently using this technology to do, what do they call the images that go with the script? Storyboards. Storyboards, thank you.

They're doing storyboards in this right now, right? The difference between the storyboard, Sacks, and the final output is closing in the next 30 months, I would say, right? I mean, maybe you could speak a little bit about the pace here, because that is the perfect example of ruthless AI.

I mean, you could have the entire team at Industrial Light & Magic or Pixar be unnecessary this decade? Well, I mean, you see a bunch of the pieces already there. So you have Stable Diffusion, you have the ability to type in the image that you want, and it spits out, you know, a version of it, or 10 different versions of it.

And you can pick which one you want to go with, you have the ability to create characters, you have the ability to create voices, you have the ability to replicate a celebrity voice, the only thing that's not there yet, as far as I know, is the ability to take static images and string them together into a motion picture.

But that seems like it's coming really soon. So yeah, in theory, you should be able to train the model, where you just give it a screenplay, and it outputs, essentially an animated movie. And then you should be able to fine tune it by choosing the voices that you want, and the characters that you want.

And, you know, that kind of stuff. So yeah, I think we're close to it. Now, I think the question, though, is that every nine of, let's call it, reliability is a big advancement. So yeah, it might be easy to get to 90% within two years, but it might take another two years to go from 90% to 99%.

And then it might take another two years to get to 99.9%, and so on. And so to actually get to the point where you can release a theatrical-quality movie, I'm sure it will take a lot longer than two years. Well, but look at this, Sacks, I'm just going to show you one image.

The input was "aerial drone footage of a mountain range," and this is what it came up with. Now, if you were watching TV in the '80s or '90s on a non-HD TV, this would look indistinguishable from anything you've seen. And so this is moving at a pace that's kind of crazy.

There's also opportunity here, right, Friedberg? I mean, look at something like The Simpsons, which has gone on for 30 years. If young people watching The Simpsons could create their own scenarios with AutoGPT... imagine you told a Simpsons Stable Diffusion instance: read what's happening in the news, have Bart Simpson respond to it, have the South Park characters parody whatever happened in the news today. You could have automated real-time episodes of South Park just being published onto some website.

Before you move on, did you see the Wonder Studio demo? We can pull this one up. It's really cool. Yeah, please. This is a startup that's using this type of technology. And the way it works is you film a live-action scene with a regular actor, but then you can just drag and drop an animated character onto it.

And it then converts that scene into a movie with that character, like Planet of the Apes or Lord of the Rings, right? Yeah, yeah. Andy Serkis, was it, the person who kept winning all the Oscars? So there it goes, the robot has replaced the human. Wow, you can imagine every piece of this just eventually gets swapped out with AI, right?

Like you should be able to tell the AI give me a picture of a human leaving a building, like a Victorian era building in New York. And certainly it can give you a static image of that. So it's not that far to then give you a video of that, right.

And so, yeah, I think we're pretty close to, let's call it, hobbyists or amateurs being able to create pretty nice-looking movies using these types of tools. But again, I think there's a jump to get to the point where you're just altogether replacing... One of the things I'll say on this is we still keep trying to relate it back to the way media narrative has been explored and written by humans in the past: very linear storytelling, you know, a two-hour movie, a 30-minute TV segment, an eight-minute YouTube clip, a 30-second Instagram clip, whatever.

But one of the enabling capabilities with this set of tools is that these stories, the way that they're rendered, and the way that they're explored by individuals can be fairly dynamic. You could watch a movie with the same story, all four of us could watch a movie with the same story, but from totally different vantage points.

And some of us could watch it in an 18-minute version or a two-hour version or, you know, a three-season episodic version. The way that this opens up the potential for creators... and also, so now I'm kind of saying: before, I was saying, hey, individuals can make their own movies and videos, that's going to be incredible.

There's a separate, I think, creative output here, which is the leveling up that happens with creators, that maybe wasn't possible to them before. So perhaps a creator writes a short book, a short story. And then that short story gets rendered into a system that can allow each one of us to explore it and enjoy it in different ways.

And I, as the creator, can define those different vantage points. I, as the creator, can say, here's a little bit of this character's personality, this character trait. And so what I can now do as a creator is stuff that I never imagined I could do before. Think about old-school photographers doing black-and-white photography with pinhole cameras, and then they come across Adobe Photoshop; what they can do with Adobe Photoshop is stuff that they could never have conceptualized in those old days. I think that's what's going to happen for creators going forward.

And this is going back to that point that we had last week or two weeks ago about the guy that was like, hey, I'm out of a job. I actually think that the opportunity for creating new stuff in new ways is so profoundly expanding, that individuals can now write entire universes that can then be enjoyed by millions of people from completely different lengths and viewpoints and, and models, they can be interactive, they can be static, they can be dynamic.

And that it's personalized. But the tooling that you as a creator now have: you could choose which characters you want to define, you could choose which content you want to write, you could choose which content you want the AI to fill in for you and say, hey, create 50 other characters in the village.

And then when the viewer reads the book or watches the movie, let them explore or have a different interaction with a set of those villagers in that village. Or you could say, hey, here's the one character everyone has to meet, here's what I want them to say. And you can define the dialogue.

And so the way that creators can start to harness their creative chops and create new kinds of modalities for content and for exploration, I think, is going to be so beautiful and incredible. I mean, Friedberg... Yeah, you can choose the limits of how much you want the individual to enjoy from your content, versus how narrowly you want to define it.

And my guess is that the creators that are going to win are going to be the ones that create more dynamic range in their creative output, and individuals are going to kind of get sucked in; they're going to be more into that than they will with the static, everyone-watches-the-same-thing-over-and-over model.

So there will be a whole new world of creators that, you know, maybe have a different set of tools. Just to build on what you're saying, which I think is incredibly insightful: just think about the controversy around two aspects of a franchise like James Bond.

Number one, who's your favorite Bond? We grew up with Roger Moore, so we lean towards that; then we discover Sean Connery; and then all of a sudden you see, you know, the latest one, Daniel Craig, he's just extraordinary, and you're like, you know what, that's the one that I love most.

But what if you could take any of the films and say, give me The Spy Who Loved Me, but put Daniel Craig in it, et cetera? And that would be available to you. And then think about the next controversy, which is, oh my God, does James Bond need to be a white guy from the UK?

Of course not. You could release it around the world and each region could get their own celebrity, their number one celebrity, to play the lead, and the controversy is over. You know, the old story, the Epic of Gilgamesh, right? That story was retold in dozens of different languages, and it was told through the oral tradition.

It was, you know, spoken by bards around a fire pit and whatnot. And all of those stories were told with different characters and different names and different experiences. Some of them were 10 minutes long, some of them were multi-hour sagas. But ultimately, the morality of the story, the storyline, the intentionality of the original creator of that story came through. The Bible is another good example of this, where much of the underlying morality and ethics comes through in different stories read by different people in different languages.

That may be where we go. Like, my kids want a 10-minute bedtime story? Well, let me give them Peter Pan at 10 minutes. I want to do, you know, a chapter a night for my older daughter for a week-long version of Peter Pan? Now I can do that.

And so the way that I can kind of consume content becomes different. So I guess what I'm saying is there's two aspects to the way that I think the entire content, the realm of content can be rewritten through AI. The first is like individual personalized creation of content, where I as a user can render content that was of my liking and my interest.

The second is that I can engage with content that is being created that is so much more multidimensional than anything we conceive of today, where current centralized content creators now have a whole set of tools. Now, from a business model perspective, I don't think that publishers are really the play anymore.

But I do think that platforms are going to be the play. And the platform tooling that enables the individuals to do this stuff and the platform tooling that enables the content creators to do this stuff are definitely entirely new industries and models that can create multi hundred billion dollar outcomes.

Let me hand this off to Sacks, because there has been the dream for everybody, especially in the Bay Area, of a hero coming and saving Gotham City, and this has finally been realized. David Sacks, I did my own little Twitter AI hashtag, and I said to Twitter, AI, if only... please generate a picture of David Sacks as Batman crouched down on the peak of a bridge. The amount of creativity, Sacks, that came from this... and this is something that, you know, if we were talking about it just five years ago, this would be like a $10,000 image to create. By the way, it's a birthday.

These were not professional, quote-unquote, artists; these were individuals that were able to harness a set of platform tools to generate this incredible new content. And I think it speaks to the opportunity ahead. And by the way, we're in inning one, right? So, Sacks, when you see yourself as Batman, do you ever think you should take your enormous wealth and resources and put it towards building a cave under your mansion that lets you out underneath the Golden Gate Bridge so you could go fight crime?

So good. So good. Do you want to go fight this crime in Gotham? I think San Francisco has a lot of Gotham like qualities. I think the villains are more real than the heroes. Unfortunately, we don't have a lot of heroes. But yeah, we got a lot of jokers.

Jokers. Yeah, that's a whole separate topic, I'm sure a separate topic we'll get to at some point today. You guys are talking about all this stupid bullshit. Like, there are trillions of dollars of software companies that could get disrupted, and you're talking about making fucking children's books and fat pictures of Sacks.

It's so dumb. No, it's a conversation. Great job. Nobody cares about entertainment anymore, because it's totally... Okay, so why don't you talk about industries where the money is? Why don't you teach people where there's going to be actual economic destruction? There is going to be amazing economic destruction and opportunity.

You spend all this time on the stupidest fucking topics. Listen, it's an illustrative example. No, it's an elitist example, you know, it's fucking... Batman's not... nobody cares. Let's bring up the Lord's tweet. I mean, I think US box office is like 20...

I remember when it was like $100 billion a year in payment volume, and now it's hundreds of billions. Adyen and Stripe are going to process almost $2 trillion. Why don't you talk about that disruption, you ninny? The market size of the US media and entertainment industry is $717 billion. Okay, it's not insignificant.

Video games are nearly half a trillion a year. Yeah, I mean, this is not insignificant. But let's pull up Chamath's tweet. Of course, the dictator wants to dictate. Here, all this incredible innovation is being made, and a new hero has been born: Chamath Palihapitiya, with a tweet that went viral, over 1.2 million views already.

I'll read your tweet for the audience. If you invent a novel drug, you need the government to vet and approve it, the FDA, before you can commercialize it. If you invent a new mode of air travel, you need the government to vet and approve it: the FAA. I'm just going to edit this down a little bit.

If you create a new security, you need the government to vet it and approve it: the SEC. More generally, when you create things with broad societal impact, positive and negative, the government creates a layer to review and approve it. AI will need such an oversight body. The FDA approval process seems the most credible and adaptable into a framework to understand how a model behaves and its counterfactual.

Our political leaders need to get in front of this sooner rather than later and create some oversight before the eventual big avoidable mistakes happen. And genies are let out of the bottle. Chamath, you really want the government to come in. And then when people build these tools, they have to submit them to the government to approve them.

That's what you're saying here. And you want that to start now. Here's the alternative. The alternative is going to be the debacle that we know as Section 230. So if you try to write a brittle piece of legislation or try to use old legislation to deal with something new, it's not going to do a good job because technology advances way too quickly.

And so if you look at the Section 230 example, where have we left ourselves? The politicians have a complete inability to pass a new framework to deal with social media, to deal with misinformation. And so now we're all kind of guessing what a bunch of 70- and 80-year-old Supreme Court justices will do in trying to rewrite technology law when they have to apply Section 230.

So the point of that tweet was to lay out the alternatives. There is no world in which this will be unregulated. And so I think the question to ask ourselves is, do we want a chance for a new body? The FDA is a perfect example why: even though the FDA commissioner is appointed by the President, this is a quasi-independent organization; it's still arm's length away.

It has subject matter experts that they hire, and they have many pathways to approval. Some pathways take days, some pathways are months and years, some pathways are for breakthrough innovation, some pathways are for devices. So they have a broad spectrum of ways of arbitrating what can be commercialized and what cannot.

Otherwise, my prediction is we will have a very brittle law that will not work. It'll be like the Commerce Department and the FTC trying to gerrymander some old piece of legislation. And then what will happen is it'll get escalated to the Supreme Court. And I think they are the last group of people who should be deciding on this incredibly important topic for society.

So what I have been advocating to our leaders, and I will continue to do so, is: don't try to ram this into an existing body. It is so important, it is worth creating a new organization like the FDA, and having a framework that allows you to look at a model and look at the counterfactual, judge how good, how important, how disruptive it is, and then release it into the wild appropriately.

Otherwise, I think you'll have these ChaosGPT things scale infinitely. Because again, as Friedberg said, and Sacks, you're talking about one person that can create this chaos; multiply that by every person that is an anarchist or every person that just wants to sow seeds of chaos, and I think it's all avoidable.

I think regulating what software people can write is a near-impossible task. Number one, I think you can probably put rules and restrictions around commerce, right? That's certainly feasible in terms of how people can monetize. But in terms of writing and utilizing software, it's going to be as challenging as trying to monitor and demand oversight and regulation around how people write and use tools for genome and biology exploration.

Certainly, if you want to take a product to market and sell a drug to people that can influence their body, you have to go get that approved. But in terms of you know, doing your work in a lab, it's very difficult. I think the other challenge here is software can be written anywhere.

It can be executed anywhere. And so if the US does try to regulate, or does try to put the brakes on the development of tools where the US can have kind of a great economic benefit and a great economic interest, there will be advances made elsewhere, without a doubt.

And those markets and those places will benefit in an extraordinarily outsized way. As we just mentioned, there's such extraordinary economic gain to be realized here that if the United States is not leading the world, we are going to be following, and we are going to get disrupted; we are going to lose an incredible amount of value and talent.

And so any attempt at regulation, or slowing down, or telling people that they cannot do things when they can easily hop on a plane and go do it elsewhere, I think is fraught with peril. So you don't agree with regulation, Sacks? Are you on board with the Chamath plan, or are you on board with the Friedberg plan?

Are you on board with the free bird? I'll say I think I think just like with computer hacking, it's illegal to break into someone else's computer. It is illegal to steal someone's personal information. There are laws that are absolutely simple and obvious and you know, no nonsense laws, those laws are legal to get rid of 100,000 jobs by making a piece of software, though.

That's right. And so I think, trying to intentionalize how we do things versus intentionalizing the things that we want to prohibit happening as an outcome: we can certainly try to prohibit the things that we don't want to happen as an outcome, and pass laws, and institute governing bodies with authority to oversee those laws.

With respect to things like stealing data. But you can jump on a plane and go do it in Mexico, Canada, or whatever region you get to. Sacks, where do you stand on this? Yeah, I'm saying, like, there are ways to protect people, there are ways to protect society, by passing laws that make it illegal to do things as the outcome.

What law do you pass on ChaosGPT? Explain ChaosGPT, give an example, please. Yeah, do you want to talk about it real quick? It's a recursive agent that basically is trying to destroy itself... To destroy humanity. Yeah, but I guess by first becoming all-powerful and destroying humanity, and then destroying itself.

Yeah, it's a tongue-in-cheek AutoGPT. But it's not a tongue-in-cheek AutoGPT. The guy that created it, you know, put it out there and said, like... he's trying to show everyone, to your point, what intentionality could arise here, which is negative intentionality.

I think it's very naive for anybody to think that this is not equivalent to something that could cause harm to you. So for example, if the prompt is, hey, here is a security leak that we figured out in Windows. And so why don't you exploit it? So look, a hacker now has to be very technical.

Today, with these AutoGPTs, a hacker does not need to be technical. Exploit the zero-day exploit in Windows, hack into this plane and bring it down: okay, the GPT will do it. So who's going to tell you that those things are not allowed? Who's going to actually vet that that wasn't allowed to be released into the wild?

So for example, if you worked with Amazon and Google and Microsoft and said, you're going to have to run these things in a sandbox, and we're going to have to observe the output before we allow it to run on actual bare metal in the wild. Again, that seems like a reasonable thing.

And it's super naive for people to think it's a free market. So we should just be able to do what we want. This will end badly quickly. And when the first plane goes down, and when the first fucking thing gets blown up, all of you guys will be like, Oh, sorry, facts.

Pretty compelling example here by Chamath. Somebody puts ChaosGPT out into the wild, you can go do a Google search for it, and it says, hey, what are the vulnerabilities in the electrical grid? Compile those, automate a series of attacks, and write some code to probe those until we find success in this mission; you get 100 points and stars every time you do this. Jason, such a beautiful example, but it's even more nefarious.

It is: hey, this is an enemy that's trying to hack our system, so you need to hack theirs and bring it down. You know, you can easily trick these GPTs, right? Yes, they have no judgment. They have no judgment. And as you said, they're ruthless in getting to the outcome.

Right? So why do we think all of a sudden, this is not going to happen? I mean, it's literally the science fiction example, you say, Hey, listen, make sure no humans get cancer and like, okay, well, the logical way to make sure no humans get cancer is to kill all the humans.

But can you just address the point? So what do you think you're regulating? Are you regulating the code, or... Here's what I'm saying, right? If you look at the FDA: you're allowed to make any chemical drug you want, but if you want to commercialize it, you need to run a series of trials with highly qualified, measurable data, and you submit it to like-minded experts that are trained as you are to evaluate the viability of that.

But how long does that take? There are pathways that allow you to get that done in days under emergency use, and then there are pathways that can take years, depending on how gargantuan the task at hand is. And all I'm suggesting is that having some amount of oversight is not bad in this specific example.

I get what you're saying, but I'm asking tactically: how? What are you overseeing? Are you overseeing ChatGPT, are you overseeing the model? You're doing exactly what, chips? Okay, look, I used to run the Facebook platform. We used to create sandboxes: if you submitted code to us, we would run it in the sandbox, we would observe it, we would figure out what it was trying to do.

And we would tell you whether it was allowed to run in the wild. There's a version of that that Apple does when you submit an app for review and approval; Google does it as well. In this case, all the bare-metal providers, all the people that provide GPUs, will be forced by the government, in my opinion, to implement something.

And all I'm suggesting is that it should be a new kind of body that essentially observes, that has PhDs, that has people who are trained in this stuff, to develop the kind of testing and the output that you need to figure out whether it should even be allowed to run in the wild on bare metal.
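
One crude way to picture the sandbox-and-observe idea Chamath is describing: intercept every action an agent proposes, log it for reviewers, and only execute actions on an approved allowlist. This is a hypothetical Python sketch; the action names, the allowlist, and the policy are invented for illustration and are not any real review standard.

    from typing import Callable

    ALLOWED_ACTIONS = {"search_web", "read_file", "summarize"}

    def sandboxed_run(agent_steps: list[tuple[str, dict]],
                      handlers: dict[str, Callable[..., object]]) -> list[object]:
        # Every proposed action is logged; only allowlisted actions actually execute.
        audit_log: list[tuple[str, dict]] = []
        results: list[object] = []
        for action, kwargs in agent_steps:
            audit_log.append((action, kwargs))
            if action not in ALLOWED_ACTIONS:
                results.append(f"BLOCKED: {action} requires human review")
                continue
            results.append(handlers[action](**kwargs))
        # The audit_log is what reviewers would inspect before the agent is
        # promoted out of the sandbox and allowed to run unsupervised.
        return results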

Sorry, but you're saying that the model... sorry, I'm just trying to understand Chamath's point. You're saying that the models need to be reviewed by this body, and those models, if they're run on a third-party set of servers, if they're run in the wild, right... So if you're on a computer on the open internet... Friedberg, you cannot run an app like that just on your computer, you know that, right?

It needs to be connected to the internet, right? Like, if you wanted to run an AutoGPT, it actually crawls the internet, it actually touches other APIs, it tries to basically send a push request, sees what it gets back, parses the JSON, figures out what it needs to do.

All of that is allowed because it's hosted by somebody, right? That code is running not locally, but it's running somewhere, so the host becomes... Sure, if you want to run it locally, you can do whatever you want to do. But evil agents are going to do that, right? So if I'm an evil agent, I'm not going to go use AWS to run my evil agent; I'm going to set up a bunch of servers and connect to the internet.

Or I could use VPNs. The internet is open; there's an open internet in another rogue country, they can do whatever. I think what you're going to see is that if you, for example, try to VPN and run it out of, like, Tajikistan back into the United States, it's not going to take years for us to figure out that we need to IP-block random shit coming in, push and pull requests from all kinds of IPs that we don't trust anymore, because we don't trust the regulatory oversight that they have for code that's running from those IPs that are not US-domestic.

Just to... let me steelman Chamath's position for a second. Jason, hold on. I think, ultimately, if what Chamath is saying is the point of view of Congress, and if Chamath has this point of view, then there will certainly be people in Congress that will adopt this point of view.

The only way to ultimately do that degree of regulation and restriction is going to be to restrict the open internet, it is going to be to have monitoring and firewalls and safety protocols across the open internet. Because you can have a set of models running on any set of servers sitting in any physical location.

And as long as they can move data packets around, they're going to be able to get up to their nefarious activities. Let me still man that for you, freeberg. I think, yes, you're correct. The internet has existed in a very open way. But there are organizations and there are places like the National Highway Traffic Safety Administration, if I were to steal mention mods position, if you want to manufacture a car, and you want to make one in your backyard and put it on your track and on your land up in Napa somewhere, and you don't want to have brakes on the car and you don't want to have, you know, a speed limiter or airbags or seatbelts and you want to drive on the hood of the car, you can do that.

But once you want it to go on the open road, the open internet, you need to submit it for some safety standards, like NHTSA, like Tesla has to, or Ford has to. So Sacks, where do you sit on this? Let's assume that people are going to do very bad things with very powerful models that are becoming available.

Amazon today said they'll be Switzerland: they're going to make a bunch of LLMs and other models available on AWS, Bloomberg's LLM, Facebook's, Google's Bard, and of course ChatGPT from OpenAI and Bing. All this stuff is available; people have access to it. Do you need to have some regulation of who has access to those at-scale, powerful tools?

Should there be some FDA or NHTSA? I don't think we know how to regulate it yet. I think it's too early, and I think with the harms that we're speculating about, we're making the AI more powerful than it is. And I believe it will be that powerful, but I think that it's premature to be talking about regulating something that doesn't really exist yet. Take the ChaosGPT scenario.

The way that would play out would be you've got some future incarnation of AutoGPT, and somebody says, okay, AutoGPT, I want you to be, you know, WMD AI, and figure out how to cause a mass destruction event, you know, and then it creates a planning checklist and that kind of stuff.

So that's basically the type of scenario we're talking about. We're not anywhere close to that yet. I mean, ChaosGPT is kind of a joke; it doesn't really produce... I can give an example that would actually be completely plausible. One of the first things on the ChaosGPT checklist was to stay within the boundaries of the law, because it didn't want to get prosecuted.

Got it. So the person who did that had some sort of good intent. But I can give you an example right now, something that could be done with ChatGPT and AutoGPT, that could take down large swaths of society and cause massive destruction. I'm almost reticent to say it here.

Say it. Well, I'll say it, and then maybe we'll have to delete this. But if somebody created this and they said, figure out a way to compromise as many powerful people's and as many systems' passwords as possible, then go in there and delete all their files and turn off as many systems as you can.

ChatGPT and AutoGPT could very easily create phishing accounts, create billions of websites with billions of logins, have people log into them, get their passwords, log into whatever they use, and then delete everything in their accounts. Chaos. You're right, it could be done today. I don't think it could be done today.

Simpler than this, how about a phishing website? Yeah, pieces of it can be created today, but you're accelerating the progress. Yeah, but you can automate the phishing now. In an hour? Days? Yeah, exactly. And by the way, it's accelerating in weeks. Why don't you just spoof the bank accounts and just steal the money? That's even simpler. People will do this stuff, because they're trying to do it today.

Holy cow, they just have a more efficient way to solve the problem. On the bank accounts: so, number one, this is a tool, and if people use a tool in nefarious ways, you prosecute them. Number two, the platforms that are commercializing these tools do have trust and safety teams. Now, in the past, trust and safety has been a euphemism for censorship, which it shouldn't be.

But, you know, OpenAI has a safety team, and they try to detect when people are using their tech in a nefarious way, and they try to prevent it. Do you trust them? Well, no, not on censorship. But I think that, with the millions of people using it, they're probably policing it.

Are you willing to abdicate our societal responsibility to OpenAI, to do the trust and safety? What I'm saying is, I'd like to see how far we get in terms of the system. Yeah. So you want to see the mistakes; you want to see where the mistakes are and how bad the mistakes are.

I'm saying it's still very early to be imposing regulation, we don't even know what to regulate. So I think we have to keep tracking this to develop some understanding of how it might be misused, how the industry is going to develop safety guardrails. Okay. And then you can talk about regulation.

Look, you create some new FDA right now? Okay. First of all, we know what would happen. Look at the drug process. As soon as the FDA got involved, it slowed down massively. Now it takes years, many years, to get a drug approved. Appropriately so. Yes, but at least with a drug, we know what the gold standard is: you run a double-blind study to see whether it causes harm or whether it's beneficial.

We don't know what that standard is for AI yet. We have no idea. You can study it in AI. What? No, we don't. Have somebody review the code. You have two instances in a sandbox, use the code to do what? Oh, Sacks. Listen, case in point, auto GPT. It's benign. I mean, my friend used it to book a wine tasting.
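
Just to make concrete what an auto GPT-style agent actually is under the hood, here is a minimal, hypothetical sketch, not auto GPT's real code: a loop that asks an LLM for the next step toward a goal, executes it against a small whitelist of benign tools, and feeds the result back in. The call_llm stub and the tool names are placeholders for illustration only.

```python
# Minimal, hypothetical sketch of an auto GPT-style agent loop.
# call_llm() is a placeholder for any chat-completion API; the tools here
# are deliberately benign and whitelisted (search, booking), nothing else.

def call_llm(prompt: str) -> str:
    # Placeholder: in a real agent this would hit a chat-completion API.
    return "DONE (stub) no model connected; nothing was executed"

def search_web(query: str) -> str:
    return f"(stub) top results for: {query}"

def book_reservation(details: str) -> str:
    return f"(stub) reservation requested: {details}"

TOOLS = {"search_web": search_web, "book_reservation": book_reservation}

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"History: {history}\n"
            "Reply as: TOOL <name> <argument>, or DONE <summary>."
        )
        reply = call_llm(prompt)
        if reply.startswith("DONE"):
            print(reply)
            return
        _, name, arg = reply.split(" ", 2)
        if name not in TOOLS:          # refuse anything off the whitelist
            history.append(f"refused unknown tool: {name}")
            continue
        history.append(f"{name} -> {TOOLS[name](arg)}")

run_agent("Book a wine tasting in Napa for Saturday afternoon")
```

The whole debate about "reviewing the code" comes down to loops like this: the Python itself is trivial, and everything interesting, good or bad, lives in what the model decides to do with the tools it is given.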

So who's going to review that code and then speculate and say, oh, well, in 99.9% of cases it's perfectly benevolent and fine and innocuous, but I can fantasize about some cases someone might misuse it? How are you supposed to resolve that? Very simple. There are two types of regulation that occur in any industry.

You can do what the movie industry did, which is they self regulate and they came up with their own rating system. Or you can do what happens with the FDA and what happens with cars, which is an external government based body. I think now is the time for self regulation, so that we avoid the massive heavy hand of government having to come in here.

But these tools can be used today to create massive harm. They're moving at a pace, we just said in the first half of the show, that none of us have ever seen. Every 48 hours something drops that is mind-blowing. That's never happened before. And you can take these tools.

And in the one example that Chamath and I came up with off the top of our heads in 30 seconds, you could create phishing sites, compromise people's bank accounts, take all the money out, delete all the files, and cause chaos on a scale that has never been possible for a roomful of Russian hackers or Chinese hackers working in a boiler room.

This can scale, and that is the fundamental difference here. And I didn't think I would be sitting here steel-manning this argument. I don't think humans have an intuitive ability to understand compounding. People do not understand compound interest. And this is a perfect example, where when you start to compound technology at a rate of 24 hours or 48 hours, which we've never really had to acknowledge, most people's brains break, and they don't understand what six months from now looks like.

And six months from now, when you're compounding at 48 or 72 hours, is like 10 to 12 years in other technology cycles. This is compounding. This is different because of the compounding. I agree that the pace of evolution is very fast. We are on a bullet train to something.

And we don't know exactly what it is. And that's disconcerting. However, let me tell you what would happen if we create a new regulatory body like the FDA to regulate this, they would have no idea how to arbitrate whether a technology should be approved or not. Development will basically slow to a crawl just like drug development.

There is no double-blind standard. I agree. What regulation can we do? What self-regulation can we do? There is no double-blind standard in AI that everyone can agree on right now to know whether something should be approved. And what's going to happen is, the thing that's made software development so magical and allowed all this innovation over the last 25 years is permissionless innovation.

Any developer, any dropout from a university can go create their own project, which turns into a company. And that is what has driven all the innovation and progress in our economy over the last 25 years. So you're going to replace permissionless innovation with going to Washington to go through some approval process.

And it will be the politically connected, it'll be the big donors, who get their projects approved. And the next Mark Zuckerberg, who's trying to do his little project in a dorm room somewhere, will not know how to do that, will not know how to compete in that highly political process.

Come on, I think you're mixing a bunch of things together. So first of all, permissionless innovation happens today in biotech as well. It's just that, as Jason said, when you want to put it on the rails of society and make it available to everybody, you actually have to go and do something substantive.

In the negotiation of these drug approvals, it's not some standardized thing. You actually sit with the FDA, and you have to decide: what are our endpoints? What is the mechanism of action? And how will we measure the efficacy of this thing? The idea that you can't do this today in AI is laughable.

Yes, you can. And I think that smart people, so for example, if you put DeepMind's team and OpenAI's team together to agree on whether a model is good and correct, I bet you they would find a systematic way to test that it's fine. I just want to point out, okay, so basically, in order to do what you're saying, this entrepreneur who just dropped out of college to do their project is going to have to learn how to go sit with regulators, have a conversation with them, go through some complicated approval process.

And you're trying to say that that won't turn into a game of political connections. Of course it will, of course it will. Which is self regulation. Yeah, well, let's get to that. Hold on a second. And let's look at the drug approval process. If you want to create a drug company, you need to raise hundreds of millions of dollars.

It's incredibly expensive. It's incredibly capital intensive. There is no drug company that is two guys in their garage, like many of the biggest companies in Silicon Valley started. That is because you're talking about taking a chemical or biological compound and injecting it into hundreds or thousands of people who are stratified by race, gender, and age, all around the world, or at a minimum all around the country.

You're not talking about that here, David. I think that you could have a much simpler and cheaper way, where you have a version of the internet that's running in a huge sandbox someplace that's closed off from the rest of the internet, and another version of the internet that's closed off from everything else as well.

And you can run this agent on a parallel path, and you can easily, in my opinion, actually figure out whether the agent is good or bad, and you can probably do it in weeks. So I think the approvals are actually not that complicated. And the reason to do it here is because, I get it, it may cause a little bit more friction for some of these mom-and-pop developers.
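
One way the sandbox idea described above could look in practice is sketched below; this is purely illustrative, not any existing certification process. Every network action the agent attempts is intercepted, logged, and checked against an allow-list before anything real happens, and the audit log is what a reviewer, human or automated, would inspect. The domains and the policy are invented for the example.

```python
# Hypothetical sketch of sandboxing an agent: every outbound action is
# intercepted and compared against an allow-list before anything real runs.

from urllib.parse import urlparse

ALLOWED_DOMAINS = {"example-winery.com", "opentable.example"}  # illustrative only

class SandboxedBrowser:
    def __init__(self):
        self.log = []          # full audit trail of attempted actions

    def fetch(self, url: str) -> str:
        domain = urlparse(url).netloc
        allowed = domain in ALLOWED_DOMAINS
        self.log.append({"url": url, "allowed": allowed})
        if not allowed:
            return "BLOCKED: domain not on the sandbox allow-list"
        return f"(stub) sandboxed copy of {url}"   # no real network call is made

def review(log) -> str:
    blocked = [entry for entry in log if not entry["allowed"]]
    return "flag for human review" if blocked else "looks benign in sandbox"

# Example run: an agent trying one allowed and one disallowed fetch.
browser = SandboxedBrowser()
browser.fetch("https://example-winery.com/book")
browser.fetch("https://some-bank-login.example.net/phish")
print(review(browser.log))   # -> flag for human review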

But if you think about what the societal consequences are of letting the worst-case outcomes happen, the AGI-type outcomes, I think those are so bad they're worth slowing some folks down. And I think, like, just because you want to, you know, buy groceries for $100, you should be able to do it, I get it.

But if people don't realize and connect the dots between that and bringing airplanes down, then that's because they don't understand what this is capable of. I'm not saying we're never going to need regulation. What I'm saying is, it's way too early. We don't even know what we're regulating. I don't know what the standard would be.

And what we will do by racing to create a new FDA is destroy American innovation in the sector, and other countries will not slow down. They will beat us to the punch here. Got it. I think there's a middle ground here of self-regulation and thoughtfulness on the part of the people who are providing these tools at scale.

To give just one example here, and this tweet is from five minutes ago, so look at the pace of this: five minutes ago, this tweet came out. A developer who is an AI developer says AI agents continue to amaze him. His GPT-4 coding agent learned how to build apps with authenticated users: it can build and design a web app, create a back end, handle auth and logins, upload code to GitHub, and deploy.

He is literally, while we were talking, deploying websites. Now, if this website was a phishing app, or the one that Chamath is talking about, he could make a gazillion different versions of Bank of America, Wells Fargo, etc., then find everybody on the internet's email, then start sending different spoofing emails, determine which spoofing emails work, iterate on those, and create a global financial collapse.

Now, this sounds insane, but it's happening right now. People get hacked every day at 1, 2, 3%. Sacks, fraud is occurring right now in the low single-digit percentages; identity theft is happening in the low single-digit percentages. This technology is moving so fast that bad actors could 10x that relatively easily.

So if 10% of us were to be hacked and have our credit cards attacked, this could create chaos. I think self-regulation is the solution. I'm the one who brought up self-regulation. That's what I said first. I brought it up first, I get credit. No, good. It's not about credit.

No, I'm the one who brought up self-regulation. Forget about it; you interrupted, you talked for eight minutes. So if you had a point to make, you should have gotten it in during the eight minutes. Oh my god, you guys kept interrupting me. Go ahead. What I said is that there are trust and safety teams at these big AI companies, these big foundation model companies like OpenAI.

Like I said, in the past, trust and safety has been a euphemism for censorship. And that's why people don't trust it. But I think it would be appropriate for these platform companies to apply some guardrails on how their tools can be used. And based on everything I know they're doing that.

So he's deploying websites on the open web with GPT-4, and he's going to have it do it automated. You're basically postulating capabilities that don't yet exist. I just tweeted it, the guy's doing it. He's got a video of himself doing it on the web. What do you think? That's a far cry from basically running some phishing expedition that's going to bring down the entire banking system.

Literally, a phishing site and a site with OAuth are the same thing. Go ahead, Freeberg. I think that that guy is doing something illegal if he's hacking into computers, into people's emails and bank accounts. That's illegal. You're not allowed to do that. And so that action breaks the law; that person can be prosecuted for doing that.

The tooling that one might use to do that can be used in a lot of different ways. Just like you could use Microsoft Word to forge letters, just like you could use Microsoft Excel to create fraudulent financial statements. I think that the application of a platform technology needs to be distinguished from the technology itself.

And while we all feel extraordinarily fearful because of the unbelievable leverage that these AI tools provide, again, I'll remind you that this GPT-4 model, by some estimates, is, call it, a few terabytes. You could store it on a hard drive, or you could store it on your iPhone.

And you could then go run it on any set of servers that you could go set up physically anywhere. So you know, it's a little bit naive to say we can go ahead and, you know, regulate platforms and we can go regulate the tools. Certainly, we should continue to enforce and protect ourselves against nefarious actors using, you know, new tools in inappropriate and illegal ways.
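
To ground the point about portability: once a model's weights are published, anyone can run them on their own hardware with a few lines of code and no platform in the loop. Below is a minimal sketch using the Hugging Face transformers library with an open-weights model; the model name is just an example, and GPT-4's weights are not public, so this illustrates the general point rather than that specific model.

```python
# Minimal sketch of running an open-weights language model locally.
# Requires: pip install transformers torch (and accepting the model's license).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.1"  # example open-weights model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # downloads weights once

inputs = tokenizer("Plan a weekend wine-tasting trip to Napa.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once those weight files are on someone's disk, no API key, terms-of-service flag, or platform-side trust and safety team sits between them and the model, which is exactly the enforcement problem being described here.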

You know, I also think that there's a moment here where we should all kind of observe just how quickly we want to shut things down when, you know, they take away what feels like the control that we all have from one day to the next. And, you know, there's a real sense of fear, which seems to be quite contagious for a large number of people that have significant assets or significant things to lose, that, you know, tooling that's creating entirely new, disruptive systems and models for business and economics,

and opportunity for so many, needs to be regulated away to minimize, you know, what we claim to be some potential downside, when we already have laws that protect us on the other side. So, you know, I just want to also consider that this set of tools creates extraordinary opportunity. We gave one simple example about the opportunity for creators, and we talked about how new businesses can be started with one or two people, how entirely new tools can be built with a handful of people. This is an incredible economic opportunity.

And again, if the US tries to regulate it, or the US tries to come in and stop the application of models in general or regulate models in general, you're certainly going to see those models continue to evolve and continue to be utilized in very powerful ways that are going to be advantageous to places outside the US. There are over 180 countries on earth; they're not all going to regulate together.

It's been hard enough to get any sort of coordination around financial systems, to get coordination around climate change, to get coordination around anything on a global basis. To try and get coordination around the software models that are being developed, I think, is pretty naive. You don't want to have a global organization; I think you need to have a domestic organization that protects us.

And I think Europe will have their own thing. Again, FDA versus EMA: Canada has its own, Japan has its own, China has its own, and they have a lot of overlap and a lot of commonality in the guardrails they use. And I think that's what's going to happen here.

This will be beneficial only for political insiders, who will basically be able to get their projects and their apps approved, with a huge deadweight loss for the system, because innovation will completely slow down. But let me build on Freeberg's point, which is that we have to remember that AI won't just be used by nefarious actors, it'll be used by positive actors.

So there will be new tools that law enforcement will be able to use. And if somebody is creating phishing sites at scale, they're going to be probably pretty easy for, you know, law enforcement AIs to detect. So let's not forget that there'll be copilots written for our law enforcement authorities; they'll be able to use that to basically detect and fight crime.
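
As a concrete, if toy, illustration of the defensive tooling described here, the sketch below scores URLs for common phishing signals such as brand names on odd domains, raw IP hosts, and suspicious keywords. Real detection systems use machine learning, reputation data, and much more; the brand list, keywords, and weights below are made up purely for illustration.

```python
# Toy heuristic phishing-URL scorer, purely illustrative of the defensive
# tooling described above; real systems rely on ML, reputation feeds, and more.

import re
from urllib.parse import urlparse

BRANDS = ["bankofamerica", "wellsfargo", "paypal"]        # illustrative list
SUSPICIOUS_WORDS = ["login", "verify", "secure", "update"]

def phishing_score(url: str) -> int:
    host = urlparse(url).netloc.lower()
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", host):
        score += 3                                  # raw IP address as the host
    for brand in BRANDS:
        if brand in host and not host.endswith(f"{brand}.com"):
            score += 3                              # brand name on an odd domain
    score += sum(word in url.lower() for word in SUSPICIOUS_WORDS)
    if host.count(".") >= 3:
        score += 1                                  # deeply nested subdomains
    return score

for u in ["https://www.wellsfargo.com/account",
          "http://wellsfargo.secure-login.example.net/verify"]:
    print(u, "->", phishing_score(u))               # 0 vs. a high score
```

The point of the sketch is only that the same automation that scales an attack also scales the detection of its most obvious fingerprints.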

And a really good example of this was in the crypto space. We saw this article over the past week that Chainalysis has figured out how to basically track, you know, illicit Bitcoin transactions. And there's now a huge number of prosecutions happening of illegal use of Bitcoin.

And if you go back to when Bitcoin first took off, there were a lot of conversations around Silk Road, and the claim that the only thing Bitcoin was good for was basically illegal transactions, blackmailing, drug trafficking, and therefore we had to stop Bitcoin. Remember, that was the main argument. And the counter-argument was, well, no, Bitcoin, like any technology, can be used for good or bad.

However, there will be technologies that spring up to combat those nefarious or illicit use cases. And sure enough, you had a company like Chainalysis come along. And now it's being used by law enforcement to basically crack down on the illicit use of Bitcoin. And if anything, it's cleaned up the Bitcoin community tremendously.

And I think it's dispelled this idea that the only thing you'd use Bitcoin for is black market transactions. Quite the contrary. I think you'd be really stupid now to use Bitcoin in that way. It's actually turned Bitcoin into something of a honeypot now, because if you used it for nefarious transactions, your transaction is recorded in the blockchain forever, just waiting for Chainalysis to find it.

So again, using Bitcoin to do something illegal would be really stupid. I think in a similar way, you're going to see self-regulation by these major AI platform companies, combined with new AI tools that spring up to help combat the nefarious uses. And we need to let those forces play out.
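
For a mechanical sense of how this kind of chain analysis works, the sketch below treats the public ledger as a graph of addresses and follows funds with a breadth-first search. The addresses and transfers are invented; real tools such as Chainalysis layer clustering heuristics and exchange data on top of exactly this kind of traversal.

```python
# Toy illustration of ledger tracing: a public blockchain is an address graph,
# so "following the money" is a breadth-first search over recorded transfers.

from collections import deque

# Invented transfers for illustration: (sender, receiver, amount_btc)
TRANSFERS = [
    ("addr_silkroad", "addr_mixer1", 50.0),
    ("addr_mixer1", "addr_mixer2", 25.0),
    ("addr_mixer1", "addr_coldwallet", 25.0),
    ("addr_mixer2", "addr_exchange_deposit", 24.9),
]

def trace(source: str):
    """Return every flow path reachable from `source` via recorded transfers."""
    edges = {}
    for sender, receiver, _amount in TRANSFERS:
        edges.setdefault(sender, []).append(receiver)
    seen, queue, paths = {source}, deque([(source, [source])]), []
    while queue:
        addr, path = queue.popleft()
        for nxt in edges.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
                paths.append(" -> ".join(path + [nxt]))
    return paths

for p in trace("addr_silkroad"):
    print(p)   # every downstream address the tainted funds touched
```

Because every edge in that graph is public and permanent, the anonymity being discussed here is really just pseudonymity waiting to be de-anonymized at the first exchange deposit.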

I'm not saying regulate never. I'm just saying we need to let those forces play out before we leap to creating some new regulatory body that doesn't even understand what its mandate and mission are supposed to be. The Bitcoin story is hilarious, by the way. Oh my gosh, the Journal story. It's unbelievable.

Pretty epic. It took years. But basically, this guy was buying blow on Silk Road, and he deposited his Bitcoin. And then when he withdrew it, there was a bug that gave him twice as many Bitcoin. So he kept creating more accounts, putting more money into Silk Road, and getting more Bitcoin out.

And then years later, the authorities figured this out, again with, you know, chain-analysis-type tools. Look at James Zhong over there. Look at him. James Zhong, the accused, had a Lamborghini, a Tesla, a lake house, and was living his best life, apparently, when the feds knocked on his door and found the digital keys to his crypto fortune in a popcorn tin in his bathroom and in a safe in his basement floor.

So they have a photo. The reason I posted this was, I was like, what if this claim that you can have all these anonymous transactions actually fooled an entire market? Because it looks like this anonymity has effectively been reverse engineered, and there's no anonymity at all.

And so what Bitcoin is quickly becoming is like the most singular honeypot of transactional information that's complete and available in public. And I think what this article talks about is how companies like Chainalysis and others have worked now, for years, almost a decade, with law enforcement to be able to map all of it.

And so now, every time money goes from one Bitcoin wallet to another, they effectively know the sender and the recipient. And I just want to make one quick correction here. It wasn't actually exactly popcorn. It was Cheetos spicy-flavored popcorn. And there's the tin, where he had a motherboard of a computer that held it. Is there a chance that this project was actually introduced by the government?

I mean, there have been reports about Tor, the anonymous network, that the CIA had their hands all over Tor. Tor, if you don't know it, is an anonymous, multi-relay, peer-to-peer web browsing system, and people believe it's a CIA honeypot, an intentional trap for criminals to get themselves caught up in.

All right, as we wrap here, what an amazing discussion. My lord, I never thought I would be... I want to say one thing. Yes. We saw that someone was arrested for the murder of Bob Lee. That's what I was reading about this morning. Yeah, and it turns out, from the report of the SFPD's arrest, that it's someone that he knew, that also works in the tech industry, someone we possibly know, right?

So still breaking news. Yes, possibly. But I want to say two things. One, obviously, based on this arrest and the storyline, it's quite different than what we all assumed it to be, which was some sort of homeless robbery type moment that has become all too commonplace in SF. It's a commentary for me on two things.

One is how quick we all were to kind of judge and assume that, you know, a homeless robber type of person would do this in SF, which I think speaks to the condition in SF right now. It also speaks to our conditioning, that we all kind of lacked, or didn't even want to engage in, a conversation that maybe this person was murdered by someone that they knew.

Because we wanted to kind of very quickly fill our own narrative about how bad SF is. And that's just something that I really felt when I read this this morning, I was like, man, like, I didn't even consider the possibility that this guy was murdered by someone that he knew, because I am so enthralled right now by this narrative that SF is so bad, and it must be another data point that validates my point of view on SF.

So, you know, I kind of want to just acknowledge that, and acknowledge that we all kind of do that right now. But I do think it also does, in fact, unfortunately speak to how bad things are in SF, because we all have. We've all had these experiences of feeling like we're in danger and under threat all the time we're walking around in SF, in so many parts of San Francisco, I should say, where things feel like they've gotten really bad.

I think both things can be true: that we can kind of feel biased and fill our own narrative by kind of latching on to our assumption about what something tells us, but it also tells us quite a lot about what is going on. So I just wanted to make that point.

In fairness, and I think it's fine for you to make that point, I am extremely vigilant on this program to always say, when something is breaking news, withhold judgment, whether it's the Trump case or Jussie Smollett or anything in between, January 6th, let's wait until we get all the facts.

And in fact, quote from Sacks, we don't know exactly what happened yet. Correct. Literally, Sacks started with that. We do that every fucking time on this program. We know when there's breaking news to withhold judgment, but you can also know two things can be true. A tolerance for ambiguity is necessary.

But I'm saying I didn't even do that. As soon as I heard this, I was like, oh, and I made the assumption. But you know, David, that is a fine assumption to make. That's a fine assumption. It's a logical assumption. Listen, you make that assumption for your own protection. We've got all these reporters who are basically propagandists trying to claim that crime is down in San Francisco.

They're all basically seeking comment from me this morning, sending emails or trying to gotcha us, because we basically talked about the Bob Lee case in that way. Listen, we said that we didn't know what happened. But if we were to bet, at least what I said is, I bet this case looks a lot like the Brianna Kupfer case.

That was logical. That's not conditioning or bias. That's logic. And you need to look at what else happened that week. Okay, so just in the same week that Bob Lee was killed, let me give you three other examples of things that happened in Gotham City, aka San Francisco. So number one, former fire commissioner Don Carmignani was beaten within an inch of his life by a group of homeless addicts in the Marina.

And one of them was interviewed in terms of why it happened. And basically, Don came down from his mother's house and told them to move off his mother's front porch, because they were obstructing her ability to get in and out of her apartment. They interpreted that as disrespect. And they beat him with a tire iron or a metal pipe.

And one of the hoodlums who was involved in this apparently admitted it. Yeah, play the video. ...somebody over the head like that and attack him? He was, he was being disrespectful. Who was disrespectful? There was a big old bald-headed old man. Don? Don. So he was being disrespectful.

But is that enough to beat him up? Yeah, sometimes. Oh, my Lord. I mean, so this is case number one. And apparently, in the reporting, that person who was just interviewed has been in the Marina kind of terrorizing people, maybe not physically, but verbally. So you have, you know, bands of homeless people encamped in front of people's houses.

Don Carmignani gets beaten within an inch of his life. You then had the case of the Whole Foods store on Market Street shutting down in San Francisco. And this was not a case of shoplifting, like some of the other store closings we've seen. They said they were closing the store because they could not protect their employees.

The bathrooms were filled with needles and pipes, drug paraphernalia. You had drug addicts going in there using it; they were engaging in altercations with store employees. And Whole Foods felt like they had to close the store because, again, they cannot protect their employees. Third example: the Board of Supervisors had to disband their own meeting because their internet connection got vandalized.

The fiber for the cable connection that provides their internet got vandalized, so they had to basically disband their meeting. Aaron Peskin was the one who announced this, and you saw the response to it. Yeah, my retweeting him went viral. There were lots of people who said, yeah, I've got a small business, and the fiber, the copper wire, whatever, was vandalized.

And in a lot of cases, I think it's basically drug addicts stealing whatever they can. They steal $10 of copper wire, sell that to get a hit, and it causes $40,000 of property damage. Here's the insincerity, Sacks: literally, the proper response when there's violence in San Francisco is, hey, we need to make this place less violent.

Is there a chance that it could be people who know each other? Of course; that's inherent in any crime that occurs, and there'll be time to investigate it. But literally, the press is now using this as a moment to say there's no crime in San Francisco, or that we're overreacting.

And like, I just had the New York Times email me during the podcast. Heather Knight from the Chronicle, the San Francisco Chronicle: in light of the Bob Lee killing appearing to be an interpersonal dispute, and she still doesn't know, right? We don't have all the facts, with another tech leader, do you think the tech community jumped to conclusions?

Why are so many tech leaders painting San Francisco as a dystopian hellscape when the reality is more nuanced? I think there's a little typo there. Yes. Of course, the reality is nuanced. Of course, it's a hellscape. Walk down the street, Heather. Can I give you a theory, please?

I think it was most evident in the way that Elon dismantled and manhandled the BBC reporter. Oh my god, that was brutal. This is a small microcosm of what I think media is. So I used to think that media had an agenda. I actually now think that they don't particularly have an agenda, other than to be relevant, because they see waning relevance.

And so I think what happens is whenever there are a bunch of articles that tilt the pendulum into a narrative, they all of a sudden become very focused on refuting that narrative. And even if it means they have to lie, they'll do it. Right. So, you know, I think for months and months, I think people have seen that the quality of the discourse on Twitter became better and better.

Elon is doing a lot with bots and all of this stuff, cleaning it up. And this guy had to try to establish the counter-narrative, and was willing to lie in order to do it, and then he was dismantled. Here, you guys, I don't have a bone to pick so much with San Francisco.

I think I've been relatively silent on this topic. But you guys, as residents and former residents, I think have a vested interest in the quality of that city. And you guys have been very vocal. But you're not the only ones: Michelle Tandler, you know, Shellenberger, Garry Tan, there's a bunch of smart, thoughtful people who've been beating this drum.

And so now I think reporters don't want to write the N-plus-first article saying that San Francisco is a hellscape. So they have to take the other side. And so now they're going to go and kick up the counter-narrative. And they'll probably dismantle the truth and kind of redirect it in order to do it.

So I think that what you're seeing is, they'll initially tell a story, but when there's too much of the truth, they'll go to the other side, because that's the only way to get clicks and be seen. So I think that that's what you guys are a part of right now; they are in the business of protecting the narrative.

But I do think there's a huge ideological component to the narrative, both in the Elon case, where they're trying to claim that there was a huge rise in hate speech on Twitter. The reason they're saying that is because they want Twitter to engage in more censorship. That's the ideological agenda here.

The agenda is this radical agenda of decarceration; they actually believe that more and more people should be let out of prison. And so therefore, they have an incentive to deny the existence of crime in San Francisco and the rise in crime in San Francisco. If you poll people in San Francisco, large majorities of San Franciscans believe that crime is on the rise, because they can see it, they hear it.

And what I would say is, look, I think there's a pyramid of activity, a pyramid of criminal or antisocial behavior in San Francisco, that we can all see. The base level is you've got a level of chaos on the streets, where you have open-air drug markets, people doing drugs; sometimes you'll see, you know, a person doing something disgusting, like people defecating on the streets, or even worse. Then there's a level up, where they're chasing after you or, you know, harassing you. People have experienced that; I've experienced that. Then there's a level up where there's petty crime: your car gets broken into, or something like that. Then there's the level where you get mugged.

And then finally, the top of the pyramid is that there's a murder. And it's true that most of the time, the issues don't go all the way to the top of the pyramid where someone is murdered. Okay, but that doesn't mean there's not a vast pyramid underneath that, of basically quality of life issues.

And I think this term quality of life was originally used as some sort of way to minimize the behavior that was going on, saying that they weren't really crimes, we shouldn't worry about them. But if anything, what we've seen in San Francisco is that when you ignore quality-of-life crimes, you will actually see a huge diminishment in what it's like to live in these cities. Quality of life is real.

And that's the issue. And I think what they're trying to do now is say that, because Bob Lee wasn't the case that we thought it was, that whole pyramid doesn't exist. It doesn't exist? The pyramid exists; we can all experience it. Oh, my God. And that's the insincerity of this.

It is insincere. And the existence of that pyramid, that we can see and hear and feel and experience every day, is why we were willing to make a bet. We called it a bet that the Bob Lee case was like the Brianna Kupfer case. And we did that with a disclaimer, with a disclaimer; we always do a disclaimer here.

And just to quote George Hammond from the Financial Times, who emailed me, here's what he asked me: there's a lot of public attention lately on whether San Francisco's status as one of the top business and technology hubs in the US is at risk in the aftermath of the pandemic. Duh, obviously it is.

I wonder if you had a moment to chat about that, and whether there is a danger that negative perceptions about the city will damage its reputation for founders and capital allocators in the future. And it says, obviously there's a lot of potential for hysteria in this conversation, which I'm keen to avoid.

And it's like, hey, have you walked down the street? And I asked him, have you walked down the street in San Francisco? Jason, the best response is to send him the thing that Sacks sent, which is the amount of available office space in San Francisco. People are voting, companies are voting with their feet.

So it's already happening. If the quality of life wasn't so poor, they'd stay. This is the essence of gaslighting: what they do is, the people who've actually created the situation in San Francisco with their policies, their policies of defunding the police, making it harder for the police to do their job, decriminalizing theft under $950, allowing open-air drug markets, the people who have now created that matrix of policies that create the situation, what they then turn around and do is say, no, the people who are creating the problem are the ones who are observing this.

All we're doing is observing and complaining about it. And what they try to do is say, well, no, you're running down San Francisco. We're not the ones creating the problem; we're observing it. And just this week, another data point: the mayor's office said that they were short more than 500 police officers in San Francisco.

Yeah, nobody's going to become a police officer here. Are you crazy? Well, and there was another article just this week about how there's a lot of speculation, rumors swirling of an unofficial strike, an informal strike, by police officers who are normally on the force, who are tired of risking life and limb.

And then, you know, they basically risk getting into a physical altercation with a homeless person, they bring them in, and then they're just released again. So there's a lot of quiet quitting that's going on in the job. It's like this learned helplessness, because why take a risk when the police commission doesn't have your back?

It seems like the only time you have prosecutorial zeal by a lot of these prosecutors is when they can go after a cop, not one of these repeat offenders. And you just saw that, by the way, in LA. Look, Motherboard and the New York Times just emailed and DMed me.

And then, did you guys see that instead of solving these issues, the Board of Supervisors was dealing with a wild parrot? What was it? The meeting that was suspended? Yeah, they had scheduled a meeting to vote on whether the wild parrots are the official animal of the city of San Francisco.

So that was the scheduled meeting that got disbanded. Also, can I just clarify what Chamath talked about with the Elon interview? A BBC reporter interviewed Elon and said, there is much more hate speech in the feeds on Twitter. And he said, can you give me an example?

And he said, well, I don't have an example, but people are saying this. Which people are saying it? And the BBC reporter said, well, just different groups of people are saying it, and, you know, I've certainly seen it. He said, okay, you saw it. In your For You feed? He goes, no, I stopped looking at For You.

He said, so give me one example of hate speech that you've seen in your feed. Now, without speaking about any inside information, which I do not have much of, they've been pretty deliberate about removing hate speech from places like For You. And, you know, it's a very complicated issue when you have an open platform, where people may say a word, but it doesn't reach a lot of people.

So if you were to say something really nasty, it doesn't take a genius to block that and not have it reach a bunch of people. This reporter kept insisting to Elon that this was on the rise, with no factual basis for it other than that other people said it. And then he said, but I don't look at the feed.

He said, so you're telling me that there's more hate speech that you've seen, but you just admitted to me that you haven't looked at the For You feed in three months. And it was just like this completely weird thing. I just loved him calling it a lie. He called it a lie.

He caught him. And this is the thing. If you're a journalist, just cut it down the middle. Don't come prepared with a position either way. I want to connect one dot, please, which is that he filled in his own narrative, even though the data wasn't necessarily there, in the same way that, you know, we kind of filled in our narrative about San Francisco, with the Bob Lee, you know, murder being another example, disclaimer on it.

He said, well, we didn't... hold on a second. We knew we didn't know. And furthermore, we're taking great pains this week to correct the record and explain what we now know. Yeah. He was intellectually honest. This is just intellectual honesty. Honestly, you're going soft here.

Freeberg, you're getting gaslit by all these people. I'm not getting gaslit by anyone. I think the guy totally had zero data. By the way, when you're a journalist, you're supposed to report on data and evidence. So he certainly... You know, I think just replace Bob Lee with Don Carmignani.

It's the same story. Yeah. It's just that Don happened to survive. Guys, I love you, but I gotta go. Goodbye. Here's what Maxwell from Motherboard asked. Have fun. There's been a lot of discussion about the future of San Francisco, and the death has quickly become politicized.

Has that caused any division or disagreement from what you've seen? Or has that not been the case? The press is gleeful right now. They're gleeful, like, oh my god. Just like the right was gleeful with Jussie Smollett having gotten himself beaten up or, you know, setting up his own...

All right, everybody, for the Sultan of Science, currently conducting experiments on a beach to see exactly how burned he can get with his SPF 200, under an umbrella, wearing a sun shirt and pants: Freeberg. Freeberg on the beach wears the same outfit astronauts wear when they do spacewalks. Hey, Stable Diffusion.

Make me an image of David Freeberg wearing a full-body bathing suit, covered in SPF 200, under three umbrellas, on a sunny beach. Thank you. Oh my god. For the dictator, Chamath Palihapitiya, creating regulations. And the regulator. You can call me the regular regulator. See you tonight, when we'll eat our ortolans, what's left of them.

The final four or five ortolans in existence. Don't be late. Otherwise I'm putting you on the B list, if you're like, I will be there, I'll be there, I promise, I promise, I promise. Can't wait to be there. And the Rain Man himself. Namaste. We didn't even get to Putin. Oh, well. Versus Nikki.

I think you should ask auto GPT how you can eat more endangered animals. Yes, we have a plan for you. Yes. And then have it go kill those animals. In the real world. Put something on the dark web to go kill the remaining rhinos and bring them to Chamath's house for poker night.

I don't think rhinos would taste good. Was that the plot of a movie? It was a... oh, did you guys see, is Cocaine Bear out yet? No, it was a Matthew Broderick, Marlon Brando movie, right, where they're doing the takeoff on The Godfather. It was The Freshman. Yeah, yeah, yeah, yeah.

It's like a conspiracy to eat endangered animals. Yes, The Freshman. The Freshman came out in 1990. Yeah, Marlon Brando did it with Matthew Broderick and, like, Bruno Kirby. That was the whole thing. Bruno Kirby, that's a deep cut. They were actually eating endangered animals. What do you think of Heat 2?

Is that going to be good, Sacks? I know Heat is one of your favorite films. Heat 2? It's awesome. Is there a sequel coming? They're gonna do Heat 2; the novel's already come out. Adam... I saw the novel. Yeah, it's amazing. It's one of those movies where, when it comes on, you just can't stop watching.

Yes. The shootout scene. Best bank robbery slash shootout in movie history. You know, that is literally the best film ever. It's up there with, like, the Joker, with Reservoir Dogs, the Joker in that Batman movie where he robs the bank. I mean... I love you guys. All right.

Love you, besties, and for blah, blah, blah, blah, blah. This is gonna be All-In Podcast 124. If you want to go to the fan meetups and hang out with other blah, blah, blah, blah, blah, blah. Bye-bye. Let your winners ride. Rain Man, David Sacks. We open sourced it to the fans, and they've just gone crazy with it.

Love you, Westy. Queen of Quinoa. Besties are gone. We should all just get a room and just have one big huge orgy, because they're all just, like, this sexual tension that they just need to release somehow. What? You're the bee? We need to get merch. Besties are back.

I'm going all in. I'm going all in. Going all in. (music plays)