
E124: AutoGPT's massive potential and risk, AI regulation, Bob Lee/SF update


Chapters

0:00 Bestie intros!
1:49 Understanding AutoGPTs
23:57 Generative AI's rapid impact on art, images, video, and eventually Hollywood
37:38 How to regulate AI?
72:35 Bob Lee update, recent SF chaos


00:00:00.000 | Welcome to Episode 124 of the All-In Podcast. My understanding
00:00:04.960 | is there's going to be a bunch of global fan meetups for
00:00:08.680 | Episode 125. If you go to Twitter and you search for all
00:00:12.880 | in fan meetups, you might be able to find the link.
00:00:14.720 | But just to be clear, they're not official All In
00:00:17.240 | meetups. They're fans. It's self-organized, which is pretty
00:00:20.480 | mind-blowing. But we can't vouch for any particular
00:00:23.240 | organization, right?
00:00:24.680 | Nobody knows what's going to happen at these things. You can
00:00:27.080 | get robbed. It could be a setup. I don't know. But I retweeted
00:00:31.120 | it anyway, because there are 31 cities where you lunatics are
00:00:34.880 | getting together to celebrate the world's number one business
00:00:38.520 | technology podcast.
00:00:40.440 | It is pretty crazy. You know, what this reminds me of is in
00:00:43.360 | the early 90s, when Rush Limbaugh became a phenomenon.
00:00:46.680 | There used to be these things called rush rooms where like
00:00:50.120 | restaurants and bars would literally broadcast rush over
00:00:54.320 | their speakers during I don't know, like what for the morning
00:00:56.960 | through lunch broadcast, and people would go to these rush
00:00:59.720 | rooms and listen together.
00:01:01.000 | What was it like, Sacks, when you were about 16, 17 years old at the
00:01:04.000 | time? What was it like when you hosted this?
00:01:05.680 | It was a phenomenon. But I mean, it's kind of crazy. We've got
00:01:08.360 | like a phenomenon going here where people are organizing.
00:01:11.800 | You've said "phenomenon" three times instead of "phenomena." He
00:01:15.840 | said phenomenon. Phenomenal. Why is Sacks in a good mood? What's
00:01:19.760 | going on? There's a specific secret toe tap that you do under
00:01:22.760 | the bathroom stalls when you go to a rush room.
00:01:24.560 | I think you're getting confused about a different event you
00:01:31.520 | went to.
00:01:49.600 | There's a lot of actual news in the world and generative AI is
00:01:54.560 | taking over the dialogue and it's moving at a pace that none
00:01:59.560 | of us have ever seen in the technology industry. I think
00:02:01.720 | we'd all agree the number of companies releasing product and
00:02:06.920 | the compounding effect of this technology is phenomenal. I
00:02:10.760 | think we would all agree a product came out this week
00:02:14.880 | called AutoGPT. And people are losing their minds over it.
00:02:20.360 | Basically, what this does is it lets different GPTs talk to
00:02:26.560 | each other. And so you can have agents working in the background
00:02:30.360 | and we've talked about this on previous podcasts. But they could
00:02:33.640 | be talking to each other essentially, and then completing
00:02:37.760 | tasks without much intervention. So, let's say you had a sales
00:02:42.040 | team and you said to the sales team, hey, look for leads that
00:02:47.480 | have these characteristics for our sales software, put them
00:02:50.480 | into our database, find out if they're already in the database,
00:02:53.840 | alert a salesperson to it, compose a message based on that
00:02:56.640 | person's profile on LinkedIn or Twitter or wherever. And then
00:02:59.960 | compose an email, send it to them if they reply, offer them
00:03:03.200 | to do a demo and then put that demo on the calendar of the
00:03:05.640 | salesperson, thus eliminating a bunch of jobs. And you could run
00:03:08.440 | these, what would essentially be cron jobs, in the background
00:03:12.200 | forever, and they can interact with other LLMs in real time.
00:03:16.720 | Sacks, I've just given but one example here. But when you see
00:03:19.480 | this happening, give us your perspective on what this tipping
00:03:23.280 | point means.
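A minimal sketch of the kind of background "cron job" agent Jason is describing. The helpers here (call_llm, find_leads, send_email) are hypothetical placeholders, not any real product's API, and the whole thing is illustrative rather than AutoGPT itself.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a hosted language model API."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

def find_leads(criteria: str) -> list[dict]:
    """Hypothetical lead lookup (a CRM export, an enrichment API, etc.)."""
    return []

def send_email(address: str, body: str) -> None:
    """Hypothetical email helper; just prints in this sketch."""
    print(f"-> emailing {address}:\n{body}\n")

def run_sales_agent(criteria: str, known_leads: set[str]) -> None:
    """One pass of the background agent: find leads matching the criteria,
    skip ones already in the database, draft a personalized message with
    the LLM, and send it."""
    for lead in find_leads(criteria):
        if lead["email"] in known_leads:
            continue  # already in the database; a human could be alerted instead
        known_leads.add(lead["email"])
        draft = call_llm(
            f"Write a short, friendly email offering a product demo to "
            f"{lead['name']}, whose public profile says: {lead['profile']}"
        )
        send_email(lead["email"], draft)

if __name__ == "__main__":
    seen: set[str] = set()
    # In practice this would be scheduled like a cron job and left running;
    # here it is a single illustrative pass.
    run_sales_agent("mid-size SaaS companies hiring sales ops", seen)
```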
00:03:24.040 | Let me take a shot at explaining it in a slightly different way.
00:03:27.280 | Sure. Not that your explanation was wrong. But I just think that
00:03:30.280 | maybe explain it in terms of something more tangible. So I
00:03:34.600 | had a friend who's a developer who's been playing with auto
00:03:38.200 | GPT. By the way, so you can see it's on GitHub. It's kind of an
00:03:41.360 | open source project. It was sort of a hobby project. It looks
00:03:43.800 | like that somebody put up there. It's been out for about two
00:03:46.640 | weeks. It's already got 45,000 stars on GitHub, which is a huge
00:03:51.040 | number.
00:03:51.480 | Explain what GitHub is for the audience.
00:03:53.200 | It's just a code repository. And you can create, you know,
00:03:56.000 | repos of code for open source projects. That's where all the
00:03:58.360 | developers check in their code. So you know, for open source
00:04:01.520 | projects like this, anyone can go see it and play with it.
00:04:03.720 | It's like Pornhub, but for developers.
00:04:07.000 | It would be more like amateur Pornhub because you're
00:04:09.320 | contributing your scenes, as it were, your code. Yes, continue.
00:04:12.840 | But this thing has a ton of stars. And apparently just last
00:04:17.560 | night, it got another 10,000 stars overnight. This thing is
00:04:19.840 | like, exploding in terms of popularity. But in any event,
00:04:23.080 | what you do is you give it an assignment. And what auto GPT
00:04:27.560 | can do that's different is it can string together prompts. So
00:04:31.560 | if you go to chat GPT, you prompt it one at a time. And
00:04:34.640 | what the human does is you get your answer. And then you think
00:04:36.600 | of your next prompt, and then you kind of go from there and
00:04:38.840 | you end up in a long conversation that gets you to
00:04:41.240 | where you want to go. So the question is, what if the AI
00:04:45.000 | could basically prompt itself, then you've got the basis for
00:04:49.240 | autonomy. And that's what this project is designed to do. So
00:04:52.720 | what you'll do is when my friend did it, he said, Okay, you're an
00:04:55.920 | event planner, AI. And what I would like you to do is plan a
00:05:01.320 | trip for me for a wine tasting in Healdsburg this weekend. And
00:05:07.240 | I want you to find like the best place I should go and it's got
00:05:09.800 | to be kid friendly, not everyone's going to drink, we're
00:05:11.680 | gonna have kids there. And I'd like to be able to have other
00:05:13.800 | people there. And so I'd like you to plan this for me. And so
00:05:17.680 | what AutoGPT did is it broke that down into a task list. And
00:05:22.880 | every time it completed a task, it would add a new task to the
00:05:26.480 | bottom of that list. And so the output of this is that it
00:05:30.680 | searched a bunch of different wine tasting venues, it found a
00:05:34.080 | venue that had a bocce ball and lawn area for kids, it came up
00:05:38.120 | with a schedule, it created a budget, it created a checklist
00:05:42.600 | for an event planner. It did all these things. And my friend says
00:05:45.960 | he's actually going to book the venue this weekend and use it.
00:05:48.080 | So we're going beyond the ability just for a human to
00:05:52.680 | prompt the AI; now the AI can take on complicated tasks.
00:05:58.120 | And again, it can recursively update its task list based on
00:06:01.240 | what it learns from its own previous prompts. So what you're
00:06:04.800 | seeing now is the basis for a personal digital assistant. This
00:06:08.720 | is really where it's all headed is that you can just tell the AI
00:06:11.880 | to do something for you pretty complicated. And it will be able
00:06:15.160 | to do it, it will be able to create its own task list and get
00:06:17.800 | the job done, even on quite complicated jobs. So that's why
00:06:22.080 | everyone's losing their shit over this.
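To make the "it can string prompts together and add to its own task list" idea concrete, here is a minimal sketch of that recursive loop. It is not AutoGPT's actual code, and call_llm is a hypothetical stand-in for whatever model API you would use.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns the model's text completion."""
    raise NotImplementedError

def autonomous_agent(goal: str, max_steps: int = 25) -> list[str]:
    """AutoGPT-style loop: the model proposes tasks, the agent executes the
    next one, and any follow-up tasks get appended to the bottom of the list."""
    tasks = [t.strip() for t in call_llm(
        f"Goal: {goal}\nList the first tasks needed to accomplish this, one per line."
    ).splitlines() if t.strip()]
    results: list[str] = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.pop(0)  # take the task at the top of the list
        outcome = call_llm(f"Goal: {goal}\nCurrent task: {task}\nDo it and report the result.")
        results.append(f"{task} -> {outcome}")
        # Let the model add new tasks based on what it just learned.
        follow_ups = call_llm(
            f"Goal: {goal}\nCompleted so far:\n" + "\n".join(results)
            + "\nList any new tasks this creates, one per line, or reply NONE."
        )
        if follow_ups.strip().upper() != "NONE":
            tasks.extend(t.strip() for t in follow_ups.splitlines() if t.strip())
    return results

# e.g. autonomous_agent("Plan a kid-friendly wine-tasting trip to Healdsburg this weekend")
```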
00:06:23.600 | Freeberg, thoughts on automating these tasks and
00:06:27.160 | having them run and add tasks to the list. This does seem like a
00:06:32.520 | sort of seminal moment in time that this is actually working.
00:06:36.120 | I think we've been seeing seminal moments over the last
00:06:41.640 | couple of weeks and months, kind of continuously, every time we
00:06:46.320 | chat about stuff, or every day, there's new releases that are
00:06:50.120 | paradigm shifting, and kind of reveal new applications and
00:06:55.000 | perhaps concepts structurally that we didn't really have a
00:06:59.600 | good grasp of before some demonstration came across. ChatGPT
00:07:02.400 | was kind of the seed of that. And then all of this
00:07:05.560 | evolution since has really, I think, changed the landscape for
00:07:09.720 | really how we think about our interaction with the digital
00:07:12.520 | world and where the digital world can go and how it can
00:07:15.800 | interact with the physical world. It's just really
00:07:17.800 | profound. One of the interesting aspects that I think I saw with
00:07:22.480 | some of the applications of AutoGPT, where there are these almost
00:07:26.440 | autonomous characters in, like, a game simulation that
00:07:32.360 | could interact with each other or these autonomous characters
00:07:34.560 | that would speak back and forth to one another, where each
00:07:38.040 | instance has its own kind of predefined role. And then it
00:07:42.560 | explores some set of discovery or application or prompt back
00:07:46.440 | and forth with the other agent, and that the kind of recursive
00:07:50.280 | outcomes with this agent to agent interaction model, and
00:07:53.160 | perhaps multi agent interaction model, again, reveals an
00:07:57.320 | entirely new paradigm for, you know, how things can be done
00:07:59.960 | simulation wise, you know, discovery wise engagement wise,
00:08:04.160 | where one agent, you know, each agent can be a different
00:08:07.040 | character in a room. And you can almost see how a team might
00:08:10.200 | resolve to create a new product collaboratively by telling each
00:08:14.280 | of those agents to have a different character background
00:08:16.480 | or different set of data or a different set of experiences or
00:08:19.440 | different set of personality traits. And the evolution of
00:08:22.200 | those that multi agent system outputs, you know, something
00:08:25.560 | that's very novel, that perhaps any of the agents operating
00:08:28.320 | independently were not able to kind of reveal themselves. So
00:08:30.880 | again, like another kind of dimension of interaction with
00:08:34.760 | these with these models. And it again, like every week, it's a
00:08:38.640 | whole nother layer to the onion. It's super exciting and
00:08:42.400 | compelling. And the rate of change and the pace of kind of,
00:08:46.040 | you know, new paths being being defined here, really, I think
00:08:50.680 | makes it difficult to catch up. And particularly, it highlights
00:08:54.760 | why it's gonna be so difficult, I think, for regulators to come
00:08:58.000 | in and try and set a set of standards and a set of rules at
00:09:02.520 | this stage, because we don't even know what we have here
00:09:04.160 | yet. And it's going to be very hard to kind of put the genie
00:09:06.760 | back in the box.
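A minimal sketch of the agent-to-agent pattern Friedberg is describing: two model instances seeded with different roles, where each agent's reply becomes the other agent's next prompt. call_llm is again a hypothetical placeholder.

```python
def call_llm(system_role: str, history: list[str]) -> str:
    """Hypothetical chat-model call: respond as `system_role` given the
    conversation so far."""
    raise NotImplementedError

def two_agent_dialogue(role_a: str, role_b: str, opening: str, turns: int = 6) -> list[str]:
    """Alternate turns between two role-conditioned agents, feeding each
    agent's output back to the other as its next prompt."""
    transcript = [opening]
    roles = [role_a, role_b]
    for i in range(turns):
        speaker = roles[i % 2]
        reply = call_llm(speaker, transcript)
        transcript.append(f"{speaker}: {reply}")
    return transcript

# e.g. two_agent_dialogue(
#     "a skeptical product manager", "an optimistic engineer",
#     "Let's design a new note-taking product together.")
```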
00:09:07.520 | Yeah. And you're also referring, I think, to the Stanford and
00:09:12.440 | Google paper that was published this week, they did a research
00:09:15.520 | paper where they created essentially the Sims, if you
00:09:17.960 | remember that video game, put a bunch of what you might consider
00:09:21.760 | NPCs, non-playable characters, you know, the merchant or
00:09:25.320 | whoever in a video game, and they said, each of these
00:09:30.840 | agents should talk to each other, put them in a simulation,
00:09:33.400 | one of them decided to have a birthday party, they decided to
00:09:35.440 | invite other people, and then they have memories. And so then
00:09:38.400 | over time, they would generate responses like I can't go to
00:09:42.080 | your birthday party, but happy birthday, and then they would
00:09:45.400 | follow up with each player and seemingly emergent behaviors
00:09:49.760 | came out of this sort of simulation, which of course, now
00:09:53.560 | has everybody thinking, well, of course, we as humans, and this is
00:09:56.560 | simulation theory, are living in a simulation; we've all just been
00:09:59.160 | put into this. Chamath, is what we're experiencing right now
00:10:02.960 | how impressive this technology is? Or is it, oh wow, human
00:10:09.680 | cognition, which maybe we thought was incredibly special, but we can
00:10:13.520 | actually simulate a significant portion of what we do as humans,
00:10:17.080 | so we're kind of taking the shine off of consciousness?
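A toy sketch of the Stanford/Google "generative agents" idea just described: NPCs that store memories and condition their replies on them. The actual paper also adds retrieval, reflection, and planning; this only shows the memory-conditioned response step, with a hypothetical call_llm stub.

```python
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call."""
    raise NotImplementedError

@dataclass
class SimAgent:
    """A toy memory-backed NPC: every observation is stored, and recent
    memories are stuffed into the prompt when the agent responds."""
    name: str
    persona: str
    memories: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        self.memories.append(event)

    def respond(self, message: str, from_agent: str) -> str:
        recent = "\n".join(self.memories[-10:])  # naive recency-only retrieval
        reply = call_llm(
            f"You are {self.name}, {self.persona}.\n"
            f"Your recent memories:\n{recent}\n"
            f"{from_agent} says: {message}\nReply in character."
        )
        self.memories.append(f"I told {from_agent}: {reply}")
        return reply

# e.g. host = SimAgent("Isabella", "a cafe owner planning a birthday party")
#      host.observe("I decided to throw a party on Friday at 5pm.")
#      host.respond("Any plans this week?", from_agent="Klaus")
```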
00:10:20.480 | I'm not sure it's that, but I would make two comments. I think
00:10:23.040 | this is a really important week. Because it starts to show how
00:10:28.680 | fast the recursion is with AI. So in other technologies, and in
00:10:35.160 | other breakthroughs, the recursive iterations took years,
00:10:39.400 | right? If you think about how long did we wait for from iPhone
00:10:42.720 | one to iPhone two, it was a year, right? We waited two years
00:10:47.600 | for the app store. Everything was measured in years, maybe
00:10:51.920 | things when they were really, really aggressive, and really
00:10:55.240 | disruptive were measured in months. Except now, these
00:10:59.200 | incredibly innovative breakthroughs are being measured
00:11:01.920 | in days and weeks. That's incredibly profound. And I think
00:11:07.360 | it has some really important implications to like the three
00:11:11.160 | big actors in this play, right? So it has, I think, huge
00:11:14.680 | implications to these companies, it's not clear to me how you
00:11:18.320 | start a company anymore. I don't understand why you would have a
00:11:24.600 | 40 or 50 person company to try to get to an MVP. I think you
00:11:29.960 | can do that with three or four people. And that has huge
00:11:35.880 | implications then to the second actor in this play, which are
00:11:39.160 | the investors and venture capitalists that typically fund
00:11:41.600 | this stuff, because all of our capital allocation models were
00:11:45.240 | always around writing 10 and 15 and $20 million checks and $100
00:11:49.200 | million checks, then $500 million checks into these
00:11:52.160 | businesses that absorbs tons of money. But the reality is like,
00:11:56.760 | you know, you're looking at things like mid journey and
00:11:59.560 | others that can scale to enormous size with very little
00:12:03.560 | capital, many of which can now be bootstrapped. So it takes
00:12:08.080 | really, really small amounts of money. And so I think that's a
00:12:12.520 | huge implication. So for me, personally, I am looking at
00:12:15.200 | company formation being done in a totally different way. And our
00:12:20.280 | capital allocation model is totally wrong-sized. Look, a fund
00:12:23.040 | for me was $1 billion. Does that make sense for the
00:12:28.000 | next three or four years? No. The right number may actually be
00:12:30.640 | $50 million invested over the next four years. I think the VC
00:12:34.640 | job is changing. I think company startups are changing. I want to
00:12:37.560 | remind you guys of one quick thing as a tangent. I had this
00:12:41.600 | meeting with Andrej Karpathy, I talked about this on the pod,
00:12:44.200 | where I said I challenged him, I said, Listen, the real goal
00:12:47.520 | should be to go and disrupt existing businesses using these
00:12:50.480 | tools, cutting out all the sales and marketing, right, and just
00:12:54.560 | delivering something and I use the example of stripe,
00:12:56.880 | disrupting Stripe by going to market with an equivalent
00:12:59.920 | product with one-tenth the number of employees at one-tenth the
00:13:03.240 | cost. What's incredible is that this AutoGPT is the answer to
00:13:07.960 | that exact problem. Why? Because now if you are a young
00:13:12.360 | industrious entrepreneur, if you look at any bloated organization
00:13:17.040 | that's building enterprise class software, you can string
00:13:20.760 | together a bunch of agents that will auto construct everything
00:13:25.200 | you need to build a much much cheaper product that then you
00:13:29.280 | can deploy for other agents to consume. So you don't even need
00:13:32.920 | a sales team anymore. This is what I mean by this crazy
00:13:35.720 | recursion that's possible. Yeah. So I'm really curious to see how
00:13:39.880 | this actually affects all of this, all of these, you know,
00:13:43.360 | singular companies. I mean, it's a continuation, Chamath, of...
00:13:47.000 | And then the last thing I just want to say is related to my
00:13:49.200 | tweet. I think this is exactly the moment where we now have to
00:13:53.440 | have a real conversation about regulation. And I think it has
00:13:56.080 | to happen. Otherwise, it's going to be a shit show.
00:13:57.680 | Let's put a pin in that for a second. But I want to get Sacks's
00:14:00.120 | response to some of this. So, Sacks, we saw this before: it used
00:14:04.040 | to take $2 or $3 million to commercialize a web-based
00:14:06.720 | software product or app, then it went down to 500k, then 250. I
00:14:11.200 | don't know if you saw this story. But if you remember the
00:14:14.000 | hit game on your iPhone, Flappy Bird. Flappy Bird, you know,
00:14:19.160 | was a phenomenon. You know, hundreds of millions of people
00:14:22.760 | played this game over some period of time. Somebody remade it
00:14:27.320 | by talking to GPT-4 and Midjourney in an hour. So the
00:14:32.040 | perfect example and listen, it's a game. So it's something silly.
00:14:34.920 | But I was talking to two developers this weekend. And one
00:14:38.280 | of them was an okay developer. And the other one was an actual
00:14:41.400 | 10x developer who's built, you know, very significant
00:14:44.000 | companies. And they were coding together last week. And because
00:14:47.880 | of how fast ChatGPT and other services were writing code for
00:14:51.640 | them, he looked over at her and said, you know, you're basically
00:14:56.840 | a 10x developer now, my superpower is gone. So where
00:15:01.640 | does this lead you to believe company formation is going to
00:15:05.760 | go? Is this going to be, you know, massively deflationary
00:15:09.640 | companies like stripe are going to have 100 competitors in a
00:15:12.840 | very short period of time? Or are we just going to go down the
00:15:15.080 | long tail of ideas and solve everything with software? How's
00:15:20.120 | this going to play out in the startup space? David Sacks?
00:15:24.680 | Well, I think it's true that developers and especially junior
00:15:29.560 | developers get a lot more leverage on their time. And so
00:15:33.520 | it is going to be easier for small teams to get to an MVP,
00:15:36.760 | which is something they always should have done anyway, with
00:15:39.520 | their seed round, you shouldn't have needed, you know, 50
00:15:42.720 | developers to build your v1, it should be, you know, this the
00:15:46.600 | founders really. So that, that I think is already happening. And
00:15:52.160 | that trend will continue. I think we're still a ways away
00:15:55.760 | from startups being able to replace entire teams of people.
00:16:00.200 | I just, you know, I think right now, we're at... Define 'a ways'.
00:16:03.440 | A ways: months, years, decades? Oh, it's in the years, I
00:16:08.000 | think, for sure. We don't know how many years. And the reason I
00:16:10.760 | say that it's just very hard to replace, you know, 100% of what
00:16:16.600 | any of these particular job functions do 100% of what a
00:16:20.080 | sales rep does 100% of what a marketing rep does, or even what
00:16:23.760 | a coder does. So right now, I think we're still at the phase
00:16:26.280 | of this, where it's a tool that gives a human leverage. And I
00:16:32.000 | think we're still a ways away from the, you know, human being
00:16:34.600 | completely out of the loop. I think right now, I see it mostly
00:16:38.120 | as a force for good, as opposed to something that's creating,
00:16:42.160 | you know, a ton of dislocation.
00:16:44.240 | Okay. Friedberg, your thoughts?
00:16:46.000 | if we follow the trend line, you know, to make that video game
00:16:49.360 | that you shared took probably a few hundred human years, then a
00:16:52.800 | few dozen human years, then, you know, with other toolkits
00:16:57.200 | coming out, maybe a few human months, and now this person did
00:17:00.920 | it in one human day, using this tooling. So if you think about
00:17:05.720 | the implication for that, I mentioned this probably last
00:17:08.800 | year, I really do believe that at some point, the whole concept
00:17:12.360 | of publishers and publishing maybe goes away, where, you
00:17:15.920 | know, much like we saw so much of the content on the internet
00:17:18.200 | today being user generated, you know, most of the content is
00:17:20.840 | made by individuals posted on YouTube or Twitter, that's most
00:17:23.920 | of what we consume nowadays, or Instagram or Tick Tock, in terms
00:17:27.360 | of video content. We could see the same in terms of software
00:17:32.840 | itself, where you no longer need a software startup or a software
00:17:36.720 | company to render or generate a set of tools for a particular
00:17:40.600 | user, but that the user may be able to define to their agent,
00:17:45.120 | their AI agent, the set of tools that they would individually
00:17:48.440 | like to use or to create for them to do something interesting.
00:17:51.360 | And so the idea of buying or subscribing to software, or even
00:17:55.760 | buying or subscribing to a video game, or to a movie or to some
00:17:59.240 | other form of content starts to diminish. As the leverage goes
00:18:04.520 | up with these tools, the accessibility goes up, you no
00:18:06.920 | longer need a computer engineering degree or computer
00:18:09.000 | science degree, to be able to harness them or use them. And
00:18:12.280 | individuals may be able to speak in simple and plain English,
00:18:15.120 | that they would like a book or a movie that does that looks and
00:18:18.840 | feels like the following or a video game that feels like the
00:18:21.600 | following. And so when I open up my iPhone, maybe it's not a
00:18:24.960 | screen with dozens of video games, but it's one interface.
00:18:27.760 | And the interface says, What do you feel like playing today. And
00:18:30.360 | then I can very clearly and succinctly state what I feel
00:18:32.440 | like playing and it can render that game and render the code,
00:18:34.680 | render the engine, render the graphics and everything on the
00:18:37.840 | fly for me. And I can use that. And so you know, I kind of think
00:18:41.160 | about this as being a bit of a leveling up that the idea that
00:18:44.440 | all technology again, start central and move to kind of the
00:18:47.040 | edge of the network over time. That may be what's going on with
00:18:50.680 | computer programming itself now, where the toolkit to actually
00:18:54.560 | use computers to generate stuff for us is no longer a toolkit
00:18:59.520 | that's harnessed and controlled and utilized by a set of
00:19:02.400 | centralized publishers, but it becomes distributed and used to
00:19:05.360 | the edge of the network by users like anyone. And then the edge
00:19:09.160 | of the network technology can render the software for you. And
00:19:12.520 | it really creates a profound change in the entire business
00:19:15.440 | landscape of software and the internet. And I think it's, you
00:19:20.000 | know, it's, it's really like, we're just starting to kind of
00:19:22.520 | see have our heads unravel around this notion. And we're
00:19:25.800 | sort of trying to link it to the old paradigm, which is all
00:19:28.160 | startups are gonna get cheaper, smaller teams. But it may be
00:19:30.560 | that you don't even need startups for a lot of stuff
00:19:32.280 | anymore. You don't even need teams. And you don't even need
00:19:34.280 | companies to generate and render software to do stuff for you
00:19:37.320 | anymore. Chamath, when we look at this, it's kind of a pattern of
00:19:42.320 | augmentation, as we've been talking about here: we're
00:19:45.760 | augmenting human intelligence, then replacing it. This replication,
00:19:51.960 | or this automation, I guess, might be a nicer way to say it,
00:19:54.600 | so augmentation, then automation, and then perhaps
00:19:59.040 | deprecation. Where do you sit on this? It seems like Sacks feels
00:20:02.920 | it's going to take years, and Freeberg thinks, hey, maybe
00:20:05.800 | startups and content are over? Where do you sit on this
00:20:08.640 | augmentation, automation, deprecation journey we're on?
00:20:12.120 | I think that humans have judgment. And I think it's going
00:20:14.640 | to take decades for agents to replace good judgment. And I
00:20:18.160 | think that's where we have some defensible ground. And I'm going
00:20:22.880 | to say something controversial: I don't think developers anymore
00:20:26.240 | have good judgment. Developers get to the answer, or they don't
00:20:30.400 | get to the answer. And that's what agents have done. Because
00:20:32.960 | the 10x engineer had better judgment than the 1x
00:20:36.320 | engineer. But by making everybody a 10x engineer, you're
00:20:39.920 | taking judgment away; you're taking code paths that are now
00:20:44.120 | obvious and making them available to everybody. It's effectively
00:20:47.360 | like what happened in chess, when AI created a solver. So
00:20:50.920 | everybody understood the most efficient path in every single
00:20:54.720 | spot to do the most EV positive thing, the most expected value
00:20:58.800 | positive thing. Coding is very similar that way you can reduce
00:21:02.000 | it and view it very, very reductively. So there is no
00:21:05.800 | differentiation in code. And so I think freeberg is right. So
00:21:08.920 | for example, let's say you're going to start a company today.
00:21:11.600 | Why do you even care what database you use? Why do you
00:21:17.040 | even care? Which cloud you're built on? To freeberg's point,
00:21:22.160 | why do any of these things matter? They don't matter. They
00:21:25.240 | were decisions that used to matter when people had a job to
00:21:28.720 | do. And you paid them for their judgment. Oh, well, we think
00:21:31.760 | GCP is better for this specific workload. And we think that this
00:21:35.520 | database architecture is better for that specific workload. And
00:21:38.160 | we're going to run this on AWS, but that on Azure. And do you
00:21:42.400 | think an agent cares? If you tell an agent, find me the
00:21:45.720 | cheapest way to execute this thing. And if it ever gets,
00:21:50.280 | you know, cheaper to go someplace else, do that for me
00:21:52.240 | as well. And, you know, ETL all the data and put it in the other
00:21:55.520 | thing, and I don't really care.
00:21:57.320 | So you're saying it will, it will swap out Stripe for Adyen,
00:22:00.520 | or, you know, swap one cloud for Amazon Web Services; it's going to be
00:22:03.680 | ruthless.
00:22:04.400 | It's going to be ruthless. And I think that's the point of it,
00:22:07.280 | and that's the exact perfect word, Jason: AI is ruthless,
00:22:10.880 | because it's emotionless. It was not taken to a steak dinner. It
00:22:15.200 | was not brought to a basketball game. It was not sold into a CEO.
00:22:19.760 | It's an agent that looked at a bunch of API endpoints figured
00:22:25.120 | out how to write code to it to get done the job at hand that
00:22:28.400 | was passed to it within a budget, right. The other thing
00:22:31.120 | that's important is these agents execute within budgets. So
00:22:35.160 | another good example was, and this is a much simpler one. But
00:22:39.840 | a guy said, I would like seven days worth of meals. Here are my
00:22:46.120 | constraints from a dietary perspective. Here are also my
00:22:49.640 | budgetary constraints. And then what this agent did was figured
00:22:53.040 | out how to go and use the Instacart plugin at the time and
00:22:56.400 | then these other things and execute within the budget. How
00:23:00.120 | is that different when you're a person that raises $500,000 and
00:23:03.800 | says, I need a full stack solution that does x, y and z
00:23:06.720 | for $200,000. It's the exact same problem. So I think it's
00:23:11.960 | just a matter of time until we start to cannibalize these
00:23:15.280 | extremely expensive, ossified large organizations that have
00:23:19.360 | relied on a very complicated go to market and sales and
00:23:22.440 | marketing motion. I don't think you need it anymore in a world
00:23:24.840 | of agents and AutoGPT. And I think that, to me, is quite
00:23:29.360 | interesting, because, A, it creates an obvious set of public
00:23:33.040 | company shorts. And then, B, you actually want to arm the rebels.
00:23:38.840 | And arming the rebels, to use the Toby Lütke analogy here, would
00:23:42.680 | mean to seed hundreds of one-person teams, hundreds, and just
00:23:48.400 | say go and build this entire stack all over again using a
00:23:51.040 | bunch of agents. Yeah, recursively, you'll get to that
00:23:54.440 | answer in less than a year.
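A minimal sketch of the budget-constrained execution Chamath is describing: an agent that ruthlessly picks the cheapest options meeting its constraints until the budget runs out. All names and numbers here are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost: float
    meets_constraints: bool  # e.g. dietary constraints, feature requirements

def execute_within_budget(options: list[Option], budget: float) -> list[Option]:
    """Greedily select the cheapest acceptable options until the budget is
    spent; no preference for any vendor beyond price and fit."""
    chosen = []
    for opt in sorted(options, key=lambda o: o.cost):
        if opt.meets_constraints and opt.cost <= budget:
            chosen.append(opt)
            budget -= opt.cost
    return chosen

if __name__ == "__main__":
    meals = [
        Option("Monday dinner kit", 22.50, True),
        Option("Tuesday takeout", 48.00, True),
        Option("Wednesday groceries", 18.75, True),
        Option("Steakhouse splurge", 120.00, False),  # violates the constraints
    ]
    for pick in execute_within_budget(meals, budget=75.00):
        print(pick.name, pick.cost)
```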
00:23:56.560 | Interestingly, when you talk about the emotion of making
00:24:01.040 | these decisions, if you look at Hollywood, I just interviewed on
00:24:04.280 | my other podcast, the founder of... You have another podcast? I do.
00:24:09.000 | It's called This Week in Startups. Thank you. You've been on there four times,
00:24:12.800 | please.
00:24:13.080 | Don't give him an excuse to plug it.
00:24:15.400 | Listen, I'm not going to plug This Week in Startups, available
00:24:18.480 | on Spotify and iTunes and youtube.com slash this week in.
00:24:21.320 | Runway is the name of the company I interviewed. And
00:24:24.160 | what's fascinating about this is he told me that on Everything
00:24:27.880 | Everywhere All at Once, the award-winning film, they had seven
00:24:32.480 | visual effects people on it, and they were using his software. The
00:24:36.000 | late night shows like Colbert and stuff like that are using it.
00:24:38.720 | They are ruthless in terms of creating crazy visual effects
00:24:42.400 | now, without and you can do text prompt to get video output. And
00:24:48.320 | it is quite reasonable what's coming out of it. But you can
00:24:51.240 | also train it on existing data sets. So they're going to be
00:24:54.120 | able to take something, Sacks, like The Simpsons, or South
00:24:58.880 | Park, or Star Wars, or Marvel, take the entire corpus of the
00:25:02.760 | comic books and the movies and the TV shows, and then have
00:25:06.080 | people type in have Iron Man do this have Luke Skywalker do
00:25:09.560 | that. And it's going to output stuff. And I said, Hey, when
00:25:12.400 | would this reach the level that the Mandalorian TV show is and
00:25:17.760 | he said within two years now he's talking his own book, but
00:25:20.560 | it's quite possible that that all these visual effects people
00:25:24.440 | from industrial light magic on down are going to be replaced
00:25:28.600 | with director sacks who are currently using this technology
00:25:32.200 | to do what do they call the images like that go with the
00:25:35.360 | script storyboards storyboards Thank you. They're doing
00:25:38.520 | storyboards in this right now, right? The difference between
00:25:41.200 | the storyboard sacks and the output is closing in the next
00:25:45.360 | 30 months, I would say, right? I mean, maybe you could speak to
00:25:48.800 | a little bit about the pace here, because that is the
00:25:50.920 | perfect ruthless example of ruthless AI. I mean, you could
00:25:53.360 | have the entire team at industrial light magics or
00:25:56.760 | Pixar be unnecessary. This decade?
00:26:00.400 | Well, I mean, you see a bunch of the pieces already there. So
00:26:03.040 | you have Stable Diffusion, you have the ability to type in the
00:26:05.800 | image that you want, and it spits out, you know, a version
00:26:08.840 | of it, or 10 different versions of it. And you can pick which
00:26:10.800 | one you want to go with, you have the ability to create
00:26:13.200 | characters, you have the ability to create voices, you
00:26:17.280 | have the ability to replicate a celebrity voice, the only thing
00:26:20.800 | that's not there yet, as far as I know, is the ability to take
00:26:24.360 | static images and string them together into a motion picture.
00:26:27.280 | But that seems like it's coming really soon. So yeah, in theory,
00:26:30.680 | you should be able to train the model, where you just give it a
00:26:33.560 | screenplay, and it outputs, essentially an animated movie.
00:26:37.680 | And then you should be able to fine tune it by choosing the
00:26:40.240 | voices that you want, and the characters that you want. And,
00:26:43.360 | you know, and that kind of stuff. So yeah, I think we're
00:26:46.360 | close to it. Now, I think that the question, though, is, you
00:26:54.360 | know, every nine, let's call it, of reliability is a big
00:26:54.360 | advancement. So yeah, it might be easy to get to 90% within two
00:26:59.240 | years, but it might take another two years to go from 90 to 99%.
00:27:02.200 | And then it might take another two years to get to 99.9, and so
00:27:06.320 | on. And so to actually get to the point where you're at this
00:27:09.280 | stage where you can release a theatrical quality movie, I'm
00:27:12.840 | sure it will take a lot longer than two years.
00:27:14.360 | Well, but look at this, Sacks. I'm just gonna show you one image.
00:27:16.600 | The input was "aerial drone footage of a mountain
00:27:19.840 | range." And this is what it came up with. Now, if you were
00:27:22.720 | watching TV in the 80s or 90s, on a non-HD TV, this would look
00:27:26.960 | indistinguishable from anything you've seen. And so this is at a
00:27:31.560 | pace that's kind of crazy. There's also opportunity here,
00:27:34.040 | right, Friedberg. I mean, if we were to look at something like
00:27:36.880 | the Simpsons, which has gone on for 30 years, if young people
00:27:41.080 | watching the Simpsons could create their own scenarios, or
00:27:44.960 | with AutoGPT, imagine you told the Simpsons Stable Diffusion
00:27:51.200 | instance: read what's happening in the news, have Bart Simpson
00:27:55.400 | respond to it, have the South Park characters parody whatever
00:27:59.480 | happened in the news today, you could have automated real time
00:28:02.800 | episodes of South Park, just being published on to some
00:28:06.520 | website. Before you move on, did you see the Wonder Studio
00:28:10.000 | demo? We can pull this one up. It's really cool. Yeah, please.
00:28:13.640 | This is a startup that's using this type of technology. And the
00:28:17.800 | way it works is you film a live action scene with a regular
00:28:23.680 | actor, but then you can just drag and drop an animated
00:28:26.160 | character onto it. And it then converts that scene into a movie
00:28:32.360 | with that character, like Planet of the Apes or Lord of the
00:28:35.240 | Rings, right? Yeah, yeah. Andy Serkis, was he the person who kept
00:28:38.280 | winning all the Oscars? So there it goes, after the robot has
00:28:40.880 | replaced the human. Wow, you can imagine like every piece of this
00:28:44.960 | just eventually gets swapped out with AI, right? Like you should
00:28:48.320 | be able to tell the AI give me a picture of a human leaving a
00:28:56.440 | building, like a Victorian era building in New York. And
00:29:01.440 | certainly it can give you a static image of that. So it's
00:29:03.400 | not that far to then give you a video of that, right. And so I
00:29:07.920 | yeah, I think we're, we're pretty close for, let's call it
00:29:10.840 | hobbyists or amateurs, to be able to create pretty nice-looking
00:29:14.520 | movies, using these types of tools. But again, I think
00:29:18.280 | there's a jump to get to the point where you're just
00:29:21.080 | altogether replacing them.
00:29:22.240 | One of the things I'll say on this is we still keep trying to
00:29:26.000 | relate it back to the way media narrative has been explored and
00:29:31.360 | written by humans in the past, very kind of linear
00:29:35.160 | storytelling, you know, it's a two hour movie, 30 minute TV
00:29:38.360 | segment, eight minute YouTube clip, 30 second Instagram clip,
00:29:41.760 | whatever. But one of the enabling capabilities with this
00:29:47.080 | set of tools is that these stories, the way that they're
00:29:51.960 | rendered, and the way that they're explored by individuals
00:29:54.840 | can be fairly dynamic. You could watch a movie with the same
00:30:00.800 | story, all four of us could watch a movie with the same
00:30:03.160 | story, but from totally different vantage points. And
00:30:06.280 | some of us could watch it in an 18 minute version or a two hour
00:30:09.360 | version or a, you know, three-season episodic version,
00:30:13.160 | where the way that this opens up the potential for creators and
00:30:17.360 | also, so now I'm kind of saying, before I was saying, hey,
00:30:20.560 | individuals can make their own movies and videos, that's going
00:30:22.800 | to be incredible. There's a separate, I think, creative
00:30:26.480 | output here, which is the leveling up that happens with
00:30:31.000 | creators, that maybe wasn't possible to them before. So
00:30:34.280 | perhaps a creator writes a short book, a short story. And then
00:30:38.280 | that short story gets rendered into a system that can allow
00:30:41.520 | each one of us to explore it and enjoy it in different ways. And
00:30:44.960 | I, as the creator can define those different vantage points,
00:30:48.280 | I, as the creator can say, here's a little bit of this
00:30:50.520 | fast personality, this character trait. And so what I can now do
00:30:54.360 | as a creator is stuff that I never imagined I could do
00:30:56.960 | before. Think about old school photographers doing black and
00:30:59.680 | white photography with pinhole cameras, and then they come
00:31:02.640 | across Adobe Photoshop, what they can do with Adobe Photoshop
00:31:05.680 | is stuff that they could never conceptualize in those old
00:31:09.000 | days, I think what's going to happen for creators going
00:31:11.480 | forward. And this is going back to that point that we had last
00:31:13.680 | week or two weeks ago about the guy that was like, hey, I'm out
00:31:15.840 | of a job. I actually think that the opportunity for creating new
00:31:19.320 | stuff in new ways is so profoundly expanding, that
00:31:23.440 | individuals can now write entire universes that can then be
00:31:27.120 | enjoyed by millions of people from completely different lengths
00:31:30.480 | and viewpoints and models; they can be interactive, they
00:31:33.920 | can be static, they can be dynamic, and they can be
00:31:36.640 | personalized. But the tooling that you as a
00:31:39.760 | creator now have, you could choose which characters you want
00:31:42.760 | to define, you could choose which content you want to write,
00:31:46.720 | you could choose which content you want the AI to fill in for
00:31:49.520 | you and say, hey, create 50 other characters in the village.
00:31:52.800 | And then when the viewer reads the book or watches the movie,
00:31:55.320 | let them explore or have a different interaction with a set
00:31:58.120 | of those villagers in that village. Or you could say, hey,
00:32:01.440 | here's the one character everyone has to meet, here's
00:32:03.600 | what I want them to say. And you can define the dialogue. And so
00:32:06.600 | the way that creators can start to kind of harness their
00:32:09.200 | creative chops, and create new kinds of modalities for content
00:32:13.880 | and for exploration, I think is going to be so beautiful and
00:32:16.600 | incredible.
00:32:17.160 | I mean, Friedberg.
00:32:19.160 | Yeah, you can choose the limits of how much you want the
00:32:22.440 | individual to enjoy from your content, versus how narrowly you
00:32:25.800 | want to define it. And my guess is that the creators that are
00:32:28.800 | going to win are going to be the ones that are going to create
00:32:30.880 | more dynamic range in their creative output. And then
00:32:33.920 | individuals are going to kind of be sucked in; they're going to be
00:32:36.800 | more into that than they will with the static, everyone
00:32:39.080 | watches the same thing over and over. So there will be a whole
00:32:41.320 | new world of creators that you know, maybe have a different set
00:32:43.960 | of tools than just...
00:32:47.440 | To build on what you're saying, Friedberg, which I think is
00:32:49.240 | incredibly insightful. Just think about the controversy
00:32:51.800 | around two aspects of a franchise like James Bond.
00:32:55.080 | Number one, who's your favorite bond? We grew up with Roger
00:32:57.840 | Moore, we lean towards that, then we discover Sean Connery,
00:33:00.160 | and then all of a sudden you see, you know, the latest one,
00:33:02.640 | he's just extraordinary. And Daniel Craig, you're like, you
00:33:06.400 | know what, that's the one that I love most. But what if you could
00:33:08.400 | take any of the films, you'd say, let me get you know, give
00:33:10.440 | me the spy who loved me, but put Daniel Craig in it, etc. And
00:33:13.720 | that would be available to you. And then think about the next
00:33:15.360 | controversy, which is Oh, my God, does Daniel, does James
00:33:18.120 | Bond need to be a white guy from the UK? Of course not. You can
00:33:21.120 | release it around the world and each region could get their own
00:33:24.360 | celebrity, their number one celebrity to play the lead and
00:33:27.920 | controversy over,
00:33:29.360 | You know the old story, the Epic of Gilgamesh, right? So
00:33:32.120 | like, that story was retold in dozens of different languages.
00:33:35.760 | And it was told through the oral tradition. It was like, you
00:33:38.160 | know, spoken by bards around a fire pit and whatnot. And all of
00:33:41.720 | those stories were told with different characters and
00:33:43.880 | different names and different experiences. Some of them were
00:33:47.000 | 10 minutes long, some of them were multi hour sagas explained
00:33:50.200 | through the story. But ultimately, the morality of the
00:33:53.360 | story, the storyline, the intentionality of the original
00:33:56.240 | creator of that story came through. The Bible is another
00:33:59.040 | good example of this, where much of the underlying morality and
00:34:01.760 | ethics in the Bible comes through in different stories
00:34:04.080 | read by different people in different languages. That may
00:34:07.120 | be where we go like, my kids want to have a 10 minute bedtime
00:34:09.800 | story. Well, let me give them Peter Pan at 10 minutes, I want
00:34:12.560 | to do you know, a chapter a night for my older daughter for
00:34:15.640 | a week long of Peter Pan. Now I can do that. And so the way that
00:34:19.480 | I can kind of consume content becomes different. So I guess
00:34:22.800 | what I'm saying is there's two aspects to the way that I think
00:34:25.760 | the entire content, the realm of content can be rewritten through
00:34:29.720 | AI. The first is like individual personalized creation of
00:34:32.720 | content, where I as a user can render content that was of my
00:34:36.520 | liking and my interest. The second is that I can engage with
00:34:39.880 | content that is being created that is so much more
00:34:42.120 | multidimensional than anything we conceive of today, where
00:34:45.000 | current centralized content creators now have a whole set of
00:34:47.320 | tools. Now from a business model perspective, I don't think that
00:34:50.240 | publishers are really the play anymore. But I do think
00:34:52.880 | that platforms are going to be the play. And the platform
00:34:55.160 | tooling that enables the individuals to do this stuff and
00:34:57.600 | the platform tooling that enables the content creators to
00:35:00.200 | do this stuff are definitely entirely new industries and
00:35:03.320 | models that can create multi hundred billion dollar outcomes.
00:35:06.360 | Let me hand this off to Sacks, because there has been the dream
00:35:09.640 | for everybody, especially in the Bay Area, of a hero coming and
00:35:15.040 | saving Gotham City. And this has finally been realized: David
00:35:20.000 | Sacks. I did my own little Twitter AI hashtag and I said to
00:35:26.160 | Twitter AI, if only: please generate a picture of David
00:35:29.720 | Sacks as Batman crouched down on the peak of a bridge. The
00:35:33.600 | amount of creativity, Sacks, that came from this... and this is
00:35:37.640 | something that you know, if we were talking about just five
00:35:41.160 | years ago, this would be like a $10,000 image you could create
00:35:44.240 | by the way, it's a birthday. These were not professional
00:35:46.880 | quote-unquote artists; these were individuals that
00:35:50.480 | were able to harness a set of platform tools to generate this
00:35:53.640 | incredible new content. And I think it speaks to the
00:35:56.000 | opportunity ahead. And by the way, we're in inning one, right?
00:35:58.640 | Sacks, when you see yourself as Batman, do you ever think you
00:36:02.160 | should take your enormous wealth and resources and put it towards
00:36:05.320 | building a cave under your mansion that lets you out
00:36:09.080 | underneath the Golden Gate Bridge and you could go fight
00:36:10.880 | crime? So good. So good. Do you want to go fight this crime in
00:36:14.800 | Gotham?
00:36:15.240 | I think San Francisco has a lot of Gotham like qualities. I
00:36:19.240 | think the villains are more real than the heroes. Unfortunately,
00:36:22.240 | we don't have a lot of heroes. But yeah,
00:36:23.800 | we got a lot of jokers.
00:36:25.040 | Jokers, yeah. That's a whole separate topic, I'm sure,
00:36:29.120 | a separate topic we'll get to at some point today. You guys
00:36:31.920 | are talking about all this stupid bullshit. Like there
00:36:34.600 | are trillions of dollars of software companies that could
00:36:36.440 | get disrupted. And you're talking about making fucking
00:36:38.160 | children's books and fat pictures of Sacks. It's so dumb.
00:36:41.360 | it's a conversation.
00:36:42.320 | Great job. No, nobody cares about entertainment anymore,
00:36:47.320 | because it's totally... Okay, so why don't you talk about
00:36:49.520 | industries where the money is? Why don't you teach people where
00:36:53.040 | there's going to be actual economic destruction? This is
00:36:55.280 | going to be amazing economic destruction and opportunity.
00:36:57.960 | You spend all this time on the stupidest fucking topics.
00:37:01.520 | Listen, it's an illustrative example. No, it's an elitist
00:37:04.680 | example that, you know... it's a fucking circle jerk.
00:37:07.400 | It's Batman... nobody, nobody cares. Let's bring up your
00:37:11.640 | lordship's tweet, everybody.
00:37:13.200 | I mean, I think, I think US box office is like 20. Here, I
00:37:17.080 | remember when it was like 100 billion a year payment volume,
00:37:20.640 | and now it's like hundreds of billions. So
00:37:22.320 | Adyen and Stripe are going to process $2 trillion almost. Once
00:37:25.720 | you talk about that disruption... Do you know the market size of the US
00:37:28.760 | media and entertainment industry? 717 billion. Okay, it's not
00:37:32.120 | insignificant.
00:37:33.000 | Video games are nearly half a trillion a year.
00:37:35.720 | Yeah, I mean, this is not insignificant. But let's pull up
00:37:38.920 | Chamath's tweet. Of course, the dictator wants to dictate. Here,
00:37:42.200 | all this incredible innovation is being made, and a new hero
00:37:46.880 | has been born: Chamath Palihapitiya, with a tweet that went
00:37:50.440 | viral, over 1.2 million views already. I'll read your tweet
00:37:54.280 | for the audience. If you invent a novel drug, you need the
00:37:57.520 | government to vet and approve it FDA before you can
00:37:59.920 | commercialize it. If you invent a new mode of air travel, you
00:38:02.720 | need the government to vet and approve it. FAA. I'm just going
00:38:05.800 | to edit this down a little bit. If you create new security, you
00:38:08.000 | need the government to vet it and approve it sec more
00:38:10.080 | generally, when you create things with broad societal
00:38:12.720 | impact, positive and negative, the government creates a layer
00:38:15.160 | to review and approve it. AI will need such an oversight
00:38:18.200 | body. The FDA approval process seems the most credible and
00:38:21.480 | adaptable into a framework to understand how a model behaves
00:38:26.520 | and its counterfactual. Our political leaders need to get in
00:38:30.840 | front of this sooner rather than later and create some oversight
00:38:33.920 | before the eventual big avoidable mistakes happen. And
00:38:37.120 | genies are let out of the bottle. Chamath, you really want
00:38:39.560 | the government to come in. And then when people build these
00:38:43.560 | tools, they have to submit them to the government to approve
00:38:46.240 | them. That's what you're saying here. And you want that to
00:38:48.200 | start now.
00:38:48.840 | Here's the alternative. The alternative is going to be the
00:38:52.240 | debacle that we know as Section 230. So if you try to write a
00:38:57.640 | brittle piece of legislation or try to use old legislation to
00:39:02.960 | deal with something new, it's not going to do a good job
00:39:06.720 | because technology advances way too quickly. And so if you look
00:39:11.240 | at the Section 230 example, where have we left ourselves,
00:39:14.120 | the politicians have a complete inability to pass a new
00:39:17.720 | framework to deal with social media to deal with
00:39:20.080 | misinformation. And so now we're all kind of guessing what a
00:39:25.120 | bunch of 70- and 80-year-old Supreme Court justices will
00:39:29.160 | do in trying to rewrite technology law when they have to
00:39:33.080 | apply it to Section 230. So the point of that tweet was to lay
00:39:37.320 | out the alternatives. There is no world in which this will be
00:39:41.720 | unregulated. And so I think the question to ask ourselves is do
00:39:46.280 | we want a chance for a new body? So the FDA is a perfect example
00:39:51.600 | why, even though the FDA commissioner is appointed by the
00:39:54.600 | President, it is a quasi-independent organization; it's still arm's
00:39:58.560 | length away. It has subject matter experts that they hire,
00:40:02.960 | and they have many pathways to approval. Some pathways take
00:40:08.480 | days, some pathways are months and years, some pathways are for
00:40:12.400 | breakthrough innovation, some pathways are for devices. So
00:40:15.640 | they have a broad spectrum of ways of arbitrating what can
00:40:20.240 | be commercialized and what cannot. Otherwise, my
00:40:23.400 | prediction is we will have a very brittle law that will not
00:40:27.400 | work. It'll be like the Commerce Department and the FTC trying to
00:40:31.920 | gerrymander some old piece of legislation. And then what will
00:40:35.840 | happen is it'll get escalated to the Supreme Court. And I think
00:40:38.840 | they are the last group of people who should be deciding on
00:40:43.720 | this incredibly important topic for society. So what I have been
00:40:48.760 | advocating to our leaders, and I will continue to do so, is don't try
00:40:53.280 | to ram this into an existing body. It is so important, it is
00:40:56.840 | worth creating a new organization like the FDA and
00:41:01.560 | having a framework that allows you to look at a model and look
00:41:05.320 | at the counterfactual, judge how good, how important, how
00:41:10.040 | disruptive it is, and then release it into the wild
00:41:13.080 | appropriately. Otherwise, I think you'll have these ChaosGPT
00:41:15.480 | things scale infinitely. Because again, as Friedberg and
00:41:20.320 | Sacks said, you're talking about one person that can create
00:41:22.720 | this chaos, multiply that by every person that is an
00:41:26.520 | anarchist or every person that just wants to sow seeds of chaos
00:41:30.000 | and I think it's going to be all avoidable.
00:41:31.920 | I think regulating what software people can write is a near
00:41:35.320 | impossible task. Number one, I think you can probably put rules
00:41:39.040 | and restrictions around commerce, right? That's
00:41:41.240 | certainly feasible in terms of how people can monetize but in
00:41:45.120 | terms of writing and utilizing software, it's going to be as
00:41:48.920 | challenged as trying to monitor and demand oversight and
00:41:59.240 | regulation around how people write and use tools for
00:41:59.240 | genome and biology exploration. Certainly, if you want to take a
00:42:02.800 | product to market and sell a drug to people that can
00:42:04.920 | influence their body, you have to go get that approved. But in
00:42:08.360 | terms of you know, doing your work in a lab, it's very
00:42:11.880 | difficult. I think the other challenge here is software can
00:42:16.080 | be written anywhere. It can be executed anywhere. And so if the
00:42:20.920 | US does try to regulate, or does try to put the brakes on the
00:42:26.640 | development of tools where the US can have kind of a great
00:42:30.160 | economic benefit and a great economic interest, there will be
00:42:33.760 | advances made elsewhere, without a doubt. And those markets and
00:42:37.720 | those, those places will benefit in an extraordinarily outsized
00:42:43.640 | way. As we just mentioned, there's such extraordinary kind
00:42:47.320 | of economic gain to be realized here, that if we're not, if the
00:42:51.960 | United States is not leading the world, we are going to be
00:42:55.720 | following and we are going to get disrupted, we are going to
00:42:57.560 | lose an incredible amount of value and talent. And so any
00:43:00.880 | attempt at regulation, or slowing down or telling people
00:43:04.400 | that they cannot do things when they can easily hop on a plane
00:43:07.360 | and go do it elsewhere, I think is is fraught with peril.
00:43:10.600 | So you don't agree with regulation, Sacks? Are you on
00:43:13.680 | board with the Chamath plan? Are you on board with
00:43:15.600 | Friedberg? I'll say, I think, I think just like with computer hacking,
00:43:18.240 | it's illegal to break into someone else's computer. It is
00:43:20.800 | illegal to steal someone's personal information. There are
00:43:23.280 | laws that are absolutely simple and obvious and you know, no
00:43:29.120 | nonsense laws, those laws...
00:43:30.600 | It is legal to get rid of 100,000 jobs by making a piece of
00:43:33.880 | software, though.
00:43:34.400 | That's right. And so I think trying to intentionalize how we
00:43:38.840 | do things versus intentionalizing the things that
00:43:42.400 | we want to prohibit happening as an outcome, we can certainly try
00:43:45.360 | and prohibit the things that we don't want to happen as an outcome,
00:43:47.240 | and pass laws and institute governing bodies with authority
00:43:51.480 | to oversee those laws. With respect to things like stealing
00:43:55.480 | data,
00:43:55.840 | but you can jump on a plane and go do it in Mexico, Canada, or
00:43:58.840 | whatever region you get to. Sacks, where do you stand on
00:44:01.120 | this?
00:44:01.280 | Yeah, I'm saying like, there are ways to protect people,
00:44:03.920 | there's ways to protect society by passing laws that make it
00:44:06.920 | illegal to do things as the output or the outcome.
00:44:08.920 | What law do you pass on chaos GPT?
00:44:10.880 | Explain chaos GPT? Give an example, please.
00:44:13.120 | Yeah. Do you want to talk about it real quick?
00:44:14.880 | It's a recursive agent that basically is trying to destroy
00:44:18.480 | itself.
00:44:18.840 | Try to destroy humanity.
00:44:20.880 | Yeah. But I guess by first becoming all powerful and
00:44:23.520 | destroying humanity and then destroying itself.
00:44:25.320 | Yeah, it's a tongue-in-cheek AutoGPT.
00:44:28.800 | But it's not a tongue-in-cheek AutoGPT.
00:44:32.680 | The guy that created it, you know, put it out there
00:44:35.000 | and said, like, he's trying to show everyone to your point,
00:44:37.560 | what intentionality could arise here, which is negative
00:44:40.080 | intentionality.
00:44:40.840 | I think it's very naive for anybody to think that this is
00:44:46.160 | not equivalent to something that could cause harm to you. So for
00:44:50.120 | example, if the prompt is, hey, here is a security leak that we
00:44:54.240 | figured out in Windows. And so why don't you exploit it? So
00:44:57.880 | look, a hacker now has to be very technical. Today, with
00:45:02.000 | these AutoGPTs, a hacker does not need to be technical:
00:45:04.600 | exploit the zero-day exploit in Windows, hack into this plane
00:45:09.400 | and bring it down. Okay, the GPT will do it. So who's going to
00:45:13.560 | tell you that those things are not allowed, who's going to
00:45:15.680 | actually vet that that wasn't allowed to be released in the
00:45:18.680 | wild. So for example, if you work with Amazon and Google and
00:45:22.480 | Microsoft and said, you're going to have to run these things in a
00:45:25.400 | sandbox, and we're going to have to observe the output before we
00:45:28.240 | allow it to run on actual bare metal in the wild. Again, that
00:45:33.200 | seems like a reasonable thing. And it's super naive for people
00:45:36.360 | to think it's a free market. So we should just be able to do
00:45:38.400 | what we want. This will end badly quickly. And when the
00:45:42.240 | first plane goes down, and when the first fucking thing gets
00:45:44.800 | blown up, all of you guys will be like, Oh, sorry,
00:45:47.280 | Sacks, pretty compelling example here by Chamath. Somebody puts
00:45:50.880 | out into the wild chaos GPT, you can go do a Google search for it
00:45:53.600 | and says, Hey, what are the vulnerabilities to the
00:45:56.600 | electrical grid, compile those and automate a series of attacks
00:46:01.760 | and write some code to probe those until you find success in
00:46:06.040 | this mission, you get 100 points and stars every time you do
00:46:09.800 | this. Jason, such a beautiful example, but it's even more
00:46:12.520 | nefarious. It is. Hey, this is an enemy that's trying to hack
00:46:17.920 | our system. So you need to hack theirs and bring it down. You
00:46:20.760 | know, like you can easily trick these GPTs. Right? Yes, they have
00:46:24.320 | no judgment. They have no judgment. And as you said,
00:46:27.360 | they're ruthless in getting to the outcome. Right? So why do we
00:46:32.280 | think all of a sudden, this is not going to happen?
00:46:34.200 | I mean, it's literally the science fiction example, you say,
00:46:36.360 | Hey, listen, make sure no humans get cancer and like, okay, well,
00:46:39.200 | the logical way to make sure no humans get cancer is to kill all
00:46:41.600 | the humans.
00:46:42.080 | But can you just address the point? So what do you think
00:46:44.600 | you're regulating? Are you regulating the code? Here's
00:46:47.120 | what I'm saying, right?
00:46:48.240 | If you look at the FDA, no, you're allowed to make any
00:46:51.000 | chemical drug you want. But if you want to commercialize it,
00:46:53.800 | you need to run a series of trials with highly qualified
00:46:58.360 | measurable data, and you submit it to like-minded experts that
00:47:01.800 | are trained as you are to evaluate the viability of that.
00:47:05.880 | But how long does that take? There are pathways that allow you to
00:47:08.760 | get that done in days under emergency use. And then there
00:47:12.120 | are pathways that can take years depending on how gargantuan the
00:47:15.680 | task is at hand. And all I'm suggesting is having some amount
00:47:20.200 | of oversight is not bad in this specific example.
00:47:24.600 | I get what you're saying. But I'm asking tactically: what
00:47:27.480 | are you overseeing? You're overseeing ChatGPT, you're
00:47:30.640 | overseeing the model, you're doing exactly what? The chips?
00:47:34.080 | Okay, look, I used to run the Facebook platform, we used to
00:47:36.920 | create sandboxes: if you submitted code to us, we would
00:47:40.960 | run it in the sandbox, we would observe it, we would figure out
00:47:43.560 | what it was trying to do. And we would tell you this is allowed
00:47:46.120 | to run in the wild. There's a version of that that Apple does
00:47:49.040 | when you submit an app for review and approval. Google does
00:47:52.720 | it as well. In this case, all the bare metal providers, all the
00:47:56.560 | people that provide GPUs will be forced by the government, in my
00:48:00.320 | opinion, to implement something. And all I'm suggesting is that
00:48:05.200 | it should be a new kind of body that essentially observes that
00:48:09.560 | has PhDs that has people who are trained in this stuff, to
00:48:13.120 | develop the kind of testing and the output that you need to
00:48:17.160 | figure out whether it should even be allowed to run in the
00:48:19.480 | wild on bare metal.
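(To make the review idea above concrete, here is a minimal sketch, assuming a simple allowlist policy, of the "observe the output before you allow it to run in the wild" step Chamath describes: an agent is run in dry-run mode, the actions it proposes are collected into a trace, and it is approved only if every action passes the policy. The action types, domain lists, and trace format are hypothetical illustrations, not Facebook's, Apple's, or any cloud provider's actual review system.)

    # Hypothetical sketch: gate a sandboxed agent's proposed actions before deployment.
    ALLOWED_ACTIONS = {"http_get", "read_file"}          # assumed review policy
    BLOCKED_DOMAINS = {"bank.example", "grid.example"}   # assumed deny-list

    def review_action_trace(trace):
        """Return (approved, reasons) for a list of proposed agent actions."""
        reasons = []
        for action in trace:
            if action["type"] not in ALLOWED_ACTIONS:
                reasons.append("disallowed action type: " + action["type"])
            if action.get("domain") in BLOCKED_DOMAINS:
                reasons.append("touches blocked domain: " + action["domain"])
        return (len(reasons) == 0, reasons)

    # Hypothetical trace captured while the agent ran in the sandbox.
    trace = [
        {"type": "http_get", "domain": "weather.example"},
        {"type": "send_email", "domain": "bank.example"},
    ]
    approved, reasons = review_action_trace(trace)
    print(approved, reasons)  # False, with the flagged reasons for a human reviewer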
00:48:20.280 | Sorry, but you're saying that the model, sorry, I'm
00:48:22.680 | just trying to understand Chamath's point, you're saying that
00:48:24.120 | the models need to be reviewed by this body. And those models,
00:48:28.080 | if they're run on a third-party set of servers, if they're run
00:48:31.760 | in the wild, right, so if you're on a computer on the open
00:48:36.520 | internet,
00:48:36.880 | Freeberg, you cannot just run an app on your computer, you know that,
00:48:39.240 | right? It needs to be connected to the internet, right? Like if
00:48:41.600 | you wanted to run an AutoGPT, it actually crawls the internet,
00:48:45.040 | it actually touches other APIs, it tries to then basically send
00:48:48.760 | a push request, sees what it gets back, parses the JSON,
00:48:52.240 | figures out what it needs to do. All of that is allowed because
00:48:55.600 | it's hosted by somebody, right? That code is running not
00:48:59.120 | locally, but it's
00:49:00.120 | running. So the host becomes
00:49:02.120 | sure, if you want to run it locally, you can do whatever you
00:49:04.680 | want to do.
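(A minimal sketch of the loop Chamath describes above: plan a step, call an API, parse the JSON that comes back, and feed it into the next step. The planner function and the endpoint URL are hypothetical placeholders, not the actual AutoGPT implementation.)

    # Hypothetical illustration of an agent loop: plan -> call API -> parse JSON -> repeat.
    import json
    import urllib.request

    def plan_next_action(goal, observations):
        # Stand-in for a language-model call that decides the next step
        # from the goal and everything observed so far.
        return {"tool": "http_get", "url": "https://example.com/api/data"}

    def run_agent(goal, max_steps=5):
        observations = []
        for _ in range(max_steps):
            action = plan_next_action(goal, observations)
            if action["tool"] != "http_get":
                break  # unknown tool: stop rather than guess
            with urllib.request.urlopen(action["url"], timeout=10) as resp:
                observations.append(json.loads(resp.read().decode("utf-8")))
        return observations

    # run_agent("summarize today's data")  # each pass sends a request and parses the JSON reply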
00:49:05.400 | But evil agents are going to do that, right. So if I'm an evil
00:49:07.800 | agent, I'm not going to go use AWS to run my evil agent, I'm
00:49:10.840 | gonna set up a bunch of servers and connect to the internet.
00:49:14.240 | I could use VPNs. The internet is open, there's open internet
00:49:18.120 | in another rogue country, they can do whatever. I think that
00:49:21.760 | what you're gonna see is that if you, for example, try to VPN and
00:49:24.880 | run it out of, like, Tajikistan back to the United States,
00:49:28.520 | it's not going to take years for us to figure out that we need to
00:49:31.760 | IP block rando shit coming in push and pull requests from all
00:49:35.400 | kinds of IPs that we don't trust anymore, because we don't now
00:49:38.200 | trust the regulatory oversight that they have for code that's
00:49:40.960 | running from those IPs that are not US-domiciled.
00:49:43.400 | Just to let me steel man Chamath's position for a second,
00:49:46.160 | Jason, hold on. I think if what Chamath is
00:49:49.600 | saying is the point of view of Congress, and if Chamath has
00:49:53.480 | this point of view, then there will certainly be people in
00:49:55.320 | Congress that will adopt this point of view. The only way to
00:49:59.040 | ultimately do that degree of regulation and restriction is
00:50:02.600 | going to be to restrict the open internet, it is going to be to
00:50:04.920 | have monitoring and firewalls and safety protocols across the
00:50:07.600 | open internet. Because you can have a set of models running on
00:50:10.200 | any set of servers sitting in any physical location. And as
00:50:13.320 | long as they can move data packets around, they're going to
00:50:15.920 | be able to get up to their nefarious activities.
00:50:17.920 | Let me steel man that for you, Freeberg. I think, yes, you're
00:50:22.280 | correct. The internet has existed in a very open way. But
00:50:25.480 | there are organizations and there are places like the
00:50:28.520 | National Highway Traffic Safety Administration, if I were to
00:50:31.160 | steel man Chamath's position: if you want to manufacture a car,
00:50:34.600 | and you want to make one in your backyard and put it on your
00:50:37.960 | track and on your land up in Napa somewhere, and you don't
00:50:41.400 | want to have brakes on the car and you don't want to have, you
00:50:44.360 | know, a speed limiter or airbags or seatbelts and you want to
00:50:47.360 | drive on the hood of the car, you can do that. But once you
00:50:49.640 | want it to go on the open road, the open internet, you need to
00:50:52.680 | submit it to some safety standards like the NHTSA,
00:50:56.840 | like Tesla has to, a Ford has to. So Sacks, where do you sit
00:51:00.440 | on this? Or is, let's assume that people are going to do very
00:51:05.280 | bad things with very powerful models that are becoming
00:51:09.080 | available. Amazon today said they'll be Switzerland, they're
00:51:11.160 | going to put a bunch of LLMs and other models available on AWS:
00:51:14.480 | Bloomberg's LLM, Facebook's, Google Bard, and of course
00:51:18.640 | ChatGPT, OpenAI and Bing. All this stuff's available to have
00:51:21.800 | access to. Do you need to have some regulation of who has
00:51:25.520 | access to those at scale powerful tools? Should there be
00:51:29.000 | some FDA or NHTSA?
00:51:31.840 | I don't think we know how to regulate it yet. I think it's
00:51:34.000 | too early. And I think the harms that we're speculating about
00:51:36.920 | we're making the AI more powerful than it is. And I
00:51:40.320 | believe it will be that powerful. But I think that it's
00:51:43.040 | premature to be talking about regulating something that
00:51:44.840 | doesn't really exist yet take the chaos GPT scenario. The way
00:51:48.960 | that would play out would be you've got some future
00:51:52.360 | incarnation of auto GPT. And somebody says, Okay, auto GPT, I
00:51:57.240 | want you to be, you know, WMD AI, and figure out how to cause
00:52:02.480 | like a mass destruction event, you know, and then it creates
00:52:05.400 | like a planning checklist and that kind of stuff. So that's
00:52:08.640 | basically the type of scenario we're talking
00:52:11.880 | about. We're not anywhere close to that yet. I mean, the Chaos
00:52:15.400 | GPT is kind of a joke. It doesn't produce anything
00:52:19.560 | beyond a checklist.
00:52:20.400 | I can give an example that would actually be completely plausible.
00:52:24.760 | One of the first things on the chaos GPT checklist was to stay
00:52:27.920 | within the boundaries of the law because it didn't want to get
00:52:29.520 | prosecuted.
00:52:30.880 | Got it. So the person who did that had some sort of good
00:52:33.800 | intent. But I can give you an example right now. That could be
00:52:37.080 | done by chat GPT and auto GPT that could take down large
00:52:40.200 | swaths of society and cause massive destruction. I'm almost
00:52:42.480 | reticent to say it here. Say it. Well, I'll say it, and then maybe
00:52:45.840 | we'll have to delete this. But if somebody created this, and
00:52:48.640 | they said, figure out a way to compromise as many powerful
00:52:52.440 | peoples and as many systems, passwords, then go in there and
00:52:55.960 | delete all their files and turn off as many systems as you can.
00:53:00.240 | ChatGPT and AutoGPT could very easily create phishing accounts,
00:53:03.960 | create billions of websites to create billions of logins, have
00:53:07.760 | people log into them, get their passwords, log into whatever
00:53:10.520 | they do, and then delete everything in their account.
00:53:12.920 | Chaos. You're right, it could be done today. I don't think it could be done
00:53:16.960 | today. Simpler than this: how about you phish a
00:53:19.120 | website? Yeah, pieces of it can be created today. But you're
00:53:22.880 | accelerating the progress.
00:53:25.000 | Yeah, but you can automate phishing now, in hours or days.
00:53:28.120 | Yeah, exactly. And by the way, it's accelerating in weeks.
00:53:31.280 | Why don't you just spoof the bank accounts and just steal the
00:53:34.160 | money like that's even simpler, like people will do this stuff
00:53:37.160 | because they're trying to do it today. Holy cow, they just have
00:53:39.640 | a more efficient way to solve the problem about bank accounts.
00:53:41.920 | So number one, this is a tool. And if people use a tool in
00:53:45.680 | nefarious ways, you prosecute them. Number two, the platforms
00:53:49.320 | that are commercializing these tools do have trust and safety
00:53:52.760 | teams. Now in the past, trust and safety has been a euphemism
00:53:56.760 | for censorship, which it shouldn't be. But you know, open
00:54:00.080 | AI has a safety team and they try to detect when people are
00:54:03.480 | using their tech in a nefarious way and they try to prevent it.
00:54:07.040 | Do you trust them? Well, no, not on censorship. But I think that
00:54:12.000 | there are probably a million people using it and
00:54:13.720 | they're policing it. Are you willing to abdicate your
00:54:18.680 | societal responsibility to OpenAI to do the trust and safety?
00:54:22.040 | What I'm saying is I'd like to see how far we get in
00:54:25.880 | terms of the system.
00:54:27.640 | Yeah. So you want to see the mistakes, you want to see where
00:54:30.240 | the mistakes are, and how bad the mistakes are.
00:54:32.400 | I'm saying it's still very early to be imposing regulation, we
00:54:34.920 | don't even know what to regulate. So I think we have to
00:54:37.160 | keep tracking this to develop some understanding of how it
00:54:40.880 | might be misused, how the industry is going to develop
00:54:43.560 | safety guardrails. Okay. And then you can talk about
00:54:47.920 | regulation. Look, you create some new FDA right now. Okay.
00:54:50.960 | First of all, we know what would happen. Look at the drug
00:54:53.600 | process. As soon as the FDA got involved, it slowed down massively.
00:54:56.720 | Now it takes years, many years to get a drug approved.
00:54:59.480 | Appropriately. So yes, but at least with a drug, we know what
00:55:04.160 | the gold standard is, you run a double blind study to see
00:55:08.000 | whether it causes harm or whether it's beneficial. We
00:55:10.960 | don't know what that standard is for AI yet. We have no idea.
00:55:14.200 | You can study it in AI. What? No, we don't. Have somebody review
00:55:19.320 | the code? You have two instances in a sandbox, use the code to do
00:55:22.360 | what? Oh, Sacks. Listen. Take AutoGPT. It's benign. I mean,
00:55:28.240 | my friend used it to book a wine tasting. So who's going to
00:55:33.520 | review that code and then speculate and say, Oh, well,
00:55:36.320 | in 99.9% of cases it's perfectly benevolent and fine
00:55:41.760 | and innocuous, but I can fantasize about some cases someone might
00:55:45.600 | do. How are you supposed to resolve that? Very simple.
00:55:48.360 | There are two types of regulation that occur in any
00:55:50.720 | industry. You can do what the movie industry did, which is
00:55:53.240 | they self regulate and they came up with their own rating system.
00:55:55.840 | Or you can do what happens with the FDA and what happens with
00:55:59.480 | cars, which is an external government based body. I think
00:56:02.800 | now is the time for self regulation, so that we avoid the
00:56:06.080 | massive heavy hand of government having to come in here. But
00:56:09.800 | these tools can be used today to create massive harm. They're
00:56:12.560 | moving at a pace we just said in the first half of the show that
00:56:15.520 | none of us have ever seen every 48 hours something drops. That
00:56:18.840 | is mind blowing. That's never happened before. And you can
00:56:22.280 | take these tools. And in the one example that Chamath and I came
00:56:26.920 | up with off the top of our heads in 30 seconds, you could create
00:56:30.000 | phishing sites, compromise people's bank accounts, take all
00:56:32.960 | the money out, delete all the files and cause chaos on a scale
00:56:36.080 | that has never been possible by a series of Russian hackers or
00:56:40.400 | Chinese hackers working in a boiler room. This can scale and
00:56:44.520 | that is the fundamental difference here. And I didn't
00:56:46.800 | think I would be sitting here steel manning Chamath's argument.
00:56:49.120 | I think humans have a limited ability to compound. I think
00:56:52.040 | people do not understand compound interest. And this is a
00:56:54.360 | perfect example, where when you start to compound technology at
00:56:57.480 | the rate of 24 hours, or 48 hours, which we've never really
00:57:00.920 | had to acknowledge, most people's brains break, and they
00:57:03.520 | don't understand what six months from now looks like. And six
00:57:06.480 | months from now, when you're compounding at 48, or 72 hours,
00:57:10.160 | is like 10 to 12 years in other technology solutions. This is
00:57:15.080 | compounding. This is different because of the
00:57:16.960 | compounding. I agree with that, the pace of evolution is very
00:57:19.880 | fast. We are on a bullet train to something. And we don't know
00:57:22.880 | exactly what it is. And that's disconcerting. However, let me
00:57:25.960 | tell you what would happen if we create a new regulatory body
00:57:28.160 | like the FDA to regulate this, they would have no idea how to
00:57:32.000 | arbitrate whether a technology should be approved or not.
00:57:34.640 | Development will basically slow to a crawl just like drug
00:57:37.880 | development. There is no double blind standard. I agree. What
00:57:40.840 | regulation can we do? What self regulation can we do? There is
00:57:43.600 | no double blind standard in AI that everyone can agree on right
00:57:47.200 | now to know whether something should be approved. And what's
00:57:49.720 | going to happen is the thing that's made software development
00:57:52.400 | so magical and allowed all this innovation over the last 25
00:57:56.160 | years is permissionless innovation. Any developer, any
00:58:01.000 | dropout from a university can go create their own project, which
00:58:04.720 | turns into a company. And that is what has driven all the
00:58:08.320 | innovation and progress in our economy over the last 25 years.
00:58:11.480 | So you're going to replace permissionless innovation with
00:58:13.840 | going to Washington to go through some approval process.
00:58:16.200 | And it will be the politically connected, it'll be the big
00:58:19.320 | donors who get their projects approved. And the next Mark
00:58:22.720 | Zuckerberg, who's trying to do his little project in a dorm
00:58:24.840 | room somewhere will not know how to do that will not know how to
00:58:28.200 | compete in that highly political process.
00:58:30.560 | Come on, I think you're mixing a bunch of things together. So
00:58:32.720 | first of all, permissionless innovation happens today in
00:58:36.640 | biotech as well. It's just that, as Jason said, when you
00:58:39.720 | want to put it on the rails of society, and make it available
00:58:43.120 | to everybody, you actually have to go and do something
00:58:46.200 | substantive. In the negotiation of these drug approvals, it's
00:58:50.200 | not some standardized thing, you actually sit with the FDA, and
00:58:52.760 | you have to decide what are our endpoints? What is the
00:58:54.920 | mechanism of action? And how will we measure the efficacy of
00:58:58.320 | this thing? The idea that you can't do this today in AI is
00:59:01.360 | laughable. Yes, you can. And I think that smart people, so for
00:59:04.280 | example, if you pit DeepMind's team versus OpenAI's team, to
00:59:09.280 | both agree that a model is good and correct, I bet you they
00:59:12.040 | would find a systematic way to test that it's fine.
00:59:15.280 | I just want to point out, okay, so basically, in order to do
00:59:18.760 | what you're saying, okay, this entrepreneur, who just dropped
00:59:22.560 | out of college to do their project, they're gonna have to
00:59:24.240 | learn how to go sit with regulators, have a conversation
00:59:27.160 | with them, go through some complicated approval process.
00:59:29.840 | And you're trying to say that that won't turn into a game of
00:59:33.200 | political connections. Of course it will, of course it will.
00:59:38.360 | Which is self regulation.
00:59:39.640 | Yeah, well, let's get to that. Hold on a second. And let's look
00:59:42.880 | at the drug approval process. If you want to create a drug
00:59:45.760 | company, you need to raise hundreds of millions of dollars.
00:59:48.520 | It's incredibly expensive. It's incredibly capital intensive.
00:59:51.640 | There is no drug company that is two guys in their garage, like
00:59:56.960 | many of the biggest
00:59:59.320 | companies in Silicon Valley started.
01:00:01.040 | That is because you're talking about taking a chemical or
01:00:04.800 | biological compound and injecting it into some hundreds or
01:00:08.240 | thousands of people who are stratified by race, gender, and age,
01:00:13.680 | all around the world, or at a
01:00:17.000 | minimum all around the country. You're not talking about that
01:00:19.720 | here, David, I think that you could have a much simpler and
01:00:22.680 | cheaper way where you have a version of the internet that's
01:00:26.760 | running in a huge sandbox someplace that's closed off from
01:00:29.360 | the rest of the internet, and another version of the internet
01:00:31.640 | that's closed off from everything else as well. And you
01:00:33.920 | can run on a parallel path, as it is with this agent, and you
01:00:37.800 | can easily, in my opinion, actually figure out whether this
01:00:41.080 | agent is good or bad, and you can probably do it in weeks. So I
01:00:44.880 | actually think the approvals are actually not that complicated.
01:00:47.600 | And the reason to do it here is because I get that it may cause
01:00:52.200 | a little bit more friction for some of these mom and pops. But
01:00:56.600 | if you think about the societal consequences of
01:01:01.640 | letting the worst case outcomes happen, the AGI type outcomes
01:01:05.560 | happen, I think those are so bad. They're worth slowing some
01:01:10.320 | folks down. And I think like, just because you want to, you
01:01:13.400 | know, buy groceries for $100, you should be able to do it, I
01:01:16.400 | get it. But if people don't realize and connect the dots
01:01:19.800 | between that and bringing airplanes down, then that's
01:01:22.480 | because they don't understand what this is capable of.
01:01:24.280 | I'm not saying we're never going to need regulation. What I'm
01:01:27.000 | saying is, it's way too early. We don't even know what we're
01:01:29.920 | regulating. I don't know what the standard would be. And what
01:01:32.800 | we will do by racing to create a new FDA is destroying American
01:01:36.240 | innovation in the sector, and other countries will not slow
01:01:38.920 | down. They will beat us to the punch here.
01:01:40.840 | Got it. I think there's a middle ground here of self regulation
01:01:45.400 | and thoughtfulness on the part of the people who are providing
01:01:47.840 | these tools at scale. To give just one example here, and this
01:01:51.720 | tweet is from five minutes ago. So to look at the pace of this
01:01:55.200 | five minutes ago, this tweet came out. A developer, an
01:01:59.200 | AI developer, says: AI agents continue to amaze. My GPT-4
01:02:02.960 | coding agent learned how to build apps with authenticated
01:02:05.840 | users; it can build and design a web app, create a back end,
01:02:08.600 | handle auth, logins, upload code to GitHub and deploy. He
01:02:14.640 | is literally deploying websites while we were talking. Now if
01:02:18.440 | this website was a phishing app, or the one that Chamath is
01:02:23.000 | talking about, he could make a gazillion different versions of
01:02:26.760 | Bank of America, Wells Fargo, etc, then find everybody on the
01:02:30.480 | internet's email, then start sending different spoofing
01:02:32.840 | emails, determine which spoofing emails work, iterate on those,
01:02:36.200 | and create a global financial collapse. Now this sounds
01:02:38.520 | insane, but it's happening right now. People get hacked every day
01:02:42.200 | at 1, 2, 3%. Sacks, fraud is occurring right now in the low single
01:02:47.600 | digit percentages; identity theft is happening in the low single
01:02:50.480 | digit percentages. This technology is moving so fast
01:02:54.320 | that bad actors could 10x that relatively easily. So if 10% of us
01:02:59.400 | were to be hacked and have our credit cards attacked, this could
01:03:01.680 | create chaos. I think self regulation is the solution. I'm
01:03:05.720 | the one who brought up self regulation. I said it first, I
01:03:08.400 | brought it up first, I get credit. No, good.
01:03:10.040 | It's not about credit. I was talking
01:03:13.480 | about it because you interrupted.
01:03:15.960 | You talked for eight minutes. So if you had a point to make you
01:03:17.920 | should have got it in in the eight minutes.
01:03:19.040 | Oh my god, you guys kept interrupting me. Go ahead. What I
01:03:22.440 | said is that there are trust and safety teams at these big AI
01:03:26.840 | companies, these big foundation model companies like open AI.
01:03:30.600 | Like I said, in the past, trust and safety has been a euphemism
01:03:34.400 | for censorship. And that's why people don't trust it. But I
01:03:37.520 | think it would be appropriate for these platform companies to
01:03:40.840 | apply some guardrails on how their tools can be used. And
01:03:44.120 | based on everything I know they're doing that. So
01:03:47.200 | websites on the open web with chat GP four, and he's going to
01:03:50.840 | have it do it automated, you're basically postulating
01:03:53.360 | capabilities that don't yet exist.
01:03:56.040 | I just tweeted it, the guy's doing it. He's got a video of himself
01:03:58.640 | doing it on the web. What do you think? That's a far cry from
01:04:01.640 | basically running some phishing expedition that's going
01:04:05.400 | to bring down the entire banking system.
01:04:07.240 | Literally, a phishing site and a site with OAuth are
01:04:10.640 | the same thing. Go ahead, Freeberg.
01:04:11.880 | I think that that guy is doing something illegal if he's
01:04:16.960 | hacking into computers, into people's emails and bank
01:04:20.680 | accounts. That's illegal. You're not allowed to do that. And so
01:04:24.520 | that action breaks the law, that person can be prosecuted for
01:04:28.480 | doing that. The tooling that one might use to do that can be used
01:04:33.200 | in a lot of different ways. Just like you could use Microsoft
01:04:37.200 | Word to forge letters, just like you could use Microsoft Excel to
01:04:42.200 | create fraudulent financial statements. I think that the
01:04:44.600 | application of a platform technology needs to be
01:04:48.520 | distinguished from the technology itself. And while we
01:04:53.160 | all feel extraordinarily fearful because of the unbelievable
01:04:56.040 | leverage that these AI tools provide, again, I'll remind you
01:05:00.480 | that this chat GPT-4 or this GPT-4 model, by some estimates,
01:05:05.880 | is call it a few terabytes, you could store it on a hard drive,
01:05:08.280 | or you can store it on your iPhone. And you could then go
01:05:11.120 | run it on any set of servers that you could go set up
01:05:13.680 | physically anywhere. So you know, it's a little bit naive to
01:05:17.160 | say we can go ahead and, you know, regulate platforms and we
01:05:20.640 | can go regulate the tools. Certainly, we should continue to
01:05:23.720 | enforce and protect ourselves against nefarious actors using,
01:05:27.920 | you know, new tools in inappropriate and illegal ways.
01:05:30.800 | You know, I also think that there's a moment here that we
01:05:35.880 | should all kind of observe just how quickly we want to shut
01:05:40.840 | things down, when, you know, they take away what feels like
01:05:46.320 | the control that we all have from one day to the next. And,
01:05:51.480 | you know, the real kind of sense of fear that seems
01:05:56.960 | to be quite contagious for a large number of people that have
01:05:59.800 | significant assets or significant things to lose is
01:06:05.120 | that, you know, tooling that's, you know, creating
01:06:08.440 | entirely new, disruptive systems and models for business
01:06:11.400 | and economics, and opportunity for so many, needs to be
01:06:16.400 | regulated away to minimize, you know, what we claim to be some
01:06:20.520 | potential downside when we already have laws that protect
01:06:22.920 | us on the other side. So, you know, I just kind of want to
01:06:28.120 | also consider that this set of tools creates extraordinary
01:06:31.800 | opportunity, we gave one sort of simple example about the
01:06:34.640 | opportunity for creators, but we talked about how new business
01:06:37.760 | models, new businesses can be started with one or two people,
01:06:40.720 | you know, entirely new tools can be built with a handful of
01:06:43.840 | people, entirely new businesses, this is an incredible economic
01:06:47.600 | opportunity. And again, if the US tries to regulate it, or the
01:06:51.440 | US tries to come in and stop the application of models in general
01:06:54.360 | or regulate models in general, you're certainly going to see
01:06:57.040 | those models continue to evolve and continue to be
01:06:59.280 | utilized in very powerful ways that are going to be advantageous to
01:07:03.720 | places outside the US, there's over 180 countries on earth,
01:07:06.960 | they're not all going to regulate together. It's been
01:07:09.280 | hard enough to get any sort of coordination around financial
01:07:12.600 | systems, to get coordination around climate change, to get
01:07:15.360 | coordination around anything on a global basis. To try and get
01:07:18.720 | coordination around the software models that are being developed,
01:07:22.040 | I think, is pretty naive.
01:07:23.680 | You don't want to have a global organization, I think you need
01:07:25.960 | to have a domestic organization that protects us. And I think
01:07:29.520 | Europe will have their own thing. Again, FDA versus EMA;
01:07:33.200 | Canada has its own, Japan has its own, China has its own, and they
01:07:38.080 | have a lot of overlap and a lot of commonality in the guard
01:07:41.000 | rails they use. And I think that's what's going to happen
01:07:43.040 | here.
01:07:43.200 | This will be beneficial only for political insiders who will
01:07:45.880 | basically be able to get their projects and their apps approved
01:07:48.560 | with a huge deadweight loss for the system because innovation
01:07:50.880 | will completely slow down. But to build on Freeberg's point,
01:07:53.840 | which is that we have to remember that AI won't just be
01:07:58.840 | used by nefarious actors, it'll be used by positive actors. So
01:08:02.720 | there will be new tools that law enforcement will be able to use.
01:08:05.680 | And if somebody is creating phishing sites at scale, they're
01:08:08.480 | going to be probably pretty easy for you know, law enforcement
01:08:11.880 | AIs to detect. So let's not forget that there'll be
01:08:15.040 | copilots written for our law enforcement authorities, they'll
01:08:18.520 | be able to use that to basically detect and fight crime. And a
01:08:21.520 | really good example of this was in the crypto space. We saw this
01:08:24.360 | article over the past week that Chainalysis has figured out
01:08:28.360 | how to basically track, you know, illicit Bitcoin
01:08:31.040 | transactions. And there's now a huge number of prosecutions
01:08:34.240 | that are happening of illegal use of Bitcoin. And if you go
01:08:37.920 | back to when Bitcoin first took off, there was a lot of
01:08:42.080 | conversations around Silk Road. And the only thing that Bitcoin
01:08:44.920 | was good for was basically illegal transactions,
01:08:47.720 | blackmailing, drug trafficking, and therefore we had to stop
01:08:51.640 | Bitcoin. Remember, that was the main argument. And the counter
01:08:55.240 | argument was that, well, no, Bitcoin, like any technology, can
01:08:59.240 | be used for good or bad. However, there will be
01:09:01.640 | technologies that spring up to combat those nefarious or
01:09:06.040 | illicit use cases. And sure enough, you had a company like
01:09:08.680 | Chainalysis come along. And now it's been used by law
01:09:11.160 | enforcement to basically crack down on the illicit use of
01:09:14.680 | Bitcoin. And if anything, it's cleaned up the Bitcoin community
01:09:17.920 | tremendously. And I think it's dispelled this idea that the
01:09:21.160 | only thing you'd use Bitcoin for is black market transactions.
01:09:24.800 | Quite the contrary. I think you'd be really stupid now to
01:09:27.800 | use Bitcoin in that way. It's actually turned Bitcoin into
01:09:30.840 | something of a honeypot now. Because if you used it for
01:09:34.040 | nefarious transactions, your transactions are recorded in the
01:09:36.920 | blockchain forever, just waiting for chain analysis to find them.
01:09:40.320 | So again, using Bitcoin to do something illegal would be really
01:09:43.280 | stupid. I think in a similar way, you're going to see self
01:09:46.680 | regulation by these major AI platform companies combined with
01:09:50.400 | new AI tools that spring up to help combat
01:09:54.840 | the nefarious uses. And we need to let those forces play out. I'm
01:09:59.560 | not saying regulate never, I'm just saying we need to let those
01:10:02.600 | forces play out before we leap to creating some new regulatory
01:10:06.920 | body that doesn't even understand what its mandate
01:10:09.160 | or mission is supposed to be.
01:10:10.000 | The Bitcoin story is hilarious, by the way.
01:10:11.760 | Oh, my gosh, the Journal story. It's
01:10:15.000 | unbelievable. Pretty epic. It took years. But basically, this
01:10:18.440 | guy was buying blow on Silk Road. And he deposited his
01:10:23.160 | Bitcoin. And then when he withdrew it, there was a bug
01:10:26.560 | that gave him twice as many Bitcoin. So he kept creating
01:10:29.000 | more accounts putting more money into Silk Road and getting more
01:10:31.800 | Bitcoin out. And then years later, the authorities figured
01:10:35.640 | this out again with you know, chain analysis type things.
01:10:38.520 | Look at James Zhong over there. Look at him.
01:10:40.520 | James Zhong, the accused, had a Lamborghini, a Tesla, a lake
01:10:45.000 | house, and was living his best life apparently, when the feds
01:10:50.760 | knocked on his door and found the digital keys to his crypto
01:10:54.400 | fortune in a popcorn tin in his bathroom, and in a safe in his
01:11:00.080 | basement floor. So they have
01:11:03.280 | a phone. The reason I posted this was I was like,
01:11:05.840 | what if this claim that you can have all these anonymous
01:11:10.640 | transactions actually fooled an entire market? Because it looks
01:11:16.600 | like that this anonymity has effectively been reverse
01:11:20.760 | engineered, and there's no anonymity at all. And so what
01:11:24.280 | Bitcoin is quickly becoming is like the most singular honeypot
01:11:29.120 | of transactional information that's complete and available in
01:11:33.360 | public. And I think what this article talks about is how
01:11:36.280 | companies like Chainalysis, and others have worked now, for
01:11:40.400 | years, almost a decade, with law enforcement to be able to map
01:11:44.560 | all of it. And so now every time money goes from one Bitcoin
01:11:49.280 | wallet to another, they effectively know the sender and
01:11:51.960 | the recipient.
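(One heuristic commonly described in public writeups on blockchain analysis, and a plausible piece of the mapping Chamath is describing, is common-input-ownership clustering: addresses that co-sign inputs of the same transaction are grouped as one presumed owner. A minimal sketch follows; the transaction data is hypothetical, and real tooling works from the full public ledger plus many additional heuristics.)

    # Minimal sketch of common-input-ownership clustering over hypothetical data.
    from collections import defaultdict

    class UnionFind:
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[rb] = ra

    def cluster_addresses(transactions):
        """Group addresses that ever appear together as inputs of one transaction."""
        uf = UnionFind()
        for tx in transactions:
            first, *rest = tx["inputs"]
            for addr in rest:
                uf.union(first, addr)
        clusters = defaultdict(set)
        for addr in uf.parent:
            clusters[uf.find(addr)].add(addr)
        return list(clusters.values())

    # Hypothetical example: a shared input links three addresses to one presumed owner.
    txs = [
        {"inputs": ["addr_A", "addr_B"], "outputs": ["addr_X"]},
        {"inputs": ["addr_B", "addr_C"], "outputs": ["addr_Y"]},
    ]
    print(cluster_addresses(txs))  # one cluster: {'addr_A', 'addr_B', 'addr_C'}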
01:11:52.480 | And I just want to make one quick correction here. It wasn't
01:11:55.320 | actually exactly popcorn. It was Cheetos spicy flavored popcorn.
01:12:00.400 | And there's the tin where he had a motherboard of a computer.
01:12:05.840 | Is there a chance that this project was actually
01:12:09.400 | introduced by the government? I mean, there's been reports of
01:12:12.440 | tour on anonymous or network that the CIA had their hands all
01:12:17.000 | over tour. to our if you don't know it, which is an anonymous
01:12:20.280 | like multi relay, peer to peer web browsing system, and people
01:12:25.680 | believe it's a CIA honeypot, an intentional trap for criminals
01:12:31.280 | to get themselves caught up in. All right, as we wrap here, what
01:12:36.920 | an amazing discussion, my lord, I never thought I would
01:12:39.240 | be. I want to say one thing. Yes.
01:12:42.360 | We saw that someone was arrested for the murder of Bob Lee.
01:12:47.600 | That's what I was reading about this morning. Yeah, which turns out
01:12:50.640 | that the report of the SFPD's arrest is that it's someone that
01:12:54.760 | he knew that also works in the tech industry, someone we possibly
01:12:57.440 | know, right. So still breaking news. Yes, possibly. But I want
01:13:01.960 | to say two things. One, obviously, based on this arrest
01:13:04.840 | and the storyline, it's quite different than what we all
01:13:08.080 | assumed it to be, which was some sort of homeless robbery type
01:13:11.480 | moment that has become all too commonplace in SF. It's a
01:13:16.080 | commentary for me on two things. One is how quick we all were to
01:13:20.240 | kind of judge and assume that, you know, a homeless robber type
01:13:24.920 | person would do this in SF, which I think speaks to the
01:13:28.120 | condition in SF right now, also speaks to our conditioning that
01:13:32.080 | we all kind of lacked or didn't even want to engage in a
01:13:35.520 | conversation that maybe this person was murdered by someone
01:13:37.960 | that they knew. Because we wanted to kind of very quickly
01:13:42.200 | fill our own narrative about how bad SF is. And that's just
01:13:45.560 | something that I really felt when I read this this morning, I
01:13:47.520 | was like, man, like, I didn't even consider the possibility
01:13:50.280 | that this guy was murdered by someone that he knew, because I
01:13:54.040 | am so enthralled right now by this narrative that SF is so
01:13:57.000 | bad, and it must be another data point that validates my point of
01:13:59.680 | view on SF. So you know, I kind of want to just acknowledge that
01:14:02.680 | and acknowledge that we all kind of do that right now. But I do
01:14:05.440 | think it also does, in fact, unfortunately speak to how bad
01:14:08.160 | things are in SF, because we all have. We've all had these
01:14:11.240 | experiences of feeling like we're in danger and under
01:14:13.520 | threat all the time we're walking around in SF, in so many
01:14:16.560 | parts of San Francisco, I should say, where things feel like
01:14:19.360 | they've gotten really bad. I think both things can be true
01:14:22.760 | that we can kind of feel biased and fill our own narrative by
01:14:28.240 | kind of latching on to our assumption about what something
01:14:30.800 | tells us. But it also tells us quite a lot about what is
01:14:34.040 | going on. So I just wanted to make that point.
01:14:36.760 | In fairness, and I think it's fine for you to make that point.
01:14:39.000 | I am extremely vigilant on this program to always say when
01:14:42.800 | something is breaking news, withhold judgment, whether it's
01:14:45.200 | the Trump case or Jesse Smollett or anything in between January
01:14:48.480 | 6, let's wait until we get all the facts. And in fact, quote
01:14:51.680 | from Sacks, we don't know exactly what happened yet.
01:14:55.720 | Correct.
01:14:56.920 | Literally, Sacks started with that. We do that every fucking
01:15:01.200 | time on this program. We know when there's breaking news to
01:15:04.680 | withhold judgment, but you can also know two things can be
01:15:08.760 | true. A tolerance for ambiguity is necessary.
01:15:11.640 | But I'm saying I didn't even do that. As soon as I heard this, I
01:15:14.360 | was like, Oh,
01:15:15.200 | assumption. But you know, David, that is a fine assumption to
01:15:19.120 | make. That's a fine
01:15:20.520 | assumption. It's a logical assumption. Listen,
01:15:22.280 | you make that assumption for your own protection.
01:15:24.680 | We got all these reporters who are basically propagandists
01:15:27.920 | trying to claim that crime is down in San Francisco. They're
01:15:29.840 | all basically seeking comment from me this morning, sending
01:15:32.680 | emails or trying to gotcha us because we basically talked
01:15:36.320 | about the Bob Lee case in that way. Listen, we said that we
01:15:41.680 | didn't know what happened. But if we were to bet, at least what
01:15:44.880 | I said is I bet this case, it looks like a lot like the
01:15:47.440 | Brianna Kupfer case. That was logical. That's not
01:15:50.160 | conditioning or bias. That's logic. And you need to look at
01:15:53.760 | what else happened that week. Okay, so just the same week that
01:15:58.120 | Bob Lee was killed, let me give you three other examples of
01:16:00.960 | things that happened in Gotham City, aka San Francisco. So
01:16:04.600 | number one, former fire commissioner Don Carmignani was
01:16:08.960 | beaten within an inch of his life by a group of homeless
01:16:11.760 | addicts in the marina. And one of them was interviewed in terms
01:16:16.520 | of why it happened. And basically, Don came down from
01:16:19.520 | his mother's house and told them to move off his mother's front
01:16:22.640 | porch, because they were obstructing her ability to get
01:16:25.160 | in and out of her apartment. They interpreted that as
01:16:27.240 | disrespect. And they beat him with a tire iron or a metal
01:16:30.640 | pipe. And one of the hoodlums who was involved in this
01:16:34.560 | apparently admitted this. Yeah, play the video.
01:16:37.040 | Somebody over the head like that and attack him?
01:16:39.600 | He was, he was disrespectful.
01:16:43.680 | Who was disrespectful?
01:16:45.360 | There was a big old bald-headed old man.
01:16:49.320 | Don, Don. So he was being disrespectful. And then, but is
01:16:53.080 | that enough to beat him up?
01:16:55.400 | Yeah, sometimes.
01:16:57.240 | Oh, my Lord. I mean,
01:16:58.800 | so this is case number one. And apparently in the reporting on
01:17:02.760 | that person who was just interviewed, he's been in the
01:17:04.760 | marina kind of terrorizing people, maybe not physically,
01:17:07.600 | but verbally. So you have, you know, bands of homeless people
01:17:12.880 | encamped in front of people's houses. Don Carmignani gets
01:17:16.560 | beaten within an inch of his life. You then had the case of
01:17:19.520 | the Whole Foods store on Market Street shut down in San
01:17:23.000 | Francisco. And this was not a case of shoplifting like some of
01:17:26.280 | the other store closings we've seen. They said they were
01:17:28.600 | closing the store because they could not protect their
01:17:31.320 | employees. The bathrooms were filled with needles and pipes
01:17:35.640 | that were drug paraphernalia. You had drug addicts going in
01:17:38.440 | there using it they were engaging in altercations with
01:17:41.400 | store employees. And Whole Foods felt they had to close the
01:17:44.160 | store because again, they cannot protect their employees. Third
01:17:47.760 | example, Board of Supervisors had to disband their own meeting
01:17:51.520 | because their internet connection got vandalized. The
01:17:55.520 | fiber for the cable connection to provide their internet got
01:17:59.560 | vandalized. So they had to basically disband their meeting.
01:18:01.600 | Aaron Peskin was the one who announced this, and you saw
01:18:04.240 | the response to this. Yeah, my retweeting him went viral. There
01:18:08.760 | were lots of people said, Yeah, I've got a small business and
01:18:11.360 | the fiber, the copper wire, whatever was vandalized. And in
01:18:15.240 | a lot of cases, I think it's basically drug addicts stealing
01:18:17.320 | whatever they can. They steal $10 of copper wire, sell that to
01:18:20.760 | get a hit. And it causes $40,000 of property damage.
01:18:24.540 | Here's the insincerity, Sacks. Literally, the proper response
01:18:29.040 | when there's violence in San Francisco is, hey, we need to
01:18:31.920 | make this place less violent. Is there a chance that it could be
01:18:34.740 | people who know each other? Of course, that's inherent in any
01:18:37.680 | crime that occurs, and there'll be time to investigate it. But
01:18:40.800 | literally, the press is now using this as a moment to say
01:18:44.240 | there's no crime in San Francisco, or that we're over-
01:18:47.040 | reacting. And like, I just had the New York Times email me
01:18:49.500 | during the podcast. Heather Knight from the Chronicle, the San
01:18:52.980 | Francisco Chronicle: in light of the Bob Lee killing appearing
01:18:56.160 | to be an interpersonal dispute (she still doesn't know, right?
01:18:58.760 | We don't have all the facts) with another tech leader, do you
01:19:01.320 | think the tech community jumped to conclusions? Why are so many
01:19:03.920 | tech leaders painting San Francisco as a dystopian
01:19:06.880 | hellscape when the reality is more nuanced? I
01:19:10.620 | think it's a little typo there. Yes. Of course, the reality is
01:19:15.160 | nuanced. Of course, it's a hellscape. Walk down the
01:19:18.840 | street. Heather, can I give you a theory, please? I think it was
01:19:24.580 | most evident in the way that Ilan dismantled and manhandled
01:19:29.940 | the BBC reporter. Oh my god, that was brutal. This is a small
01:19:34.180 | microcosm of what I think media is. So I used to think that
01:19:38.900 | media had an agenda. I actually now think that they don't
01:19:44.300 | particularly have an agenda, other than to be relevant,
01:19:48.620 | because they see waning relevance. And so I think what
01:19:52.980 | happens is whenever there are a bunch of articles that tilt the
01:19:56.700 | pendulum into a narrative, they all of a sudden become very
01:20:01.860 | focused on refuting that narrative. And even if it means
01:20:07.300 | they have to lie, they'll do it. Right. So, you know, I think for
01:20:10.980 | months and months, I think people have seen that the
01:20:13.500 | quality of the discourse on Twitter became better and
01:20:16.020 | better. Elon is doing a lot with bots and all of this stuff,
01:20:19.220 | cleaning it up. And this guy had to try to establish the
01:20:23.780 | counter narrative, and was willing to lie in order to do
01:20:26.780 | it, then he was dismantled. Here, you guys, I don't have a
01:20:30.340 | bone to pick so much with San Francisco. I think I've been
01:20:32.660 | relatively silent on this topic. But you guys, as residents and
01:20:36.180 | former residents, I think have a vested interest in the quality
01:20:38.700 | of that city. And you guys have been very vocal. But I think
01:20:41.620 | that you're not the only ones Michelle Tandler, you know,
01:20:44.220 | Schellenberger, there's a bunch of smart, thoughtful people
01:20:47.260 | who've been beating this drum, Gary tan. And so now I think
01:20:52.820 | reporters don't want to write the end plus first article
01:20:55.780 | saying that San Francisco is a hellscape. So they have to take
01:20:59.140 | the other side. And so now they're going to go and kick up
01:21:02.100 | the counter narrative. And they'll probably dismantle the
01:21:05.340 | truth and kind of redirect it in order to do it. So I think that
01:21:09.380 | what you're seeing is, they'll initially tell a story, but
01:21:12.820 | what then there's too much of the truth, they'll go to the
01:21:15.220 | other side, because that's the only way to get clicks and be
01:21:17.180 | seen. So I think that that's what you guys are a part of
01:21:19.940 | right now,
01:21:20.300 | they are in the business of protecting the narrative. But I
01:21:22.980 | do think there's a huge ideological component to the
01:21:24.900 | narrative, both in the Elon case where they're trying to claim
01:21:28.420 | that there was a huge rise in hate speech on Twitter. The
01:21:30.740 | reason they're saying that is because they want Twitter to
01:21:34.020 | engage in more censorship. That's the ideological agenda
01:21:36.780 | here. The agenda is this radical agenda of decarceration, they
01:21:41.740 | actually believe that more and more people should be let out of
01:21:44.260 | prison. And so therefore, they have an incentive to deny the
01:21:50.260 | existence of crime in San Francisco and the rise in crime
01:21:53.580 | in San Francisco. If you poll most people in San Francisco,
01:21:56.500 | large majorities of San Franciscans believe that crime is
01:21:58.660 | on the rise, because they can see it, they hear it. And what I
01:22:01.420 | would say is, look, I think there's a pyramid of activity, a
01:22:05.700 | pyramid of criminal or anti social behavior in San
01:22:08.940 | Francisco, that we can all see the base level is you've got a
01:22:12.660 | level of chaos on the streets, where you have open air drug
01:22:16.220 | markets, people doing drugs, sometimes you'll see, you know,
01:22:20.060 | a person doing something disgusting, you know, like
01:22:22.420 | people defecating on the streets or even worse, then there's like
01:22:25.860 | a level up where they're chasing after you, or, you know,
01:22:28.060 | harassing you people have experienced that I've
01:22:30.300 | experienced that, then there's a level up where there's petty
01:22:33.460 | crime, your car gets broken into, or something like that,
01:22:36.660 | then there's the level where you get mugged. And then finally,
01:22:39.860 | the top of the pyramid is that there's a murder. And it's true
01:22:43.300 | that most of the time, the issues don't go all the way to
01:22:46.420 | the top of the pyramid where someone is murdered. Okay, but
01:22:49.660 | that doesn't mean there's not a vast pyramid underneath that, of
01:22:53.540 | basically quality of life issues. And I think this term
01:22:56.660 | quality of life was originally used as some sort of way to
01:23:02.140 | minimize the behavior that was going on saying that they
01:23:05.220 | weren't really crimes, we shouldn't worry about them. But
01:23:08.100 | if anything, what we've seen in San Francisco is that when you
01:23:10.420 | ignore quality of life crimes, you will actually see a huge
01:23:15.260 | diminishment in what it's like to live in these cities, like
01:23:18.900 | quality of life is real. And that's the issue. And I think
01:23:21.420 | what they're trying to do now is to say that because Bob Lee
01:23:24.860 | wasn't the case that we thought it was, that whole pyramid
01:23:28.420 | doesn't exist. It doesn't exist? The pyramid exists, we can all
01:23:31.300 | experience it. Oh, my God. And that's the insincerity of this.
01:23:34.620 | It is insincere. And the existence of that pyramid that
01:23:37.340 | we can see, and hear and feel and experience every day is why
01:23:41.340 | we're willing to make a bet. We called it a bet that the Bob Lee
01:23:45.060 | case was like the Brianna Kupfer case. And in that
01:23:47.700 | with a disclaimer, with a disclaimer, and we always do a
01:23:50.700 | disclaimer here. And just to George Hammond from the
01:23:53.660 | Financial Times who emailed me, here's what he asked me, there's
01:23:55.740 | a lot of public attention lately on whether San Francisco's status
01:23:58.380 | as one of the top business and technology hubs in the US is at
01:24:00.780 | risk in the aftermath of the pandemic. Duh, obviously it is.
01:24:04.020 | I wonder if you had a moment to chat about that. And whether
01:24:06.860 | there is a danger that negative perceptions about the city will
01:24:10.060 | damage its reputation for founders and capital allocators
01:24:15.900 | in the future. So essentially, and it says, there's obviously a
01:24:15.900 | lot of potential for hysteria in this conversation, which I'm
01:24:18.540 | keen to avoid. And it's like, hey, have you walked down the
01:24:23.740 | street? And I asked him, have you walked down the street in
01:24:23.740 | San Francisco,
01:24:24.620 | Jason, the best response is to send him the thing that Sacks sent,
01:24:27.740 | which is the amount of available office space in San Francisco.
01:24:32.380 | People are voting with their hours, companies are voting with their
01:24:35.260 | feet. So it's already happening. If the quality of life wasn't so poor,
01:24:38.140 | they'd stay.
01:24:38.820 | This is the essence of gaslighting. What they do is
01:24:41.820 | that people who've actually created the situation in San
01:24:44.220 | Francisco with their policies, their policies of defunding the
01:24:47.700 | police, making it harder for the police to do their job
01:24:50.220 | decriminalizing theft under $950, allowing open air drug
01:24:54.460 | markets, the people who have now created that matrix of policies
01:24:57.500 | that create the situation, what they then turn around and do is
01:25:00.260 | say no, the people who are creating the problem are the
01:25:02.380 | ones who are observing this. That's all we're doing is
01:25:05.380 | observing and complaining about it. And what they try to do is
01:25:08.460 | say, Well, no, you're running down San Francisco,
01:25:10.460 | we're not the ones creating the problem, we're observing it. And
01:25:12.460 | just this week, another data point is that the mayor's office
01:25:15.660 | said that they were short more than 500 police officers in San
01:25:19.100 | Francisco.
01:25:19.700 | Yeah, nobody's going to become a police officer here.
01:25:22.380 | Are you crazy?
01:25:23.060 | Well, and there was another article just this week about
01:25:26.180 | how there's a lot of speculation, rumors are
01:25:29.220 | swirling of an unofficial strike, an informal strike by
01:25:32.740 | police officers who are normally on the force who are tired of
01:25:36.140 | risking life and limb. And then you know, they basically risk
01:25:39.620 | getting into a physical altercation with a homeless
01:25:42.500 | person, they bring them in, and then they're just released
01:25:45.020 | again. So there's a lot of quiet quitting that's going on in the
01:25:47.820 | job. It's like this learned helplessness, because why take a
01:25:51.300 | risk and then the police commission doesn't have your
01:25:53.140 | back. It seems like the only time you have prosecutorial
01:25:56.180 | zeal by a lot of these prosecutors is when they can go
01:25:58.860 | after a cop, not one of these repeat offenders. And you just
01:26:01.980 | saw that, by the way, in LA,
01:26:03.100 | Look, Motherboard and the New York Times just emailed and DM'd me.
01:26:06.260 | And then did you guys say that instead of solving
01:26:09.180 | these issues, the Board of Supervisors was dealing with a
01:26:12.740 | wild parrot? What was it?
01:26:14.060 | The meeting that was suspended,
01:26:18.580 | Yeah, they had scheduled a meeting to vote on
01:26:22.540 | whether the wild parrots are the official animal of the city of
01:26:26.780 | San Francisco. So that was the scheduled meeting that got
01:26:31.220 | disbanded.
01:26:32.100 | Also, can I just clarify what Chamath talked about with the
01:26:35.060 | Elon interview, a BBC reporter interviewed Elon and said,
01:26:38.340 | there is much more racism and hate speech in the feeds on
01:26:44.100 | Twitter. And he said, Can you give me an example? And he said,
01:26:46.660 | Well, I don't have an example. But people are saying this is
01:26:48.660 | that which people are saying it. And the BBC reporter said,
01:26:51.540 | Well, just different groups of people are saying it. And you
01:26:53.620 | know, I've certainly seen he said, Okay, you saw it. And for
01:26:56.260 | you, he goes, No, I stopped looking at for you. He said, so
01:26:58.540 | give me one example of hate speech that you've seen in your
01:27:01.860 | feed. Now, we without speaking about any inside information,
01:27:05.900 | which I do not have much of, they've been pretty deliberate
01:27:08.900 | about removing hate speech from places like For You. And, you
01:27:12.140 | know, it's a very complicated issue when you have an open
01:27:13.820 | platform, but people may say a word and it doesn't reach a
01:27:18.620 | lot of people. So if you were to say something really nasty, it
01:27:20.900 | doesn't take a genius to block that and not have it reach a
01:27:23.740 | bunch of people. This reporter kept insisting to Elon that this
01:27:27.500 | was on the rise, with no factual basis for it other than that other
01:27:30.580 | people said it. And then he said, but I don't look at the feed. He said,
01:27:33.180 | so you're telling me that there's more hate speech that
01:27:35.780 | you've seen, but you just admitted to me that you haven't
01:27:37.700 | looked at the For You feed in three months. And it was just
01:27:40.180 | this completely weird thing. I just had Motherboard calling
01:27:43.060 | it a lie. He called it a lie, caught him. And this is the
01:27:45.980 | thing. If you're a journalist, just cut it down the middle.
01:27:49.540 | Don't come prepared with a position either way. I want to connect
01:27:54.380 | one dot, please, which is that he filled in his own narrative,
01:27:59.820 | even though the data wasn't necessarily there. In the same
01:28:03.620 | way that, you know, we kind of filled in our narrative about
01:28:06.300 | San Francisco, with the Bob Lee, you know, murder, being another
01:28:11.300 | example,
01:28:11.860 | disclaimer on it.
01:28:13.820 | He said, well, we didn't. Hold on a second. We said we knew we
01:28:17.740 | didn't know. And furthermore, we're taking great pains this
01:28:21.260 | week to correct the record and explain what we now know. Yeah.
01:28:25.180 | He was intellectually honest. This is just intellectual
01:28:30.820 | honesty.
01:28:31.420 | Honestly, you're going soft here. Freeberg,
01:28:34.580 | you're getting gaslit by all these people.
01:28:36.180 | I'm not getting gaslit by anyone. I think
01:28:39.660 | the guy totally had zero data. By the way, when you're a
01:28:42.540 | journalist, you're supposed to report on data and evidence. So
01:28:45.060 | he's certainly, you know, I think
01:28:46.500 | Just replace Bob Lee with Don Carmignani. It's the same
01:28:49.580 | story. Yeah. Except that Don happened to survive.
01:28:53.900 | Guys, I love you, but I gotta go.
01:28:56.060 | Goodbye. Here's what Maxwell from Motherboard asked. Have fun.
01:28:59.660 | There's been a lot of discussion about the future of San
01:29:02.100 | Francisco and the death has quickly become politicized. Has
01:29:05.420 | that caused any division or disagreement from what you've
01:29:08.460 | seen? Or has that not been the case?
01:29:11.940 | The press is gleeful right now.
01:29:13.460 | They're gleeful like, oh my god.
01:29:15.140 | Oh, just like the right was gleeful with Jussie Smollett
01:29:18.820 | having gotten himself beaten up or, you know, setting up his own attack.
01:29:22.460 | All right, everybody, for the Sultan of Science, currently
01:29:27.740 | conducting experiments on a beach to see exactly how burned
01:29:33.300 | he can get with his SPF 200 under an umbrella, wearing a sun
01:29:37.300 | shirt and pants. Freeberg on the beach wears the
01:29:41.420 | same outfit astronauts wear when they do spacewalks. Hey,
01:29:44.740 | Stable Diffusion, make me an image of David Freeberg wearing
01:29:48.820 | a full-body bathing suit covered in SPF 200 under three
01:29:53.460 | umbrellas on a sunny beach. Thank you.
01:29:56.900 | Oh my god.
01:29:59.060 | For the dictator, Chamath Palihapitiya, creating
01:30:02.700 | regulations.
01:30:03.620 | And the regulator. You can call me the regular
01:30:06.580 | regulator. See you tonight when we'll eat our ortolans, what's
01:30:10.100 | left of them. The final four or five ortolans in existence.
01:30:13.820 | Don't be late. Otherwise I'm putting you on the B list today.
01:30:16.180 | If you're late... I will be there. I'll be there. I promise. I
01:30:18.260 | promise. I promise. Can't wait to be there. And the Rain Man
01:30:22.300 | himself. Namaste. We didn't even get to Putin, or Ron
01:30:25.260 | Oh, well,
01:30:26.820 | versus Nikki.
01:30:28.300 | I think you should ask auto GPT how you can eat more endangered
01:30:35.140 | animals. Yes, we have a plan for you.
01:30:37.660 | Yes. And then have it go kill those animals. In the real
01:30:41.220 | world. Put something on the dark web to go kill the remaining
01:30:44.180 | rhinos and bring them to Chamath's house for poker night.
01:30:47.100 | I don't think rhinos would taste good.
01:30:49.700 | Was that the plot of a movie? It was a... Oh, did you guys see, is
01:30:53.940 | Cocaine Bear out yet? No, it was a Matthew Broderick, Marlon
01:30:56.660 | Brando movie, right, where they're doing the takeoff on The
01:30:58.700 | Godfather. Was it The Freshman? Yeah, yeah, yeah. It's
01:31:01.660 | like a conspiracy to eat endangered animals. Yes, The
01:31:05.740 | Freshman. The Freshman came out in 1990. Yeah, Marlon Brando did
01:31:10.980 | it with Matthew Broderick. And, like, Bruno Kirby. That
01:31:15.540 | was the whole thing. Bruno Kirby, that's a deep cut. They
01:31:18.100 | were actually eating endangered animals. What do you
01:31:22.180 | think, Heat 2? Is that going to be good, Sacks? I
01:31:24.620 | know Heat's one of your favorite films. Me too. It's awesome. Is
01:31:27.580 | there a sequel coming? They're gonna do Heat 2. The novel's
01:31:30.500 | already come out. Adam... I saw the novel. Yeah, it's amazing.
01:31:34.060 | It's one of those movies where when it comes on,
01:31:37.020 | you just can't stop watching. Yes. Best bank
01:31:40.900 | robbery slash shootout in movie history. You know, that is
01:31:44.500 | literally the best film ever. It's up there with, like,
01:31:47.420 | Reservoir Dogs, the Joker in that Batman movie where
01:31:52.260 | he robs the bank. I mean... I love you guys. All right.
01:31:55.180 | Love you, besties. And for blah, blah, blah, blah, blah. This is
01:31:58.580 | gonna be All-In Podcast 124. If you want to go to the fan meetups
01:32:01.340 | and hang out with other blah, blah, blah, blah, blah, blah.
01:32:04.180 | Bye-bye.
01:32:05.700 | Let your winners ride.
01:32:09.820 | Rain Man David
01:32:12.340 | We open sourced it to the fans and they've just gone crazy with
01:32:19.260 | it. Love you.
01:32:20.060 | Westy. I'm the queen of
01:32:21.100 | quinoa. Besties are
01:32:29.300 | gone.
01:32:29.900 | We should all just get a room and just have one big huge orgy
01:32:41.660 | because they're all just useless. It's like this, like, sexual tension that
01:32:44.540 | they just need to release somehow.
01:32:45.420 | What, you're about to
01:32:48.620 | wet your beak
01:32:50.580 | We need to get
01:32:52.940 | merch. Besties are back.
01:32:54.940 | I'm going all in. I'm going all in.
01:33:01.940 | Going all in
01:33:03.940 | (music)