
In conversation with Sam Altman


Chapters

0:00 Welcoming Sam Altman to the show!
2:28 What's next for OpenAI: GPT-5, open-source, reasoning, what an AI-powered iPhone competitor could look like, and more
21:56 How advanced agents will change the way we interface with apps
33:01 Fair use, creator rights, why OpenAI has stayed away from the music industry
42:02 AI regulation, UBI in a post-AI world
52:23 Sam breaks down how he was fired and re-hired, why he has no equity, dealmaking on behalf of OpenAI, and how he organizes the company
65:33 Post-interview recap
70:38 All-In Summit announcements, college protests
79:06 Signs of innovation dying at Apple: iPad ad, Buffett sells 100M+ shares, what's next?
89:41 Google unveils AlphaFold 3.0

Whisper Transcript

00:00:00.000 | I first met our next guest Sam Altman almost 20 years ago when he was working on a local mobile
00:00:05.600 | app called Loopt. We were both backed by Sequoia Capital. And in fact, we were both in the first
00:00:11.440 | class of Sequoia scouts. He did an investment in a little unknown fintech company called Stripe. I
00:00:16.320 | did Uber. And in that tiny experiment, I've never heard that before. Yeah, I think so.
00:00:21.760 | That tiny experimental fund that Sam and I were part of as scouts is Sequoia's highest
00:00:49.440 | multiple-returning fund. A couple of low-digit millions turned into over $200 million, I'm told.
00:00:54.560 | And then he did. Yeah, that's what I was told by Roelof. Yeah. And he did a stint at Y Combinator,
00:00:59.280 | where he was president from 2014 to 2019. In 2016, he co-founded OpenAI with the goal of ensuring
00:01:05.440 | that artificial general intelligence benefits all of humanity. In 2019, he left YC to join
00:01:11.600 | OpenAI full time as CEO. Things got really interesting on November 30 of 2022. That's the
00:01:17.760 | day OpenAI launched ChatGPT. In January 2023, Microsoft invested $10 billion. In November 2023,
00:01:25.120 | over a crazy five-day span, Sam was fired from OpenAI, everybody was going to go work at
00:01:30.960 | Microsoft, a bunch of heart emojis went viral on X/Twitter. And people started speculating
00:01:36.720 | that the team had reached artificial general intelligence, the world was going to end. And
00:01:41.600 | suddenly, a couple days later, he was back to being the CEO of open AI. In February, Sam was
00:01:48.080 | reportedly looking to raise $7 trillion for an AI chip project. This after it was reported that Sam
00:01:53.840 | was looking to raise a billion from Masayoshi Son to create an iPhone killer with Jony Ive,
00:01:58.560 | the co-creator of the iPhone. All of this while ChatGPT has become better and better and, as a
00:02:03.920 | household name, is having a massive impact on how we work and how work is getting done. And it's
00:02:09.680 | reportedly the fastest product to hit 100 million users in history, in just two months. And check
00:02:16.080 | out OpenAI's insane revenue ramp-up; they reportedly hit $2 billion in ARR last year. Welcome
00:02:22.480 | to the All-In Podcast, Sam. Thank you. Thank you, guys. Sacks, you want to lead us off here?
00:02:27.840 | Okay, sure. I mean, I think the whole industry is waiting with bated breath for the release of GPT
00:02:32.880 | 5. I guess it's been reported that it's launching sometime this summer, but that's a
00:02:37.600 | pretty big window. Can you narrow that down, I guess? Where are you in the release of GPT
00:02:42.960 | 5? We take our time on releases of major new models. I think it will be
00:02:52.560 | great. When we do it, I think we'll be thoughtful about how we do it. Like we may release it in a
00:02:58.560 | different way than we've released previous models. Also, I don't even know if we'll call it GPT-5.
00:03:04.480 | What I will say is, you know, a lot of people have noticed how much better GPT-4 has
00:03:09.040 | gotten since we've released it, and particularly over the last few months. I think that's
00:03:16.320 | like a better hint of what the world looks like, where it's not the, like, 1, 2, 3, 4, 5, 6, 7. But you just
00:03:24.800 | you use an AI system, and the whole system just gets better and better fairly continuously.
00:03:32.160 | I think that's like both a better technological direction. I think that's like easier for society
00:03:36.480 | to adapt to. But I assume that's where we'll head. Does that mean that there's not going to be
00:03:43.280 | long training cycles, and it's continuously retraining or training sub models, Sam,
00:03:48.960 | and maybe you could just speak to us about what might change architecturally going forward
00:03:52.320 | with respect to large models? Well, I mean, one thing that you could imagine is
00:03:59.520 | just that you keep training a model, right? That would seem like a reasonable thing to me.
00:04:04.320 | You talked about releasing it differently this time. Are you thinking maybe
00:04:10.480 | releasing it to the paid users first or, you know, a slower rollout to get the red teams
00:04:16.880 | tight, since now there's so much at stake? You have so many customers actually paying and you've got
00:04:21.760 | everybody watching everything you do. You know, do you have to be more thoughtful now? Yeah.
00:04:27.920 | Still only available to the paid users. But one of the things that we really want to do
00:04:32.160 | is figure out how to make more advanced technology available to free users, too.
00:04:36.720 | I think that's a super important part of our mission. And this idea that we build AI tools
00:04:43.600 | and make them super widely available, free or, you know, not that expensive, whatever it is,
00:04:48.880 | so that people can use them to go kind of invent the future, rather than the magic AGI in the sky,
00:04:53.920 | inventing the future and showering it down upon us. That seems like a much better path. It seems like
00:04:59.280 | a more inspiring path. I also think it's where things are actually heading. So it makes me sad
00:05:04.320 | that we have not figured out how to make GPT-4-level technology available to free users. It's
00:05:08.720 | something we really want to do. It's just very expensive. It's very expensive. Yeah.
00:05:13.040 | Chamath, your thoughts? I think maybe the two big vectors, Sam, that people always
00:05:19.040 | talk about are the underlying cost and sort of the latency that's kind of rate-limited
00:05:24.960 | a killer app. And then I think the second is sort of the long-term ability for people
00:05:31.680 | to build in an open source world versus a closed source world. And I think the crazy thing about
00:05:37.200 | this space is that the open source community is rabid. So one example that I think is incredible
00:05:43.840 | is, you know, we had these guys do a pretty crazy demo for Devin, remember, like even like five or
00:05:50.080 | six weeks ago, that looked incredible. And then some kid just published it under an open MIT
00:05:55.680 | license, like OpenDevin. And it's incredibly good, and almost as good as that other thing
00:06:02.800 | that was closed source. So maybe we can just start with that, which is, tell me about the
00:06:08.240 | business decision to keep these models closed source. And where do you see things going in the
00:06:13.440 | next couple years? So on the first part of your question, speed and cost, those are hugely
00:06:20.320 | important to us. And I don't want to like give a timeline on when we can bring them down a lot,
00:06:26.800 | because research is hard, but I am confident we'll be able to, we want to like cut the latency super
00:06:32.560 | dramatically, we want to cut the cost really, really dramatically. And I believe that will
00:06:37.920 | happen. We're still so early in the development of the science and understanding how this works.
00:06:43.440 | Plus, we have all the engineering tailwinds. So I don't know like when we get to intelligence
00:06:50.720 | too cheap to meter and so fast that it feels instantaneous to us and everything else. But
00:06:55.680 | I do believe we can get there for, you know, a pretty high level of intelligence. And
00:07:01.520 | it's important to us, it's clearly important to users, and it'll unlock a lot of stuff.
00:07:07.840 | On the sort of open source, closed source thing, I think there's great roles for both, I think.
00:07:13.120 | You know, we've open sourced some stuff, we'll open source more stuff in the future.
00:07:18.640 | But really, like our mission is to build towards AGI and to figure out how to broadly distribute
00:07:24.080 | its benefits. We have a strategy for that seems to be resonating with a lot of people. It obviously
00:07:29.440 | isn't for everyone. And there's like a big ecosystem. And there will also be open source
00:07:33.680 | models and people who build that way. One area that I'm particularly interested personally in
00:07:38.800 | open source for is I want an open source model that is as good as it can be that runs on my phone.
00:07:44.640 | And that, I think is gonna, you know, the world doesn't quite have the technology for a good
00:07:51.360 | version of that yet. But that seems like a really important thing to go do at some point.
00:07:55.440 | Will you do? Will you do that? Will you release it?
00:07:57.120 | I don't know if we will or someone will, but someone will.
00:07:59.120 | What about Llama 3?
00:08:00.240 | Llama 3 running on a phone?
00:08:02.880 | Well, I guess maybe there's like a 7 billion version.
00:08:06.320 | I don't know if that will fit on a phone or not, but...
00:08:10.800 | That should be fittable on a phone, but I'm not sure if that one is like,
00:08:15.920 | I haven't played with it. I don't know if it's like good enough to kind of do the thing I'm
00:08:18.880 | thinking about here.
00:08:20.000 | So when Llama 3 got released, I think the big takeaway for a lot of people was,
00:08:24.160 | oh, wow, they've like caught up to GPT-4. I don't think it's equal in all dimensions,
00:08:28.960 | but it's like pretty, pretty close, pretty in the ballpark.
00:08:31.840 | I guess the question is, you know, you guys released 4 a while ago,
00:08:36.400 | you're working on 5 or, you know, more upgrades to 4.
00:08:40.160 | I mean, I think to Chamath's point about Devin, how do you stay ahead of open source?
00:08:44.400 | I mean, it's just, that's just like a very hard thing to do in general, right?
00:08:48.080 | I mean, how do you think about that?
00:08:50.160 | What we're trying to do is not make the sort of smartest set of weights that we can,
00:08:58.800 | but what we're trying to make is like this useful intelligence layer for people to use.
00:09:04.080 | And a model is part of that. I think we will stay pretty far ahead of,
00:09:09.280 | I hope we'll stay pretty far ahead of the rest of the world on that.
00:09:12.960 | But there's a lot of other work around the whole system that's not just that,
00:09:20.160 | you know, the model weights and we'll have to build up enduring value the old fashioned way,
00:09:25.440 | like any other business does. We'll have to figure out a great product and reasons to
00:09:29.120 | stick with it and, you know, deliver it at a great price.
00:09:31.840 | - When you founded the organization, the stated goal or part of what you discussed was,
00:09:38.320 | hey, this is too important for any one company to own it. So, therefore, it needs to be open.
00:09:42.800 | Then there was the switch, hey, it's too dangerous for anybody to be able to see it
00:09:47.760 | and we need to lock this down because you had some fear about that, I think. Is that accurate?
00:09:53.040 | Because the cynical side is like, well, this is a capitalistic move. And then the, I think,
00:09:58.240 | you know, I'm curious what the decision was here in terms of going from open,
00:10:04.000 | the world needs to see this, it's really important, to closed, only we can see it.
00:10:08.640 | - Well, how did you come to that conclusion? What were the discussions?
00:10:12.000 | - Part of the reason that we released ChatGPT was we want the world to see this. And we've
00:10:16.000 | been trying to tell people that AI is really important. And if you go back to like October
00:10:20.720 | of 2022, not that many people thought AI was going to be that important or that it was really
00:10:24.480 | happening. And a huge part of what we try to do is put the technology in the hands of people.
00:10:33.760 | Now, again, there's different ways to do that. And I think there really is an important role
00:10:36.720 | to just say, like, here are the weights, have at it. But the fact that we have so many people using
00:10:42.400 | a free version of ChatGPT that we don't, you know, we don't run ads on, we don't try to like
00:10:46.560 | make money on, we just put out there because we want people to have these tools, I think has done
00:10:50.960 | a lot to provide a lot of value and, you know, teach people how to fish, but also to get the world
00:10:56.800 | really thoughtful about what's happening here. Now, we still don't have all the answers. And
00:11:02.720 | we're fumbling our way through this, like everybody else, and I assume we'll change strategy
00:11:06.720 | many more times as we learn new things. You know, when we started OpenAI, we had really no idea
00:11:12.560 | about how things were going to go, that we'd make a language model, that we'd ever make a product.
00:11:17.840 | We started off just, I remember very clearly that first day where we're like, well,
00:11:21.600 | now we're all here, that was, you know, it was difficult to get this set up. But what happens
00:11:25.760 | now? Maybe we should write some papers, maybe we should stand around a whiteboard.
00:11:29.440 | And we've just been trying to like put one foot in front of the other and figure out what's next,
00:11:33.760 | and what's next, and what's next. And I think we'll keep doing that.
00:11:38.640 | Can I just replay something and just make sure I heard it right? I think what you were saying
00:11:42.560 | on the open source, closed source thing is, if I heard it right, all these models,
00:11:48.800 | independent of the business decision you make, are going to asymptotically converge
00:11:53.680 | towards some level of accuracy. Not all, but, like, let's just say there's four or five that
00:11:58.880 | are well capitalized enough, you guys, Meta, Google, Microsoft, whomever, right? So let's just
00:12:05.680 | say four or five, maybe one startup, and on the open web, and then quickly, the accuracy or the
00:12:14.480 | value of these models will probably shift to these proprietary sources of training data that
00:12:19.440 | you could get that others can't, or others can get that you can't. Is that how you see
00:12:23.920 | this thing evolving, where the open web gets everybody to a certain threshold, and then it's
00:12:29.360 | just an arms race for data beyond that? So I definitely don't think it'll be an arms race
00:12:35.120 | for data, because when the models get smart enough, at some point, it shouldn't be about
00:12:38.480 | more data, at least not for training. Data may matter to make it useful.
00:12:42.640 | Look, the one thing that I have learned most throughout all this is that it's hard to make
00:12:50.240 | confident statements a couple of years in the future about where this is all going to go.
00:12:53.760 | And so I don't want to try now. I will say that I expect lots of very capable models in the world.
00:13:00.000 | And, you know, like, it feels to me like we just like stumbled on a new fact of nature or science
00:13:07.440 | or whatever you want to call it, which is like, we can create, you can like, I mean, I don't believe
00:13:14.880 | this literally, but it's like a spiritual point. You know, intelligence is just this emergent
00:13:19.440 | property of matter. And that's like a, that's like a rule of physics or something. So people
00:13:24.720 | are going to figure that out. But there'll be all these different ways to design the systems,
00:13:27.920 | people will make different choices, figure out new ideas. And I'm sure like, you know,
00:13:34.400 | like any other industry, I would expect there to be multiple approaches and different people like
00:13:42.240 | different ones. You know, some people like iPhones, some people like an Android phone,
00:13:46.160 | I think there'll be some effect like that. Let's go back to that first section of just the,
00:13:50.640 | the cost and the speed. All of you guys are sort of a little bit rate limited on
00:13:56.720 | literally NVIDIA's throughput, right? And I think that you and most everybody else have sort of
00:14:03.120 | effectively announced how much capacity you can get just because it's as much as they can spin out.
00:14:07.280 | What needs to happen at the substrate so that you can actually compute cheaper, compute faster,
00:14:15.120 | get access to more energy? How are you helping to frame out the industry solving those problems?
00:14:21.200 | Well, we'll make huge algorithmic gains for sure, and I don't want to discount that. You know,
00:14:25.760 | I'm very interested in chips and energy. But if we can make a same-quality
00:14:31.280 | model twice as efficient, that's like we had twice as much compute. Right. And I think there's
00:14:35.920 | a gigantic amount of work to be done there. And I hope we'll start really seeing those results.
00:14:44.800 | Um, other than that, the whole supply chain is like very complicated. You know, there's,
00:14:49.200 | there's logic fab capacity, there's how much HBM the world can make, there's how quickly you can
00:14:55.040 | like get permits and pour the concrete, make the data centers, and then have people in there wiring
00:14:58.560 | them all up. There's finally energy, which is a huge bottleneck. But I think when there's this
00:15:05.280 | much value to people, the world will do its thing. We'll try to help it happen faster.
00:15:11.840 | And there's probably like, I don't know how to give it a number, but there's like some percentage
00:15:16.320 | chance where there is, as you were saying, like a huge substrate breakthrough. And we have like a
00:15:21.920 | massively more efficient way to do computing. But I don't, I don't like bank on that or spend too
00:15:26.720 | much time thinking about it. What about the device side and sort of, you mentioned sort of the models
00:15:33.840 | that can fit on a phone. So obviously, whether that's an LLM or some SLM or something, I'm sure
00:15:38.480 | you're thinking about that. But then does the device itself change? I mean, is it does it need
00:15:42.560 | to be as expensive as an iPhone? Ah,
00:15:44.680 | I'm super interested in this. I love like great new form factors of computing. And it feels like
00:15:54.720 | with every major technological advance, a new thing becomes possible. Phones are unbelievably
00:16:02.480 | good. So I think the threshold is like very high here. Like, like, I think, like, I personally
00:16:08.080 | think iPhone is like the greatest piece of technology humanity has ever made. It's really
00:16:14.080 | a wonderful product. What comes after it? Like, I don't know. I mean, I was gonna say, that was what I
00:16:18.480 | was saying. It's so good. To get beyond it, I think the bar is, like, quite high. Well, you've been
00:16:24.640 | working with Jony Ive on something, right? We've been discussing ideas. But I don't, like,
00:16:30.160 | if I knew... Is it that it has to be more complicated, or actually just much, much cheaper
00:16:35.040 | and simpler? Well, almost everyone's willing to pay for a phone anyway. So if you
00:16:39.920 | could like make a way cheaper device, I think the barrier to carry a second thing or use a second
00:16:45.120 | thing is pretty high. So I don't think given that we're all willing to pay for phones, or most of us
00:16:50.560 | are, I don't think cheaper is the answer. Different is the answer then? Would there be
00:16:56.960 | like a specialized chip that would run on the phone that was really good at powering a, you
00:17:01.840 | know, a phone size AI model? Probably, but the phone manufacturers are going to do that for sure.
00:17:06.240 | That doesn't necessitate a new device. I think you'd have to like find some
00:17:10.720 | really different interaction paradigm that the technology enables. And if I knew what it was,
00:17:17.520 | I would be excited to be working on it right now. But you have voice working right now in
00:17:22.480 | the app. In fact, I set my action button on my phone to go directly to ChatGPT's voice app,
00:17:27.840 | and I use it with my kids and they love talking to it. I think there are latency issues, but it's really...
00:17:31.920 | We'll get that better. And I think voice is a hint to whatever the next
00:17:37.520 | thing is, like if you can get voice interaction to be really good, it feels I think that feels
00:17:44.400 | like a different way to use a computer. But again, like we already... With that, by the way, like, what,
00:17:48.960 | why is it not responsive? And, you know, it feels like a CB, you know, like "over, over." Really
00:17:55.760 | annoying to use, you know, in that way. But it's also brilliant when it gives you the right answer.
00:18:01.040 | We are working on that. It's so clunky right now. It's slow. It kind of doesn't feel
00:18:08.000 | very smooth or authentic or organic. Like we'll get all that to be much better.
00:18:12.560 | What about computer vision? I mean, they have glasses, or maybe you could wear a pendant.
00:18:19.040 | I mean, you take the combination of visual or video data, combine it with voice and now
00:18:25.600 | the AI knows everything that's happening around you.
00:18:28.240 | Super powerful, to be able to, like, the multimodality of saying, like, hey,
00:18:33.040 | ChatGPT, what am I looking at? Or, like, what kind of plant is this? I can't quite tell.
00:18:37.920 | It's obvious that that's, like, another hint, I think, but
00:18:43.200 | whether people want to wear glasses or like hold up something when they want that, like,
00:18:48.800 | I there's a bunch of just like the sort of like societal interpersonal issues here
00:18:55.440 | are all very complicated about wearing a computer on your face.
00:18:58.160 | We saw that with Google Glass. People got punched in the face in the Mission.
00:19:02.960 | Started a lot of... I forgot about that. I forgot about that. So I think it's like...
00:19:06.400 | What are the apps that could be unlocked if AI was sort of ubiquitous on people's phones?
00:19:14.000 | Do you have a sense of that? Or what would you want to see built?
00:19:19.520 | Uh, I think what I want is just this always-on, like, super-low-friction thing where I can,
00:19:30.000 | either by voice or by text or ideally, like, some other way. It just kind of knows what I want.
00:19:37.200 | Have this like constant thing helping me throughout my day that's got like as much
00:19:40.640 | context as possible. It's like the world's greatest assistant. And it's just this like
00:19:46.080 | thing working to make me better and better. And when you hear people
00:19:51.920 | talk about the future there, they imagine there's sort of two different approaches,
00:19:58.480 | and they don't sound that different, but I think they're like very different for how we'll design
00:20:01.680 | the system in practice. There's the "I want an extension of myself. I want, like, a ghost or an
00:20:09.840 | alter ego or this thing that really, like, is me, is acting on my behalf, is responding to emails,
00:20:17.200 | not even telling me about it." It sort of becomes more me and is me. And then there's
00:20:24.640 | this other thing, which is like, "I want a great senior employee." It may get to know me very well.
00:20:30.560 | I may delegate it, you know, you can have access to my email and I'll tell you the constraints,
00:20:35.040 | but I think of it as this, like, separate entity. And I personally like the separate
00:20:42.400 | entity approach better and think that's where we're going to head. And so in that sense,
00:20:47.120 | the thing is not you, but it's like an always available, always great,
00:20:53.360 | Super capable assistant executive agent in a way like it's out there working on your behalf
00:21:00.240 | and understands what you want and anticipates what you want is what I'm reading into what
00:21:04.800 | you're saying. I think there'd be agent-like behavior, but there's like a difference between
00:21:10.080 | a senior employee and an agent. Yeah, and like, I want it... You know, I think of, like, my...
00:21:16.640 | I think, like... One of the things that I like about a senior employee is still,
00:21:23.920 | they'll push back on me. They will sometimes not do something I ask,
00:21:31.040 | or they sometimes will say, like, I can do that thing if you want, but if I do it,
00:21:34.640 | here's what I think would happen, and then this and then that. And are you really sure?
00:21:37.600 | I definitely want that kind of vibe, which is not just, like, this thing that I
00:21:43.280 | ask and it blindly does. It can reason? Yeah, yeah, and push back. It has, like, the
00:21:48.000 | kind of relationship with me that I would expect out of a really competent person that I worked
00:21:53.360 | with, which is different from like a sycophant. Yeah, the thing in that world where if you had
00:21:58.720 | this like Jarvis like thing that can reason, what do you think it does to products that you use
00:22:07.120 | today where the interface is very valuable? So for example, if you look at an Instacart,
00:22:12.640 | or if you look at an Uber, or if you look at a DoorDash, these are not services that are meant
00:22:17.280 | to be pipes that are just providing a set of APIs to a smart set of agents that ubiquitously work
00:22:24.000 | on behalf of 8 billion people. What do you think has to change and how we think about how apps need
00:22:29.600 | to work of how this entire infrastructure of experiences need to work in a world where
00:22:33.680 | you're agentically interfacing to the world? I'm actually very interested in designing
00:22:38.720 | a world that is equally usable by humans and by AI. So I like the interpretability of that. I
00:22:49.440 | like the smoothness of the handoffs. I like the ability that we can provide feedback or whatever.
00:22:53.760 | So, you know, DoorDash could just expose some API to my future AI assistant, and they could go put
00:23:02.160 | the order and whatever. Or I could say like, I could be holding my phone and I could say, okay,
00:23:06.640 | AI assistant, like you put in this order on DoorDash, please. And I could like watch the
00:23:11.920 | app open and see the thing clicking around. And I could say, hey, no, not this or like,
00:23:17.760 | there's something about designing a world that is usable equally well by humans and AIs that
00:23:25.680 | I think is an interesting concept. And same reason I'm like more excited about humanoid
00:23:30.240 | robots than sort of robots of, like, various other shapes. The world is very much designed for
00:23:34.400 | humans. I think we should absolutely keep it that way. And a shared interface is nice.
00:23:38.080 | So you see voice chat, that modality kind of gets rid of apps. You just ask it for sushi,
00:23:44.800 | it knows sushi you like before, it knows what you don't like and does its best shot at doing it.
00:23:48.880 | It's hard for me to imagine that we just go to a world totally where you say like, hey,
00:23:54.480 | ChatGPT, order me sushi. And it says, okay, do you want it from this restaurant? What kind,
00:23:59.120 | what time, whatever. I think user, I think visual user interfaces are super good for a lot of things.
00:24:07.360 | And it's hard for me to imagine like a world where you never look at a screen and just
00:24:15.760 | use voice mode only. But I can imagine that for a lot of things.
00:24:19.280 | Yeah, I mean, Apple tried with Siri, like supposedly you can order an Uber automatically
00:24:24.320 | with Siri. I don't think anybody's ever done it because why would you take the risk of not?
00:24:28.960 | Well, the quality, to your point, the quality is not good. But when the quality
00:24:32.320 | is good enough, you'll actually prefer it just because it's just lighter weight.
00:24:36.640 | You don't have to take your phone out. You don't have to search for your app and press it. Oh,
00:24:40.960 | it automatically logged you out. Oh, hold on, log back in. Oh, 2FA. It's a whole pain in the ass.
00:24:45.760 | You know, it's like setting a timer with Siri. I do it every time because it works really well
00:24:50.720 | and it's great. I don't need more information. But ordering an Uber, like I want to see the
00:24:56.240 | prices for a few different options. I want to see how far away it is. I want to see like
00:25:00.640 | maybe even where they are on the map because I might walk somewhere. I get a lot more information
00:25:05.120 | I think, in less time, by looking at that order-the-Uber screen than I would if I
00:25:09.840 | had to do that all through the audio channel. I like your idea of watching it happen. That's
00:25:14.080 | kind of cool. I think there will just be like, yeah, different. There are different interfaces
00:25:20.080 | we use for different tasks, and I think that'll keep going. Of all the developers that are
00:25:23.760 | building apps and experiences on OpenAI, are there a few that stand out for you where you're like,
00:25:29.680 | OK, this is directionally going in a super interesting area, even if it's like a toy app,
00:25:34.800 | but are there things that you guys point to and say this is really important?
00:25:38.560 | I met with a new company this morning or barely even a company. It's like two people that are
00:25:46.560 | going to work on a summer project trying to actually finally make the AI tutor. And I've
00:25:52.240 | always been interested in this space. A lot of people have done great stuff on our platform. But
00:25:56.160 | if someone can deliver, like, the way that you actually, like...
00:26:02.880 | They used a phrase I love, which is, this is going to be like a Montessori-level reinvention for how
00:26:08.240 | people learn. Yeah. But if you can find this new way to let people explore and learn in
00:26:13.920 | new ways on their own, I'm personally super excited about that. A lot of the coding related
00:26:20.720 | stuff you mentioned, Devin, earlier, I think that's like a super cool vision of the future.
00:26:24.880 | The thing that I am... Health care, I believe, should be pretty transformed by this. But the thing I'm
00:26:32.800 | personally most excited about is the sort of doing faster and better scientific discovery. GPT-4
00:26:40.320 | clearly not there in a big way, although maybe it accelerates things a little bit by making
00:26:44.240 | scientists more productive. But AlphaFold 3? Yeah, that's like... But Sam, that will be a triumph.
00:26:52.240 | Those are not, like... These models are trained and built differently than the language
00:27:02.000 | models. I mean, obviously, there's a lot that's similar. But there's
00:27:07.360 | kind of a ground-up architecture to a lot of these models that are being applied to these specific
00:27:11.840 | problem sets, these specific applications, like chemistry interaction modeling, for example.
00:27:19.200 | You'll need some of that for sure. But the thing that I think we're missing
00:27:22.720 | across the board for many of these things we've been talking about, is models that can do reasoning.
00:27:28.800 | And once you have reasoning, you can connect it to chemistry simulators.
00:27:31.840 | Yeah, that's the important question I wanted to kind of talk about today: this idea of
00:27:37.920 | networks of models. People talk a lot about agents as if there's kind of this
00:27:44.240 | linear set of call functions that happen. But one of the things that arises
00:27:49.440 | in biology is networks of systems that have cross-interactions, where the aggregation of the system,
00:27:57.440 | the aggregation of the network produces an output rather than one thing calling another,
00:28:02.640 | that thing calling another. Do we see like an emergence in this architecture of either
00:28:07.040 | specialized models or network models that work together to address bigger problem sets use
00:28:12.720 | reasoning? There's computational models that do things like chemistry or arithmetic,
00:28:16.720 | and there's other models that do rather than one model to rule them all that's purely generalized.
00:28:21.680 | I don't know. I don't know how much reasoning
00:28:30.640 | is going to turn out to be a super generalizable thing. I suspect it will, but that's more just
00:28:37.200 | like an intuition and a hope and it would be nice if it worked out that way. I don't know
00:28:42.400 | if that's like... But let's walk through the protein modeling example. There's a bunch of
00:28:51.360 | training data, images of proteins, and then sequence data, and they build a model, predictive
00:28:56.960 | model, and they have a set of processes and steps for doing that. Do you envision that there's this
00:29:02.080 | artificial, general intelligence or this great reasoning model that then figures out how to build
00:29:07.200 | that sub-model that figures out how to solve that problem by acquiring the necessary data and then
00:29:12.800 | resolving... There's so many ways where that could go. Maybe it trains a literal model for it, or
00:29:18.080 | maybe it just knows the one big model. It can go pick what other training data it needs and ask a
00:29:24.640 | question and then update on that. I guess the real question is, are all these startups going to die
00:29:30.560 | because so many startups are working in that modality, which is go get special data and then
00:29:34.880 | train a new model on that special data from the ground up, and then it only does that one sort of
00:29:39.520 | thing and it works really well at that one thing and it works better than anything else at that
00:29:43.360 | one thing. There's a version of this I think you can already see. When you were talking about
00:29:52.960 | biology and these complicated networks of systems, the reason I was smiling is I got
00:29:56.320 | super sick recently. I'm mostly better now, but it was just like body got beat up one system at a
00:30:02.960 | time. You can really tell, okay, it's this cascading thing. That reminded me of you talking
00:30:10.480 | about biology is just these... You have no idea how much these systems interact with each other
00:30:14.720 | until things start going wrong. That was interesting to see. I was using ChatGPT
00:30:23.760 | to try to figure out what was happening, whatever, and it would say, "Well, I'm unsure of this one
00:30:29.840 | thing," and then I just pasted a paper on it, without even reading the paper, into the context,
00:30:35.680 | and it says, "Oh, that was a thing I was unsure of. Now I think this instead." That was a small
00:30:41.360 | version of what you're talking about, where you can say, "I don't know this thing," and you can
00:30:47.200 | put more information. You don't retrain the model. You're just adding it to the context here,
00:30:50.000 | and now you're getting a... These models that are predicting protein structure,
00:30:55.520 | let's say, right? This is the whole basis, and now other molecules with AlphaFold 3.
00:31:00.640 | Can they... Yeah, I mean, is it basically a world where the best generalized model goes in and
00:31:08.880 | gets that training data and then figures out on its own? Maybe you could use an example for us.
00:31:14.720 | Can you tell us about Sora, your video model that generates amazing moving images, moving video,
00:31:20.960 | and what's different about the architecture there, whatever you're willing to share?
00:31:25.760 | On how that is different. Yeah. On the general thing first,
00:31:30.800 | you clearly will need specialized simulators, connectors, pieces of data, whatever, but
00:31:44.080 | my intuition... And again, I don't have this backed up with science. My intuition would be,
00:31:49.200 | if we can figure out the core of generalized reasoning, connecting that to new problem
00:31:54.480 | domains in the same way that humans are generalized reasoners, would, I think, be doable.
00:32:02.000 | It's like a fast unlock. Faster unlock than... I think so.
00:32:06.240 | But yeah, Sora does not start with a language model. That's a model that is customized to do
00:32:18.640 | video. And so, we're clearly not at that world yet.
00:32:23.920 | Right. So, just as an example, for you guys to build a good video model, you built it from
00:32:29.920 | scratch using, I'm assuming, some different architecture and different data. But in the
00:32:36.080 | future, the generalized reasoning system, the AGI, whatever system, theoretically could
00:32:43.360 | render that by figuring out how to do it. Yeah. I mean, one example of this is like,
00:32:48.080 | okay, as far as I know, all the best text models in the world are still autoregressive models,
00:32:53.200 | and the best image and video models are diffusion models. That's like, sort of strange in some sense.
00:32:58.800 | Yeah. So, there's a big debate about training data. You guys have been, I think, the most
00:33:06.800 | thoughtful of any company. You've got licensing deals now, FT, et cetera. And we got to just be
00:33:13.360 | gentle here because you're involved in a New York Times lawsuit you weren't able to settle,
00:33:17.440 | I guess, an arrangement with them for training data. How do you think about fairness and fair use?
00:33:24.560 | We've had big debates here on the pod. Obviously, your actions speak volumes that you're trying to
00:33:32.400 | be fair by doing licensing deals. So, what's your personal position on the rights of artists who
00:33:39.520 | create beautiful music, lyrics, books, and you taking that and then making a derivative product
00:33:46.640 | out of it, and then monetizing it? And what's fair here? And how do we get to a world where
00:33:52.480 | artists can make content in the world and then decide what they want other people to do with it?
00:33:59.040 | Yeah. And I'm just curious your personal belief, because I know you to be a thoughtful person on
00:34:03.600 | this. And I know a lot of other people in our industry are not very thoughtful about how they
00:34:08.640 | think about content creators. So, I think it's very different for different kinds of… I mean,
00:34:14.000 | look, on fair use, I think we have a very reasonable position under the current law,
00:34:19.040 | but I think AI is so different. But for things like art, we'll need to think about them in
00:34:23.760 | different ways. But let's say if you go read a bunch of math on the internet and learn how to
00:34:32.160 | do math, that, I think, seems unobjectionable to most people. And then there's like, you know,
00:34:39.040 | another set of people who might have a different opinion. Well, what if you like,
00:34:41.840 | okay, actually, let me not get into that, just in the interest of not making this answer too
00:34:49.040 | long. So, I think there's like one category of people are like, okay, there's like, generalized
00:34:53.200 | human knowledge, you can kind of like, go, if you learn that, like, that's, that's like,
00:34:58.880 | open domain or something, if you kind of go learn about the Pythagorean theorem.
00:35:02.080 | That's one end of the spectrum. And I think the other extreme end of the spectrum is,
00:35:10.000 | is art. And maybe even, like, more specifically, I would say
00:35:15.840 | a system generating art in the style or the likeness of another artist would be kind
00:35:23.600 | of the furthest end of that. And then there's many, many cases on the spectrum in between.
00:35:29.600 | I think the conversation has been historically very caught up on training data, but it will
00:35:37.360 | increasingly become more about what happens at inference time, as training data becomes
00:35:43.200 | less valuable. And the, what the system does, accessing, you know, information in, in context,
00:35:54.000 | in real time, or, you know, taking like, like something like that, what happens at inference
00:36:00.720 | time will become more debated and how the, what the new economic model is there. So if you say,
00:36:05.920 | like, if you say like, create me a song in this, in the style of Taylor Swift,
00:36:12.560 | even if the model were never trained on any Taylor Swift songs at all, you can still have
00:36:20.080 | a problem, which is that it may have read about Taylor Swift, it may know about her themes,
00:36:23.200 | Taylor Swift means something. And then, and then the question is like, should that model,
00:36:28.160 | even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so,
00:36:35.680 | how should Taylor get paid? Right? So I think there's an opt-in/opt-out in that case,
00:36:40.400 | first of all, and then there's an economic model. Staying on the music example,
00:36:44.880 | there is something interesting to look at from the historical perspective here, which is sampling and
00:36:51.120 | how the economics around that work. This isn't quite the same thing, but it's like an interesting place
00:36:55.600 | to start looking. Sam, let me just challenge that. What's the difference in the example you're giving
00:37:00.480 | of the model learning about things like song structure, tempo, melody, harmony, relationships,
00:37:07.520 | all the discovering all the underlying structure that makes music successful,
00:37:11.680 | and then building new music using training data. And what a human does that listens to lots of
00:37:18.000 | music, learns about it, and their brain is processing and building all those same sort of
00:37:23.360 | predictive models, or those same sort of discoveries or understandings. What's the
00:37:28.800 | difference here? And why? Why are you making the case that perhaps artists should be uniquely paid?
00:37:35.840 | This is not a sampling situation. The AI is not outputting, and it's not storing in the
00:37:40.320 | model, the actual original song. Yeah, it was learning structure, right? So I wasn't trying
00:37:44.880 | to make that that point, because I agree, like in the same way that humans are inspired by other
00:37:49.120 | humans. I was saying, if you say generate me a song in the style of Taylor Swift... I see,
00:37:54.400 | right, okay, where the prompt leverages some artists, I think, personally, that's a different
00:38:00.240 | case. Would you be comfortable... Or would you be comfortable
00:38:06.160 | with a music model being trained on the whole corpus of music that humans have created,
00:38:10.320 | without royalties being paid to the artists whose music is being fed in? And then you're not
00:38:17.280 | allowed to ask, you know, artist-specific prompts. You could just say, Hey,
00:38:20.640 | play me a really cool pop song that's fairly modern about heartbreak, you know, with a female
00:38:25.840 | voice, you know, we have currently made the decision not to do music, and partly because
00:38:32.000 | exactly these questions of where you draw the lines. And, you know, what, like, even
00:38:36.160 | I was meeting with several musicians I really admire recently, I was just trying
00:38:41.360 | to like talk about some of these edge cases, but even the world in which
00:38:47.760 | if we went and let's say we paid 10,000 musicians to create a bunch of music just to make a great
00:38:55.920 | training set where the music model could learn everything about song structure.
00:39:00.320 | And what makes a good catchy beat and everything else. And only trained on that, let's say we
00:39:09.440 | could still make a great music model, which maybe we could, you know, I was kind of like
00:39:14.080 | posing that as a thought experiment to musicians. And they're like, Well, I can't object to that on
00:39:17.680 | any principled basis at that point. And yet, there's still something I don't like about it.
00:39:22.080 | Now, that's not a reason not to do it, necessarily, but it is. Did you see that ad that Apple put out,
00:39:30.800 | maybe it was yesterday or something of like squishing all of human creativity down into
00:39:34.640 | one really thin iPad? What was your take on it? Ah, people got really emotional about it. Yeah,
00:39:41.200 | yeah, yeah, yeah, you would think there's something about I'm obviously hugely positive
00:39:50.160 | on AI. But there is something that I think is beautiful about human creativity and human
00:39:55.520 | artistic expression. And, you know, for an AI that just does better science, like great, bring
00:40:00.160 | that on. But an AI that is going to do this, like deeply beautiful human creative expression,
00:40:05.600 | I think we should like, figure out, it's going to happen. It's going to be a tool
00:40:10.640 | that will lead us to greater creative heights. But I think we should figure out how to do it
00:40:14.400 | in a way that like preserves the spirit of what we all care about here.
00:40:17.600 | And I think your actions speak loudly. We were trying to do Star Wars characters in DALL-E. And
00:40:27.920 | if you ask for Darth Vader, it says, Hey, we can't do that. So you've, I guess, red teamed or whatever
00:40:33.040 | you call it internally... We try. Yeah. You're not allowing people to use other people's IP. So
00:40:38.480 | you've taken that decision. Now, if you asked it to make a Jedi bulldog, or a Sith Lord bulldog,
00:40:43.440 | which I did, it made my bulldogs as Sith bulldogs. So there's an interesting question about spectrum,
00:40:49.200 | right? Yeah, you know, we put out this thing yesterday called the Spec, where we're trying
00:40:53.440 | to say, here's how our models are supposed to behave. And it's very hard. It's a
00:41:01.040 | long document, it's very hard to like specify exactly in each case where the limits should be.
00:41:05.680 | And I view this as like, a discussion that's going to need a lot more input. But, but these
00:41:12.160 | sorts of questions about, okay, maybe it shouldn't generate Darth Vader, but the idea of a Sith Lord
00:41:18.560 | or a Sith-style thing, or Jedi, at this point is like part of the culture. Like, these are
00:41:24.080 | all hard decisions. Yeah. And I think you're right, the music industry is going to
00:41:29.760 | consider this opportunity to make Taylor Swift songs their opportunity. It's part of the four
00:41:34.080 | part fair use test, you know: who gets to capitalize on new innovations for existing art.
00:41:40.640 | And Disney has an argument that, hey, you know, if you're going to make Sora versions of Ahsoka,
00:41:47.440 | or whatever, Obi-Wan Kenobi, that's Disney's opportunity. And that's a great partnership
00:41:52.320 | for you, you know, to pursue. So, I think this section I would label as AI and the law.
00:41:58.960 | So let me ask maybe a higher level question. What does it mean when people say regulate AI, Sam,
00:42:07.360 | what does that even mean? And comment on California's new proposed regulations as well,
00:42:13.120 | if you're up for it? I'm concerned. I mean, there are so many proposed regulations,
00:42:17.920 | but most of the ones I've seen on the California state things I'm concerned about, I also have a
00:42:22.320 | general fear of the states all doing this themselves. When people say regulate AI, I don't
00:42:28.320 | think they mean one thing. I think there's like some people are like ban the whole thing.
00:42:33.040 | Some people are like, don't allow it to be open source; others, require it to be open source.
00:42:36.080 | The thing that I am personally most interested in is, I think there will come. Look, I may be
00:42:44.560 | wrong about this, I will acknowledge that this is a forward looking statement. And those are always
00:42:48.240 | dangerous to make. But I think there will come a time in the not super distant future. Like,
00:42:53.200 | you know, we're not talking like decades and decades from now, where the frontier
00:42:57.760 | AI systems are capable of causing significant global harm. And for those kinds of systems,
00:43:07.120 | and the same way we have like global oversight of nuclear weapons or synthetic bio or things that
00:43:13.120 | can really like, have a very negative impact way beyond the realm of one country, I would like to
00:43:18.960 | see some sort of international agency that is looking at the most powerful systems and ensuring
00:43:24.560 | like reasonable safety testing. You know, these things are not going to escape and recursively
00:43:29.600 | self-improve or whatever. The criticism of this is that you have the resources to cozy up,
00:43:38.320 | to lobby, to be involved, and you've been very involved with politicians, and then startups,
00:43:42.800 | which we are also passionate about and invest in, are not going to have the ability to resource
00:43:48.800 | and deal with this. And that this is regulatory capture, as per our friend, you know, Bill Gurley,
00:43:53.280 | who did a great talk last year about it. So maybe you could address that head on. You know,
00:43:57.200 | if the line were, we're only going to look at models that are trained on computers that cost
00:44:02.240 | more than $10 billion or more than $100 billion or whatever, I'd be fine with that.
00:44:06.880 | There'd be some line that's defined. And I don't think that puts any regulatory burden on startups.
00:44:12.320 | So if you have like, the nuclear raw material to make a nuclear bomb, like there's a small
00:44:17.440 | subset of people who have that, therefore you use the analogy of, like, nuclear inspectors
00:44:22.000 | in that situation. Yeah, I think that's interesting. Sacks, you have a question?
00:44:26.480 | Well, Chamath, go ahead. You had a follow-up.
00:44:27.920 | Can I say one more thing about that? Of course, I'd be super nervous about regulatory overreach
00:44:34.400 | here. I think we can get this wrong by doing way too much, or even a little too much. I think we
00:44:38.560 | can get this wrong by doing not enough. But I do think part of, and, I mean, you know,
00:44:48.000 | we have seen regulatory overstepping or capture just get super bad in other areas. And, you know,
00:44:56.640 | that also maybe nothing will happen. But I think it is part of our duty and our mission to
00:45:01.760 | like talk about what we believe is likely to happen and what it takes to get that right.
00:45:07.840 | The challenge, Sam, is that we have statute that is meant to protect people, protect society at
00:45:14.160 | large. What we're creating, however, is a statute that gives the government rights to go in and
00:45:21.600 | audit code, to audit business trade secrets. We've never seen that to this degree before,
00:45:31.280 | basically the California legislation that's proposed and some of the federal legislation
00:45:34.800 | that's been proposed, basically requires the federal government to audit a model, to audit
00:45:41.600 | software, to audit and review the parameters and the weightings of the model. And then you need
00:45:46.640 | their checkmark in order to deploy it for commercial or public use. And for me, it just
00:45:53.680 | feels like we're trying to rein in the government agencies out of fear, and because folks have a
00:46:03.040 | hard time understanding this and are scared about the implications of it, they want to control it.
00:46:07.760 | And the only way to control it is to say, give me a right to audit
00:46:10.800 | before you can release it. I mean, the way that the stuff is written, you read it, you're like
00:46:16.400 | going to pull your hair out because, as you know better than anyone, in 12 months none of this
00:46:20.160 | stuff's going to make sense anyway. Totally. Right. Look, the reason I have pushed for
00:46:24.000 | an agency based approach for, for, for kind of like the big picture stuff and not a,
00:46:29.440 | like write it in laws, I don't, in 12 months it will all be written wrong. And I don't think,
00:46:34.240 | even if these people were like true world experts, I don't think they could get it right
00:46:39.680 | looking at 12 or 24 months. And I don't, these policies, which is like, we're going to look at,
00:46:46.720 | you know, we're going to audit all of your source code and like, look at all of your
00:46:50.080 | weights one by one. Like, yeah, I think there's a lot of crazy proposals out there.
00:46:54.240 | By the way, especially if the models are always being retrained all the time,
00:46:57.760 | if they become more dynamic. Again, this is why I think it's, yeah. But, but like when,
00:47:01.920 | before an airplane gets certified, there's like a set of safety tests. We put the airplane through
00:47:07.280 | it. Um, and totally, it's different than reading all of your code. That's reviewing the output of
00:47:13.040 | the model, not viewing the insides of the model. And so what I was going to say is that is the kind
00:47:19.360 | of thing that I think as safety testing makes sense. How are we going to get that to happen,
00:47:25.760 | Sam? And I'm not just speaking for open AI, I speak for the industry for, for humanity,
00:47:29.760 | because I am concerned that we draw ourselves into almost like a dark ages type of era by
00:47:36.320 | restricting the growth of these incredible technologies that can prosper, that humanity
00:47:41.040 | can prosper from so significantly. How do we change the sentiment and get that to happen?
00:47:45.520 | Because this is all moving so quickly at the government levels. And folks seem to be getting
00:47:49.360 | it wrong. And just to build on that, Sam, the architectural decision, for example,
00:47:54.880 | that Llama took is pretty interesting in that it's like, we're going to let Llama grow and be as
00:48:01.600 | unfettered as possible. And we have this other kind of thing that we call Llama Guard, that's
00:48:05.840 | meant to be these protective guardrails. Is that how you see the problem being solved correctly?
00:48:10.880 | Or do you see that... At the current strength of models, definitely some things are going to go
00:48:16.720 | wrong. And I don't want to like, make light of those or not take those seriously. But I'm not
00:48:21.520 | like, I don't have any like catastrophic risk worries with a GPT-4 level model.
00:48:27.040 | And I think there's many safe ways to choose to deploy this.
00:48:31.520 | Maybe we'd find more common ground if we said that, and I like, you know, the specific example
00:48:42.400 | of models that are capable, that are technically capable, not even if they're not going to be used
00:48:47.920 | this way, of recursive self-improvement, or of, you know, autonomously designing and deploying
00:48:58.960 | a bioweapon, or something like that. Or a new model.
00:49:01.920 | That was the recursive self-improvement point. You know, we should have safety testing on the
00:49:09.280 | outputs at an international level for models that, you know, have a reasonable chance of
00:49:14.880 | posing a threat there. I don't think like GPT-4, of course, does not pose any sort of,
00:49:24.960 | well, I don't say any sort, because we don't, yeah, I don't think the GPT-4 poses a material
00:49:31.040 | threat on those kinds of things. And I think there's many safe ways to release a model like
00:49:34.560 | this. But, you know, when like significant loss of human life is a serious possibility, like
00:49:43.840 | airplanes, or any number of other examples where I think we're happy to have some sort of testing
00:49:50.000 | framework, like I don't think about an airplane when I get on it, I just assume it's going to be
00:49:53.280 | safe.
00:49:53.920 | Right, right.
00:49:55.120 | There's a lot of hand-wringing right now, Sam, about jobs. And you had a lot of, I think you
00:50:00.720 | did like some sort of a test when you were at YC about UBI, and you've been-
00:50:04.400 | Our results on that come out very soon. I just, it was a five-year study that wrapped up,
00:50:08.480 | or started five years ago. Well, there was like a beta study first, and then it was like a
00:50:13.760 | long one that ran. But-
00:50:15.200 | Well, what did you learn about that?
00:50:16.800 | Yeah, why'd you start it? Maybe just explain UBI and why you started it.
00:50:20.000 | So we started thinking about this in 2016, kind of about the same time,
00:50:26.720 | started taking AI really seriously. And the theory was that the magnitude of the change that may come
00:50:33.600 | to society, and jobs, and the economy, and sort of in some deeper sense than that,
00:50:41.360 | like what the social contract looks like, meant that we should have many studies to study many
00:50:49.040 | ideas about new ways to arrange that. I also think that, you know, I'm not like a super fan
00:50:57.680 | of how the government has handled most policies designed to help poor people. And I kind of
00:51:04.000 | believe that if you could just give people money, they would make good decisions, the market would
00:51:09.040 | do its thing. And, you know, I'm very much in favor of lifting up the floor and reducing,
00:51:15.280 | eliminating poverty. But I'm interested in better ways to do that than what we have tried for the
00:51:22.000 | existing social safety net, and kind of the way things have been handled. And I think giving
00:51:26.560 | people money is not gonna go solve all problems. It's certainly not gonna make people happy.
00:51:31.600 | But it might. It might solve some problems, and it might give people a better horizon
00:51:40.160 | with which to help themselves. And I'm interested in that. I think that now that we see some of the
00:51:47.600 | ways, so, 2016 was a very long time ago. You know, now that we see some of the ways that AI is
00:51:53.040 | developing, I wonder if there's better things to do than the traditional
00:52:00.000 | conceptualization of UBI. Like, I wonder, I wonder if the future looks something like
00:52:06.320 | more like universal basic compute than universal basic income. And everybody gets like a slice of
00:52:11.120 | GPT-7's compute, and they can use it, they can resell it, they can donate it to somebody to use
00:52:16.720 | for cancer research. But what you get is not dollars, but this, like, productivity slice. Yeah,
00:52:22.160 | you own like part of the productivity, right? I would like to shift to the gossip part of this.
00:52:27.760 | Awesome. Let's go back to November. What the flying?
00:52:33.440 | You know, I, if you have specific questions, I'm happy to maybe talk about it at some point. So
00:52:44.720 | here's the point. What happened? You were fired, you came back and it was palace intrigue. Did
00:52:51.280 | somebody stab you in the back? Did you find AGI? What's going on? This is a safe space.
00:52:57.280 | Um, I was fired. I was. I talked about coming back, I kind of was a little bit unsure at the
00:53:07.040 | moment about what I wanted to do, because I was very upset. And I realized that I really
00:53:14.080 | loved open AI and the people and that I would come back and I kind of, I knew it was going to be
00:53:21.360 | hard. It was even harder than I thought. But I kind of like, all right, fine. I agreed to come
00:53:27.600 | back. The board like, took a while to figure things out. And then, you know, we were kind of
00:53:33.840 | like, trying to keep the team together and keep doing things for our customers. And, you know,
00:53:39.760 | sort of started making other plans. Then the board decided to hire a different interim CEO. And then
00:53:45.040 | everybody, there are many people. Oh, my gosh. What was, what was that guy's name? He was there
00:53:50.720 | for like a Scaramucci, right? Like, and it's great. And I have nothing but good things to say
00:53:56.720 | about it. Um, and then where were you in the, um, when you found the news that you'd been fired?
00:54:05.120 | I was in a hotel room in Vegas for F1 weekend. I think there's a text and they're like,
00:54:11.920 | well, you're fired. Pick up. I said, I think that's happened to you before, J. Cal.
00:54:15.040 | I'm trying to think if I ever got fired. I don't think I've gotten fired.
00:54:19.040 | Um, yeah, I got a text. No, it's just a weird thing. Like it's a text from who?
00:54:22.560 | Actually, no, I got a text the night before. And then I got on a phone call with the board.
00:54:26.720 | Uh, and then that was that. And then I kind of like, I mean, then everything went crazy. I was
00:54:32.080 | like, uh, it was like, I mean, I have, my phone was like unusable. It was just a nonstop vibrating
00:54:39.840 | thing of like text messages, calls. You got fired by tweet. That happened a few times during the
00:54:45.440 | Trump administration. A few, uh, before tweeting. Um, and then like, you know, I kind of did like
00:54:53.760 | a few hours of just this like absolute fugue state, um, in the hotel room trying to, like,
00:55:01.200 | I was just confused beyond belief trying to figure out what to do. And, uh, so weird. And then like
00:55:08.800 | flew home. Maybe like, I got on a plane, like, I don't know, 3:00 PM or something like that.
00:55:14.640 | Um, still just like, you know, crazy nonstop phone blowing up, uh, met up with some people
00:55:20.000 | in person by that evening. I was like, okay, you know, I'll just like go do AGI research and was
00:55:27.520 | feeling pretty happy about the future. And yeah, you have options. And then, and then the next
00:55:33.760 | morning, uh, had this call with a couple of board members about coming back and that led to a few
00:55:40.240 | more days of craziness. And then, uh, and then it kind of, I think it got resolved. Well, it was
00:55:49.360 | like a lot of insanity in between, but what percent, what percent of it was because of these
00:55:54.320 | nonprofit board members? Um, well, we only have a nonprofit board, so it was all the nonprofit
00:56:01.200 | board members. Uh, there, the board had gotten down to six people. Um, they,
00:56:05.760 | and then they removed Greg from the board and then fired me. Um, so, but it was like,
00:56:14.720 | you know, but I mean, like, was there a culture clash between the people on the board who had
00:56:18.400 | only nonprofit experience versus the people who had startup experience?
00:56:22.560 | And maybe you can share a little bit about if you're willing to the motivation
00:56:26.160 | behind the action, anything you can, I think there's always been culture clashes at.
00:56:32.160 | Look, obviously not all of those board members are my favorite people in the world, but I have
00:56:41.840 | serious respect for the gravity with which they treat AGI and the importance of getting
00:56:52.720 | AI safety right. And even if I stringently disagree with their decision-making and actions,
00:57:01.360 | which I do, um, I have never once doubted their integrity or commitment to, um,
00:57:09.920 | the sort of shared mission of safe and beneficial AGI. Um, you know, do I think they like made good
00:57:17.680 | decisions in the process of that, or kind of know how to balance all the things OpenAI has to get
00:57:22.720 | right? No, but, but I think the, like, the intent, the intent of the magnitude of AGI and getting
00:57:34.480 | that right. I actually, let me ask you about that. So the mission of OpenAI is explicitly
00:57:39.840 | to create AGI, which I think is really interesting. A lot of people would say that
00:57:46.880 | if we create AGI, that would be like an unintended consequence of something gone
00:57:52.560 | horribly wrong. And they're very afraid of that outcome, but OpenAI makes that the actual mission.
00:57:57.760 | Does that create like more fear about what you're doing? I mean, I understand it can
00:58:03.680 | create motivation too, but how do you reconcile that? I guess, why is it a lot of, I think a lot
00:58:08.480 | of the, well, I mean, first I'll say that I'll answer the first question and the second one,
00:58:13.840 | I think it does create a great deal of fear. I think a lot of the world is understandably
00:58:18.960 | very afraid of AGI or very afraid of even current AI and very excited about it and
00:58:24.880 | even more afraid and even more excited about where it's going. And we, we wrestle with that,
00:58:33.840 | but like, I think it is unavoidable that this is going to happen. I also think it's going to be
00:58:38.960 | tremendously beneficial, but we do have to navigate how to get there in a reasonable way.
00:58:44.000 | And like a lot of stuff is going to change and changes, you know, pretty, pretty uncomfortable
00:58:49.520 | for people. So there's a lot of pieces that we got to get right. Can I ask a different question?
00:58:58.240 | You, you have created, I mean, it's the hottest company and you are literally at the center of
00:59:06.240 | the center of the center, but then it's so unique in the sense that all of this value you eschewed
00:59:14.960 | economically. Can you just like walk us through like, yeah, I wish I had taken, I wish I had
00:59:19.760 | taken equity. So I never had to answer this question. If I could go back and give you a
00:59:24.240 | grant now, why doesn't the board just give you a big option grant like you deserve?
00:59:28.400 | Yeah. Give you five points. What was the decision back then? Like, why was that so important?
00:59:32.560 | The decision back then, the, the original reason was just like the structure of our
00:59:36.880 | nonprofit. It was like, there was something about, yeah, okay. This is like, nice from a motivation
00:59:44.080 | perspective, but mostly it was that our board needed to be a majority of disinterested directors.
00:59:49.280 | And I was like, that's fine. I don't need equity right now. I kind of,
00:59:53.360 | but like, but now that you're running a company, yeah, it creates these weird questions of like,
01:00:00.880 | well, what's your real motivation for us?
01:00:02.720 | That's, yeah. One thing I have noticed is that it's so deeply unimaginable to
01:00:10.880 | people to say, I don't really need more money. Like, and I think, I think people think it's
01:00:17.840 | a little bit of an ulterior motive. I think, well, yeah, yeah, yeah. No, so it assumes
01:00:21.600 | it's like something you're doing on the side to make money. If I were, if I were just trying to say like,
01:00:26.160 | I'm going to try to make a trillion dollars with OpenAI. I think everybody would have an easier
01:00:32.960 | time. And it would save me a lot of conspiracy theories.
01:00:32.960 | This is totally the back channel. You are a great deal maker. I've watched your whole career. I mean,
01:00:40.480 | you're just great at it. You got all these connections, you're really good at raising
01:00:45.360 | money. You're fantastic at it. And you got this Johnny Ive thing going, you're in Humane,
01:00:51.600 | you're investing in companies, you got the Orb, you're raising $7 trillion to build
01:00:56.800 | fabs, all this stuff, all of that put together. I'm kind of being a little facetious here,
01:01:04.000 | you know, obviously, it's not, you're not raising $7 trillion. But maybe that's the
01:01:06.880 | market cap or something. Putting all that aside, the tea was, you're doing all these deals,
01:01:12.560 | they don't trust you, because, what's your motivation, you're end running, and what
01:01:18.080 | opportunities belong inside of OpenAI, what opportunities should be Sam's, and this group of
01:01:23.280 | nonprofit people didn't trust you? Is that what happens? So the things like, you know, device
01:01:28.400 | companies, or if we were doing some chip fab company, it's like, those are not Sam projects,
01:01:32.720 | those would be like OpenAI's, OpenAI would get that equity. Okay, that's not the public perception.
01:01:38.320 | Well, that's not like, kind of the people like you who have to like commentate on this stuff
01:01:43.040 | all day is perception, which is fair, because we haven't announced this stuff, because it's not
01:01:46.000 | done. I don't think most people in the world like are thinking about this. But I agree, it spins up
01:01:52.640 | a lot of conspiracies, conspiracy theories in like tech commentators. Yeah. And if I could go back,
01:01:59.920 | yeah, I would just say like, let me take equity and make that super clear. And then I'd be like,
01:02:04.240 | all right, like, I'd still be doing it because I really care about AGI and think this is like
01:02:08.240 | the most interesting work in the world. But it would at least type check to everybody.
01:02:12.000 | What's the chip project, the $7 trillion, and where did the $7 trillion number come from?
01:02:17.200 | I don't know where that came from. Actually, I genuinely don't. I think,
01:02:21.680 | I think the world needs a lot more AI infrastructure, a lot more
01:02:25.120 | than it's currently planning to build and with a different cost structure.
01:02:29.040 | The exact way for us to play there is, we're still trying to figure that out.
01:02:35.680 | What's your preferred model of organizing OpenAI? Is it
01:02:39.600 | sort of like the move fast, break things highly distributed small teams? Or is it more of this
01:02:47.440 | organized effort where you need to plan because you want to prevent some of these edge cases?
01:02:52.080 | Um, oh, I have to go in a minute. It's not because,
01:02:55.360 | it's not to prevent the edge cases that we need to be more organized. But it is that these
01:03:02.640 | systems are so complicated and concentrating bets is so important. Like one, you know,
01:03:10.400 | at the time before it was like obvious to do this, you have like DeepMind or whatever has
01:03:14.960 | all these different teams doing all these different things. And they're spreading their
01:03:17.920 | bets out. And you had OpenAI say, we're going to, like, basically have the whole company work
01:03:21.680 | together to make GPT-4. And that was like unimaginable for how to run an AI research lab.
01:03:27.840 | But it is, I think what works at a minimum, it's what works for us. So not because we're
01:03:33.040 | trying to prevent edge cases, but because we want to concentrate resources and do these
01:03:36.640 | like big, hard, complicated things. We do have a lot of coordination on what we work on.
01:03:41.120 | All right, Sam, I know you got to go. You've been great on the hour. Come back anytime.
01:03:46.240 | Great talking to you guys. We've been talking about it for like a year plus.
01:03:52.480 | I'm really happy it finally happened. Yeah, it's awesome. I really appreciate it.
01:03:55.040 | I would love to come back on after our next like major launch and I'll be able to
01:03:58.560 | talk more directly about something. Yeah, you got the Zoom link. Same Zoom
01:04:01.600 | link every week. Just same time, same Zoom link. Drop in anytime. Just drop it.
01:04:05.760 | You put it on your account. Come back to the game.
01:04:08.160 | Come back to the game. Yeah, come back to the game.
01:04:10.960 | I, you know, I would love to play poker. It has been forever. That would be a lot of fun.
01:04:15.520 | Yeah, that famous hand where Chamath, when you and I were heads up, when you, you had-
01:04:19.440 | I don't, rely on me?
01:04:20.400 | You and I were heads up and you went all in. I had a set, but there was a straight and a flush
01:04:26.880 | on the board. And I'm in the tank trying to figure out if I want to lose. This is back when we played
01:04:31.600 | small stakes. It might've been like 5k pot or something. And then Chamath can't stay out of
01:04:36.960 | the pot. And he starts taunting the two of us. You should call. You shouldn't call. He's bluffing.
01:04:41.920 | And I'm like, Chamath, I'm going, I'm trying to figure out if I make the call here. I make the
01:04:45.840 | call. And, uh, it was like, uh, you had a really good hand and I just happened to have a set. I
01:04:51.200 | think you had like top pair, top kicker or something, but you made a great move because
01:04:54.880 | the board was so textured. Almost like a bottom set.
01:04:57.120 | Sam has a great style of playing, which I would call random jam.
01:05:00.240 | Totally. You got to just get out of the way.
01:05:02.000 | Chamath, I don't really know if you, I don't, I don't know if you can say that about anybody.
01:05:04.320 | I don't, I don't, I'm not going to.
01:05:07.360 | You haven't seen Chamath play in the last 18 months. It's a lot different.
01:05:10.000 | Come down to the game. It's so much fun now.
01:05:13.440 | Have you played Bomb Pots before? Have you played Bomb Pots in this game?
01:05:17.920 | I don't know what that is.
01:05:18.800 | Okay. You'll love it.
01:05:19.840 | This game is nuts. It's PLO, but you do two boards. It's nuts on everything. Honestly.
01:05:26.480 | Thank you, Chamath.
01:05:26.880 | Thanks for coming on and see you guys.
01:05:28.480 | Love to have you back when the next, after the big launch.
01:05:30.400 | Sounds good.
01:05:30.720 | Yeah, please do.
01:05:31.520 | Cool. Bye.
01:05:32.080 | Gentlemen, some breaking news here. All those projects, he said, are part of OpenAI. That's
01:05:38.960 | something people didn't know before this and a lot of confusion there.
01:05:41.840 | Chamath, what was your major takeaway from our hour with Sam?
01:05:46.080 | I think that these guys are going to be one of the four major companies that matter in this
01:05:53.440 | whole space. I think that that's clear. I think what's still unclear is where is the economics
01:05:59.280 | going to be. He said something very discreet, but I thought was important, which is, I think he
01:06:03.840 | basically, my interpretation is these models will roughly all be the same, but there's going to be
01:06:09.600 | a lot of scaffolding around these models that actually allow you to build these apps.
01:06:15.120 | In many ways, that is like the open source movement. Even if the model itself is never
01:06:20.240 | open source, it doesn't much matter because you have to pay for the infrastructure, right? There's
01:06:24.960 | a lot of open source software that runs on Amazon. You still pay AWS something. I think the right way
01:06:30.480 | to think about this now is the models will basically be all really good. Then it's all
01:06:38.400 | this other stuff that you'll have to pay for. The interface.
01:06:41.200 | Whoever builds all this other stuff is going to be in a position to build a really good business.
01:06:47.920 | Freeberg, he talked a lot about reasoning. It seemed like he kept going to reasoning,
01:06:51.840 | away from the language model. Did you note that, and anything else that you noted in our
01:06:56.080 | hour with Sam? Yeah, I mean, that's a longer conversation because there is a lot of talk
01:06:59.600 | about language models eventually evolving to be so generalizable that they can resolve
01:07:06.400 | pretty much like all intelligent function. The language model is the foundational model that
01:07:12.960 | yields AGI. I think there's a lot of people that are different schools of thought on this.
01:07:18.480 | My other takeaway, I think what he also seemed to indicate is there's so many... We're all so
01:07:30.000 | enraptured by LLMs, but there's so many things other than LLMs that are being baked and rolled out
01:07:36.160 | by him and by other groups. I think we have to pay some amount of attention to all those because
01:07:41.120 | that's probably where... I think, Freeberg, you tried to go there in your question. That's where
01:07:44.640 | reasoning will really come from is this mixture of experts approach. You're going to have to think
01:07:49.680 | multi-dimensionally to reason. We do that. Do I cross the street or not at this point in time?
01:07:55.760 | You reason based on all these multi-inputs. There's all these little systems that go into
01:08:00.720 | making that decision in your brain. If you use that as a simple example, there's all this stuff
01:08:05.840 | that has to go into making some experience being able to reason intelligently.
01:08:11.280 | Sax, you went right there with the corporate structure, the board, and he gave us a lot
01:08:18.960 | more information here. What are your thoughts on the chip stuff and the other stuff he's working on?
01:08:25.520 | That's all part of OpenAI. People just don't realize it in that moment. Then your questions
01:08:31.040 | to him about equity. Your thoughts on... I'm not sure I was the main guy who
01:08:36.000 | asked that question, J. Cal. Well, no. You did talk about the non-profit,
01:08:41.040 | the difference between the non-profit... Well, I had a follow-up question about...
01:08:44.000 | That's what I'm talking about. There clearly was some sort of culture clash
01:08:46.480 | on the board between the people who originated from the non-profit world and the people who
01:08:51.040 | came from the startup world. The tech side, yeah.
01:08:52.400 | We don't really know more than that, but there clearly was some sort of culture clash.
01:08:55.440 | I thought a couple of the other areas that he drew attention to that were kind of interesting
01:09:00.560 | is he clearly thinks there's a big opportunity on mobile that goes beyond just having a ChatGPT app
01:09:07.680 | on your phone or maybe even having a Siri on your phone. There's clearly something bigger there. He
01:09:13.280 | doesn't know exactly what it is, but it's going to require more inputs. It's that personal assistant
01:09:19.040 | that's seeing everything around you and helping you. I think that's a great insight, David,
01:09:23.360 | because he was talking about, "Hey, I'm looking for a senior team member who can push back on
01:09:29.120 | me and understands all contexts." I thought that was very interesting to think about.
01:09:32.640 | Yeah, he's talking about an executive assistant or an assistant that has executive function as
01:09:38.560 | opposed to being just an alter ego for you or what he called a sycophant. That's kind of
01:09:43.600 | interesting. I thought that was interesting, yeah.
01:09:46.000 | Clearly, he thinks there's a big opportunity in biology and scientific discovery.
01:09:50.080 | After the break, I think we should talk about AlphaFold 3. It was just announced today.
01:09:53.120 | Yeah, let's do that, and we can talk about the Apple ad in depth. I just want to also make sure
01:09:57.360 | people understand when people come on the pod, we don't show them questions. They don't edit the
01:10:01.120 | transcript. Nothing is out of bounds. If you were wondering why I didn't ask or we didn't ask about
01:10:06.720 | the Elon lawsuit, he's just not going to be able to comment on that, so it'll be a no comment.
01:10:11.760 | We're not hearing it. Our time was limited, and there's a lot of questions that we could ask him
01:10:15.760 | that would have just been a waste of time. Frankly, he's already been asked, so I just
01:10:19.120 | want to make sure people understand that. Yeah, of course, he's going to no comment
01:10:21.120 | on any lawsuit, and he's already been asked about that 500 times.
01:10:24.480 | All right. Should we take a quick break before we come back?
01:10:27.440 | Yeah, let's take a bio break, and then we'll come back with some news for you and some more banter
01:10:31.120 | with your favorite besties on the number one podcast in the world, The All-In Podcast.
01:10:38.240 | All right. Welcome back, everybody. Second half of the show, great guest, Sam Altman. Thanks for
01:10:42.080 | coming on the pod. We've got a bunch of news on the docket, so let's get started. Freyberg,
01:10:48.000 | you told me I could give some names of the guests that we've booked for the All-In Summit.
01:10:53.600 | I did not.
01:10:54.560 | You did. You've said each week, every week that I get to say some names.
01:10:58.960 | I did not. I appreciate your interest in the All-In Summit's lineup, but we do not yet have
01:11:06.000 | enough critical mass to feel like we should go out there.
01:11:09.440 | Well, I am a loose cannon, so I will announce my two guests, and I created the summit,
01:11:16.160 | and you took it from me, so I've done a great job. I will announce my guests. I don't care
01:11:20.480 | what your opinion is. I have booked two guests for the summit, and it's going to be sold out.
01:11:25.920 | Look at these two guests I've booked. For the third time coming back to the summit,
01:11:29.840 | our guy Elon Musk will be there, hopefully in person, if not from 40,000 feet on Starlink
01:11:35.200 | Connection, wherever he is in the world, and for the first time, our friend Mark Cuban will be
01:11:41.360 | coming, and so two great guests for you to look forward to, but Freyberg's got like 1,000 guests
01:11:46.560 | coming. He'll tell you when it's like 48 hours before the conference, but yeah, two great guests.
01:11:51.120 | Wait, speaking of billionaires who are coming, isn't [censored] coming too?
01:11:54.400 | Yes, [censored] coming. Yes, he's booked.
01:11:56.160 | So we have three billionaires.
01:11:58.000 | Three billionaires, yes.
01:11:59.200 | [Censored] hasn't fully confirmed, so don't...
01:12:01.120 | Okay, well, we're going to say it anyway. [Censored] has penciled in.
01:12:03.840 | Don't say it, don't back out.
01:12:04.880 | We'll say penciled. Yeah, don't back out.
01:12:07.200 | This is going to be catnip for all these protest organizers. Like, if you've got one place...
01:12:11.760 | Do not poke the bear.
01:12:13.200 | Well, by the way, speaking of updates, what did you guys think of the bottle for the all-in tequila?
01:12:19.120 | Oh, beautiful. Honestly, I will just say, I think you are doing a marvelous job. That,
01:12:24.640 | I was shocked at the design. Shocked meaning it is so unique and high quality. I think
01:12:33.680 | it's amazing. It would make me drink tequila.
01:12:36.400 | You're going to. You're going to want to.
01:12:39.280 | It is stunning. Just congratulations. And yeah, it was just... When we went through the deck at the
01:12:47.760 | monthly meeting, it was like, "Oh, that's nice. Oh, that's nice. We're going to do the concept
01:12:51.840 | bottles." And then that bottle came up and everybody went like crazy. It was like somebody
01:12:56.560 | hitting like a... Steph Curry hitting a half-court shot. It was like, "Oh my God!" It was just so
01:13:01.440 | clear that you've made an iconic bottle that if we can produce it, oh, Lord, it is going to be...
01:13:09.280 | Looks like we can.
01:13:10.000 | It's going to be amazing.
01:13:13.040 | I'm excited. I'm excited for it. It's like...
01:13:14.960 | I mean, the bottle design is so complicated that we had to do a feasibility analysis on whether
01:13:18.560 | it was actually manufacturable, but it is. Or at least the early reports are good. So, we're going
01:13:23.760 | to... Hopefully, we'll have some made in time for the all-in summit.
01:13:27.360 | I mean, why not? Sounds great.
01:13:30.080 | I mean, it's great. When we get barricaded in by all these protesters, we can drink the tequila.
01:13:34.080 | Did you guys see Peter Thiel? Peter Thiel got barricaded by these ding-dongs at Cambridge.
01:13:40.480 | My God.
01:13:41.280 | Listen, people have the right to protest. I think it's great people are protesting, but
01:13:44.800 | surrounding people and threatening them is a little bit over the top and dangerous.
01:13:48.880 | I think you're exaggerating what happened.
01:13:50.960 | Well, I don't know exactly what happened because all we see is these videos.
01:13:54.000 | Look, they're not threatening anybody. And I don't even think they tried to barricade
01:13:57.600 | him in. They were just outside the building. And because they were blocking the driveway,
01:14:02.720 | his car couldn't leave. But he wasn't physically locked in the building or something.
01:14:09.280 | Yeah, that's what the headlines say, but that could be fake news, fake social.
01:14:13.200 | Yeah.
01:14:13.520 | This was not on my bingo card. This pro-protest support by Sachs was not on the bingo card,
01:14:19.840 | I got to say. I didn't see it coming.
01:14:22.560 | The Constitution of the United States in the First Amendment provides for the right
01:14:26.960 | of assembly, which includes protest and sit-ins as long as they're peaceable.
01:14:32.000 | Now, obviously, if they go too far and they vandalize or break into buildings or use violence,
01:14:38.000 | then that's not peaceable. However, expressing sentiments with which you disagree does not make
01:14:43.840 | it violent. And there's all these people out there now making the argument that if you hear
01:14:49.680 | something from a protester that you don't like, and you subjectively experience that as a threat
01:14:56.880 | to your safety, then that somehow should be treated as valid. That's basically violent.
01:15:03.040 | Well, that's not what the Constitution says. And these people understood well just a few
01:15:10.000 | months ago that that was basically snowflakery. Just because somebody...
01:15:14.320 | Snowflakery, peaceable. We're getting a lot of these great words.
01:15:17.520 | We have the rise of the woke right now, where they're buying into...
01:15:20.320 | The woke right.
01:15:20.880 | Yeah, the woke right. They're buying into this idea of safetyism, which is
01:15:24.080 | being exposed to ideas you don't like, to protests you don't like, is a threat to your safety. No,
01:15:28.240 | it's not. Even if they're saying things you don't like, we absolutely have snowflakery
01:15:33.280 | on both sides now.
01:15:34.560 | It's ridiculous. The only thing I will say that I've seen is this surrounding individuals who
01:15:42.480 | you don't want there, and locking them in a circle and then moving them out of the protest area.
01:15:47.760 | That's not cool.
01:15:48.480 | Yeah, obviously you can't do that. But look, I think that most of the protests on most of
01:15:52.880 | the campuses have not crossed the line. They've just occupied the lawns of these campuses.
01:15:57.200 | And look, I've seen some troublemakers try to barge through the encampments and claim that
01:16:04.960 | because they can't go through there, that somehow they're being prevented from going to class. Look,
01:16:09.920 | you just walk around the lawn, and you can get to class, okay? And some of these videos are showing
01:16:16.480 | that these are effectively right-wing provocateurs who are engaging in left-wing tactics. And
01:16:23.200 | I don't support it either way.
01:16:25.600 | By the way, some of these camps are some of the funniest things you've ever seen. It's like,
01:16:29.600 | there's like one tent that's dedicated to a reading room, and you go in there and there's,
01:16:35.520 | like, this, like, mindfulness center. Oh my god, it's unbelievably hilarious.
01:16:40.080 | Look, there's no question that because the protests are originating on the left,
01:16:43.760 | that there's some goofy views. Like, you're dealing with like a left-wing
01:16:47.680 | idea complex, right? And it's easy to make fun of them doing different things. But
01:16:53.920 | the fact of the matter is that most of the protests on most of these campuses are,
01:16:58.240 | even though they can be annoying because they're occupying part of the lawn, they're not violent.
01:17:04.400 | And, you know, the way they're being cracked down on, they're sending the police in at 5 a.m.
01:17:07.840 | to crack down on these encampments with batons and riot gear. And I find that
01:17:14.240 | part to be completely excessive.
01:17:15.840 | Well, it's also dangerous because, you know, things can escalate when you have mobs of people
01:17:20.960 | and large groups of people. So, I just want to make sure people understand that large group of
01:17:24.720 | people, you have a diffusion of responsibility that occurs when there's large groups of people
01:17:29.440 | who are passionate about things, and people can get hurt. People have gotten killed at these
01:17:33.520 | things. So, just, you know, keep it calm, everybody. I agree with you. Like, what's the
01:17:37.680 | harm of these folks protesting on a lawn? It's not a big deal. When they break into buildings,
01:17:41.840 | of course.
01:17:42.240 | Yeah, that crosses the line, obviously.
01:17:43.760 | Yeah. But I mean, let them sit out there, and then they'll run out their food cards,
01:17:47.040 | their campus food card, and they'll run out of waffles.
01:17:50.800 | Did you guys see the clip? I think it was on the University of Washington campus where
01:17:54.880 | one kid challenged this Antifa guy to a pushup contest.
01:17:59.040 | Oh, fantastic.
01:18:01.680 | I mean, it is some of the funniest stuff. Some content is coming out that's just hilarious.
01:18:06.480 | My favorite was the woman who came out and said that the Columbia students needed humanitarian aid.
01:18:11.360 | Oh, my God. The overdubs on her were hilarious.
01:18:14.960 | I was like, "Humanitarian aid?" I was like, "We need our door dash. Right now,
01:18:20.480 | we need to double dash some boba, and we can't get it through the police. We need our boba."
01:18:25.360 | Low sugar boba, with the popping boba bubbles. It wasn't getting in.
01:18:30.720 | But, you know, people have the right to protest.
01:18:32.240 | Peaceable, by the way. There's a word I've never heard very good,
01:18:36.000 | Sax. "Peaceable," inclined to avoid argument or violent conflict. Very nice.
01:18:41.040 | Well, it's in the Constitution. It's in the First Amendment.
01:18:43.040 | Is it really? I haven't heard the word "peaceable" before. I mean, you and I are
01:18:46.720 | simpatico on this. We used to have the ACLU backing up the KKK going down Main Street and
01:18:55.840 | really fighting for—
01:18:56.400 | Yeah, the Skokie decision.
01:18:57.680 | Yeah, they were really fighting for—and I have to say, the Overton window is opened back up,
01:19:03.200 | and I think it's great. All right, we got some things on the docket here. I don't know if you
01:19:06.720 | guys saw the Apple new iPad ad. It's getting a bunch of criticism. They used some giant
01:19:12.000 | hydraulic press to crush a bunch of creative tools. DJ turntable, trumpet, piano. People
01:19:20.240 | really care about Apple's ads and what they represent. We talked about that Mother Earth
01:19:26.480 | little vignette they created here. What do you think, Freeberg? Did you see the ad? What was
01:19:30.240 | your reaction to it?
01:19:30.960 | It made me sad. It did not make me want to buy an iPad, so it did not seem like a good—
01:19:36.720 | It made you sad? It actually elicited an emotion? Meaning, like, commercials—it's very rare that
01:19:42.240 | commercials can actually do that. Most people just zone out.
01:19:44.560 | Yeah, they took all this beautiful stuff and hurt it. It didn't feel good. I don't know. It just
01:19:48.960 | didn't seem like a good ad. I don't know why they did that. I don't get it. I don't know.
01:19:53.120 | I think maybe what they're trying to do is—the selling point of this new iPad is that it's the
01:19:57.840 | thinnest one. I mean, there's no innovation left, so they're just making the devices thinner.
01:20:03.040 | So I think the idea was that they were going to take this hydraulic press to represent
01:20:08.720 | how ridiculously thin the new iPad is. Now, I don't know if the point there was to smush all
01:20:15.440 | of that good stuff into the iPad. I don't know if that's what they were trying to convey. But yeah,
01:20:20.640 | I think that by destroying all those creative tools that Apple is supposed to represent,
01:20:27.120 | it definitely seemed very off-brand for them. I think people were reacting to the fact that
01:20:32.720 | it was so different than what they would have done in the past. Of course, everyone was saying,
01:20:37.440 | "Well, Steve would never have done this." I do think it did land wrong. I mean, I didn't care
01:20:43.200 | that much, but I was kind of asking the question, like, why are they destroying all these creator
01:20:48.960 | tools that they're renowned for creating or for turning into the digital version?
01:20:55.120 | Yeah, it just didn't land. I mean, Chamath, how are you doing emotionally after seeing that?
01:21:01.920 | Are you okay, buddy?
01:21:03.920 | Yeah. I think this is — you guys see that in the Berkshire annual meeting last weekend,
01:21:13.760 | Tim Cook was in the audience, and Buffett was very laudatory. "This is an incredible company."
01:21:20.560 | But he's so clever with words. He's like, "You know, this is an incredible business
01:21:25.360 | that we will hold forever, most likely." And then it turns out that he sold $20 billion
01:21:32.720 | worth of Apple shares.
01:21:33.600 | A caveat. We're going to hold it forever.
01:21:36.720 | Which, by the way —
01:21:37.680 | Sell, sell.
01:21:38.560 | If you guys remember, we put that little chart up which shows when he doesn't mention it in the
01:21:43.280 | annual letter. It's basically, like, it's foreshadowing the fact that he is just pounding
01:21:48.160 | the sell. And he sold $20 billion.
01:21:50.800 | Well, also, holding it forever could mean one share.
01:21:53.600 | Yeah, exactly.
01:21:55.280 | We kind of need to know, like, how much are we talking about?
01:21:57.920 | I mean, it's an incredible business that has so much money with nothing to do,
01:22:02.960 | they're probably just going to buy back the stock. Just a total waste.
01:22:06.320 | There were floating this rumor of buying Rivian, you know, after they shut down
01:22:09.600 | Titan Project, the internal project to make a car. It seems like a car is the only thing
01:22:13.440 | people can think of that would move the needle in terms of earnings.
01:22:17.200 | I think the problem is, J. Cal, like, you kind of become afraid of your own shadow.
01:22:20.640 | Meaning, the folks that are really good at M&A, like, you look at Benioff.
01:22:24.960 | The thing with Benioff's M&A strategy is that he's been doing it for 20 years.
01:22:30.160 | And so, he's cut his teeth on small acquisitions.
01:22:34.320 | And the market learns to give him trust, so that when he proposes, like,
01:22:38.400 | the $27 billion Slack acquisition, he's allowed to do that.
01:22:42.240 | Another guy, you know, Nikesh Arora at Palo Alto Networks. These last five years,
01:22:46.640 | people were very skeptical that he could actually roll up security because it was
01:22:49.840 | a super fragmented market. He's gotten permission.
01:22:53.040 | Then there are companies like Danaher that buy hundreds of companies.
01:22:56.080 | So, all of these folks are examples of, you start small and you earn the right to do more.
01:23:01.360 | Apple hasn't bought anything more than $50 or $100 million.
01:23:04.560 | And so, the idea that all of a sudden, they come out of the blue and buy a
01:23:08.160 | $10, $20 billion company, I think is just totally doesn't stand logic.
01:23:13.200 | It's just not possible for them because they'll be so afraid of their own shadow.
01:23:16.320 | That's the big problem. It's themselves.
01:23:18.000 | Well, if you're running out of in-house innovation and you can't do M&A,
01:23:23.600 | then your options are kind of limited.
01:23:25.280 | I mean, I do think that the fact that the big news out of Apple is the iPad's getting thinner
01:23:31.040 | does represent kind of the end of the road in terms of innovation.
01:23:34.080 | It's kind of like when they added the third camera to the iPhone.
01:23:38.080 | Yeah.
01:23:38.800 | It reminds me of those, remember like when the Gillette?
01:23:41.920 | Yeah, they did the five.
01:23:42.320 | Mach 3 came out and then they did the five.
01:23:43.680 | It was the best Onion thing. It was like, "We're doing five. Eff it.
01:23:46.240 | We're doing five."
01:23:47.920 | But then Gillette actually came out with the Mach 5.
01:23:49.920 | So, like the parody became the reality. What are they going to do? Add two more
01:23:52.960 | cameras to the iPhone? You have five cameras on it?
01:23:55.520 | No, it makes no sense. And then, I don't know anybody wants to,
01:23:59.360 | remember the Apple Vision was like going to change everything?
01:24:01.760 | Plus, why are they body shaming the fat iPads?
01:24:04.560 | That's a fair point. Actually, you know what? Actually, this didn't come out yet,
01:24:11.120 | but it turns out the iPad is on Ozempic. It's actually dropped.
01:24:14.720 | That would have been a funnier ad.
01:24:16.880 | Yeah.
01:24:17.200 | Yeah, exactly.
01:24:18.000 | O, O, O, Ozempic. We could just workshop that right here.
01:24:23.040 | But there was another funny one, which was making the iPhone smaller and smaller and
01:24:26.320 | smaller and the iPod smaller and smaller and smaller to the point it was like a thumb-sized
01:24:30.480 | iPhone.
01:24:31.040 | Like the Ben Stiller phone in Zoolander?
01:24:33.360 | Yes. Correct.
01:24:34.960 | Yeah.
01:24:38.080 | That was a great scene.
01:24:39.440 | Is there a category that you can think of that you would love an Apple product for?
01:24:45.040 | There's a product in your life that you would love to have Apple's version of it.
01:24:50.960 | They killed it. I think a lot of people would be very open-minded to an Apple car.
01:24:56.320 | Okay.
01:24:56.720 | They just would. It's a connected internet device, increasingly so.
01:25:00.960 | Yeah.
01:25:01.680 | And they managed to flub it. They had a chance to buy Tesla. They managed to flub it.
01:25:07.920 | Yeah.
01:25:08.720 | Right? There are just too many examples here where these guys have so much money
01:25:12.240 | and not enough ideas. That's a shame.
01:25:14.080 | It's a bummer, yeah. The one I always wanted to see them do, Sax, was—
01:25:19.040 | The one I always wanted to see them do was the TV, and they were supposedly working on it, like
01:25:23.680 | the actual TV, not the little Apple TV box in the back. I think that would have been
01:25:27.360 | extraordinary to actually have a gorgeous big television.
01:25:31.920 | What about a gaming console? They could have done that. There's just all these things that
01:25:36.480 | they could have done. It's not a lack of imagination because these aren't exactly
01:25:41.920 | incredibly world-beating ideas. They're sitting right in front of your face. It's just the will
01:25:46.800 | to do it.
01:25:47.280 | Yeah.
01:25:48.320 | The all-in-one TV would have been good.
01:25:51.680 | If you think back on Apple's product lineup over the years, where they've really created
01:25:57.200 | value is on how unique the products are. They almost create new categories. Sure,
01:26:02.320 | there may have been a "tablet computer" prior to the iPad, but the iPad really defined the
01:26:06.640 | tablet computer era. Sure, there was a smartphone or two before the iPhone came along, but it
01:26:11.200 | really defined the smartphone. And sure, there was a computer before the Apple II, and then it
01:26:15.680 | came along and it defined the personal computer. In all these cases, I think Apple strives to
01:26:20.720 | define the category. So it's very hard to define a television, if you think about it,
01:26:24.880 | or a gaming console in a way that you take a step up and you say, "This is the new thing.
01:26:29.280 | This is the new platform."
01:26:30.400 | I don't know. That's the lens I would look at if I'm Apple in terms of, "Can I redefine a car?
01:26:36.080 | Can I make...?" We're all trying to fit them into an existing product bucket.
01:26:39.760 | But I think what they've always been so good at is identifying consumer needs
01:26:43.360 | and then creating an entirely new way of addressing that need in a real step change function.
01:26:47.920 | From the iPod, it was so different from any MP3 player ever.
01:26:52.720 | I think the reason why the car could have been completely reimagined by Apple is that they have
01:26:57.040 | a level of credibility and trust that I think probably no other company has, and absolutely
01:27:03.120 | no other tech company has. And we talked about this, but I think this was the third Steve Jobs
01:27:10.320 | story that I left out. But in 2000, and I don't know, was it one? I launched a 99-cent download
01:27:20.400 | store at Winamp. I think I've told you this story. And Steve Jobs just ran total circles around us.
01:27:28.480 | But the reason he was able to is he had all the credibility to go to the labels and get deals done
01:27:33.520 | for licensing music that nobody could get done before. I think that's an example of what Apple's
01:27:38.240 | able to do, which is to use their political capital to change the rules. So if the thing
01:27:43.360 | that we would all want is safer roads and autonomous vehicles, there are regions in
01:27:48.640 | every town and city that could be completely converted to level five autonomous zones.
01:27:54.560 | If I had to pick one company that had the credibility to go and change those rules,
01:27:58.640 | it's them. Because they could demonstrate that there was a methodical, safe approach to doing
01:28:04.480 | something. And so the point is that even in these categories that could be totally reimagined,
01:28:09.600 | it's not for a lack of imagination. Again, it just goes back to a complete lack of will. And
01:28:13.840 | I understand because if you had $200 billion of capital on your balance sheet,
01:28:19.920 | I think it's probably pretty easy to get fat and lazy.
01:28:23.120 | Yeah, it is. And they want to have everything built there. People don't remember, but they
01:28:28.160 | actually built one of the first digital cameras. You must have owned this, right, Friedberg?
01:28:31.520 | Oh, I remember this. Yeah, totally.
01:28:33.760 | It was beautiful. What did they call it? Was it the iCamera or something?
01:28:36.800 | QuickTake.
01:28:37.760 | QuickTake.
01:28:38.720 | QuickTake, yeah. The thing I would like to see Apple build, and I'm surprised they didn't,
01:28:42.960 | was a smart home system the way Google has Nest. A Dropcam, a door lock, you know, an AV system,
01:28:52.560 | go after Crestron or whatever, and just have your whole home automated. Thermostat, Nest,
01:28:58.240 | all of that would be brilliant by Apple. And right now I'm an Apple family that has
01:29:03.840 | all of our home automation through Google. So it's just, it kind of sucks. I would like that
01:29:08.800 | all to be integrated. Actually, that would be pretty amazing. Like if they did a Crestron or
01:29:11.760 | Savant. Because then when you just go to your Apple TV, all your cameras just work. You don't
01:29:15.840 | need to. Yes. That's the, that, I mean, and everybody has a home, and everybody automates
01:29:21.680 | their home. So just think. Well, everyone has Apple TV at this point. So you just make Apple TV
01:29:25.840 | the brain for the home system. Right. That would be your hub. And you can connect your phone to it,
01:29:32.080 | and then, yes, that would be very nice. Yeah. Like, can you imagine like the ring cameras,
01:29:37.600 | all that stuff being integrated? I don't know why they didn't go after that. That seems like
01:29:40.560 | the easy layup. Hey, you know, everybody's been talking, Friedberg, about this alpha fold,
01:29:47.920 | this folding proteins. And there's some new version out from Google. And also Google reportedly,
01:29:55.920 | we talked about this before, is also advancing talks to acquire HubSpot. So that rumor for
01:30:00.560 | the $30 billion market cap HubSpot is out there as well. Friedberg, you, as our resident science
01:30:06.560 | sultan, our resident sultan of science, and as a Google alumnus, pick either story and let's go for
01:30:14.320 | it. Yeah, I mean, I'm not sure there's much more to add on the HubSpot acquisition rumors. They
01:30:17.680 | are still just rumors. And I think we covered the topic a couple of weeks ago. But I will say that
01:30:22.000 | AlphaFold 3 that was just announced today and demonstrated by Google is a real, I would say,
01:30:29.520 | breathtaking moment for biology, for bioengineering, for human health, for medicine.
01:30:35.200 | And maybe I'll just take 30 seconds to kind of explain it. You remember when they introduced
01:30:40.640 | AlphaFold, AlphaFold 2, we talked about how DNA codes for proteins. So every three letters of DNA
01:30:47.280 | codes for an amino acid. So a string of DNA codes for a string of amino acids. And that's called a
01:30:54.240 | gene that produces a protein. And that protein is basically a long chain. Like, think about beads: there's
01:31:00.080 | 20 different types of beads, 20 different amino acids that can be strung together. And what
01:31:04.880 | happens is that necklace, that bead necklace basically collapses on itself. And all those
01:31:09.280 | little beads stick together with each other in some complicated way that we can't deterministically
01:31:13.520 | model. And that creates a three dimensional structure, which is called a protein, that
01:31:17.920 | molecule. And that molecule does something interesting, it can break apart other molecules,
01:31:22.720 | it can bind molecules, it can move molecules around. So it's basically the machinery of chemistry, of
01:31:28.160 | biochemistry. And so proteins are what is encoded in our DNA. And then the proteins do all the work
01:31:34.000 | of making living organisms. So Google's AlphaFold project took three dimensional images of proteins,
01:31:40.160 | and the DNA sequence that codes for those proteins. And then they built a predictive model
01:31:44.640 | that predicted the three dimensional structure of a protein from the DNA that codes for it.
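To make the codon step described above concrete, here is a minimal sketch, not from the episode, of the "three DNA letters map to one amino acid bead" idea. The codon table is deliberately truncated for illustration (real translation covers all 64 codons, for example via Biopython's Seq.translate), and this only covers the sequence step; the hard part AlphaFold addresses is predicting how that string of beads folds into a three-dimensional structure.

```python
# Toy illustration of DNA codons -> amino acid "beads"; the table is truncated on purpose.
CODON_TABLE = {
    "ATG": "M",  # methionine (the start codon)
    "TTT": "F",  # phenylalanine
    "GGC": "G",  # glycine
    "GCT": "A",  # alanine
    "TGC": "C",  # cysteine
    "TAA": "*",  # stop codon
}

def translate(dna: str) -> str:
    """Read DNA three letters at a time and string the amino acid beads together."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")  # '?' = codon not in this toy table
        if amino_acid == "*":  # a stop codon ends the chain
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGTTTGGCGCTTGCTAA"))  # -> MFGAC
```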
01:31:49.520 | And that was a huge breakthrough years ago. What they just announced with AlphaFold 3 today
01:31:53.920 | is that they're now including all small molecules. So all the other little molecules
01:31:59.200 | that go into chemistry and biology that drive the function of everything we see around us,
01:32:04.560 | and the way that all those molecules actually bind and fit together is part of the predictive model.
01:32:09.840 | Why is that important? Well, let's say that you're designing a new drug. And it's a protein based
01:32:14.560 | drug, a biologic drug, which most drugs are today. You could find a biologic drug that
01:32:18.320 | binds to a cancer cell. And then you'll spend 10 years going to clinical trials. And billions of
01:32:23.760 | dollars later, you find out that that protein accidentally binds to other stuff and hurts other
01:32:28.480 | stuff in the body. And that's an off target effect or a side effect. And that drug is pulled from the
01:32:32.800 | clinical trials and it never goes to market. Most drugs go through that process. They are actually
01:32:37.920 | tested in in animals and then in humans. And we find all these side effects that arise from those
01:32:43.040 | drugs, because we don't know how those drugs are going to bind or interact with other things in
01:32:48.560 | our biochemistry. And we only discover it after we put it in. But now we can actually model that with
01:32:53.600 | software, we can take that drug, we can create a three dimensional representation of it using the
01:32:58.000 | software. And we can model how that drug might interact with all the other cells, all the other
01:33:02.960 | proteins, all the other small molecules in the body to find all the off target effects that may
01:33:07.840 | arise and decide whether or not that presents a good drug candidate. That is one example of how
01:33:13.920 | this capability can be used. And there are many, many others, including creating new proteins
01:33:19.760 | that could be used to bind molecules or stick molecules together, or new proteins that could
01:33:24.160 | be designed to rip molecules apart. We can now predict the function of three dimensional
01:33:29.760 | molecules using this this capability, which opens up all of the software based design
01:33:34.960 | of chemistry, of biology, of drugs. And it really is an incredible breakthrough moment.
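As a rough illustration of the screening workflow described above: a hypothetical sketch, not an actual AlphaFold 3 or Isomorphic Labs API. `predict_binding_affinity` is a stand-in for whatever structure-based scoring model you plug in, and the drug encoding, off-target panel, and thresholds are all assumptions made up for the example.

```python
# Hypothetical in-silico off-target screen: keep a candidate only if it binds its
# intended target and avoids a panel of off-target proteins. The scoring function
# is injected because no real AlphaFold 3 API is assumed here.
from typing import Callable, Dict

def screen_candidate(
    drug: str,                                   # candidate molecule (e.g. a sequence or SMILES string)
    intended_target: str,                        # protein the drug is supposed to bind
    off_target_panel: Dict[str, str],            # name -> protein the drug should NOT bind
    predict_binding_affinity: Callable[[str, str], float],  # assumed: higher score = tighter binding
    hit_threshold: float = 0.8,
    off_target_threshold: float = 0.5,
) -> bool:
    """Return True only if the candidate hits its target and misses the off-target panel."""
    if predict_binding_affinity(drug, intended_target) < hit_threshold:
        return False  # does not even bind the intended target
    for name, protein in off_target_panel.items():
        if predict_binding_affinity(drug, protein) > off_target_threshold:
            print(f"rejected: predicted off-target binding to {name}")
            return False  # likely side effect; filter it out before any trial
    return True
```

The point is just the workflow described in the segment: score a candidate against everything it might touch in software before committing years and billions of dollars to a trial.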
01:33:39.520 | The interesting thing that happened, though, is Google, Alphabet, has a subsidiary called Isomorphic
01:33:45.920 | Labs. It is a drug development subsidiary of Alphabet. And they basically kept all the IP
01:33:51.840 | for AlphaFold 3 in Isomorphic. So Google is going to monetize the heck out of this capability.
01:33:58.080 | And what they made available was not open source code, but a web based viewer that scientists,
01:34:03.040 | for quote, non commercial purposes can use to do some fundamental research in a web based viewer
01:34:07.760 | and make some experiments and try stuff out and see how interactions might occur.
01:34:10.560 | But no one can use it for commercial use. Only Google's Isomorphic Labs can.
01:34:15.360 | So number one, it's an incredible demonstration of what AI outside of LLMs can do, which we just talked
01:34:21.120 | about with Sam today. And obviously, we talked about other models, but LLMs being kind of this
01:34:26.000 | consumer text predictive model capability. But outside of that, there's this capability
01:34:31.280 | in things like chemistry, with these new AI models that can be trained
01:34:35.520 | and built to predict things like three dimensional chemical interactions,
01:34:39.920 | that is going to open up an entirely new era for human progress. And I think that's what's
01:34:44.960 | so exciting. I think the other side of this is Google is hugely advantaged. And they just showed
01:34:48.720 | the world a little bit about some of these jewels that they have in the treasure chest. And they're
01:34:52.080 | like, Look at what we got, we're gonna make all these drugs. And they've got partnerships with
01:34:55.360 | all these pharma companies, and Isomorphic Labs that they've talked about. And it's gonna usher
01:35:00.240 | in a new era of drug development design for human health. So all in all, I'd say it's a pretty like
01:35:05.040 | astounding day, a lot of people are going crazy over the capability that they just demonstrated.
01:35:08.960 | And then it begs all this really interesting question around, like, you know, what's Google
01:35:12.720 | going to do with it? And how much value is going to be created here. So anyway, I thought it was
01:35:15.840 | a great story. I just rambled on for a couple minutes, but I don't know, super interesting.
01:35:20.400 | Is this AI capable of making a science corner that David Sachs pays attention to?
01:35:26.000 | Well, it will, it will predict a cure, I think, for the common cold and for herpes,
01:35:30.880 | so he should pay attention.
01:35:31.840 | Folding Cells is the app, the
01:35:35.840 | casual game Sachs just downloaded and is playing. How many? How many chess moves did you make
01:35:40.560 | during that segment? Sorry, let me just say one more thing. You guys remember,
01:35:43.520 | we talked about Yamanaka factors, and how challenging it is to basically we can reverse
01:35:47.760 | we talked about Yamanaka factors, and how challenging it is. Basically, we can reverse
01:35:53.920 | aging if we can get the right proteins into cells to tune the expression of certain genes to make
01:35:59.520 | those cells youthful. Right now, it's a shotgun approach of trying millions of compounds and
01:36:02.720 | now, to come up with a fountain of youth type product. We can now simulate that. So with this
01:36:08.480 | system, one of the things that this AlphaFold 3 can do is predict what molecules will bind
01:36:14.000 | and promote certain sequences of DNA, which is exactly what we try and do with the Yamanaka
01:36:18.400 | factor based expression systems, and find ones that won't trigger off target expression. So
01:36:23.440 | meaning we can now go through the search space and software of creating a combination of molecules
01:36:29.120 | that theoretically could unlock this fountain of youth to de-age all the cells in the body and
01:36:34.400 | introduce an extraordinary kind of health benefit. And that's just again, one example of the many
01:36:38.480 | things that are possible with this sort of platform. And I'm really, I gotta be honest,
01:36:42.080 | I'm really just sort of skimming the surface here of what this can do. The capabilities and the
01:36:47.680 | impact are going to be like, I don't know, I know I say this sort of stuff a lot, but it's gonna be
01:36:51.360 | pretty profound. On the blog post, they have this incredible video they show of the
01:36:56.560 | coronavirus that causes a common cold, the 7PNM spike protein, I think. And not only did they
01:37:04.320 | predict its structure accurately, they also predicted how it interacts with an antibody and with
01:37:11.360 | a sugar. It's nuts. So you could see a world where, I don't know, you just get a vaccine for the
01:37:17.760 | cold, and you just never have colds again. I mean, simple stuff, but so
01:37:22.800 | powerful. And you can filter out stuff that has off-target effects, so so much of drug discovery
01:37:27.440 | and all the side-effect stuff can start to be solved for in silico. And you could think about
01:37:31.520 | using a model like this to run extraordinarily large simulations
01:37:36.960 | across a search space of chemistry, to find stuff that does things in the body that can unlock
01:37:42.640 | all these benefits, that can do all sorts of amazing things to destroy cancer,
01:37:46.720 | to destroy viruses, to repair cells, to de-age cells. And this is a $100 billion business, they say.
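And as a rough sketch of what "extraordinarily large simulations across a search space of chemistry" could look like operationally, the same kind of screen parallelizes naturally, because each candidate is scored independently. Again, score_molecule below is a placeholder for an expensive model call rather than any real AlphaFold 3 interface; the only real ideas here are batching the library and keeping a running top-k.

# Hypothetical scaling sketch: score_molecule() stands in for an expensive
# structure/affinity prediction; batching plus a process pool is the whole idea.
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

def score_molecule(smiles: str):
    # Placeholder for a real model call; dummy score so the sketch runs end to end.
    return smiles, (len(smiles) % 7) / 7.0

def chunked(iterable, size):
    # Yield successive fixed-size batches from any iterable.
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def screen_library(library, top_k=100, chunk_size=10_000, workers=8):
    """Score a huge virtual library in parallel batches, keeping only the best candidates."""
    best = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for batch in chunked(library, chunk_size):
            best.extend(pool.map(score_molecule, batch))
            best = sorted(best, key=lambda pair: pair[1], reverse=True)[:top_k]
    return best

# Note: on macOS/Windows, call screen_library() under an `if __name__ == "__main__":`
# guard, since ProcessPoolExecutor spawns workers that re-import this module.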
01:37:53.200 | Oh my God. I mean, this alone... I've said this before: I think Google's
01:37:58.320 | got this, like, portfolio of quiet, extraordinary assets. Yeah, what if they hit? And
01:38:04.800 | I think the fact that they didn't open source everything in this says a lot about
01:38:09.120 | their intentions. Yeah, open source when you're behind; closed source and lock it up when
01:38:13.920 | you're ahead. Yamanaka, actually, interestingly, is also the Japanese whiskey
01:38:18.800 | that Sacks serves on his plane. It's delicious. I love Hokkaido.
01:38:22.320 | If you didn't find your way to Silicon Valley, you could be like a Vegas lounge comedy guy.
01:38:29.600 | Absolutely. Yeah, sure. Somebody said I should do like those 1950s
01:38:34.560 | talk shows where the guys would do like a stage show. Yeah, somebody told me I
01:38:39.280 | should do like Spalding Gray, Eric Bogosian style stuff. I don't know if you guys remember,
01:38:43.840 | like, the monologuists from the 80s in New York. It's like, oh, that's interesting. Maybe.
01:38:47.920 | All right, everybody, thanks for tuning in to the world's number one podcast. Can you believe we did
01:38:53.680 | it, Chamath? Number one podcast in the world. And the All-In Summit, the TED killer: if you are going to TED,
01:39:01.680 | congratulations for genuflecting. If you want to talk about real issues, come to the All-In Summit.
01:39:07.840 | And if you're protesting at the all in summit, let us know what mock meat you would like to
01:39:13.600 | have. Freeberg is setting up mock meat stations for all of our protesters. And what do you like?
01:39:19.520 | Yeah, if you want oat milk, whatever your preference, just please
01:39:23.200 | let us know when you come: different kinds of xanthan gum, all of the
01:39:27.680 | nut milks you could want, and they'll be mindful. We'll have the soy, and on the South
01:39:34.000 | Lawn we'll have the goat yoga going on. So just, please. Very thoughtful of you to make sure that
01:39:42.720 | our protesters are going to be well... Well, yes, actually, Freeberg is working on the protester
01:39:50.000 | gift bags. The protester gift bags! They're made of protein-folding proteins. I think I saw them
01:39:59.120 | open for the Smashing Pumpkins in 2003. I'll be here for three more nights. Love you boys.
01:40:07.920 | Love you, besties. Is this the All-In Podcast open mic night? What's going on? It's basically
01:40:11.840 | let your winners ride.
01:40:17.040 | Rain Man David
01:40:19.600 | we open source it to the fans and they've just gone crazy with it.
01:40:26.960 | Love you as a queen of
01:40:36.720 | We should all just get a room and just have one big huge orgy because they're all just useless.
01:40:50.160 | It's like this like sexual tension that they just need to release somehow.
01:40:54.320 | What you're about to be
01:40:57.840 | That's episode 178. And now the plugs: the All-In Summit is taking place in Los Angeles on September
01:41:17.200 | 8th through the 10th. You can apply for a ticket at summit.allinpodcast.co; scholarships will be
01:41:24.640 | coming soon. If you want to see the four of us interview Sam Altman, you can actually see the
01:41:29.360 | video of this podcast on YouTube at youtube.com/@allin, or just search for "All-In Podcast,"
01:41:36.320 | and hit the alert bell and you'll get updates when we post. We're doing a live Q&A episode when the
01:41:42.960 | YouTube channel hits 500,000 subscribers, and, my understanding is, we're going to do a party in Vegas when we hit
01:41:49.200 | a million subscribers, so look for that as well. You can follow us on X at x.com/theallinpod,
01:41:55.760 | on TikTok at all_in_tok, on Instagram at theallinpod, and on LinkedIn
01:42:01.920 | just search for the All-In Podcast. You can follow Chamath at x.com/chamath, and you can sign
01:42:07.600 | up for his Substack at chamath.substack.com. Freeberg can be followed at x.com/friedberg,
01:42:12.880 | and Ohalo is hiring: click on the careers page at ohalogenetics.com. You can follow
01:42:19.120 | Sacks at x.com/DavidSacks. Sacks recently spoke at the American Moment conference and
01:42:24.320 | people are going crazy for it; it's in a tweet on his X profile. I'm Jason Calacanis.
01:42:29.680 | I am x.com/Jason. And if you want to see pictures of my bulldogs and the food I'm eating,
01:42:34.720 | go to instagram.com/Jason, in the first-name club. You can listen to my other podcast, This Week
01:42:40.720 | in Startups; just search for it on YouTube or your favorite podcast player. We are hiring a researcher:
01:42:45.760 | apply to be a researcher doing primary research and working with me and producer Nick, working in
01:42:50.800 | data and science and being able to do great research, finance, etc.: allinpodcast.co/research.
01:42:57.040 | It's a full-time job working with us, the besties. We'll see you all next time on the
01:43:01.360 | All-In Podcast.