Making AI accessible with Andrej Karpathy and Stephanie Zhan
I'm thrilled to introduce our next and final speaker. I think Andrej Karpathy probably needs no introduction; most of us have probably watched his YouTube videos at length. He's renowned for his research in deep learning, and he designed the first deep learning class at Stanford... Andrej, you've been such a dream speaker, and so we're excited to have you and Stephanie close out the day... I don't know what year this was taken, but he's impressed.
Andrej, thank you so much for joining us today. Fun fact that most people don't actually know: how many folks here know where OpenAI's original office was? Right here, on the opposite side of our San Francisco office, where many of you guys were just in huddles. So this is fun for us, because it brings us back to our roots, back when I first started at Sequoia and when Andrej first started co-founding OpenAI.
Andrej, in addition to living out the Willy Wonka dream, what were some of your favorite moments working from here?

And this was the first office, after, I guess... And the chocolate factory was just downstairs. And yeah, we had a few very fun episodes here. One of them was alluded to by Jensen at GTC, and it happened right here. Jensen was describing how he brought the first DGX and delivered it to OpenAI...

I wanted to give a little bit of backstory on some of Andrej's background.
As Sonia introduced, he was trained by Geoff Hinton and then Fei-Fei... His first claim to fame was his deep learning course at Stanford. For folks who don't remember the context then, Elon had just churned through six different Autopilot leaders, each of whom lasted six months... Not too long after that, he went back to OpenAI... Now he is basking in the ultimate glory of freedom... And so we're really excited to see what you have to share today.
A few things that I appreciate the most about Andrej are that he is an incredible, fascinating futurist thinker... And so I think he'll share some of his insights around that today. To kick things off: AGI, even seven years ago, seemed like an impossibly distant goal to achieve, even in the span of our lifetimes. What is your view of the future over the next N years?
...And you would think about different approaches. I think, roughly speaking, the way things are happening is that everyone is trying to build what I refer to as the LLM OS. And basically, I like to think of it as an operating system: you have to get peripherals that you plug into this new CPU, or something like that. The peripherals are, of course, text, images, audio, and so on. And then you have a CPU, which is the LLM transformer itself. And it's also connected to all the software 1.0 infrastructure that we've already built up for ourselves. So I think everyone is trying to build something like that and then make it available as something customizable to all the different nooks and crannies of the economy. That's kind of roughly what everyone is trying to build out, and what we sort of also... Where this is headed is that we can bring up and down these relatively self-contained agents that we can give high-level tasks to, and that specialize in various ways.
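A minimal sketch of that LLM OS analogy in code: the LLM plays the role of the CPU, and existing software 1.0 tools are the infrastructure it dispatches to. The llm stub and the tool registry here are hypothetical stand-ins, not any real API:

```python
def llm(context: str) -> str:
    ...  # stand-in for the LLM "CPU": returns a final answer or a tool call

TOOLS = {  # existing "software 1.0" infrastructure the LLM can call into
    "calculator": lambda expr: str(eval(expr)),       # toy only; eval is unsafe
    "search": lambda query: "results for: " + query,  # stub
}

def run_task(task: str, max_steps: int = 10) -> str:
    """Give the 'OS' a high-level task and let it loop until done."""
    context = task
    for _ in range(max_steps):
        out = llm(context)
        if not out or not out.startswith("CALL "):
            return out                                # final answer
        name, _, arg = out[len("CALL "):].partition(" ")
        context += f"\n{name} -> {TOOLS[name](arg)}"  # feed tool result back in
    return context
```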
If that's where we're headed, how should we all be living our lives differently?

I guess we have to try to build it, influence it...
So now that you're a free, independent agent, I want to address the elephant in the room: OpenAI is dominating the ecosystem, and many of the folks in the audience today are founders trying to carve out a little niche, praying that OpenAI doesn't take them out overnight. Where do you think opportunities exist for other players, versus what areas do you think OpenAI will continue to dominate?
Yes, so my high-level impression is that OpenAI is trying to build out this LLM OS, on top of which you can position different companies in the ecosystem. Now, I think the OS analogy is also really interesting, because when you look at something like Windows or something like that (these are also operating systems), they come with a few default apps, like a browser. And so, in the same way, OpenAI or any of the other companies might come up with a few default apps, but you can have different browsers that are running on it, just like you can have different chat agents running on that infrastructure. And there will be all kinds of apps that are fine-tuned to all the different nooks and crannies of the economy. I really like the analogy of the early iPhone apps... I do think we're going through the same thing right now. People are trying to figure out: what is this thing good at? What is it not good at? How do I work with it? How do I actually get it to perform real tasks? And what kind of oversight does it need? Because it's quite autonomous, but not fully autonomous... So there are many things to think through, and just... I think that's what's going to take some time to figure out: exactly how to work with this infrastructure. So I think we'll see that over the next few years.
How do you foresee the future of the ecosystem playing out?

So again, I think the operating systems analogy is interesting: we have basically an oligopoly of a few proprietary systems, and then we have Linux. And so I think maybe it's going to look something like that. I also think we have to be careful with the naming, because some of these models, like Llama, Mistral, and so on, I wouldn't actually call open source. It's kind of like tossing over a binary... You can kind of work with it, and it's useful, but it's not fully open source. There are projects that are fully open source: the Pythia models, LLM360, OLMo, et cetera. And they're fully releasing the entire infrastructure that's needed to train these models... When you're just given the weights, it's much better than nothing, of course, because you can fine-tune the model. But also, and I think it's subtle, you can't fully fine-tune the model, because the more you fine-tune it, the more it's going to start regressing on everything else. So what you actually really want to do, for example, if you want to add a capability and not regress the other capabilities, is to train on some kind of a mixture of the previous dataset distribution and the new dataset distribution, because you don't want to regress the old distribution while you add something new. And if you're just given the weights, you can't do that: you need the training loop, you need the dataset, et cetera.
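A minimal sketch of that mixture idea, with old_loader, new_loader, train_step, and model as stand-ins. Note that it requires the original dataset, which is exactly what a weights-only release doesn't give you:

```python
import random

def mixed_batches(old_iter, new_iter, new_frac=0.3):
    """Yield training batches, drawing from the new data new_frac
    of the time and from the original distribution otherwise."""
    while True:
        yield next(new_iter if random.random() < new_frac else old_iter)

# Fine-tune on the mixture so the model doesn't regress on the old data:
# for batch in mixed_batches(iter(old_loader), iter(new_loader)):
#     train_step(model, batch)
```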
So open weights are still very useful, but I think we need slightly better language for it, almost: there are open-weights models, open-source models, and proprietary models, and that's maybe the ecosystem. And yeah, probably it's going to look very similar to the one we have for operating systems today.

And hopefully you'll continue to help build some of that out.
So I'd love to address the other elephant in the room, which is AGI. Simplistically, it seems like scale is all that matters: scale of data, scale of compute, and therefore the large research labs have a massive advantage. What's your view on that?

So I would say scale is definitely number one. I do think there are details there to get right, and a lot also goes into the dataset preparation and so on, making it very good and clean, et cetera. These are all compute-efficiency gains that you can get. So there's the data, the algorithms, and then, of course, the training of the model and making it really big. I think scale will be the primary determining factor; it's like the first principal component of things, for sure. But there are many other things that you need to get right. It's almost like the scale sets some kind of a speed limit... If you're just going to be doing fine-tuning and so on, then maybe less scale is necessary... But we haven't really seen that fully play out just yet.
And can you share more about some of the ingredients that you think also matter, maybe lower in priority than scale?

The first thing, I think, is that you can't just train these models: if you're just given the money and the scale, it's actually still really hard to build them. And part of it is that the infrastructure is still so new; it's still being developed and not quite there. But training these models at scale is extremely difficult, and it's a very complicated distributed optimization problem... It basically turns into this insane thing running across tens of thousands of GPUs, all of them failing at random at different points in time. So instrumenting that and getting it to work is actually an extremely difficult challenge. GPUs were not intended for 10,000-GPU workloads until very recently... So you have to get a lot right, both on the infrastructure side, the algorithm side, and then the data side, and be careful with all of that.
Even some of the challenges we thought existed a year ago are being chipped away at: hallucinations, context windows, multimodal capabilities... What do you think are meaty enough problems, but also solvable problems, that we can continue to go after?

One thing I would say on the algorithms side is this distinct split between diffusion models and autoregressive models. They're both ways of representing probability distributions, and it just turns out that different modalities are apparently a good fit for one of the two. I think there's probably some space to unify them, or connect them in some way, or figure out how we can get a hybrid architecture, and so on. It just feels wrong to me that there's nothing in between, and I think there are interesting problems there.
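A schematic contrast of the two recipes in code: an autoregressive sampler generates one token at a time, while a diffusion sampler iteratively denoises a whole sample. The model calls are stand-ins, and the denoising update is deliberately simplified (a real sampler uses a proper noise schedule):

```python
import torch

def sample_autoregressive(model, prompt_ids, n_new):
    ids = prompt_ids
    for _ in range(n_new):                        # one token at a time
        logits = model(ids)[:, -1]                # next-token distribution
        next_id = torch.multinomial(logits.softmax(-1), 1)
        ids = torch.cat([ids, next_id], dim=1)
    return ids

def sample_diffusion(model, shape, n_steps=50):
    x = torch.randn(shape)                        # start from pure noise
    for t in reversed(range(n_steps)):            # refine the whole sample
        predicted_noise = model(x, t)
        x = x - predicted_noise / n_steps         # schematic update only
    return x
```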
And then the other thing that maybe I would point to is the energetic efficiency of running all this stuff; there's still a very big gap there. Jensen was just talking at GTC about the massive supercomputers that they're going to be building. The numbers are in megawatts, right? And maybe you don't need all that to run a brain... I think it's safe to say we're off by a factor of 1,000 to a million, somewhere there, in terms of the efficiency of running these models. And I think part of it is just because the computers we've designed are not a good fit for this workload. And I think NVIDIA GPUs are a good step in that direction, in terms of: you need extremely high parallelism. We don't actually care about sequential computation that much; we need to churn the same algorithm across lots of data, or something, you can think about it that way. So number one is adapting the computer architecture to the new data workflows. Number two is pushing on a few things where we're currently being inefficient. The first is precision: we're seeing precision come down from standard 32- or 64-bit floats all the way to 4, 5, 6, or even 1.58 bits, depending on which papers you read. And then the second one, of course, is sparsity; that's also another big delta, I would say. So sparsity, I think, is another big lever.
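A minimal sketch of the precision lever: symmetric absmax quantization of a weight tensor to int8 and back. Real low-bit schemes (4-bit, the 1.58-bit ternary work, etc.) are considerably more involved; this just shows the basic trade of memory for a small reconstruction error:

```python
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0                 # one scale per tensor
    q = (w / scale).round().clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)                       # a float32 weight matrix
q, scale = quantize_int8(w)                       # 4x smaller in memory
print((dequantize(q, scale) - w).abs().mean())    # small mean error
```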
And then, finally, there's the von Neumann architecture of computers and how they're built, where you're shuttling data in and out and doing a ton of data movement between memory and the cores that are doing all the compute. This is all broken as well, and it's not how your brain works. So I think it should be a very exciting time in computer architecture. It seems like we're off by a factor of 1,000 to a million, something like that, and there should be really exciting innovations there that bring that down.
I think there are at least a few builders in the audience working on that problem. Switching gears a bit: you've worked alongside many of the greats of our generation, Sam and Greg from OpenAI and the rest of the OpenAI team, Elon Musk... Who here knows the joke about the rowing team, the American team versus the Japanese team? OK, a fair few of you. It reflects a lot of Elon's philosophy on building teams. The Japanese team has four rowers and one steerer, and the American team has four steerers and one rower. Can anyone guess what the American team does when it loses? Exactly: they fire the rower. I share that as a reflection of how Elon thinks about hiring and building teams. Having worked so closely with these incredible leaders, what have you learned from them?
Yeah, so Elon runs his companies in an extremely unique style. I don't think people really appreciate how unique it is; you sort of read about it, but you don't understand it... I like to say that he runs the biggest startups. I don't even know, basically, how to describe it. It almost feels like it's a longer sort of thing that I'd have to think through. But number one is: he likes very small, strong, highly technical teams. At companies, by default, teams grow and get large over time. Elon was always a force against growth: I would have to work and expend effort to hire people; I would basically have to plead to hire people. And then the other thing is that at big companies, usually, it's really hard to get rid of low performers. I think Elon is very friendly to, by default, getting rid of low performers. I would actually have to fight to keep people on the team, because he would, by default, want to remove them. So: keep a small, strong, highly technical team, with no middle management that's non-technical, for sure. That's number one. Number two is the vibes of how everything runs and how it feels when he walks into the office: he wants it to be a vibrant place, people walking around, working on exciting stuff... He always encourages people to leave meetings if they're not being useful: if you're not contributing and you're not learning, just walk out. And I think this is something that you don't normally see. So I think vibes is a second big lever that I think he really instills in his companies.
Maybe part of that also is that at a lot of big companies, people get pampered... And then number three, the degree to which he's connected with the team, is very unique and very interesting and very strange. Usually a CEO of a company is a remote person, five layers up, who only talks to their VPs... Many of the meetings that we had were like, OK, 50 people in the room with Elon, and he talks to the engineers directly. He doesn't want to talk just to the VPs and the directors. Normally, people in his position would spend like 99% of the time talking to the VPs; he spends a huge fraction of his time talking to the engineers, because if the meeting is about the code, then the engineers and the code are the source of truth. They have the source of truth, not some manager. So the degree to which he's connected with the team, and not something remote, is really unusual.
And also, there's his large hammer and his willingness to exercise it within the organization. If engineers tell him, for example, "OK, I just don't have enough GPUs to run my thing," and he hears that twice, he's going to be like, "OK, this is a problem. What is the timeline?" And when you don't have satisfying answers, he's like, "OK, I want to talk to the person in charge of the GPU cluster." And then he's just like, "OK, double the cluster right now. From now on, send me daily updates until the cluster is twice the size." And they're like, "OK, well, we have this procurement set up, we have this timeline, and NVIDIA says that we don't have enough GPUs and it will take six months." And then he's like, "OK, I want to talk to Jensen," and then he just removes the bottleneck. So I think the extent to which he's extremely involved, removes bottlenecks, and applies his hammer is not appreciated either. So there are a lot of these kinds of aspects that are very unique, I would say, and very interesting. And honestly, going to a normal company outside of that, you definitely miss some of those aspects... So yeah, maybe that's a long rant, but that's what it's like working with Elon, and it's a very unique style.
Hopefully those are tactics that some of the people here can employ. You've helped build some of the most generational companies, and you've also been such a key enabler of making AI more accessible for many people, many of whom are in the audience today, through education, tools, helping create more quality...
As you think about the next chapter in your life...

Yeah, I think you've described it in the right way. My brain goes by default to... I care about the health of the ecosystem and all the nooks and crannies of the economy, and I want the whole thing to be like this boiling soup of activity... And I think that's why I love startups and I love companies, and I want there to be a vibrant ecosystem of them. By default, I would say I'm a little bit more hesitant about... Especially with AGI being such a magnifier of power, I'm worried about what that could look like, and so on.
We'd love to have some questions from the audience.

Would you recommend founders follow Elon's management methods, or is that something unique to him?

I think you have to have that same kind of DNA and the same kind of vibe, and when you're hiring the team, you have to make it clear upfront that this is the kind of company you're building... As long as you do it from the start and you're consistent, you can run a company like that... But I think it's a consistent model of company building, and it works for him.
I'm curious if there are any types of model composability that you're excited about; I'm not sure what you think about model merges and so on.

I see papers in this area, but I don't know that anything has really stuck. There's a ton of work on parameter-efficient training and so on, and maybe that falls in the category of composability in the way I understand it. It is the case that traditional code is very composable, and I would say neural nets are a lot more fully connected and less composable by default. But they do compose, and you can fine-tune them as part of a whole. For example, in multimodal models it's very common that you pre-train components, say a vision encoder, and then you plug them in and fine-tune, maybe through the whole thing. So there's composability in those aspects, where you can pre-train small pieces of the cortex, so to speak, and compose them later... So maybe those are my scattered thoughts on it, but I don't know if I have anything very coherent to say.
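A minimal sketch of that pattern: a pretrained component plugged into a larger network and fine-tuned as part of the whole. The encoder here is a toy stand-in; in practice it would be a real pretrained vision backbone or similar:

```python
import torch.nn as nn

pretrained_encoder = nn.Sequential(             # stand-in for a pretrained piece
    nn.Flatten(), nn.Linear(784, 512), nn.ReLU()
)

class ComposedModel(nn.Module):
    def __init__(self, encoder, n_classes=10):
        super().__init__()
        self.encoder = encoder                  # plugged-in pretrained component
        self.head = nn.Linear(512, n_classes)   # new, randomly initialized part

    def forward(self, x):
        return self.head(self.encoder(x))

model = ComposedModel(pretrained_encoder)

# A common recipe: freeze the pretrained piece, train the new head first,
# then unfreeze and fine-tune through the whole thing.
for p in model.encoder.parameters():
    p.requires_grad = False
```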
So we've got these next-word-prediction things. Do you think there's a path toward building a physicist or a von Neumann type of model, one that has a mental model of physics that's self-consistent and can generate new ideas for how you actually do fusion?... Is that fundamentally different from the current paradigm?

I think it's fundamentally different in one aspect... The current models are just not good enough, and I think there are big rocks to be turned here. I think people still haven't really seen what's possible in this space at all. Roughly speaking, I think we've done step one of AlphaGo: we've done the imitation-learning part. There's step two of AlphaGo, which is the RL, and people haven't done that yet... So I think there are big rocks in capability to be turned over here, and the details of that are kind of tricky, potentially. But we just haven't done step two of AlphaGo, to put it simply. And that was the part that actually made it really work.
Consider, for example, number one, how terrible the data collection is for something like RLHF. Some prompt is some kind of a mathematical problem, and a human comes in and gives the ideal solution. The problem is that human psychology is different from model psychology: what's easy or hard for the human is different from what's easy or hard for the model. And so the human kind of fills out some kind of a trace that reaches the solution, but some parts of that are trivial to the model, and some parts are a massive leap that the model doesn't understand. And then everything else is polluted by that later... Fundamentally, the model needs to practice itself how to solve these problems; it needs to figure out what works for it or doesn't work for it. Maybe it's not very good at four-digit addition, so it's going to fall back and use a calculator. It needs to learn that for itself... Imitation is a good initializer, though, for something agent-like.
And then the other thing is that we're doing reinforcement learning from human feedback, but that's a super weak form of reinforcement learning. It doesn't even count as reinforcement learning, I think. What is the equivalent in AlphaGo of RLHF? It's... imagine if the reward came from giving two people two boards and asking, "Which one do you prefer?"... Number one, it's just vibes of the board; that's what you'd be training against. Number two, if it's a reward model that's a neural net, then it's very easy to overfit to that reward model for the model you're optimizing over, and it's going to find all these spurious ways of hacking it... AlphaGo gets around all that because they have a very clear objective function you can RL against. So RLHF is nowhere near, I would say, RL; it's silly. And the other thing is that imitation learning is super silly. RLHF is a nice improvement, but it's still silly. I think people need to look for better ways of training these models, so that it's in the loop with itself and its own psychology, and I think there will probably be unlocks in that direction.
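A toy, fully synthetic illustration of the reward-model-hacking point above: a "true" reward and a noisy learned proxy mostly agree, but aggressively optimizing the proxy picks out exactly the points where the proxy is wrong:

```python
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(-3, 3, 1000)
true_reward = -actions**2                                # ground truth: best action is 0
proxy = true_reward + rng.normal(0, 1.0, actions.shape)  # imperfect reward model

best = actions[np.argmax(proxy)]                # optimize the proxy hard
print(f"proxy's favorite action: {best:.2f}")   # often well away from 0
print(f"true reward there: {-best**2:.2f} (optimum is 0.00)")
```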
So it's sort of like graduate school for AI models?

Yeah, sort of. When you're learning material from textbooks, the textbooks have exercises. Those are prompts to you to exercise the material, right? And when you're learning, you're not just reading left to right: number one, you're exercising, and maybe you're taking notes, rephrasing, reframing. You're doing a lot of manipulation of this knowledge in the process of actually learning it, and we haven't seen equivalents of that at all in LLMs.
Yeah, it's cool to be optimal and practical at the same time. In the short term, how should we be prioritizing between A, doing cost reduction and revenue generation, and B, finding better quality for our use case?

Roughly speaking, what I've seen people do is start out with the most capable model available. So you use GPT-4, you use very elaborate prompts, et cetera, and you're just trying to get your thing to work at all. So you go after accuracy first, and then you make it cheaper later; that's kind of the paradigm that a few people I've talked to about this say works for them. And maybe it's not even just a single prompt. I like to think about: what are the ways in which you can even go beyond that? Maybe you make an ensemble of ten prompts and you pick the best answer, you have some debate, whatever kind of crazy flow you can come up with; just get your thing to work really well. Because if you have a thing that works really well, then one other thing you can do is distill that: you get a large dataset of examples, you run your super expensive thing on it to get your labels, and then you fine-tune a smaller, cheaper model on those labels. So I would say: first go after getting it to work as well as possible, and then make it cheaper. That's the thing I would suggest.
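A minimal sketch of that recipe, with expensive_model and finetune as hypothetical stand-ins for whatever frontier-model pipeline and training loop you actually use:

```python
def expensive_model(prompt: str) -> str:
    ...  # best available model + ensembling / debate / whatever maximizes quality

def finetune(model, dataset):
    ...  # your usual supervised fine-tuning loop

prompts = ["task input 1", "task input 2"]         # your real task inputs
labels = [expensive_model(p) for p in prompts]     # big model supplies the labels
small_model = "some-cheaper-model"                 # placeholder for a smaller model
finetune(small_model, list(zip(prompts, labels)))  # distill into it
```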
So this past year, we saw a lot of impressive results from the open-source ecosystem. What's your opinion on whether that will continue to keep pace, or not keep pace, with closed-source development as the models continue to improve in capability?

...Fundamentally, these models are so capital-intensive, right? One thing that is really interesting is, for example, Meta: they can afford to train these models at scale, but training them is not the thing that they do; it's not what their business is. Their money printer is unaffected by it, so they have an actual incentive to release some of these models, so that they empower the ecosystem as a whole, and so they can actually borrow all the best ideas back. So I think they should actually go further... Maybe they should try to just find data sources that they think are very easy to use, or something like that... So I would say those are kind of our champions, potentially. And I would like to see more transparency also coming from... and I think Meta and Facebook are doing pretty well here.
Maybe this is an obvious answer given the previous question, but what do you think would make the AI ecosystem cooler and more vibrant? Is it more openness, or do you think there's other stuff that's also a big part of it?

Yeah, I certainly think one big aspect is just the stuff we've talked about with respect to openness. I had a tweet recently about this: number one, build the thing; number two, build the ramps to the thing. I would say there's a lot of people building a thing, but a lot less of building the ramps so that people can actually understand and use all this stuff. We all need to ramp up and collaborate to some extent... So I would love for people to be a lot more open with respect to what they've learned, how they've trained all this, what works, what doesn't work for them, et cetera, just so we can learn a lot more from each other. And there is quite a bit of momentum in the open ecosystems as well, so maybe there are some opportunities for improvement on top of that.
To get to the next big performance leap from models, do you think it's sufficient to modify the transformer architecture, with, say, thought tokens and so on? Or do we need to throw that out entirely and come up with a new fundamental building block to take us to the next big step forward, or AGI?

Well, the first thing I would say is: the transformer is amazing. I don't think I would have seen that coming, for sure. For a while before the transformer, I thought there would be an insane diversification of architectures, and it's been the opposite. It's all the same model, actually. So it's incredible to me that we have this one unified architecture... Now, I don't know that it's the final neural network. For those of us who have been in it for a while, it's really hard to say that this is the end of it; I think there's a very good chance that people will be able to find a pretty big change to how we do things today.
I would say on the front of the autoregressive or diffusion setups, which is kind of like the modeling and the loss setup, there's definitely some fruit there, probably. But also on the transformer itself: like I mentioned, there are these levers of precision and sparsity, and as we drive those, together with the co-design of the hardware and how that might evolve, there's just making network architectures that are a lot more, sort of, well-tuned to that hardware... The transformer is kind of designed for the GPU, actually. That was the big leap, I would say, in the transformer paper: we want an architecture that is fundamentally extremely parallelizable. A recurrent neural network has sequential dependencies, terrible for a GPU, and the transformer basically broke that through attention. That was the major sort of insight there, and there were some predecessors, like the Neural GPU and other papers at Google that were thinking about this. But that is a way of targeting the algorithm to the hardware that it runs on, so I would say that's kind of in that same spirit.
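A minimal scaled dot-product attention, the operation that (as described) removed the sequential dependency: every position attends to every other in a couple of batched matrix multiplies, with no recurrence:

```python
import torch

def attention(q, k, v, causal=True):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5     # all position pairs at once
    if causal:
        T = scores.size(-1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))  # no peeking ahead
    return scores.softmax(-1) @ v                 # weighted mix of values

q = k = v = torch.randn(8, 64)   # 8 positions, 64-dim head
out = attention(q, k, v)         # computed fully in parallel
```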
And I have to say, it came out many years ago now, in 2017, and the original transformer and what we're using today are not super different.
As a parting message to the founders and builders in the audience, what advice would you give them as they dedicate the rest of their lives to this?

So yeah, I don't usually have crazy generic advice. I think maybe the thing that's top of my mind is that founders, of course, care a lot about their startup. I also care about the ecosystem: how do we have a vibrant ecosystem of startups? How do startups continue to win, especially with respect to big tech?...

Thank you so much for joining us, Andrej, for this conversation.