Andrew Ng on AI's Potential Effect on the Labor Force | WSJ
00:00:00.000 |
To sum it up to start, what would you say are going to be the biggest positive and 00:00:04.880 |
negative impacts on the workforce from AI over the next, say, five years? 00:00:09.940 |
>> I think there will be a massive productivity boost for existing job roles. 00:00:16.740 |
And I don't want to pretend that there will be no job loss. 00:00:20.880 |
I think it may not be as bad as people are worried right now. 00:00:24.900 |
I know that we're having an important societal discussion about AI's impact on jobs. 00:00:32.640 |
I actually find it even more useful to not think about AI automating jobs, but AI automating tasks. 00:00:39.600 |
So it turns out that most jobs, you can think of as a bundle of tasks. 00:00:44.640 |
And when I work with large companies, often CEOs will come and 00:00:48.680 |
say, hey, Andrew, I have 50,000 or 100,000 employees. 00:00:51.680 |
What are all my people actually doing, right? 00:00:53.960 |
Turns out none of us really know in detail what our workforces are doing. 00:00:57.640 |
But I found that if you look at the jobs and break them down into tasks, 00:01:00.880 |
then analyzing individual tasks for potential for AI automation or 00:01:05.280 |
augmentation often leads to interesting opportunities to use AI. 00:01:10.960 |
And maybe one concrete example, radiologists. 00:01:14.200 |
We've talked about AI maybe automating some parts of radiology. 00:01:17.320 |
But it turns out that radiologists do many tasks. 00:01:19.480 |
They read x-rays, but they also do patient intake, gather patient histories. 00:01:24.440 |
They consult with patients, mentor younger doctors. 00:01:26.400 |
They operate the machines, maintain the machines. 00:01:30.240 |
And we found that when we go into businesses and do this task-based analysis, it often surfaces promising opportunities. 00:01:38.320 |
And regarding the job question, it turns out that for many jobs, if AI 00:01:42.740 |
automates 20 or 30% of the tasks in a job, then the job may actually be decently safe. 00:01:49.520 |
But what will happen is not that AI will replace people, but 00:01:54.160 |
I think people that use AI will replace other people that don't. 00:01:57.120 |
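The task-based analysis Ng describes can be sketched as a simple calculation: list a role's tasks with the share of time each takes and a rough automation-potential score, then see what fraction of the work AI could plausibly absorb. A minimal sketch in Python; the radiologist task list and all the numbers below are illustrative assumptions, not figures from the talk:

```python
# Illustrative task-based job analysis (hypothetical numbers).
# Each task: (name, fraction of work time, estimated AI automation potential 0-1).
radiologist_tasks = [
    ("read x-rays",              0.35, 0.7),
    ("patient intake",           0.10, 0.4),
    ("gather patient histories", 0.10, 0.5),
    ("consult with patients",    0.15, 0.1),
    ("mentor younger doctors",   0.10, 0.0),
    ("operate machines",         0.10, 0.2),
    ("maintain machines",        0.10, 0.1),
]

def automatable_share(tasks):
    """Fraction of total work time in tasks with high automation potential (>0.5)."""
    return sum(t for _, t, p in tasks if p > 0.5)

def augmented_share(tasks):
    """Time-weighted average automation potential across all tasks."""
    return sum(t * p for _, t, p in tasks)

share = automatable_share(radiologist_tasks)
print(f"Work time in highly automatable tasks: {share:.0%}")
# If only ~20-30% of the tasks are automatable, the role itself is likely
# safe; that time shifts toward the remaining tasks instead.
```

The point of the exercise is the breakdown itself: scoring tasks one by one tends to reveal augmentation opportunities that a job-level "will AI replace this role?" question hides.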
>> What are the types of tasks that you're seeing? 00:02:01.880 |
And if you can say, which professions do you think are most affected, 00:02:06.960 |
where you have the highest concentration of those types of tasks? 00:02:10.440 |
>> So some job roles are really being disrupted right now. Call centers, 00:02:14.440 |
call center operations, customer support, is one. 00:02:16.680 |
It feels like tons of companies are using AI to nearly automate that, or 00:02:22.160 |
automate a large fraction of it. Sales operations, sales back office, 00:02:26.480 |
those routine tasks are being automated, and I think a bunch of others. 00:02:30.680 |
I feel like we see different teams trying to automate some of the lower-level legal work, 00:02:37.080 |
some of the lower-level marketing work, a bunch of others. 00:02:39.480 |
But I would say the two biggest I'm seeing are customer service and sales operations. 00:02:46.040 |
But I think there's a lot of opportunities out there. 00:02:49.320 |
How do you think it's going to change the role of CIOs, the folks in this room? 00:02:53.000 |
>> I think it's an exciting time to be a CIO. 00:02:58.080 |
One thing that my team at AI Fund does is we often work with 00:03:02.120 |
large corporations to identify and then execute on AI projects. 00:03:07.600 |
So over the last week, I think I spent almost half-day sessions with two 00:03:12.160 |
Fortune 500 companies where I got to hear from their technology leadership. 00:03:19.560 |
And one pattern I've seen: every time we spend a little bit of time brainstorming 00:03:23.200 |
AI projects, there are always way more promising ideas than anyone has the resources to execute. 00:03:29.720 |
And so it becomes a fascinating kind of prioritization exercise to decide what to focus on. 00:03:36.360 |
And then the other decision I'm seeing is, after you do a task-based analysis to 00:03:40.560 |
identify tasks in jobs for ideas, or after your team learns about AI and 00:03:46.120 |
brainstorms ideas, after you prioritize, there's the usual kind of buy-versus-build decision. 00:03:54.040 |
And it turns out that we seem to be in an opportunity-rich environment where 00:04:01.200 |
what AI Fund often winds up doing is this: the company will say 00:04:10.800 |
there are so many other ideas that you just don't want to pay completely by 00:04:14.800 |
yourself for the development of, and then we help our corporate 00:04:19.440 |
partners build it outside, so you can still get the capability without needing to fund it all yourself. 00:04:26.320 |
But I find that at AI Fund we see so many startup and project ideas that we 00:04:33.200 |
wind up having to use task management software to just keep track of all these 00:04:37.480 |
ideas, because no one on our team can keep straight the hundreds of 00:04:41.000 |
ideas that we see and have to prioritize among. 00:04:43.600 |
So you asked the generative AI to keep track of all the tasks that generative AI comes up with? 00:04:50.920 |
Oh, that'd be interesting, but we actually use Asana to keep track of all the 00:04:54.480 |
different ideas, so generally I summarize it. 00:04:58.600 |
You've talked for years about the importance of lifelong learning, the need to re-skill. 00:05:09.760 |
Is it realistic to think that people will be able to re-skill, to educate 00:05:15.120 |
themselves at a pace that keeps up with the development of this technology? 00:05:18.640 |
Not necessarily the folks in this room, but the people whose tasks are being automated. 00:05:25.680 |
How big of an issue do you think that's going to be, the displacement? 00:05:28.120 |
Because when we talk about technology and jobs, we always talk about the 00:05:32.040 |
long run: "Look, we used to all be farmers, now we're not." 00:05:35.400 |
In the long run it'll be fine, but in the meantime there's a lot of disruption. 00:05:42.880 |
Honestly, I think it's realistic, but I'm a little bit nervous about it. 00:05:47.040 |
But I think it is up to all of us collectively in leadership roles to make sure that happens. 00:05:54.040 |
One thing I'm seeing: in the last wave of tech innovation, when deep learning, 00:05:58.560 |
predictive AI, labeling technology, whatever you call it, started to work really 00:06:02.080 |
well 10 years ago, it tended to be more of the routine, repetitive tasks that were automated. 00:06:09.680 |
With generative AI, it seems to be more of the knowledge workers' work that AI is affecting. 00:06:17.880 |
And to the reskilling point, I think that almost all knowledge workers today can 00:06:22.760 |
get a productivity boost by using generative AI right away, pretty much today. 00:06:28.680 |
But the challenge is there is reskilling needed. 00:06:31.680 |
We've all seen the stories about a lawyer generating hallucinated 00:06:36.960 |
court citations, and then getting in trouble with the judge. 00:06:39.800 |
So I feel like people need just a little bit of training to use AI responsibly 00:06:46.080 |
But with just a little bit of training, I think almost all knowledge workers, 00:06:49.440 |
including all the way up to the C-suite, can get a productivity boost right away. 00:06:54.280 |
But I think it is an exciting, but also, frankly, daunting challenge to think about 00:06:59.960 |
how do we help all of these knowledge workers gain those skills. 00:07:03.960 |
Is that problem you alluded to, the hallucination problem, the accuracy 00:07:09.600 |
concern, is that fixable with AI, or is it more that we just have to learn to work with it? 00:07:22.400 |
I myself do not see a path to solving hallucinations and making AI never 00:07:28.240 |
hallucinate, in the same way that I don't see a path to solving the problem that humans sometimes make mistakes. 00:07:35.080 |
But we've figured out how to work with humans and for humans and so on. 00:07:41.560 |
And I think because generative AI burst onto the scene so suddenly, a lot of people 00:07:46.080 |
have not yet gotten used to the workflow and processes of how to work with them 00:07:51.960 |
So I know that when an AI makes a mistake, sometimes it goes viral on social 00:07:58.200 |
media or it draws a lot of attention, but I think that it's probably not as bad as 00:08:06.160 |
Yes, AI makes mistakes, but plenty of businesses are figuring out, despite some 00:08:11.240 |
baseline error rate, how to deploy it safely and responsibly. 00:08:15.520 |
And I'm not saying that it's never a blocker to get anything deployed, but I'm 00:08:19.320 |
seeing tons of stuff deployed in very useful ways. 00:08:23.440 |
Just don't use generative AI to render a medical diagnosis and directly output what drug to prescribe. 00:08:32.560 |
But there are lots of other use cases where it seems very responsible and safe to deploy. 00:08:40.880 |
Do you think improvements on error rates will increase the use cases? 00:08:46.760 |
I mean, right now, maybe we'll never get to a point, or not in the foreseeable 00:08:50.800 |
future, where you want the AI doctor to directly prescribe, but are there other 00:08:56.160 |
cases that are not optimal now, because we're still figuring out error rates, that will open up? 00:09:05.320 |
Yeah, it's been exciting to see how AI technology improves month over month. 00:09:11.240 |
So I think today we have much better tools for guarding against hallucinations 00:09:18.240 |
But just one example: if you ask the AI to use retrieval-augmented generation, so 00:09:23.760 |
it doesn't just generate text but grounds it in a specific trusted article, that reduces hallucinations. 00:09:31.120 |
And then further, if AI generates something and you really want it to be right, 00:09:34.840 |
it turns out you can ask the AI to check its own work. 00:09:40.920 |
Read both carefully and tell me if everything is justified based on the source article. 00:09:45.680 |
And this won't squash hallucinations completely to zero, but it will massively 00:09:49.920 |
reduce them compared to if you ask the AI to just say what it has in mind. 00:09:55.600 |
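The two mitigations described here, grounding answers in a trusted source (retrieval-augmented generation) and asking the model to check its own work, can be sketched as prompt templates around any chat-completion API. Everything below is an illustrative sketch: the `llm` callable is a stand-in for whatever client you use, and the prompt wording and function names are my own, not from the talk:

```python
def grounded_prompt(question: str, source: str) -> str:
    # RAG-style prompt: answer only from the supplied trusted text.
    return (
        "Answer the question using ONLY the source article below. "
        "If the answer is not in the article, say so.\n\n"
        f"Source article:\n{source}\n\nQuestion: {question}"
    )

def self_check_prompt(source: str, draft: str) -> str:
    # Verification pass: ask the model to audit its own draft.
    return (
        "Read the source article and the draft answer carefully. "
        "Tell me whether every claim in the draft is justified by the article. "
        "Reply 'JUSTIFIED' or list the unsupported claims.\n\n"
        f"Source article:\n{source}\n\nDraft answer:\n{draft}"
    )

def answer_with_check(question: str, source: str, llm):
    """Generate a grounded draft, then have the model verify it."""
    draft = llm(grounded_prompt(question, source))
    verdict = llm(self_check_prompt(source, draft))
    return draft, verdict
```

As Ng says, this won't drive hallucinations to zero, but grounding plus a second verification pass typically cuts them substantially compared with free-form generation.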
So I think hallucinations, it is an issue, but I think it's not as bad an issue as many fear. 00:10:10.080 |
And the technology has been through multiple hype cycles and declines before. 00:10:18.680 |
What do you think is different about this moment, the last 15 months or so? 00:10:29.760 |
>> So I feel like, compared to 10 or 15 years ago, we've not really had the same dynamics. 00:10:42.840 |
Years back, I used to lead the Google Brain team, which helped Google adopt deep learning. 00:10:49.400 |
And the fundamental economics are very strong. 00:10:53.960 |
We were using deep learning to drive online advertising. 00:10:57.560 |
Maybe not the most inspiring thing I've worked on, but the economic fundamentals 00:11:01.720 |
have been really strong for 10-ish plus years now. 00:11:07.080 |
And I feel like the economic fundamentals for generative AI also feel 00:11:12.920 |
quite strong in the sense that we can ultimately augment a lot of tasks and 00:11:18.240 |
drive a lot of very fundamental business efficiency. 00:11:23.440 |
I think Sequoia posted an interesting article asking: over the last 00:11:29.000 |
year, we collectively invested, I don't know, maybe something like $50 billion in 00:11:33.400 |
capital infrastructure, buying GPUs and building data centers. 00:11:37.000 |
And I think we better figure out the applications to make sure that pays off. 00:11:41.120 |
So I don't think we overinvested, but to me, whenever there's a new wave of 00:11:49.240 |
technology, almost all of the attention is on the technology layer. 00:11:52.680 |
So we all want to talk about what Google and OpenAI and Microsoft and Amazon and 00:11:57.800 |
so on are doing, because it's fun to talk about the technology. 00:12:01.480 |
But it turns out that for every wave of technology, for the tool builders, like 00:12:06.440 |
these companies, to be successful, there's another layer that had better be even more 00:12:10.680 |
successful, which is the applications you build on top of these tools. 00:12:14.640 |
Because the applications had better generate even more revenue so that they can afford to pay the tool builders. 00:12:20.960 |
And for whatever reason, the applications tend to get less attention than the technology layer. 00:12:30.320 |
But I think for many of you, in organizations where you are not trying to be 00:12:35.240 |
the next large language model or foundation model provider, I think that as we look 00:12:41.520 |
many years into the future, there will be more revenue generated, at least 00:12:45.760 |
there had better be, in the applications that you might build than in the tool layer itself. 00:12:51.640 |
That gets to a question that I find fascinating about this. 00:12:55.000 |
What is the effect on the power dynamics in the tech industry and the economy more broadly? 00:13:00.720 |
And to what degree is this a disruptive technology that is 00:13:06.680 |
ushering in a new wave of companies that 10 years from now will be big, even 00:13:12.360 |
though we hadn't heard of them two years ago, and to what degree is it just going to 00:13:15.440 |
make Microsoft and Amazon and Google, et cetera, more powerful than they've ever been? 00:13:23.080 |
So I think the cloud businesses are decently positioned, because it turns out 00:13:36.400 |
they generate so much efficiency that even though I may have a huge bill I need 00:13:41.080 |
to pay them, I don't mind paying it, because it's much better than the alternative. 00:13:45.840 |
But they also are very profitable businesses. 00:13:49.480 |
And it turns out that if you look at some of the generative AI startups today, the 00:13:53.720 |
switching costs of using, you know, one startup's API versus switching to AWS 00:14:00.680 |
or Azure or GCP, Google Cloud, the switching costs are actually still quite low. 00:14:06.440 |
So the moat of a lot of the generative AI startups, I'm not quite sure how strong it is. 00:14:13.360 |
But the cloud businesses, in contrast, have a very large surface area. 00:14:16.840 |
I mean, you know, frankly, once you build a deep tech stack on one of the clouds, it's 00:14:20.600 |
really painful to move off of it if you didn't, you know, design for multi-cloud 00:14:24.480 |
from day one or whatever, which is part of what makes the cloud business such a good business. 00:14:30.720 |
So I think a lot of the cloud businesses will do OK selling API calls and 00:14:35.920 |
integrating this with the rest of their existing cloud offerings. 00:14:39.720 |
And the market dynamics are very interesting, right? 00:14:41.760 |
So Meta has been a really interesting kind of spoiler for some other businesses by 00:14:46.640 |
releasing open source software, open-source generative AI software. 00:14:50.520 |
And with Meta, you know, it was my former team, Google Brain, that released TensorFlow. 00:14:57.360 |
And I think that it would make logical sense for Meta. 00:15:01.720 |
So Meta was really, you know, hurt by having to build on Android and iOS 00:15:08.880 |
When Apple changed their privacy policies, that really damaged Meta's business. 00:15:13.200 |
So, you know, it kind of makes logical sense that Meta would be worried: if Google 00:15:19.520 |
Brain, my old team, released the dominant AI developer platform, TensorFlow, and 00:15:25.400 |
everyone had to build on TensorFlow, what are the implications for Meta's business? 00:15:28.360 |
So frankly, Meta played its hand beautifully with open-source PyTorch as an alternative. 00:15:34.480 |
I think, again, today generative AI is very valuable for online advertising and other applications. 00:15:40.680 |
And so it actually makes a very strong logical sense that Meta would be quite 00:15:45.800 |
happy to have an open-source platform to build on, to make sure it's not locked 00:15:49.560 |
into an iOS-like platform in the generative AI era. 00:15:53.200 |
Fortunately, the good news for almost all of us is that Meta's work and many other 00:15:57.880 |
parties' work on open-sourcing AI gives all of us, you know, free tools to build on. 00:16:04.720 |
It gives us building blocks that let us innovate cheaply and build on top of them. 00:16:10.640 |
Sorry, not sure if that was too inside baseball on, you know, tech company dynamics. 00:16:15.320 |
No, I think, I mean, I wouldn't answer for everybody out there, but I thought it was interesting. 00:16:19.400 |
And I want to come back to open source in a minute, but from the point of view of 00:16:25.480 |
CIOs and other corporate leaders across the economy, I think there are lots of claims out there. 00:16:35.080 |
You know, lots of people trying to sell products, lots of people saying this 00:16:38.800 |
service will change your business, and part of the job is, you know, figuring out what's real. 00:16:44.920 |
Do you have any advice on how to tell, in a moment where the technology 00:16:49.960 |
is fairly nascent and fast-developing, the sort of people who have something real apart from those who don't? 00:16:57.520 |
You know, I'll tell you the thing that I think is tough. 00:17:00.000 |
Even our VC friends right here, some of them on Sand Hill Road, the one thing 00:17:04.920 |
that's still quite tough is the technical judgment, because AI is evolving so quickly. 00:17:10.360 |
So I've seen, you know, really good investors here on Sand Hill Road. 00:17:14.160 |
They'll be pitched on some startup, and sometimes, you know, someone, let's say 00:17:18.680 |
OpenAI, just released a new API, and the startup built something really 00:17:22.920 |
cool over a weekend on top of a new API that someone just released. 00:17:26.360 |
But unless you know, you know, about that new capability and what the startup really 00:17:30.960 |
did, I've seen VCs come to me and say, "Wow, Andrew, this is so exciting. 00:17:35.320 |
Look, these three, you know, college students built this thing. 00:17:40.480 |
And I'll go, "No, I just saw 50 other startups doing the exact same thing." 00:17:44.840 |
And so I think that technical judgment, because the tech is evolving so quickly, that's the hardest part. 00:17:52.120 |
And then maybe I should say, for our work with corporates, we tend to 00:17:56.240 |
go through a systematic brainstorming process, but I'll just mention 00:17:59.600 |
one other thing that I think is probably in many CIOs' interest, which is: we've all 00:18:04.000 |
seen, when we buy a new solution, you know, often we end up creating another data silo 00:18:12.120 |
And I feel like if we're able to work with vendors that, you know, let us continue to 00:18:19.600 |
have full access to our data in a reasonably interchangeable format, that significantly 00:18:26.000 |
reduces vendor lock-in, so that if you decide to swap one vendor out for a different one, you can. 00:18:32.520 |
So that's one thing I tend to pay heavy attention to myself. If I buy from a vendor, it's fine for them to 00:18:39.080 |
do stuff with my data; I want them to, that's what I'm paying them for. 00:18:42.620 |
But is there the transparency and interoperability to make sure that I control my own data, and 00:18:47.680 |
have the ability to have my own team take a look at it, or to swap out for a different vendor? 00:18:52.160 |
This does run counter, you know, to the interests of all the vendors that want lock-in, 00:18:56.640 |
candidly, but this is one thing I tend to rate higher than, you know, some of my colleagues 00:19:02.960 |
in my vendor selection and buying process maybe. 00:19:06.640 |
>> It sounds like you see a world where the folks in this room are implementing AI, generative 00:19:13.360 |
AI, through a multiplicity of different providers. 00:19:19.400 |
It's not going to be like, yeah, we're with Microsoft. 00:19:22.960 |
It's going to be, yeah, we use Microsoft for this, we use this company over here for that. 00:19:28.920 |
I would say it seems like the Microsoft sales reps... oh, actually, well, should we ask? 00:19:35.880 |
How many of you have had Microsoft sales reps push Copilot really hard to you? 00:19:44.000 |
You know, love the team, really capable, and Copilot gets partly good reviews, but there are other options. 00:19:49.480 |
I think it's worthwhile to, you know, take a look at multiple options and buy the right tool for each job. 00:19:57.960 |
>> You touched on the hardware costs, the amount that's been invested so far. 00:20:03.360 |
How concerned are you about the hardware bottleneck and the lack of GPUs, TPUs, you know, whatever 00:20:09.680 |
one wants to call it, and, you know, Nvidia's relative strength over the last year or two? 00:20:17.560 |
And what do you think of what we reported as Sam Altman's plans to raise potentially trillions of dollars for chips? 00:20:27.200 |
Sam was my student at Stanford way back, so I've known him for years. 00:20:35.400 |
I don't know where we'd find trillions of dollars; $7 trillion is an awful lot of money. 00:20:42.080 |
>> That lets you buy Apple twice, right, more than twice at this point, so it's an interesting number. 00:20:48.840 |
I think that over the next year, in a year or so, the semiconductor shortage 00:20:54.920 |
will feel much better, and I want to give, you know, AMD and Intel credit. 00:21:02.960 |
So one of Nvidia's moats has been the CUDA programming language, 00:21:10.040 |
but AMD's open-source alternative, ROCm, has been making really good progress, and so 00:21:15.840 |
some of my teams, you know, would build stuff on AMD hardware, and I don't 00:21:21.600 |
think it's at parity yet, but it's also so much better than a year ago. 00:21:26.400 |
So I think AMD is worth a careful look, and Intel Gaudi is also, you know, making progress, so we'll see. 00:21:42.120 |
It comes up in the regulatory discussion, and I think one argument that resonates is, 00:21:47.760 |
well, if we have, you know, at least if we have these proprietary models, there's a handful 00:21:52.360 |
of companies with these big powerful LLMs that we can focus on, make sure they're doing 00:21:58.360 |
the right thing to prevent this technology from being misused. 00:22:01.800 |
Open-source proliferates, and you're talking about not five or 10, but 500 or 1,000 or 00:22:07.560 |
even larger numbers of people who have these tools, and, you know, how do you know what 00:22:11.240 |
they're going to do with it, and how do you control it? 00:22:13.880 |
What's your answer to the people who have that concern about open-source? 00:22:17.600 |
So I think over the last year or so, there have been intense lobbying efforts by a number 00:22:24.360 |
of, usually, the bigger players in generative AI that would rather not have to compete with open source. 00:22:31.560 |
You know, they've invested hundreds of millions of dollars, right, to build a proprietary model. 00:22:35.880 |
Boy, isn't it annoying if someone open-sources something similar for free? 00:22:40.520 |
It's just not good for them, so the level of intensity of the lobbying in DC really took me by surprise. 00:22:48.920 |
And so the main argument has been: AI is dangerous, maybe it can wipe out humanity, so put 00:22:55.160 |
in place regulatory licensing requirements; before you build AI, you need to report to 00:23:01.200 |
the government and maybe even get a license and prove it's safe; and basically put in place 00:23:06.240 |
these really heavy regulatory burdens, in my opinion in the false name of safety, that 00:23:13.480 |
I think would really risk squashing open source. 00:23:16.360 |
It turns out that if these lobbying efforts succeed, I think almost all of us in this 00:23:20.360 |
room will be losers, and there will be a very small number of, you know, winners. 00:23:27.400 |
So there's actually a large community here in Silicon Valley that's been -- and around 00:23:31.560 |
the world that's been actively pushing back against this narrative. 00:23:35.120 |
I think for all of you, having the ability to use open-source components to build on lets 00:23:39.960 |
you control your own stack; it means that some vendor can't deprecate one version 00:23:46.040 |
and force your whole software stack to be re-architected, and so on. 00:23:50.640 |
And then to answer the safety thing, I feel like, you know, to me at the heart of it is, 00:23:58.080 |
do we want more or less intelligence in the world? 00:24:02.200 |
So recently, our primary source of intelligence has been human intelligence. 00:24:06.840 |
Now we also have artificial intelligence or machine intelligence. 00:24:10.440 |
And yes, intelligence can be used for nefarious purposes, but I think a lot of civilization's 00:24:15.840 |
progress has been through people getting training and getting smarter and more intelligent. 00:24:20.120 |
And I think we're actually all much better off with more, rather than less, intelligence in the world. 00:24:25.160 |
So I think open-source is a very positive contribution. 00:24:28.080 |
And then lastly, as far as I can tell, a lot of the fears of harm and nefarious actors: 00:24:35.000 |
there are a few, but when I look at it, I think a lot of the fears have been overblown. 00:24:43.720 |
So I want to get to questions in a moment, but just to follow up on that, I interviewed 00:24:48.440 |
you something like seven years ago at a WHO conference and asked you about the singularity 00:24:54.000 |
and those concerns, and I think you said that worrying about evil AI robots is equivalent to worrying about overpopulation on Mars. 00:25:11.800 |
At this point in time, so I feel like that super-intelligence singularity is much more 00:25:15.760 |
science fiction than anything that any of us AI builders know how to build. 00:25:20.720 |
You were saying that you've seen less of that type of talk; like, you were just at Davos. 00:25:26.400 |
In the regulatory discussions, there's less of this, like, "Oh my God, we've got to stop it. 00:25:31.640 |
We're building this thing that's so amazing, it might take over humanity." That's not as much in the air? 00:25:40.800 |
Last May, there was a statement signed by a bunch of people that I think made an analogy 00:25:44.640 |
between AI and nuclear weapons without justification. 00:25:47.240 |
AI brings intelligence to the world; nuclear weapons blow up cities. 00:25:50.760 |
I don't know why they have anything to do with each other, but that analogy just created a lot of fear. 00:25:56.200 |
Fortunately, since May, that degree of alarm has faded; when I speak with people in the US government 00:26:02.400 |
about AI causing human extinction, I'm very happy to see eye rolls at this point. 00:26:09.320 |
I think Europe takes it a little more seriously than the US, but I just see that tenor dying down. 00:26:18.240 |
We want self-driving cars to be safe, we want medical devices to be safe. 00:26:21.200 |
Instead of worrying about the AI tech, let's look at the concrete applications. 00:26:25.520 |
Because when we look at general purpose technology like AI, it's hard to regulate that without 00:26:29.840 |
just slowing everything down, but when we look at the concrete applications, we can 00:26:33.920 |
say what do and don't we want in financial services? 00:26:37.440 |
What is fair and what is unfair in underwriting? 00:26:43.960 |
Regulation at the application layer would be a very positive thing to even unlock innovation, 00:26:49.120 |
but there are these big fears that intelligence is dangerous and AI is dangerous. 00:26:53.680 |
That just tends to lead to regulatory capture and lobbyists having very strange agendas. 00:26:59.640 |
I see some hands down here; can we get a microphone? 00:27:09.840 |
Hi, thank you for all you do for this community. 00:27:18.680 |
All innovation follows some kind of an S-curve, and we're in this rapid acceleration of innovation. 00:27:29.800 |
Where do you think the plateau is, and what are the rate limiters that drive us toward it? 00:27:36.120 |
How much farther can this be pushed before we start to see ourselves hitting a plateau? 00:27:42.600 |
Yeah, so I think large language models are getting harder and harder to scale. 00:27:47.880 |
I think there is still more juice to squeeze there, but the exciting thing is that on top of the core 00:27:52.600 |
innovation of large language models, we're now stacking other S-curves. 00:27:57.800 |
So even if the first S-curve plateaus, I'm excited, for example, about edge AI, or on-device AI. 00:28:05.160 |
If you don't use it yet, it's easier than you might think and keeps all your data confidential. 00:28:13.280 |
And instead of you prompting AI and it responding in a few seconds, 00:28:16.520 |
we now see AI systems where I can tell it, dear AI, please do research for me and write a report. 00:28:22.000 |
It goes off for half an hour, browses the Internet, summarizes everything, and comes back. 00:28:27.840 |
This is kind of, you know, not working as well as I just described yet, but it's working 00:28:31.840 |
much better now than three months ago, so I'm excited about AI autonomous agents. 00:28:39.280 |
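The "goes off for half an hour and comes back with a report" behavior is typically built as a plan-act loop: a planner (in practice, an LLM call) repeatedly picks a tool such as search or summarize, accumulates observations, and eventually decides to finish. A minimal sketch, with the planner and tools as plain callables; all names here are illustrative, not a description of any specific product:

```python
def run_agent(goal: str, plan, tools: dict, max_steps: int = 10) -> str:
    """Minimal autonomous-agent loop: plan -> act -> observe, until 'finish'.

    `plan(goal, notes)` returns an (action, argument) pair; `tools` maps
    action names to callables. In a real system, `plan` would be an LLM
    call that sees the goal and the notes gathered so far.
    """
    notes = []
    for _ in range(max_steps):
        action, arg = plan(goal, notes)
        if action == "finish":
            return arg                  # the final report
        observation = tools[action](arg)
        notes.append(observation)       # feed results back into planning
    return "\n".join(notes)             # step budget exhausted: return raw notes
```

The `max_steps` budget is what keeps a half-hour research run from looping forever; production agent frameworks add the same kind of cap plus cost limits.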
We saw the unlock of text processing with large language models; with large vision models, 00:28:45.720 |
which are at a much earlier stage of development, I think we're starting to see a revolution in 00:28:51.240 |
image processing in the same way that we saw a revolution in text processing. 00:28:55.320 |
So these are some of the other S-curves being stacked up on top, and then some are even 00:28:59.640 |
further out, so I'm not seeing an overall plateau in AI yet. 00:29:04.720 |
Maybe there'll be one, but I'm not seeing it yet. 00:29:13.520 |
It's a great dialogue, and my sophomore at Berkeley spends more time watching your videos than anything else. 00:29:18.840 |
So you mentioned automating tasks and also human intelligence. 00:29:23.800 |
The knowledge of the tasks is still owned by the humans. 00:29:26.920 |
In your dialogues with clients, are you seeing resistance to unpack the tasks that humans 00:29:32.160 |
do accurately so that you can apply AI to it? 00:29:35.080 |
And if you are seeing resistance, what is the solve for that? 00:29:41.480 |
So I feel like, I find that when we have a realistic conversation... so let's see, 00:29:47.240 |
when we work with corporations, at AI Fund we often work with corporations to brainstorm 00:29:51.200 |
project ideas and figure out what we can help build. 00:29:53.960 |
That's actually, as an AI person, I've learned that my swim lane is AI, but all of these 00:29:58.760 |
exciting businesses applying it are ones I just don't know much about, so a core part 00:30:02.520 |
of our strategy is to work with large corporate partners that are much smarter than we are about their own businesses. 00:30:11.360 |
So what I'm finding is that at the executive level, which is probably who we work 00:30:15.440 |
with the most day-to-day, there's no resistance at all. 00:30:22.240 |
Maybe one unlock that I found: I teach a class on Coursera called Generative AI for Everyone. 00:30:28.600 |
It was the fastest-growing course on Coursera last year. 00:30:31.560 |
But I did that to try to give business leaders and others a non-technical understanding of 00:30:36.760 |
AI and what it can and cannot do, and we found that when some of our partners take 00:30:40.360 |
Generative AI for Everyone, you know, that non-technical understanding of AI unlocks a lot of brainstorming. 00:30:46.560 |
So that's the executive level: learn about generative AI, brainstorm, execute. 00:30:52.880 |
And then many businesses are sensitive to the, you know, broader employee base's concerns. 00:31:00.600 |
And I find that when we have a really candid conversation, the fears usually go down. 00:31:07.600 |
I don't want to pretend there's zero job loss. 00:31:11.920 |
But when we do the task-based analysis of jobs, you know, say AI automates 20 percent of my tasks: 00:31:20.620 |
I can be more productive and focus more on the other 80 percent of tasks. 00:31:24.200 |
So on average, once we have that more candid conversation, it goes well. Although, you know, I'm thinking of this 00:31:29.840 |
one time a union stopped us from even installing one camera. 00:31:33.040 |
So there is some of that, but most of the time, it's a pretty rational and okay conversation.