To sum it up to start, what would you say are going to be the biggest positive and negative impacts on the workforce from AI over the next, say, five years? >> I think there will be a massive productivity boost for existing job roles, and it will create many new job roles.
And I don't want to pretend that there will be no job loss. There will be some job loss, but I think it may not be as bad as people are worried about right now. I know that we're having an important societal discussion about AI's impact on jobs, and from a business perspective, I actually find it even more useful to not think about AI automating jobs, but instead about AI automating tasks.
So it turns out that most jobs you can think of as a bundle of tasks. And when I work with large companies, many CEOs will come and say, hey, Andrew, I have 50,000 or 100,000 employees — what are all my people actually doing, right? It turns out none of us really know in detail what our workforces are doing.
But I found that if you look at the jobs and break them down into tasks, then analyzing individual tasks for potential for AI automation or augmentation often leads to interesting opportunities to use AI. And maybe one concrete example, radiologists. We've talked about AI maybe automating some parts of radiology.
But it turns out that radiologists do many tasks. They read x-rays, but they also do patient intake, gather patient histories. They consult with patients, mentor younger doctors. They operate the machines, maintain the machines. So they actually do many different tasks. And we found that when we go into businesses and do this task-based analysis, it often surfaces interesting opportunities.
And regarding the job question, it turns out that for many jobs, if AI automates 20, 30% of the tasks in a job, then the job maybe is actually decently safe. But what will happen is not that AI will replace people, but I think people that use AI will replace other people that don't.
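To make that task-based analysis concrete, here is a minimal sketch of what breaking a role into tasks and scoring them might look like in practice. The radiologist task list, hours, and scores below are illustrative assumptions for the sake of the example, not figures from this conversation.

```python
# Illustrative sketch of a task-based job analysis. The tasks, hours, and
# scores below are made-up examples for a radiologist role, not real data.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float   # rough time spent on the task
    ai_potential: float     # 0.0 (none) to 1.0 (highly automatable/augmentable)

radiologist_tasks = [
    Task("Read and interpret x-rays",          15, 0.7),
    Task("Patient intake / gather histories",   6, 0.5),
    Task("Consult with patients",               8, 0.1),
    Task("Mentor younger doctors",              4, 0.1),
    Task("Operate and maintain machines",       7, 0.2),
]

total_hours = sum(t.hours_per_week for t in radiologist_tasks)
augmentable = sum(t.hours_per_week * t.ai_potential for t in radiologist_tasks)

print(f"Share of the role with AI potential: {augmentable / total_hours:.0%}")
for t in sorted(radiologist_tasks,
                key=lambda t: t.hours_per_week * t.ai_potential, reverse=True):
    print(f"  {t.name}: weighted opportunity {t.hours_per_week * t.ai_potential:.1f} hrs/week")
```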
>> What are the types of tasks that you're seeing? And, if you can say, which professions do you think have the highest concentration of those types of tasks? >> So some job roles are really being disrupted right now. Call centers, call center operations, customer support is one.
It feels like tons of companies are using AI to nearly automate that, or automate a large fraction of that. Sales operations, the sales back office — those routine tasks are being automated — and a bunch of others. We see different teams trying to automate some of the lower-level legal work, some of the lower-level marketing work, a bunch of others.
But I would say the two biggest I'm seeing are customer service and then maybe some sort of sales operations. But I think there's a lot of opportunity out there. >> How do you think it's going to change the role of CIOs, the folks in this room? >> I think it's an exciting time to be a CIO.
One thing that my team AI Fund does is work with large corporations to identify and then execute on AI projects. So over the last week, I think I spent almost half-day sessions with two Fortune 500 companies, where I got to hear from their technology leadership about the use cases they are pursuing.
And one pattern I've seen: every time we spend a little bit of time brainstorming AI projects, there are always way more promising ideas than anyone has the resources to execute. And so it becomes a fascinating prioritization exercise just to decide what to do. And then the other decision I'm seeing is, after you do a task-based analysis to identify tasks and jobs for ideas, or after your team learns about AI and brainstorms ideas, and after you prioritize, there's the usual buy-versus-build decision.
And it turns out that we seem to be in an opportunity-rich environment. What AI Fund often winds up doing is, the company will say, "This project I'll keep close to my heart; I will pay for 100% of this." But there are so many other ideas that you just don't want to pay for the development of completely by yourself, and so we then help our corporate partners build them outside, so you can still get the capability without having to pay for it entirely by yourself.
But I find that at AI Fund we see so many startup and project ideas that we wind up having to use task management software just to keep track of them, because no one on our team can keep straight the hundreds of ideas that we see and have to prioritize among.
>> So you asked the generative AI to keep track of all the tasks that generative AI can do? >> Oh, that'd be interesting, but we actually use Asana to keep track of all the different ideas — though maybe generative AI could summarize it. >> You've talked for years about the importance of lifelong learning, and the enhanced importance of that in the AI world.
Is it realistic to think that people will be able to re-skill, to educate themselves at a pace that keeps up with the development of this technology? Not necessarily the folks in this room, but the people whose tasks are being automated. How big of an issue do you think that's going to be, the displacement?
Because when we talk about technology and jobs, we always talk about the long run: "Look, we used to all be farmers, now we're not." In the long run it'll be fine, but in the meantime there's a lot of dislocation. >> Yeah, so that's a really hard one to answer. Honestly, I think it's realistic, but I'm a little bit nervous about it.
But I think it is up to all of us collectively in leadership roles to make sure that we do manage this well. One thing I'm seeing: in the last wave of tech innovation, when deep learning, predictive AI, labeling technology — whatever you want to call it — started to work really well 10 years ago, it tended to be more of the routine, repetitive tasks, like factory automation, that we could automate.
With generative AI, it seems to be more of the knowledge workers' work that AI can now automate or augment. And to the reskilling point, I think that almost all knowledge workers today can get a productivity boost by using generative AI right away, pretty much right now. But the challenge is there is reskilling needed.
We've all seen the stories about a lawyer generating hallucinated court citations, and then getting in trouble with the judge. So I feel like people need just a little bit of training to use AI responsibly and safely. But with just a little bit of training, I think almost all knowledge workers, including all the way up to the C-suite, can get a productivity boost right away.
But I think it is an exciting, but also, frankly, daunting challenge to think about how we help all of these knowledge workers gain those skills. >> That problem you alluded to, the hallucination problem, the accuracy concern — is that fixable with AI, or is it more that we just have to learn to use it the right way and assume an error rate?
>> Yeah, so I don't see a path. I myself do not see a path to solving hallucinations and making AI never hallucinate, in the same way that I don't see a path to solving the problem that humans sometimes make mistakes. But we've figured out how to work with humans and for humans and so on.
It seems to go OK most of the time. And I think because gen AI burst onto the scene so suddenly, a lot of people have not yet gotten used to the workflows and processes of how to work with it safely and responsibly. So I know that when an AI makes a mistake, sometimes it goes viral on social media or it draws a lot of attention, but I think that it's probably not as bad as the widespread perception.
Yes, AI makes mistakes, but plenty of businesses are figuring out, despite some baseline error rate, how to deploy it safely and responsibly. And I'm not saying that it's never a blocker to getting anything deployed, but I'm seeing tons of stuff deployed in very useful ways. Just don't use gen AI to render a medical diagnosis and directly output what drug to tell a patient to take.
That would be really irresponsible. But there are lots of other use cases where it seems very responsible and safe to use gen AI. >> Do you think improvements in error rates will increase the use cases? I mean, maybe we'll never get to a point — or not in the foreseeable future — where you want the AI doctor to directly prescribe, but are there other cases that are not optimal now, because we're still figuring out error rates, that will become more usable over time?
>> Yeah, it's been exciting to see how AI technology improves month over month. So I think today we have much better tools for guarding against hallucinations compared to, say, six months ago. Just one example: if you ask the AI to use retrieval-augmented generation — so don't just generate text, but ground it in a specific trusted article and give a citation — that reduces hallucinations.
And then further, if AI generates something and you really want it to be right, it turns out you can ask the AI to check its own work. Dear AI, look at this thing you just wrote. Look at this trusted source. Read both carefully and tell me if everything is justified based on the trusted source.
And this won't squash hallucinations completely to zero, but it will massively reduce them compared to if you just ask the AI to say whatever it has in mind. So I think hallucinations — it is an issue, but I think it's not as bad an issue as people fear it to be right now.
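As a rough illustration of the two techniques just described — grounding an answer in a trusted source, then asking the model to check its own work — here is a minimal sketch. The `ask_llm` helper is hypothetical and would need to be wired to whatever model or API you actually use; the prompts are illustrative, not a prescribed recipe.

```python
# Sketch of grounded generation plus a self-check pass. ask_llm is a
# hypothetical helper standing in for your LLM provider or local model.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider or local model")

def answer_with_grounding(question: str, trusted_article: str) -> str:
    # Step 1: retrieval-augmented generation -- ground the answer in a
    # trusted source and ask for citations instead of free generation.
    draft = ask_llm(
        "Answer the question using ONLY the article below. "
        "Cite the sentence(s) you relied on.\n\n"
        f"Article:\n{trusted_article}\n\nQuestion: {question}"
    )

    # Step 2: ask the model to check its own work against the same source.
    verdict = ask_llm(
        "Here is an article and a draft answer. Read both carefully and say "
        "whether every claim in the answer is justified by the article. "
        "Reply 'SUPPORTED' or list the unsupported claims.\n\n"
        f"Article:\n{trusted_article}\n\nDraft answer:\n{draft}"
    )

    # Route anything the checker flags to a human instead of shipping it.
    if "SUPPORTED" not in verdict:
        return f"[Needs human review]\n{draft}\n\nChecker notes:\n{verdict}"
    return draft
```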
>> You've been involved in AI for decades, and the technology has been through multiple hype cycles, declines, and winters — AI winters. What do you think is different about this moment, the last 15 months or so, the boom of generative AI? Is this more lasting?
>> So I feel like, compared to 10, 15 years ago, we've not really had another AI winter; I think it's been growing in value. Years back, I used to lead the Google Brain team, which helped Google adopt deep learning. And the fundamental economics are very strong.
Using deep learning to drive online advertising — maybe not the most inspiring thing I've worked on, but the economic fundamentals have been really strong for 10-ish plus years now. And the fundamentals for generative AI also feel quite strong, in the sense that it can ultimately augment a lot of tasks and drive a lot of very fundamental business efficiency.
Now, there is one question. I think Sequoia posted an interesting article asking: over the last year, we collectively invested, I don't know, maybe something like $50 billion in capital infrastructure — buying GPUs, building data centers. And I think we had better figure out the applications to make sure that pays off.
So I don't think we overinvested, but to me, whenever there's a new wave of technology, almost all of the attention is on the technology layer. So we all want to talk about what Google and OpenAI and Microsoft and Amazon and so on are doing, because it's fun to talk about the technology.
There's nothing wrong with that. But it turns out that for every wave of technology, for the tool builders — these companies — to be successful, there's another layer that had better be even more successful, which is the applications you build on top of these tools. Because the applications had better generate even more revenue so that they can afford to pay the tool builders.
And for whatever reason — where society's interest goes, or whatever — the applications tend to get less attention than the tool builders. But I think for many of you, in organizations where you are not trying to be the next large language model or foundation model provider, as we look many years into the future, there will be more revenue generated — at least there had better be — in the applications that you might build than in the tool builders.
>> That gets to a question that I find fascinating about this. What is the effect on the power dynamics in the tech industry and the economy more broadly? To what degree is this a disruptive technology that is ushering in a new wave of companies that 10 years from now will be big, even though we hadn't heard of them two years ago — and to what degree is it just going to make Microsoft and Amazon and Google, et cetera, more powerful than they've ever been before?
>> So I think the cloud businesses are decently positioned, because it turns out that AWS, Azure, GCP — those are beautiful businesses. They generate so much efficiency that even though I may have a huge bill I need to pay them, I don't mind paying it, because it's much better than the alternative most of the time.
But they also are very profitable businesses. And it turns out that if you look at some of the generative AI startups today, the switching costs of using, you know, one startup's API versus switching to AWS or Azure or GCP, Google Cloud — the switching costs are actually still quite low.
So the moat of a lot of the generative AI startups — I'm not quite sure how strong their moat is. But the cloud businesses themselves have very high switching costs. I mean, you know, frankly, once you build a deep tech stack on one of the clouds, it's really painful to move off it if you didn't, you know, design for multi-cloud from day one or whatever, which is part of what makes the cloud business such a beautiful business model.
So I think a lot of the cloud businesses will do OK selling API calls and integrating this with the rest of their existing cloud offerings. And the market dynamics are very interesting, right? Meta has been a really interesting kind of spoiler for some other businesses by releasing open-source software — open-source generative AI software.
And I think Meta — you know, it was my former team, Google Brain, that released TensorFlow. And I think it would make logical sense for Meta: Meta was really, you know, hurt by having to build on the Android and iOS platforms, right? When Apple changed their privacy policies, that really damaged Meta's business.
So, you know, it kind of makes logical sense that Meta would be worried: if Google Brain, my old team, had released the dominant AI developer platform, TensorFlow, and everyone had to build on TensorFlow, what are the implications for Meta's business? So frankly, Meta played its hand beautifully with open-source PyTorch as an alternative.
I think, again, today gen AI is very valuable for online advertising and also, you know, user engagement. And so it actually makes very strong logical sense that Meta would be quite happy to have an open-source platform to build on, to make sure it's not locked into an iOS-like platform in the gen AI era. Fortunately, the good news for almost all of us is that Meta's work, and many other parties' work, on open-sourcing AI gives all of us, you know, free tools to build on top of. It gives us those building blocks that let us innovate cheaply and build maybe more exciting applications on top.
Sorry, not sure if that was too inside-baseball on, you know, the tech company market. >> No, I mean, I won't answer for everybody out there, but I thought it was fascinating. And I want to come back to open-source in a minute, but from the point of view of CIOs and other corporate leaders across the economy, I think there are lots of options coming at you right now.
You know, lots of people trying to sell products, lots of people saying this service will change your business, and part of the job is, you know, figuring out what's wheat and what's chaff. Do you have any advice on how to tell, in a moment where the technology is fairly nascent and fast-developing, the people who have real solutions from the snake oil salesmen?
>> You know, I'll tell you the thing that I think is tough. Even for our VC friends right here on Sand Hill Road — some of them on Sand Hill Road — the one thing that's still quite tough is the technical judgment, because AI is evolving so quickly. So I've seen, you know, really good investors here on Sand Hill Road.
They'll be pitched on some startup, and sometimes, you know, someone — let's say OpenAI or someone — just released a new API, and the startup built something really cool over a weekend on top of the new API that was just released. But unless you know about that new capability and what the startup really did — I've seen VCs come to me and say, "Wow, Andrew, this is so exciting.
Look, these three, you know, college students built this thing. This is amazing. I want to fund it." And I'll go, "No, I just saw 50 other startups doing the exact same thing." And so I think that technical judgment, because the tech is evolving so quickly, that's the one thing that I find difficult.
And then maybe I should say, for our work with corporate partners, we tend to go through a systematic brainstorming process, but I'll just mention one other thing that I think is probably in many CIOs' interests, which is: we've all seen that when we buy a new solution, you know, often we end up creating another data silo within our organization.
And I feel like if we're able to work with vendors that, you know, let us continue to have full access to our data in a reasonably interchangeable format, that significantly reduces vendor lock-in, so that if you decide to swap one vendor out for a different vendor in a month or two, you can.
So that's one thing I tend to pay heavy attention to myself: if I buy from a vendor, of course they'll do stuff with my data — I want them to, that's what I'm paying them for — but is there the transparency and interoperability to make sure that I control my own data, and that I have the ability to let my own team take a look at it or to swap out for a different vendor?
This does run counter, you know, to the interests of all the vendors that want lock-in, candidly, but this is one thing I tend to rate higher than, you know, some of my colleagues do in my vendor selection and buying process. >> It sounds like you see a world where the folks in this room are implementing AI — generative AI — through a multiplicity of different providers.
It's not going to be like, yeah, we're with Microsoft. It's going to be, yeah, we use Microsoft for this, we use this company over here for that. Is that right? >> Yeah. I would say it seems like the Microsoft sales reps -- oh, I'm actually -- well, should we do a poll?
How many of you have had Microsoft sales reps push Copilot really hard to you? Yeah, I thought so — almost everyone, right? So Microsoft is great. You know, love the team, really capable, and Copilot gets decent reviews, but there's so much stuff out in the market.
I think it's worthwhile to, you know, take a look at multiple options and buy the right tool for the right job. >> You touched on the hardware costs, the amount that's been invested so far. How concerned are you about the hardware bottleneck — the lack of GPUs, TPUs, you know, whatever one wants to call it — and, you know, Nvidia's relative strength over the last year or two?
And what do you think of what we reported as Sam Altman's plans to raise potentially trillions of dollars to solve this? >> Yeah. It would be -- Sam was my student at Stanford way back, so I've known him for years. He's a smart guy. I can't argue with the results. I don't know where we'd find trillions — the $7 trillion that was reported.
>> Up to you. >> That lets you buy Apple twice — right, more than twice at this point — so it's an interesting figure to try to raise. I think that over the next year or so, the semiconductor shortage will feel much better, and I want to give, you know, AMD credit — AMD and Intel, maybe.
So Nvidia's — one of Nvidia's moats has been the CUDA programming language, but AMD's open-source alternative, called ROCm, has been making really good progress, and so some of my teams, you know, will build stuff on AMD hardware, and — I don't think it's at parity yet, but it's also so much better than a year ago.
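For teams experimenting with AMD hardware, a minimal sketch of what portability can look like in practice: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda interface, so device-agnostic code like the following typically runs unchanged on either vendor's accelerators. This is a sketch under that assumption, not a claim about performance parity.

```python
# Minimal hardware-agnostic PyTorch example. ROCm builds of PyTorch report
# AMD GPUs through torch.cuda, so the same code can usually run on Nvidia
# (CUDA) or AMD (ROCm), depending on which PyTorch build is installed.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
y = x @ w  # runs on whichever accelerator the installed build targets

name = torch.cuda.get_device_name(0) if device == "cuda" else "CPU"
print(f"Ran matmul on: {device} ({name})")
```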
So I think AMD is worth a careful look at, and Intel Gaudi is also, you know -- so we'll see how the market evolves. >> You mentioned open-source several times. I know you're a champion of that. It comes up in the regulatory discussion, and I think one argument that resonates is, well, if we have, you know, at least if we have these proprietary models, there's a handful of companies with these big powerful LLMs that we can focus on, make sure they're doing the right thing to prevent this technology from being misused.
Open-source proliferates, and you're talking about not five or 10, but 500 or 1,000 or even larger numbers of people who have these tools, and, you know, how do you know what they're going to do with it, and how do you control it? What's your answer to the people who have that concern about open-source?
>> Yeah. So I think over the last year or so, there have been intense lobbying efforts by a number of — usually the bigger — players in generative AI that would rather not have to compete with open-source. You know, they've invested hundreds of millions of dollars, right, to build a proprietary AI model.
Boy, isn't it annoying if someone open-sources something similar for free? It's just not good for them, so the level of intensity of lobbying in DC really took me by surprise. And the main argument has been: AI is dangerous, maybe it can wipe out humanity, so put in place regulatory licensing requirements — before you build AI, you need to report to the government and maybe even get a license and prove it's safe — basically put in place these really heavy regulatory burdens, in my opinion in the false name of safety, that I think would really risk squashing open-source.
It turns out that if these lobbying efforts succeed, I think almost all of us in this room will be losers, and there will be a very small number of, you know, players that will benefit from this. So there's actually a large community here in Silicon Valley, and around the world, that's been actively pushing back against this narrative.
I think for all of you, having the ability to use open-source components to build on lets you control your own stack. It means that some vendor can't deprecate one version — this has happened, right? — and then your whole software stack needs to be re-architected and so on. And then, to answer the safety thing: I feel like, you know, to me the heart of it is, do we want more or less intelligence in the world?
So until recently, our primary source of intelligence has been human intelligence. Now we also have artificial intelligence, or machine intelligence. And yes, intelligence can be used for nefarious purposes, but I think a lot of civilization's progress has been through people getting training and getting smarter and getting more intelligent. And I think we're actually all much better off with more, rather than less, intelligence in the world.
So I think open-source is a very positive contribution. And then lastly, as far as I can tell, a lot of the fears of harm and nefarious actors — it's not that there are no negative use cases. There are a few, but when I look at it, I think a lot of the fears have been overblown relative to the actual risk.
>> So I want to get to questions in a moment, but just to follow up on that: I interviewed you something like seven years ago at a WHO conference and asked you about the singularity and those concerns, and I think you said that worrying about evil AI robots is equivalent to worrying about overpopulation on Mars.
We're not even there yet. Are we on Mars yet in this metaphor? Where are we in that progress? >> Yeah. At this point in time, I feel like that superintelligence singularity is much more science fiction than anything that any of us AI builders know how to build. So I still feel that way.
>> You were saying that you've seen less of that type of talk — you were just at Davos. In the regulatory discussions, is there less of this "Oh my God, we've got to stop this, we're building this thing that's so amazing it might take over humanity"? Is that not as much part of the discussion now?
>> It's really dying down. Last May, there was a statement signed by a bunch of people that, I think, made an analogy between AI and nuclear weapons without justification. AI brings intelligence to the world; nuclear weapons blow up cities. I don't know what they have to do with each other, but that analogy just created a lot of hype.
Fortunately, since May, that degree of alarm has faded — when I speak with people in the US government about AI and human extinction, I'm very happy to see eye rolls at this point. I think Europe takes it a little more seriously than the US, but I just see the tenor dying down, so we can talk more about concrete harms.
We want self-driving cars to be safe, we want medical devices to be safe. Instead of worrying about the AI tech, let's look at the concrete applications. Because when we look at a general-purpose technology like AI, it's hard to regulate that without just slowing everything down, but when we look at the concrete applications, we can say: what do and don't we want in financial services?
What is fair and what is unfair in underwriting? What standards should medical devices meet? Regulation at the application layer would be a very positive thing that could even unlock innovation, but these big fears — intelligence is dangerous, AI is dangerous — just tend to lead to regulatory capture and lobbyists pushing very strange agendas.
>> Do we have any questions in the audience? I see some down here — can we get a microphone? The gentleman and then the lady. >> Hi, thank you for all you do for this community. I think your online courses are amazing. All innovation follows some kind of an S-curve, and we're in this rapid acceleration of innovation around generative AI and machine learning.
Where do you think the plateau is, and what are the rate limiters driving us toward that plateau? How much farther can this be pushed before we start to see ourselves hitting a plateau, and what's going to limit that? >> Yeah, so I think large language models are getting harder and harder to scale.
I think there is still more juice to squeeze there, but the exciting thing is that on the core innovation of large language models, we're now stacking other S-curves on top of the first one. So even if the first S-curve plateaus, I'm excited, for example, about edge AI, or on-device AI. I run an LLM on my laptop all the time.
If you don't yet, it's easier than you might think, and it keeps all your data confidential — open-source AI you can run on your laptop. I'm also excited about agents. Instead of you prompting an AI and it responding in a few seconds, we now see AI systems where I can tell it: dear AI, please do research for me and write a report.
It goes off, browses the Internet, summarizes all the things, and comes back in half an hour with a report. It's not working as well as I just described, but it's working much better now than three months ago, so I'm excited about autonomous AI agents that go off and work for an extended period of time.
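For anyone who wants to try the on-device point above, here is a minimal sketch. It assumes Ollama, one popular open-source way to serve models locally, is installed and listening on its default local port, and that a small open model has already been pulled; the model name below is just an example, not a recommendation.

```python
# Minimal sketch of calling an open-source LLM running locally on a laptop.
# Assumes Ollama (https://ollama.com) is installed, serving on its default
# port, and that the example model below has been pulled with `ollama pull`.
# Prompts and responses never leave the machine.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize why on-device AI keeps data confidential."))
```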
We saw the unlock of text processing with large language models; with large vision models, which are at a much earlier stage of development, I think we're starting to see a revolution in image processing in the same way that we saw a revolution in text processing. So these are some of the other S-curves being stacked on top, and then some are even further out, so I'm not seeing an overall plateau in AI yet.
Maybe there'll be one, but I'm not seeing it yet. >> Do you have a very quick question? >> Yeah. >> Okay. Thank you. >> Thank you. It's a great dialogue, and our sophomore at Berkeley spends more time watching your videos than taking courses, so thank you again. So you mentioned automating tasks and also human intelligence.
The knowledge of the tasks is still owned by the humans. In your dialogues with clients, are you seeing resistance to unpacking the tasks that humans do accurately, so that you can apply AI to them? And if you are seeing resistance, what is the solve for that? >> Let's see.
So I feel like -- I find that when we have a realistic conversation -- so let's see, at AI Fund we often work with corporations to brainstorm project ideas and figure out what we can help build. As an AI person, I've learned that my swim lane is AI, but there are all of these exciting businesses to apply it to that I just don't know anything about, so a core part of our strategy is to work with large corporate partners that are much smarter than I am about the business domains to apply it to.
So what I'm finding is that at the executive level, which is probably, you know, who we work with the most day-to-day, there's not resistance at all. There's just enthusiasm. Maybe one unlock that I found: I teach a class on Coursera, Generative AI for Everyone. It was the fastest-growing course on Coursera last year.
I did that to try to give business leaders and others a non-technical understanding of AI and what it can and cannot do, and we found that when some of our partners take Generative AI for Everyone, you know, that non-technical understanding of AI unlocks a lot of brainstorming and ideation.
So that's the executive level: learn about gen AI, brainstorm, execute, lots of exciting projects. And then many businesses are sensitive to the, you know, broader employee base's concerns about job loss. And I find that when we have a really candid conversation, the fears usually go down. I don't want to pretend there's zero job loss.
That's just not true. But when we do the task-based analysis of jobs, you know — hey, if AI automates 20 percent of my job, to a lot of people that's great: I can be more productive and focus more on the other 80 percent of tasks. So on average, once we have that more candid conversation — although, you know, I'm thinking of this one time a union stopped us from even installing one camera.
So there is some of that, but most of the time, it's a pretty rational and okay conversation.