the Joe Rogan Experience, and wondering what the potential for the future is, and whether or not that's a good thing.

I think it's going to be a great thing, but I don't think it's going to be all great, and that's where all of the complexity comes in for people. It's not this clean story of "we're going to do this and it's all going to be great." It's: we're going to do this, it's going to be net great, but it's going to be a technological revolution, a societal revolution, and those always come with change. Even if it's net wonderful, there are things we're going to lose along the way: some kinds of jobs, some parts of the way we live are going to change or go away. No matter how tremendous the upside is, and I believe it will be tremendously good, there's a lot of stuff we have to navigate through to make sure of that. That's a complicated thing for anyone to wrap their head around, and there are deep and super understandable emotions around it.

That's a very honest answer, that it's not all going to be good. But it seems inevitable at this point.

Yeah, it's definitely inevitable. When you're a kid in school, you learn about this technological revolution, and then that one, and then that one. My view of the world now, looking backwards and forwards, is that this is one long technological revolution. First we had to figure out agriculture, so that we had the resources and time to figure out how to build machines. Then we got the Industrial Revolution, which led to a lot of other scientific discovery that let us do the computer revolution. And that's now letting us, as we scale up to these massive systems, do the AI revolution. It really is just one long story of humans discovering science and technology and co-evolving with it, and I think it's the most exciting story of all time.
I think it's how we get to this world of abundance. Although we do have these things to navigate, and there will be these downsides, think about what it means for the world and for people's quality of life if we can get to a place where the cost of intelligence dramatically falls and the abundance that comes with it goes way up. I think we'll do the same thing with energy, and I think those are the two key inputs to everything else we want. If we can have abundant and cheap energy and intelligence, that will transform people's lives, largely for the better. If we could go back 500 years and look at someone's life, we'd say, well, there are some great things, but they didn't have this, they didn't have that. Can you believe they didn't have modern medicine? That's how people are going to look back at us in 50 years.

When you think about the people that currently rely on jobs that AI will replace, whether it's truck drivers or automation workers, people working on factory assembly lines, what strategies, if any, can be put in place to mitigate the downsides of those jobs being eliminated by AI?

I'll talk about some general thoughts, but I find making very specific predictions difficult, because the way the technology has gone has been so different from even my own intuitions.

Maybe we should stop there and back up a little. What were your initial thoughts?

If you had asked me ten years ago, I would have said: first, AI is going to come for blue-collar labor. It's going to drive trucks and do factory work, and it'll handle heavy machinery. Then maybe after that it'll do some kinds of cognitive labor, but not what I think of personally as the really hard stuff: it won't be off proving new mathematical theorems, it won't be off discovering new science, it won't be off writing code. And then eventually, maybe last of all, maybe never, because human creativity is this magic, special thing, it'll come for the creative jobs. That's what I would have said.

Now, A, it looks to me like, for a while, AI is much better at doing tasks than doing jobs. It can do these little pieces super well, but sometimes it goes off the rails; it can't keep very long coherence. So people are instead just able to do their existing jobs way more productively, but you really still need the human there today. And B, it's going in exactly the other direction: it can do the creative work first, stuff like coding second, other kinds of cognitive labor third, and we're the furthest away from humanoid robots.

So back to the initial question. If we do have something that completely eliminates factory workers, completely eliminates truck drivers, delivery drivers, things along those lines, that creates this massive vacuum in our society.

I think there are things we're going to do that are good to do but not sufficient. At some point we will do something like a UBI, or some other kind of very long-term unemployment insurance, some way of redistributing money in society as a cushion while people figure out the new jobs. But, and maybe I should touch on this, I'm not a believer at all that there won't be lots of new jobs. Human creativity, the desire for status, wanting different ways to compete, to invent new things, to feel part of a community, to feel valued: that's not going anywhere. People have worried about this forever. What happens is we get better tools, and we just invent new and more amazing things to do. And there's a big universe out there. I mean that literally, in that space is really big, but also there's just so much stuff we can all do if we get to this world of abundant intelligence, where you can sort of just think of a new idea and it gets created.

But again, to the point we started with, that doesn't provide great solace to people who are losing their jobs today. If you say there's going to be this great indefinite stuff in the future, people say: what are we doing today? So I think we will, as a society, do things like UBI and other forms of redistribution, but I don't think that gets at the core of what people want. I think what people want is agency, self-determination, the ability to play a role in architecting the future along with the rest of society, the ability to express themselves and create something meaningful to them. Also, a lot of people work jobs they hate, and we as a society are always a little bit confused about whether we want to work more or work less. But somehow we all want to do something meaningful and to play our role in driving the future forward. That's really important.

What I hope is that as those long-haul truck-driving jobs go away (people have been wrong about predicting how fast that's going to happen, but it is going to happen), we figure out not just a way to solve the economic problem by giving people the equivalent of money every month, but a way, and we've got a lot of ideas about this, to share ownership and decision-making over the future. Something I say a lot about AGI is that everyone realizes we're going to have to share its benefits, but we also have to share the decision-making over it, and access to the system itself. I'd be more excited about a world where, rather than giving everybody on Earth one eight-billionth of the AGI money, which we should do too, we say: you get a one eight-billionth slice of the system itself. You can sell it to somebody else, you can sell it to a company, you can pool it with other people, you can use it for whatever creative pursuit you want, you can use it to figure out how to start some new business. And with that you get a sort of voting right over how this is all going to be used. So the better the AGI gets, the more your little one eight-billionth ownership is worth to you.

We were joking around the other day on the podcast. I was saying that what we need is an AI government, that we should have an AI president and have AI make all the decisions.

Yeah.

Something that's completely unbiased, absolutely rational, that has the accumulated knowledge of the entirety of human history at its disposal, including all knowledge of psychology and psychological study, including UBI, because that comes with a host of pitfalls and issues that people have with it.

So, I'll say something there. I think we're still very far away from a system that is capable enough and reliable enough that any of us would want that. But I'll tell you something I love about the idea: someday, let's say that thing gets built, it could go around and talk to every person on Earth, understand their exact preferences at a very deep level, how they think about this issue and that one, how they balance the trade-offs, what they want, and then optimize for the collective preferences of humanity, or of the citizens of the U.S.
That's awesome, as long as it's not co-opted. Our government currently is co-opted, that's for sure. We know for sure that our government is heavily influenced by special interests. If we could have an artificial intelligence government with nothing influencing it, what a fascinating idea. It's possible, and I think it might be the only way you're going to get a completely objective, absolutely most intelligent decision for virtually every problem, every dilemma that we currently face in society.

Would you truly be comfortable handing over final decision-making and saying, all right, AI, you've got it?

No, but I'm not comfortable doing that with anybody. I was uncomfortable with the Patriot Act. I'm uncomfortable with many of the decisions people are making. There's so much obvious evidence that the decisions being made are not in the best interests of the people overall; they're made in the interests of whatever gigantic corporations have donated, of the military-industrial complex and the pharmaceutical-industrial complex. It's the money. That's really what we know today: money has a massive influence on our society, on the choices that get made, and on the overall good or bad for the population.

Yeah, I have no disagreement at all that the current system is super broken, not working for people, super corrupt, and for sure unbelievably run by money. And I think there is a way to do a better job than that with AI, in some way. But, and this might just be a factor of sitting with these systems all day and watching all the ways they fail, we've got a long way to go.

A long way to go, I'm sure. But when you think of AGI, when you think of the possible future, where it goes, do you ever extrapolate? Do you ever sit and pause and say, well, if this thing becomes sentient and it has the ability to make better versions of itself, how long before we're literally dealing with a god?

The way I think about this is that it used to be that AGI was this very binary moment, a before and after, and I think I was totally wrong about that. The right way to think about it is as a continuum of intelligence, a smooth exponential curve, going all the way back to that smooth curve of technological revolution. The amount of compute power we can put into the system, the scientific ideas about how to make it more efficient and smarter, giving it the ability to do reasoning, to think about how to improve itself: that will all come.

If you look at the world of AGI thinkers, particularly around the safety issues you're talking about, there are two axes that matter. There are what are called short timelines or long timelines to the first milestone of AGI, whatever that's going to be: is it going to happen in a few years, a few decades, maybe even longer? Although at this point I think most people say a few years to a few decades. And then there's takeoff speed: once we get there, once it's capable of rapid self-improvement, is that a slower or a faster process? The world that I think we're in, and also the world that I think is the most controllable and the safest, is the short-timelines-and-slow-takeoff quadrant.

For a while there were a lot of very smart people saying that the thing you were just talking about happens in a day or three days. That doesn't seem likely to me, given the shape of the technology as we understand it now. But even if it happens in a decade or three decades, that's still the blink of an eye from a historical perspective, and there are going to be some real challenges to getting it right. The decisions we make, the safety systems and the checks that the world puts in place, how we think about global regulation or rules of the road from a safety perspective for those projects: that's super important, because you can imagine many things going horribly wrong. But I feel cheerful about the progress the world is making toward taking this seriously, and it reminds me of what I've read about the conversations the world had right around the development of nuclear weapons.