
Essentials: Machines, Creativity & Love | Dr. Lex Fridman


Chapters

0:00 Lex Fridman; Artificial Intelligence (AI), Machine Learning, Deep Learning
2:23 Supervised vs Self-Supervised Learning, Self-Play Mechanism
9:06 Tesla Autopilot, Autonomous Driving, Robot & Human Interaction
14:26 Human & Robot Relationship, Loneliness, Time
19:18 Authenticity, Robot Companion, Emotions
24:34 Robot & Human Relationship, Manipulation, Rights
29:19 Dogs, Homer, Companion, Cancer, Death
33:18 Dogs, Costello, Decline, Joy, Loss
41:07 Closing

Whisper Transcript | Transcript Only Page

00:00:00.000 | Welcome to Huberman Lab Essentials, where we revisit past episodes for the most potent and actionable science-based tools for mental health, physical health, and performance.
00:00:10.040 | And now, my conversation with Dr. Lex Fridman.
00:00:14.620 | We meet again.
00:00:15.940 | We meet again.
00:00:16.820 | I have a question that I think is on a lot of people's minds, or ought to be on a lot of people's minds.
00:00:23.880 | What is artificial intelligence, and how is it different from things like machine learning and robotics?
00:00:32.080 | So, I think of artificial intelligence, first, as a big philosophical thing.
00:00:38.860 | It's our longing to create other intelligence systems, perhaps systems more powerful than us.
00:00:46.220 | At the more narrow level, I think it's also a set of tools that are computational mathematical tools to automate different tasks.
00:00:55.580 | And then also, it's our attempt to understand our own mind.
00:01:00.140 | So, build systems that exhibit some intelligent behavior in order to understand what is intelligence in our own selves.
00:01:09.420 | So, all those things are true.
00:01:10.780 | Of course, what AI really means in practice is a community, a set of researchers and engineers; it's a set of tools, a set of computational techniques that allow you to solve various problems.
00:01:22.280 | There's a long history of approaching the problem from different perspectives.
00:01:27.720 | One thread that has run throughout, one of the communities, goes under the flag of machine learning, which emphasizes, within the AI space, the task of learning.
00:01:41.100 | How do you make a machine that knows very little in the beginning follow some kind of process and learn to become better and better at a particular task?
00:01:51.460 | What's been most effective in roughly the last 15 years is a set of techniques that fall under the flag of deep learning and utilize neural networks.
00:02:01.480 | It's a network of these little basic computational units called neurons, artificial neurons.
00:02:07.360 | These architectures have an input and an output.
00:02:11.640 | They know nothing in the beginning, and they're tasked with learning something interesting.
00:02:16.100 | What that something interesting is, usually involves a particular task.
00:02:21.320 | There's a lot of ways to talk about this and break this down.
00:02:25.320 | One of them is how much human supervision is required to teach this thing.
00:02:31.260 | So supervised learning, this broad category, is where the neural network knows nothing in the beginning and is then given a bunch of examples. In computer vision,
00:02:43.740 | those would be examples of cats, dogs, cars, traffic signs: you're given the image, and you're given the ground truth
00:02:51.180 | of what's in that image. When you get a large database of such image examples, where you know the truth, the neural network is able to learn by example.
00:03:01.860 | That's called supervised learning.
00:03:03.240 | The question, there's a lot of fascinating questions within that, which is how do you provide the truth?
00:03:09.240 | When you're given an image of a cat?
00:03:14.280 | How do you provide to the computer that this image contains a cat?
00:03:18.420 | Do you just say the entire image is a picture of a cat?
00:03:20.940 | Do you do what's very commonly been done, which is a bounding box?
00:03:24.960 | You have a very crude box around the cat's face saying this is a cat.
00:03:29.960 | Do you do semantic segmentation?
00:03:32.140 | Mind you, this is a 2D image of a cat.
00:03:34.220 | So the computer knows nothing about our three-dimensional world.
00:03:38.960 | It's just looking at a set of pixels.
00:03:40.580 | So semantic segmentation is drawing a nice, very crisp outline around the cat and saying that's a cat.
00:03:48.380 | It's really difficult to provide that truth.
00:03:50.480 | And one of the fundamental open questions in computer vision is: is that even a good representation of the truth?
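
To make the supervised setup concrete, here is a minimal sketch, in PyTorch, of learning by labeled example. The tiny network, the random stand-in images, and the three-class label set are illustrative assumptions rather than anything from the conversation; the same loop applies whether the provided truth is a whole-image class, a bounding box, or a segmentation mask.

```python
# Minimal sketch of supervised learning: images paired with ground-truth labels.
# The data here is random stand-in tensors; in practice you'd load a labeled dataset.
import torch
import torch.nn as nn

CLASSES = ["cat", "dog", "car"]  # illustrative label set

# Stand-in "dataset": 256 RGB images (3x32x32) with known ground-truth class indices.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, len(CLASSES), (256,))

# A small neural network: input pixels -> class scores. It knows nothing at first.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # penalizes disagreement with the provided truth

for epoch in range(5):
    for i in range(0, len(images), 32):          # mini-batches
        x, y = images[i:i+32], labels[i:i+32]
        logits = model(x)                        # the network's current guess
        loss = loss_fn(logits, y)                # compare guess to the ground truth
        optimizer.zero_grad()
        loss.backward()                          # learn from the mistake
        optimizer.step()
```
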
00:03:56.560 | Now there's another, contrasting set of ideas. They're adjacent, they're overlapping. It's what used to be called unsupervised learning,
00:04:07.500 | and what's now commonly called self-supervised learning, which is trying to get less and less human supervision into the task.
00:04:17.380 | Self-supervised learning has been very successful in the domain of language models, natural language processing, and now more and more is being successful in computer vision tasks.
00:04:29.620 | And the idea there is to let the machine, without any ground truth annotation, just look at pictures on the internet or look at text on the internet and try to learn something generalizable about the ideas
00:04:46.380 | that are at the core of language or at the core of vision.
00:04:50.340 | At its best, we humans like to call that common sense.
00:04:56.280 | We have this giant base of common sense knowledge on top of which we build more sophisticated knowledge.
00:05:04.980 | And so the idea of self-supervised learning is to build this common sense knowledge about what are the fundamental visual ideas that make up a cat and a dog and all those kinds of things, without ever having human supervision.
00:05:19.140 | The dream there is that you just let an AI system that's self-supervised run around the internet for a while, watch YouTube videos for millions and millions of hours, and, without any supervision, be primed and ready to actually learn with very few examples
00:05:37.740 | once the human is able to show up. We think of children in this way, human children: your parents only give one or two examples to teach a concept.
00:05:47.340 | The dream with self-supervised learning is that it would be the same with machines: they would watch millions of hours of YouTube videos and then come to a human and be able to understand, when the human shows them,
00:06:01.260 | "This is a cat." Like, remember this: a cat. They will understand that a cat is not just a thing with pointy ears, or a thing that's orange, or that's furry.
00:06:11.820 | They'll see something more fundamental that we humans might not actually be able to introspect and understand.
00:06:17.620 | Like, if I asked you what makes a cat versus a dog, you probably wouldn't be able to answer that.
00:06:22.500 | But if I brought you a cat and a dog, you'd be able to tell the difference.
00:06:27.820 | What are the ideas that your brain uses to make that distinction?
00:06:31.580 | That's the whole dream of self-supervised learning: that it would be able to learn on its own that set of common sense knowledge that's able to tell the difference.
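
Here is a minimal sketch of that self-supervised idea, using a masked-reconstruction pretext task: the supervision signal is manufactured from the raw data itself (the hidden pixels), so no human annotation is involved. The tiny encoder, the masking scheme, and the random stand-in images are illustrative assumptions, not a description of any particular production system.

```python
# Minimal sketch of self-supervised learning: no human labels.
# The supervision signal is created by hiding part of each image
# and asking the network to reconstruct what it cannot see.
import torch
import torch.nn as nn

images = torch.randn(256, 3, 32, 32)  # unlabeled stand-in data ("pictures on the internet")

encoder = nn.Sequential(  # the part we hope learns general, reusable features
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
)
decoder = nn.Conv2d(32, 3, 3, padding=1)  # only needed for the pretext task

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(5):
    for i in range(0, len(images), 32):
        x = images[i:i+32]
        mask = (torch.rand(x.shape[0], 1, 32, 32) > 0.5).float()   # hide roughly half the pixels
        recon = decoder(encoder(x * mask))                         # reconstruct from the visible part
        loss = ((recon - x) ** 2 * (1 - mask)).mean()              # error only on the hidden pixels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# After pretraining, `encoder` can be fine-tuned on a downstream task
# with very few labeled examples, which is the point of the exercise.
```
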
00:06:41.140 | And then there are a lot of incredible uses of self-supervision in what's, very weirdly, called the self-play mechanism.
00:06:50.420 | That's the mechanism behind the reinforcement learning successes of the systems that won
00:06:57.780 | at Go, AlphaZero, that won at chess.
00:07:01.900 | Oh, I see that play games.
00:07:04.220 | That play games.
00:07:04.980 | Got it.
00:07:05.340 | So the idea of self-play, which probably applies to domains other than just games, is a system that just plays against itself.
00:07:14.260 | And this is fascinating in all kinds of domains, but it knows nothing in the beginning.
00:07:19.820 | And the whole idea is it creates a bunch of mutations of itself and plays against those versions of itself.
00:07:28.500 | And then, through this process of interacting with systems just a little better than you, everybody starts getting better and better and better until you are several orders of magnitude better than the world champion in chess, for example.
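
Here is a minimal sketch of that self-play loop: a champion policy plays against slightly mutated copies of itself and adopts whichever copy wins reliably. The toy policy, game, and mutation function are invented placeholders; real systems such as AlphaZero combine this idea with tree search and neural-network training.

```python
# Minimal sketch of the self-play idea: a system improves by playing
# against slightly varied copies of itself, with no external opponent.
import copy
import random

def mutate(policy, scale=0.05):
    """Return a slightly perturbed copy of the policy (toy placeholder)."""
    challenger = copy.deepcopy(policy)
    for k in challenger:
        challenger[k] += random.gauss(0.0, scale)
    return challenger

def play_game(policy_a, policy_b):
    """Stand-in for a real game; returns True if policy_a wins.
    Here a policy is just a dict of numbers and the 'stronger' one tends to win."""
    return sum(policy_a.values()) + random.gauss(0, 1) > sum(policy_b.values()) + random.gauss(0, 1)

champion = {"w%d" % i: 0.0 for i in range(8)}   # knows nothing in the beginning

for generation in range(1000):
    challenger = mutate(champion)
    wins = sum(play_game(challenger, champion) for _ in range(20))
    if wins > 12:                 # challenger is reliably a little better
        champion = challenger     # so the whole system keeps getting better
```
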
00:07:42.460 | And it's fascinating because it's like a runaway system. One of the most terrifying and exciting things that David Silver, the creator of AlphaGo and AlphaZero, one of the leaders of the team, said to me is that they haven't found the ceiling for AlphaZero.
00:08:00.460 | Meaning it could just arbitrarily keep improving. Now, in the realm of chess, that doesn't matter to us; it just ran away with the game of chess.
00:08:09.900 | It's just so much better than humans.
00:08:13.020 | But the question is: if you can create that in a realm that does have a bigger, deeper effect on human beings and societies, that can be a terrifying process.
00:08:24.940 | To me, it's an exciting process if you supervise it correctly, if you inject what's called value alignment: you make sure that the goals the AI is optimizing are aligned with human beings and human societies.
00:08:41.060 | There are a lot of fascinating things to talk about within the specifics of neural networks and all the problems that people are working on.
00:08:50.580 | But I would say the really big, exciting one is self-supervised learning, where we're trying to get less and less human supervision of neural networks.
00:09:03.340 | And also just to comment and I'll shut up.
00:09:06.500 | No, please keep going.
00:09:07.900 | I'm learning.
00:09:08.820 | Uh, I have questions, but I'm learning.
00:09:10.460 | So please keep going.
00:09:11.260 | So that to me, what's exciting is not the theory.
00:09:14.420 | It's always the application.
00:09:16.220 | One of the most exciting applications of artificial intelligence, specifically neural networks and machine learning is Tesla autopilot.
00:09:23.780 | So these are systems that are working in the real world.
00:09:26.220 | This isn't an academic exercise.
00:09:28.460 | This is human lives at stake.
00:09:30.060 | Even though it's called FSD, Full Self-Driving, it is currently not fully autonomous, meaning human supervision is required.
00:09:39.900 | So the human is tasked with overseeing the system.
00:09:42.860 | In fact, liability wise, the human is always responsible.
00:09:46.940 | This is a human factor psychology question, which is fascinating.
00:09:51.100 | I'm fascinated by the whole space, which is a whole other space, of human-robot interaction:
00:09:58.300 | when AI systems and humans work together to accomplish tasks. That dance, to me, is one of the smaller communities, but I think it's one of the most important open problems to solve: how do humans and robots dance together?
00:10:16.140 | To me, semi-autonomous driving is one of those spaces.
00:10:19.380 | Elon, for example, doesn't see it that way.
00:10:23.820 | He sees semi-autonomous driving as a stepping stone towards fully autonomous driving.
00:10:30.300 | Like, humans and robots can't dance well together.
00:10:34.300 | Humans dance with humans, and robots dance with robots.
00:10:37.460 | This is an engineering problem:
00:10:40.100 | we need to design a perfect robot that solves this problem. To me, though,
00:10:44.740 | maybe this is not the case with driving, but the world is going to be forever full of problems where humans and robots have to interact, because I think robots will always be flawed,
00:10:55.540 | just like humans are flawed.
00:10:59.460 | And that's what makes life beautiful that they're flawed.
00:11:02.980 | That's where learning happens at the edge of your capabilities.
00:11:07.700 | So you always have to figure out how flawed robots and flawed humans can interact together such that the whole is bigger than the sum of the parts, as opposed to focusing on just building the perfect robot.
00:11:24.180 | So that's one of the most exciting applications, I would say, of artificial intelligence to me: autonomous driving and semi-autonomous driving.
00:11:31.620 | And that's a really good example of machine learning because those systems are constantly learning.
00:11:36.980 | And there's a process there that maybe I can comment on, that Andrej Karpathy, who's the head of Autopilot, calls the data engine.
00:11:47.300 | And this process applies to a lot of machine learning, which is: you build a system that's pretty good at doing stuff,
00:11:53.780 | you send it out into the real world, it starts doing the stuff, and then it runs into what are called edge cases, like failure cases, where it screws up.
00:12:03.060 | You know, we do this as kids. You do this as adults. We do this as adults, exactly.
00:12:10.180 | But we learn really quickly. The whole point, and this is the fascinating thing about driving, is you realize there are millions of edge cases.
00:12:18.180 | There are just weird situations that you did not expect.
00:12:21.620 | And so in the data engine process, you collect those edge cases and then you go back to the drawing board and learn from them.
00:12:29.460 | And so you have to create this data pipeline where all these cars, hundreds of thousands of cars, are driving around and something weird happens.
00:12:37.460 | And whenever this weirdness detector fires, which is another important concept, that piece of data goes back to the mothership for the training, for the retraining of the system.
00:12:52.740 | And through this data engine process, it keeps improving and getting better and better.
00:12:57.460 | So basically you send a pretty clever AI system out into the world and let it find the edge cases, let it screw up just enough to figure out where the edge cases are,
00:13:10.100 | and then go back and learn from them, and then send out that new version and keep updating that version.
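
Here is a minimal sketch of that data engine loop in Python-flavored pseudocode. The names (drive_one_trip, looks_weird, label, retrain, deploy) are hypothetical placeholders for the steps described above, not any real Autopilot interface.

```python
# Minimal sketch of the "data engine" loop: deploy a model, detect
# weird/edge cases in the field, send them back, retrain, redeploy.
def data_engine(model, fleet, label, retrain, deploy, rounds=10):
    for _ in range(rounds):
        edge_cases = []
        for car in fleet:                        # hundreds of thousands of cars in practice
            trip = car.drive_one_trip(model)     # hypothetical: returns sensor snapshots
            for snapshot in trip:
                if model.looks_weird(snapshot):  # the "weirdness detector" firing
                    edge_cases.append(snapshot)  # goes back to the mothership
        labeled = [(s, label(s)) for s in edge_cases]  # humans provide the ground truth
        model = retrain(model, labeled)          # learn from the failures
        deploy(model, fleet)                     # send out the new version
    return model
```
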
00:13:16.020 | One of the fascinating things about humans is we figure out objective functions for ourselves.
00:13:22.260 | Like, it's the meaning of life.
00:13:24.820 | Like, why the hell are we here?
00:13:27.300 | And a machine currently has to have a hard-coded statement about why; it has to be given a meaning of artificial-intelligence-based life.
00:13:38.340 | Right. If you want a machine to be able to be good at stuff, it has to be given very clear statements of what good at stuff means.
00:13:47.940 | That's one of the challenges of artificial intelligence: in order to solve a problem, you have to formalize it, and you have to provide the full sensory information.
00:13:57.700 | You have to be very clear about what is the data that's being collected.
00:14:02.100 | And you have to also be clear about the objective function.
00:14:05.460 | What is the goal that you're trying to reach? Ultimately, currently, there has to be a formal objective function.
00:14:13.860 | Now, you could argue that humans also have a set of objective functions
00:14:16.900 | We're trying to optimize.
00:14:18.260 | We're just not able to introspect them.
00:14:20.180 | Yeah.
00:14:21.220 | We don't actually know what we're looking for and seeking and doing.
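
Here is a minimal sketch of what a hard-coded, formal objective function looks like in practice: an explicit formula the system is told to optimize. The particular terms and weights below are invented purely for illustration.

```python
# Minimal sketch of a hard-coded objective function: the machine's
# "meaning" has to be written down explicitly as a formula to optimize.
def driving_objective(trip):
    """Score one simulated trip; higher is better. Terms and weights are illustrative."""
    return (
        -1000.0 * trip["collisions"]      # safety dominates everything else
        -    1.0 * trip["travel_time_s"]  # get there reasonably fast
        -    5.0 * trip["hard_brakes"]    # comfort / smoothness
    )

# The learning system then searches for behavior that maximizes this number.
candidate_trips = [
    {"collisions": 0, "travel_time_s": 600, "hard_brakes": 2},
    {"collisions": 0, "travel_time_s": 540, "hard_brakes": 9},
]
best = max(candidate_trips, key=driving_objective)
```
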
00:14:25.620 | I think you've already told us the answer, but does interacting with a robot change you?
00:14:31.300 | Does it, in other words, do, do we develop relationships to robots?
00:14:35.460 | I believe that most people have a notion of loneliness in them that we haven't discovered, that we haven't explored, I should say.
00:14:46.660 | And I see AI systems as helping us explore that so that we can become better humans, uh, better people towards each other.
00:14:56.660 | So I think that connection between human and AI human and robot, uh, is, is not only possible, but, uh, will help us understand ourselves in ways that are like several orders of magnitude, uh, deeper than we ever could have imagined.
00:15:15.620 | So when I think about human relationships, I, I don't, um, always break them down into variables, but we could explore a few of those variables and see how they map to human robot relationships.
00:15:26.980 | Um, one is just time, uh, if you spend zero time with another person, uh, at all in, in cyberspace or on the phone or in person, you essentially have no relationship to them.
00:15:37.140 | If you spend a lot of time, you have a relationship, this is obvious, but I guess one variable would be time.
00:15:41.860 | How much time you spend with the other entity, robot or human, the other would be, um, wins and successes.
00:15:49.940 | You know, you enjoy successes together.
00:15:51.860 | The other would be failures.
00:15:53.380 | When you struggle with somebody, you know, you grow closer.
00:15:58.580 | So I've never conceptualized robot human interactions this way.
00:16:01.780 | Uh, um, so tell me more about how this might look.
00:16:05.620 | Are we thinking about, um, a human appearing robot?
00:16:09.220 | Um, what is the ideal human robot relationship?
00:16:13.940 | So there's, uh, a lot to be said here, but you actually pinpointed one of the big, big first steps, which is this idea of time.
00:16:22.660 | But I think that time element, forget everything else, just sharing moments together, that changes everything.
00:16:30.420 | I believe that changes everything.
00:16:32.180 | There are specific things, more in terms of systems, that I can explain to you.
00:16:36.580 | It's more technical and probably a little bit offline,
00:16:40.980 | 'cause I have kind of wild ideas about how that can revolutionize social networks and operating systems.
00:16:48.980 | But the point is that element alone, forget all the other things we're talking about, like emotions, saying no, all that,
00:16:57.620 | just remembering shared moments together would change everything.
00:17:01.380 | We don't currently have systems that share moments together.
00:17:07.060 | Like even just you and your fridge, just all those times you went late at night and ate the thing you shouldn't have eaten.
00:17:14.580 | And that was a secret moment you had with your refrigerator.
00:17:17.700 | You shared that moment, that darkness or that beautiful moment where you just, uh, you know, like heartbroken for some reason, you're eating that ice cream or whatever.
00:17:27.540 | That's a special moment.
00:17:29.140 | And that refrigerator was there for you.
00:17:31.140 | And the fact that it missed the opportunity to remember that is tragic.
00:17:37.460 | And once it does remember that, I think you're gonna be very attached to the refrigerator.
00:17:43.700 | You're gonna go through some hell with that refrigerator.
00:17:47.220 | Most of us in the developed world have weird relationships with food, right?
00:17:53.220 | So you can go through some deep moments of trauma and triumph with food.
00:17:58.660 | And at the core of that is the refrigerator.
00:18:00.900 | So a smart refrigerator, I believe, would change society. Not just the refrigerator, but these ideas in the systems all around us.
00:18:11.860 | So that I just want to comment on how powerful that idea of time is.
00:18:15.860 | And then there's a bunch of elements of actual interaction of, uh, uh, allowing you as a human to feel like you're being heard, truly heard, truly understood.
00:18:30.580 | And I think there's a lot of ideas of how to make AI assistance to be able to ask the right questions and truly hear another human.
00:18:39.060 | This is what we try to do with podcasting, right?
00:18:42.260 | Uh, I think there's ways to do that with AI, but above all else, just remembering the collection of moments that make up the day, the week, the months.
00:18:52.740 | I think, uh, you maybe have some of this as well.
00:18:57.140 | Some of my closest friends still are the friends from high school.
00:19:00.100 | That's time we've been through a bunch of shit together and that like, we've, we're very different people, but just the fact that we've been through that.
00:19:09.700 | And we remember those moments and those moments somehow create a depth of connection, like nothing else like you and your refrigerator.
00:19:17.460 | There may be relationships that are far better than the sorts of relationships that we can conceive in our minds right now, based on what these machine relationship interactions could teach us.
00:19:30.020 | Do I have that right?
00:19:31.860 | Yeah, I think so.
00:19:32.980 | I think there's no reason to see machines as, uh, somehow, uh, incapable of teaching us something that's deeply human.
00:19:39.860 | I don't think, uh, humans have a monopoly on that.
00:19:43.380 | I think we understand ourselves very poorly and we need to have the kind of, uh, uh, prompting from, uh, from a machine.
00:19:52.340 | Uh, maybe the thing we want to optimize for isn't necessarily, uh, like some, uh, sexy, uh, like quick clips.
00:20:01.460 | Maybe we, what we want is long form authenticity.
00:20:04.660 | Depth.
00:20:05.620 | Depth.
00:20:06.740 | from a very specific engineering perspective is, I think, a fascinating open problem that hasn't
00:20:14.340 | been really worked on very much. Early on in life, and also in the recent years, I've interacted with
00:20:21.700 | a few robots where I understood there's magic there. And that magic could be shared by millions
00:20:30.100 | if it's brought to light. When I first met Spot from Boston Dynamics, I realized there's magic
00:20:36.660 | there that nobody else is seeing. It's the dog. The dog, sorry. Spot is the four-legged robot from
00:20:44.180 | Boston Dynamics. Some people might have seen it. It's this yellow dog. This magic is something that
00:20:49.860 | could be every single device in the world. The way that I think maybe Steve Jobs thought about the
00:20:57.620 | personal computer. And so for me, I'd love to see a world where there's every home has a robot
00:21:03.940 | and not a robot that washes the dishes, but more like a companion.
00:21:08.260 | A family member.
00:21:09.460 | A family member, the way a dog is. But a dog that's able to speak your language too. So not just
00:21:17.220 | connect the way a dog does by looking at you and looking away and almost like smiling with its soul
00:21:23.620 | in that kind of way. But also to actually understand what the hell, like, why are you so excited about
00:21:30.260 | the successes? Like understand the details, understand the traumas.
00:21:34.180 | I love this desire to share the delight of an interaction with a robot. And as you describe it,
00:21:40.100 | I actually, I find myself starting to crave that because we all have those elements from childhood
00:21:45.460 | where, or from adulthood where we experience something, we want other people to feel that.
00:21:50.500 | And I think that you're right. I think a lot of people are scared of AI. I think a lot of people
00:21:54.500 | are scared of robots. My only experience of a robot-like thing is my Roomba vacuum, where it goes
00:22:02.740 | about; it actually was pretty good at picking up Costello's hair when he shed, and I was grateful for
00:22:08.980 | it. But then when I was on a call or something and it would get caught on a wire or
00:22:13.940 | something, I would find myself getting upset with the Roomba in that moment. I'm like, what are you
00:22:17.620 | doing? You know? And obviously it's just doing what it does, but that's a kind of mostly
00:22:23.380 | positive, but slightly negative interaction. But what you're describing has so much more richness and layers
00:22:30.580 | of detail that I can only imagine what those relationships are like.
00:22:34.740 | Well, there's a few, just a quick comment. So I've had, they're not currently in Boston. I have
00:22:39.220 | a bunch of Roombas from iRobot and I did this experiment.
00:22:43.060 | Wait, how many Roombas?
00:22:44.100 | Sounds like a fleet of Roombas.
00:22:47.140 | Yeah. So I, uh, probably seven or eight.
00:22:49.620 | Well, that's a lot of Roombas.
00:22:51.220 | So you're going to, so you have these seven or so Roombas, you deploy all seven at once?
00:22:56.100 | Oh no, I do different experience with them, uh, different experiments with them.
00:23:00.100 | So one of the things I want to mention, I got them to, uh, to scream in pain
00:23:04.660 | and moan in pain, um, whenever they, uh, were kicked or contacted.
00:23:10.340 | And I did that experiment to see how I would feel.
00:23:13.780 | I meant to do like a YouTube video on it, but then it just seemed very cruel.
00:23:18.340 | Did any Roomba rights activists come out?
00:23:20.420 | Yeah, that's fine. Like, I think if I release that video, I think it's going to make me look insane,
00:23:27.780 | which I know people know I'm already insane.
00:23:30.340 | Now you, now you have to release the video.
00:23:32.260 | Sure. Well, I, I think maybe if I contextualize it by showing other robots,
00:23:37.620 | like to show why this is fascinating, because ultimately I felt like they were human almost
00:23:44.420 | immediately. And that display of pain was, uh, what did that?
00:23:48.020 | Giving them a voice. Giving them a voice, especially a voice of, uh, dislike of, of pain.
00:23:54.260 | Mm-hmm. So is the video available online?
00:23:56.420 | No, I haven't recorded it. I just have a bunch of Roombas that are able to scream in pain
00:24:02.660 | in my Boston place.
00:24:05.540 | What about, um, like, uh, shouts of glee and delight?
00:24:10.100 | Well, I don't know how to do that. To me, delight is quiet.
00:24:13.860 | Right. But there's a way to frame its being quite dumb as almost cute.
00:24:22.980 | You know, almost connecting with it for its dumbness. And I think that's an artificial
00:24:28.260 | intelligence problem. I think flaws should be a feature, not a bug.
00:24:33.780 | So along the lines of this, um, the different sorts of relationships that one could have with
00:24:38.660 | robots and the fear, but also some of the positive relationships that one could have, uh, there's so
00:24:44.580 | much dimensionality. There's so much to explore, but, uh, power dynamics in relationships are very
00:24:50.980 | interesting because the, the obvious ones that, um, the unsophisticated view of this is, you know,
00:24:57.060 | one, there's a master and a servant, right? But there's also manipulation. There's benevolent
00:25:04.180 | manipulation. You know, uh, children do this with parents. Puppies do this. Puppies turn their head
00:25:09.620 | and look cute and maybe give out a little, little, um, noise. Kids, coo. And parents always think that
00:25:15.380 | they're, you know, they're doing this because, you know, they, they love the parent, but
00:25:20.660 | in many ways, studies show that those coos are ways to extract the sorts of behaviors and
00:25:25.620 | expressions from the parent that they want. The child doesn't know it's doing this. It's
00:25:28.660 | completely subconscious, but it's benevolent manipulation. So there's one version of fear
00:25:33.700 | of robots that I hear a lot about that. I think most people can relate to where the robots take over
00:25:39.780 | and they become the masters and we become the servants, but there could be another version that,
00:25:45.540 | um, uh, you know, in certain communities that I'm certainly not a part of, but they call topping
00:25:51.380 | from the bottom where the robot is actually manipulating you into doing things, but you are
00:25:58.740 | under the belief that you are in charge, but actually they're in charge. And so I think that's one that,
00:26:06.500 | um, uh, if we could explore that for a second, you could imagine it wouldn't necessarily be bad,
00:26:12.500 | although it could lead to bad things. Um, the reason I want to explore this is I think people
00:26:17.140 | always, uh, default to the, the extreme, like the robots take over and we're in little jail cells and
00:26:22.420 | they're out having fun and ruling the universe. Uh, what, what, what sorts of manipulation can a robot
00:26:29.380 | potentially carry out good or bad? Yeah. So there's a lot of good and bad manipulation between humans,
00:26:35.780 | right? Just like you said to me,
00:26:40.020 | especially, uh, like you said, uh, topping from the bottom, is that the term? Uh, so I think someone
00:26:48.100 | from MIT told me that term. It wasn't Lex. Uh, I think, so first of all, there's power dynamics,
00:26:56.500 | uh, in bed and power dynamics in relationships and power dynamics on the street and in the work
00:27:01.780 | environment, those are all very different. Uh, I think, um, I think power dynamics can make
00:27:08.900 | human relationships, especially romantic relationships, fascinating and rich and fulfilling
00:27:16.260 | and exciting and all those kinds of things. So I don't, uh, I don't think in themselves they're bad.
00:27:24.340 | And the same goes with robots. I really love the idea that a robot would be a top or a bottom in
00:27:30.420 | terms of like power dynamics. Uh, and I think everybody should be aware of that. And the manipulation is
00:27:36.500 | not so much manipulation, but, uh, a dance of like pulling away, a push and pull and all those kinds of
00:27:42.660 | things. In terms of control, I think we're very, very far away from AI systems that are able
00:27:49.860 | to lock us up, you know, to have so much control that we basically
00:27:57.700 | cannot live our lives in the way that we want. I think there's, uh, in terms of dangers of AI systems,
00:28:04.980 | there's much more dangers that have to do with autonomous weapon systems and all those kinds of
00:28:09.300 | things. So the power dynamics as exercised in the struggle between nations and war and all those kinds of
00:28:15.300 | things. But in terms of personal relationships, I think power dynamics are a beautiful thing.
00:28:21.060 | I do believe that robots will have rights down the line. And I think in order for, in order for us to
00:28:27.860 | have deep meaningful relationship with robots, we would have to consider them as entities in themselves
00:28:33.460 | that deserve respect. And that's a really interesting concept that I think people are
00:28:40.020 | starting to talk about a little bit more, but it's very difficult for us to understand how entities
00:28:45.300 | that are other than human, I mean, the same as with dogs and other animals, can have rights on a level
00:28:50.980 | with humans. We can't, and nor should we, do whatever we want with animals. We have a USDA; we have
00:28:57.540 | departments of agriculture that deal with, you know, animal care and use committees for
00:29:04.020 | research, for agriculture, you know, for farming and ranching and all that. So while, when you first said
00:29:10.180 | it, I thought, wait, why would there be a bill of robotic rights, but it absolutely makes
00:29:14.020 | sense in the context of everything we've been talking about up until now. So,
00:29:20.020 | if you're willing, I'd love to talk about dogs because you've mentioned dogs a couple of times,
00:29:26.820 | a robot dog. Um, you had a, a biological dog. Yeah. Yeah. I had, uh, a Newfoundland, uh, named Homer,
00:29:36.740 | uh, for many years growing up. In Russia or in the US? In the United States. And, uh, he was about,
00:29:44.420 | he was over 200 pounds. That's a big dog. That's a big dog. If people know, people know Newfoundland,
00:29:49.460 | so he's this black dog, that's a really, uh, long hair and just a kind soul. I think perhaps
00:29:57.140 | that's true for a lot of large dogs, but he thought he was a small dog. So he moved like that. And was
00:30:02.180 | he your dog? Yeah. Yeah. So you had him since he was fairly young? Uh, since, yeah, since the very,
00:30:07.620 | very beginning to the very, very end. And one of the things, I mean, he had this kind of, uh, we mentioned
00:30:14.580 | like the Roombas, he had, uh, kind hearted dumbness about him that was just overwhelming. It's part of
00:30:22.980 | the reason, uh, I named him Homer because it's after Homer Simpson, in case people are wondering which
00:30:29.860 | Homer I'm referring to. I'm not, you know, so that there's a clumsiness that was just, uh, something
00:30:40.660 | that immediately led to a deep love for each other. And one of the, um, I mean, he was always,
00:30:47.860 | it's the shared moments. He was always there for so many, uh, nights together. That's a, that's a
00:30:52.900 | powerful thing about a dog that, um, he was there through all the loneliness, through all the tough
00:30:58.660 | times, through the successes and all those kinds of things. And I remember, um, I mean, that was a
00:31:03.940 | really moving moment for me. I still miss him to this day. How long ago did he die?
00:31:08.820 | Um, maybe 15 years ago. So it's, it's been a while, but it was the first time I've really experienced like
00:31:19.940 | the feeling of death. So what happened is, uh, he, uh, he got cancer and so he was dying slowly.
00:31:31.540 | And then at a certain point he couldn't get up anymore. Uh, there's a lot of things I could say
00:31:36.420 | here. Um, you know, that I struggle with that. Maybe, uh, maybe he suffered much longer than he
00:31:43.620 | needed to. That's something I really think about a lot, but I remember I had to take him to the hospital
00:31:52.740 | and the nurses couldn't carry him. Right. So you talk about 200 pound dog. I was really into power
00:32:00.900 | lifting at the time. I remember they tried to figure out all these kinds of ways to,
00:32:06.580 | so in order to put him to sleep, they had to take him into a room. And so I had to carry
00:32:14.180 | him everywhere. And here's this dying friend of mine that I just had to, first of all, it's really
00:32:21.620 | difficult to carry somebody that heavy when they're not helping you out. And, um, yeah, so
00:32:28.500 | I remember it was the first time seeing a friend laying there and seeing life drain from his body.
00:32:38.420 | And that realization that we're here for a short time was made so real that here's a friend that was
00:32:47.620 | there for me the week before the day before, and now he's gone. And that was, um, I don't know that,
00:32:55.220 | that spoke to the fact that you could be deeply connected with the dog. Also spoke to the fact that,
00:33:02.500 | the shared moments together that led to that deep friendship
00:33:09.940 | will make life so amazing, but also spoke to the fact that death is a...
00:33:17.540 | Um, so I know you've lost Costello recently. Yeah. And you've been going.
00:33:22.340 | And as you're saying this, I'm definitely fighting back, uh, the tears. I, um, I, uh,
00:33:27.620 | thank you for sharing that, that, uh, I guess we're about to both cry over our dead dogs,
00:33:32.980 | that it was, it was bound to happen just given when this is, when this is happening. Um, yeah, it's, uh,
00:33:40.020 | How long did you know that Costello was not doing well? Well, let's see, a year ago,
00:33:47.460 | about six months into the pandemic, he started getting abscesses and
00:33:54.180 | his behavior changed, something really changed. And then I put him on testosterone,
00:34:00.740 | which helped a lot of things. It certainly didn't cure everything, but it helped a lot
00:34:05.860 | of things he was dealing with joint pain, sleep issues. And then it just became a very slow decline
00:34:13.140 | to the point where, you know, two, three weeks ago he had, you know, a closet full of medication.
00:34:20.820 | I mean, this dog was, it was like a pharmacy. It's amazing to me when I looked at it the other day,
00:34:25.300 | I still haven't cleaned up and removed all his things. Cause I can't quite bring myself to do it.
00:34:29.380 | But, um, do you think he was suffering? Well, so what happened was about a week ago,
00:34:35.300 | it was really just about a week ago. It's amazing. He was going up the stairs. I saw him slip
00:34:39.940 | and he was a big dog. He wasn't 200 pounds, but he was about 90 pounds. He's a bulldog. That's pretty
00:34:44.420 | big. And he was fit. And then I noticed that he wasn't carrying a foot in the back,
00:34:49.940 | like it was injured. It had no feeling at all. He never liked me to touch his hind paws, and now I could;
00:34:54.420 | that thing was just flopping there. And then the vet found some spinal degeneration, and I was told
00:34:59.940 | that the next one would go. Did he suffer? I sure hope not. But something changed in his
00:35:06.180 | eyes. Yeah. Yeah. It's the eyes again. I know you and I spend long hours on the phone and talking
00:35:11.060 | about like the eyes and how, what they convey and what they mean about internal states and for sake of
00:35:15.940 | robots and biology of other kinds. But do you think, uh, something about him was gone in his eyes?
00:35:22.820 | I, I think he was real. Here I am anthropomorphizing. I think he was realizing
00:35:29.620 | that one of his great joys in life, which was to walk and sniff and pee on things.
00:35:37.460 | This dog. The fundamentals. Loved to pee on things. It was amazing.
00:35:42.900 | I wondered where he put it. He was like a reservoir of urine. It was incredible. I'd think,
00:35:48.180 | oh, that's it. He's just, he'd put like one drop on the 50 millionth plant. And then we'd get to
00:35:52.740 | the 50 million and first plant and he'd just, you know, leave a puddle. And here I am talking
00:35:57.700 | about Costello peeing. Um, he was losing that ability to stand up and do that. He was falling
00:36:03.140 | down while he was doing that. And I, I do think he started to realize, and the passage was easy and
00:36:09.700 | peaceful, but, um, uh, you know, I'll say this. I, I'm not ashamed to say it. I mean, I wake up every
00:36:15.620 | morning since then, just, I don't even make the conscious decision to allow myself to cry. I wake up crying.
00:36:21.620 | And, uh, I'm fortunately able to make it through the day, thanks to the great support of my friends
00:36:25.540 | and you and my family. But, um, I miss him, man. You miss him. Yeah. I miss him. And I feel like,
00:36:32.100 | uh, he, you know, Homer Costello, you know, the relationship to one's dog is so specific, but,
00:36:38.260 | um, so that, that part of you is gone. That's the hard thing. You know, um,
00:36:46.980 | what's, what, what I think is different is that I made the mistake. I think, I hope it was a good
00:36:56.980 | decision, but sometimes I think I made the mistake of, I brought Costello a little bit to the world
00:37:02.020 | through the podcast, through posting about him. I gave, I anthropomorphized about him in public.
00:37:07.060 | Let's be honest. I have no idea what his mental life was or his relationship to me. And I'm just
00:37:10.980 | exploring all this for the first time because he was my first dog, but I raised him since he was seven
00:37:14.740 | weeks. Yeah. You got to hold it together. I noticed the episode, uh, you released on Monday,
00:37:20.500 | you mentioned Costello. Like you, you brought him back to life for me for that brief moment.
00:37:25.860 | Yeah. But he's gone. That's the thing; he's going to be gone for a lot of people too.
00:37:32.260 | Well, this is what I'm struggling with. I know how to take care of myself pretty well. Yeah. Not
00:37:38.100 | perfectly, but pretty well. And I have good support. I do worry a little bit about how it's going to land
00:37:43.540 | and how people will feel. I'm, I'm concerned about their internalization. Um, so that's something I'm still,
00:37:50.100 | I'm still iterating. And you have to, they have to watch you struggle, which is fascinating.
00:37:54.180 | Right. And I've mostly been shielding them from this, but, um, what would make me happiest if,
00:37:59.140 | is if people would internalize some of Costello's best traits and his best traits were that he was
00:38:05.380 | incredibly tough. I mean, he was a, you know, 22-inch-neck bulldog, the whole thing. He was just born
00:38:13.140 | that way, but what was so beautiful is that his toughness is never what he rolled forward. It was just how
00:38:19.700 | sweet and kind he was. And so if people can take that, then, um, then there's a win in there someplace.
00:38:26.420 | So I think there's some ways in which he should probably live on in your podcast too.
00:38:32.500 | You should, uh, I mean, it's such a, one of the things I loved about, uh, his role in your podcast
00:38:40.660 | is that he brought so much joy to you. I mentioned the robots, right? I think, uh, that's such a powerful
00:38:48.420 | thing to bring that joy into like allowing yourself to experience that joy, to bring that joy to others,
00:38:55.460 | to share it with others. That's really powerful. And I mean, this is, this is like the Russian
00:39:01.700 | thing, but it touched me when Louis CK had that moment that I keep thinking
00:39:09.620 | about in his show, Louie, where an old man was criticizing Louis for whining about
00:39:16.260 | breaking up with his girlfriend. And he was saying the most beautiful thing
00:39:22.260 | about love. They made a song that's catchy now, that's not making me feel horrible saying it,
00:39:30.420 | but like, it's the loss: the loss really also is making you realize how much that person,
00:39:39.220 | that dog meant to you and like allowing yourself to feel that loss and not run away from that loss
00:39:44.740 | is really powerful. And in some ways that's also sweet, just like the love was the loss is also sweet
00:39:52.900 | because you know that you felt a lot for that, for your friend. So, you know, continuing
00:40:00.820 | to bring that joy, I think, would be amazing for the podcast. I hope to do the same
00:40:06.340 | with robots or whatever else is the source of joy. Right. And maybe, do you think about
00:40:14.100 | one day getting another dog? Yeah, in time. You're hitting on all the key buttons here.
00:40:22.340 | I want that too. We're thinking about, you know, ways to kind of immortalize Costello in a
00:40:29.620 | way that's real, not just, you know, creating some little logo or something silly, you know,
00:40:34.660 | Costello, much like David Goggins, is a person, but Goggins also has grown into kind of a verb.
00:40:41.860 | You're going to Goggins this, and there's an adjective, like, that's extreme. I think
00:40:47.460 | that for me, Costello was all those things. He was a being, he was his own being. He
00:40:51.780 | was a noun, a verb, and an adjective. And he had this amazing superpower that I wish I could
00:40:57.860 | get, which is this ability to get everyone else to do things for you without doing a damn thing.
00:41:02.340 | The Costello effect, as I call it. So it was an idea. I hope he lives on.
00:41:06.980 | Um, there's a saying that I heard when I was a graduate student that I, that's just been
00:41:11.220 | ringing in my mind throughout this conversation in such a, I think appropriate way, which is that,
00:41:17.140 | uh, Lex, you are in a minority of one, you are truly, uh, extraordinary in your ability to encapsulate so
00:41:27.060 | many aspects of science, engineering, public communication about so many topics, uh, martial
00:41:34.420 | arts and the emotional depth that you bring to it and just the purposefulness. And I think if it's not
00:41:41.060 | clear to people, it absolutely should be stated, but I think it's abundantly clear that just the amount
00:41:47.220 | of time and thinking that you put into things is the ultimate mark of respect,
00:41:54.340 | so I'm just extraordinarily grateful for your friendship and for this conversation.
00:41:59.220 | I'm proud to be your friend, and I just wish you showed me the same kind of respect by wearing a
00:42:04.100 | suit and making your father proud. Maybe next time, next time. Indeed. Thanks so much, my friend.
00:42:10.580 | Thank you. Thank you, Andrew.