
David Ferrucci: What is Intelligence? | AI Podcast Clips


Chapters

0:00 What is intelligence
1:15 Understanding the world
2:05 Picking a goal
4:00 Alien Intelligence
6:10 Proof
7:54 Social constructs
9:56 We are bound together
10:48 How hard is that
13:50 Optimistic notion
14:35 Emotional manipulation

Whisper Transcript

00:00:00.000 | - So let me ask, you've kind of alluded to it,
00:00:04.560 | but let me ask again, what is intelligence?
00:00:07.680 | Underlying the discussions we'll have
00:00:10.760 | with Jeopardy and beyond,
00:00:13.800 | how do you think about intelligence?
00:00:15.400 | Is it a sufficiently complicated problem,
00:00:18.080 | being able to reason your way through solving that problem?
00:00:20.760 | Is that kind of how you think about
00:00:22.100 | what it means to be intelligent?
00:00:23.720 | - So I think of intelligence primarily in two ways.
00:00:27.960 | One is the ability to predict.
00:00:31.560 | So in other words, if I have a problem,
00:00:34.040 | can I predict what's gonna happen next?
00:00:35.840 | Whether it's to predict the answer of a question
00:00:39.120 | or to say, look, I'm looking at all the market dynamics
00:00:42.120 | and I'm gonna tell you what's gonna happen next,
00:00:44.400 | or you're in a room and somebody walks in
00:00:47.600 | and you're gonna predict what they're gonna do next
00:00:49.560 | or what they're gonna say next.
00:00:51.080 | - So in a highly dynamic environment full of uncertainty,
00:00:54.780 | be able to-- - Lots of, you know,
00:00:56.880 | the more variables, the more complex,
00:00:59.720 | the more possibilities, the more complex.
00:01:02.320 | But can I take a small amount of prior data
00:01:05.960 | and learn the pattern and then predict
00:01:08.120 | what's gonna happen next accurately and consistently?
00:01:11.240 | That's certainly a form of intelligence.
00:01:15.160 | - What do you need for that, by the way?
00:01:16.520 | You need to have an understanding of the way the world works
00:01:21.100 | in order to be able to unroll it into the future, right?
00:01:24.600 | What do you think is needed to predict--
00:01:26.280 | - Depends what you mean by understanding.
00:01:27.760 | I need to be able to find that function,
00:01:30.520 | and this is very much what-- - What's a function?
00:01:32.560 | - Deep learning does, machine learning does,
00:01:34.360 | is if you give me enough prior data
00:01:37.280 | and you tell me what the output variable is that matters,
00:01:40.240 | I'm gonna sit there and be able to predict it.
00:01:42.760 | And if I can predict it accurately
00:01:45.560 | so that I can get it right more often than not, I'm smart.
00:01:49.240 | If I can do that with less data and less training time,
00:01:53.080 | I'm even smarter.
00:01:55.300 | If I can figure out what's even worth predicting,
00:01:58.880 | I'm smarter, meaning I'm figuring out
00:02:02.140 | what path is gonna get me toward a goal.
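
The "find the function" framing above is, in machine-learning terms, ordinary supervised learning: take prior examples, designate the output variable that matters, fit a predictor, and score how often it gets unseen cases right. A minimal, self-contained Python sketch of that loop (the synthetic data, the linear model, and the 0.5 threshold are illustrative assumptions, not anything specific discussed in the clip):

```python
import numpy as np

# Hypothetical "prior data": each row is a past observation of three variables;
# y is the output variable we were told matters (a 0/1 outcome to predict).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

# "Find the function": fit a simple linear predictor on the earlier examples.
train_X, train_y, test_X, test_y = X[:150], y[:150], X[150:], y[150:]
w, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)

# "Predict what's gonna happen next": apply the learned function to unseen
# cases and check the "right more often than not" bar from the conversation.
pred = (test_X @ w > 0.5).astype(float)
print(f"right {(pred == test_y).mean():.0%} of the time on unseen cases")
```

Doing as well with less data and less training time, or deciding which output variable is even worth predicting, are the "even smarter" steps described just above.
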
00:02:04.620 | - What about picking a goal?
00:02:05.820 | Sorry to interrupt again.
00:02:06.740 | - Well, that's interesting.
00:02:09.300 | Picking a goal is sort of an interesting thing,
00:02:09.300 | and I think that's where you bring in
00:02:11.460 | what are you pre-programmed to do?
00:02:13.260 | We talk about humans, and humans are pre-programmed
00:02:16.300 | to survive, so that's sort of their primary driving goal.
00:02:21.300 | What do they have to do to do that?
00:02:22.940 | And that can be very complex, right?
00:02:25.620 | So it's not just figuring out that you need to run away
00:02:29.940 | from the ferocious tiger,
00:02:31.900 | but we survive in a social context as an example.
00:02:36.900 | So understanding the subtleties of social dynamics
00:02:40.580 | becomes something that's important for surviving,
00:02:43.700 | finding a mate, reproducing, right?
00:02:45.460 | So we're continually challenged
00:02:47.620 | with complex sets of variables, complex constraints,
00:02:52.020 | rules, if you will, or patterns,
00:02:55.140 | and we learn how to find the functions
00:02:57.580 | and predict the things, in other words,
00:02:59.420 | represent those patterns efficiently,
00:03:01.820 | and be able to predict what's gonna happen,
00:03:03.180 | and that's a form of intelligence.
00:03:04.340 | That doesn't really require anything specific
00:03:09.340 | other than the ability to find that function
00:03:11.660 | and predict that right answer.
00:03:14.100 | It's certainly a form of intelligence.
00:03:16.700 | But then when we say, well, do we understand each other?
00:03:21.580 | In other words, would you perceive me as intelligent
00:03:26.580 | beyond that ability to predict?
00:03:29.300 | So now I can predict, but I can't really articulate
00:03:33.500 | how I'm going through that process,
00:03:36.100 | what my underlying theory is for predicting,
00:03:39.500 | and I can't get you to understand what I'm doing
00:03:41.940 | so that you can follow,
00:03:43.940 | you can figure out how to do this yourself
00:03:46.340 | if you did not have, for example,
00:03:49.060 | the right pattern-managing machinery that I did.
00:03:52.100 | And now we potentially have this breakdown
00:03:54.020 | where, in effect, I'm intelligent,
00:03:57.380 | but I'm sort of an alien intelligence relative to you.
00:04:00.940 | - You're intelligent, but nobody knows about it.
00:04:03.780 | - Well, I can see the output.
00:04:06.940 | - So you're saying, let's sort of separate the two things.
00:04:09.980 | One is you explaining why you were able
00:04:14.180 | to predict the future,
00:04:17.460 | and the second is me being able to,
00:04:21.660 | impressing me that you're intelligent,
00:04:23.820 | me being able to know that you successfully
00:04:25.580 | predicted the future.
00:04:26.940 | Do you think that's--
00:04:27.900 | - Well, it's not impressing you that I'm intelligent.
00:04:29.660 | In other words, you may be convinced
00:04:31.940 | that I'm intelligent in some form.
00:04:34.260 | - So how, what would convince--
00:04:35.460 | - Because of my ability to predict.
00:04:37.140 | - So I would look at the metrics.
00:04:37.980 | - When you can, I say, wow,
00:04:40.380 | you're right more times than I am.
00:04:43.260 | You're doing something interesting.
00:04:44.580 | That's a form of intelligence.
00:04:47.420 | But then what happens is, if I say, how are you doing that?
00:04:51.700 | And you can't communicate with me,
00:04:53.580 | and you can't describe that to me,
00:04:56.020 | now I may label you a savant.
00:04:59.020 | I may say, well, you're doing something weird,
00:05:01.540 | and it's just not very interesting to me,
00:05:04.660 | because you and I can't really communicate.
00:05:07.660 | And so now, so this is interesting, right?
00:05:10.660 | Because now this is, you're in this weird place
00:05:13.420 | where for you to be recognized as intelligent
00:05:17.620 | the way I'm intelligent, then you and I
00:05:20.300 | sort of have to be able to communicate.
00:05:22.580 | And then we start to understand each other,
00:05:26.820 | and then my respect and my appreciation,
00:05:31.780 | my ability to relate to you starts to change.
00:05:35.060 | So now you're not an alien intelligence anymore.
00:05:37.380 | You're a human intelligence now,
00:05:39.340 | because you and I can communicate.
00:05:42.180 | And so I think when we look at animals, for example,
00:05:46.380 | animals can do things we can't quite comprehend,
00:05:48.980 | we don't quite know how they do them,
00:05:50.060 | but they can't really communicate with us.
00:05:52.700 | They can't put what they're going through in our terms.
00:05:56.620 | And so we think of them as sort of,
00:05:57.980 | well, they're these alien intelligences,
00:05:59.780 | and they're not really worth necessarily what we're worth.
00:06:01.860 | We don't treat them the same way as a result of that.
00:06:04.620 | But it's hard, because who knows what's going on.
00:06:09.900 | - So just a quick elaboration on that.
00:06:13.900 | The explaining that you're intelligent,
00:06:16.220 | explaining the reasoning that went into the prediction
00:06:20.540 | is not some kind of mathematical proof.
00:06:25.340 | If we look at humans, look at political debates
00:06:28.500 | and discourse on Twitter, it's mostly just telling stories.
00:06:33.500 | So your task is, sorry, your task is not to tell
00:06:39.340 | an accurate depiction of how you reason,
00:06:43.420 | but to tell a story, real or not,
00:06:46.700 | that convinces me that there was a mechanism
00:06:49.380 | by which you--
00:06:50.220 | - Ultimately, that's what a proof is.
00:06:51.900 | I mean, even a mathematical proof is that.
00:06:54.500 | Because ultimately, the other mathematicians
00:06:56.500 | have to be convinced by your proof.
00:06:58.300 | Otherwise, in fact, there have been--
00:07:01.260 | - That's the metric of success, yeah.
00:07:02.740 | - Yeah, there have been several proofs out there
00:07:04.340 | where mathematicians would study for a long time
00:07:06.300 | before they were convinced
00:07:07.140 | that it actually proved anything.
00:07:08.860 | Right, you never know if it proved anything
00:07:10.500 | until the community of mathematicians decided that it did.
00:07:13.140 | So I mean, but it's a real thing.
00:07:16.940 | And that's sort of the point, right,
00:07:19.220 | is that ultimately, this notion of understanding,
00:07:22.740 | us understanding something is ultimately a social concept.
00:07:26.500 | In other words, I have to convince enough people
00:07:29.000 | that I did this in a reasonable way.
00:07:32.020 | I did this in a way that other people can understand
00:07:34.660 | and replicate and that it makes sense to them.
00:07:38.100 | So human intelligence is bound together in that way.
00:07:43.100 | We're bound up in that sense.
00:07:45.740 | We sort of never really get away with it
00:07:47.820 | until we can sort of convince others
00:07:50.860 | that our thinking process makes sense.
00:07:54.140 | - Did you think the general question of intelligence
00:07:57.380 | is then also a social construct?
00:07:59.260 | So if we ask questions of an artificial intelligence system,
00:08:04.260 | is this system intelligent?
00:08:06.900 | The answer will ultimately be a socially constructed--
00:08:10.900 | - I think, so I think, I'm making two statements.
00:08:14.300 | I'm saying we can try to define intelligence
00:08:16.260 | in this super objective way that says, here's this data.
00:08:21.260 | I wanna predict this type of thing.
00:08:24.020 | Learn this function, and then if you get it right
00:08:27.100 | often enough, we consider you intelligent.
00:08:30.340 | - But that's more like a savant.
00:08:32.700 | - I think it is.
00:08:34.020 | It doesn't mean it's not useful.
00:08:35.860 | It could be incredibly useful.
00:08:36.940 | It could be solving a problem we can't otherwise solve
00:08:39.780 | and can solve it more reliably than we can.
00:08:42.820 | But then there's this notion of,
00:08:45.260 | can humans take responsibility
00:08:48.740 | for the decision that you're making?
00:08:51.980 | Can we make those decisions ourselves?
00:08:54.420 | Can we relate to the process that you're going through?
00:08:57.140 | And now, you as an agent,
00:08:59.460 | whether you're a machine or another human, frankly,
00:09:02.840 | are now obliged to make me understand
00:09:06.960 | how it is that you're arriving at that answer
00:09:09.180 | and allow me, me or obviously a community
00:09:12.180 | or a judge of people to decide whether or not
00:09:15.100 | that makes sense.
00:09:15.940 | And by the way, that happens with humans as well.
00:09:18.520 | You're sitting down with your staff, for example,
00:09:20.360 | and you ask for suggestions about what to do next,
00:09:23.840 | and someone says, "Oh, I think you should buy,
00:09:26.880 | "and I think you should buy this much,"
00:09:28.880 | or whatever, or sell, or whatever it is,
00:09:31.440 | or I think you should launch the product today or tomorrow
00:09:34.000 | or launch this product versus that product,
00:09:35.360 | whatever the decision may be, and you ask why,
00:09:38.120 | and the person says, "I just have a good feeling about it."
00:09:41.080 | And you're not very satisfied.
00:09:42.680 | Now, that person could be, you might say,
00:09:46.560 | "Well, you've been right before,
00:09:49.080 | "but I'm gonna put the company on the line.
00:09:52.340 | "Can you explain to me why I should believe this?"
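
For a model as simple as the hypothetical linear predictor sketched earlier, one bare-bones answer to "explain to me why" is to report each input's contribution to the score rather than only the prediction. The feature names and weights below are invented for illustration; whether such an accounting actually convinces anyone is exactly the harder question being discussed here.

```python
import numpy as np

# Hypothetical fitted weights and one new situation (illustrative values only).
feature_names = ["price", "recent_demand", "competitor_action"]
w = np.array([0.8, -1.2, 0.4])   # weights of an assumed linear predictor
x = np.array([1.0, 0.5, 2.0])    # the case we are being asked to decide on

# A minimal "why": each feature's contribution to the predicted score,
# listed from largest to smallest effect.
contributions = w * x
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    print(f"{name:>18}: {c:+.2f}")
print(f"{'total score':>18}: {contributions.sum():+.2f}")
```
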
00:09:55.120 | - And that explanation may have nothing to do
00:09:58.280 | with the truth.
00:10:00.080 | It's how to convince the other person.
00:10:01.760 | It could still be wrong.
00:10:03.600 | - It's just gotta be convincing.
00:10:04.600 | - But it's ultimately gotta be convincing.
00:10:06.120 | And that's why I'm saying we're bound together.
00:10:10.440 | Our intelligences are bound together in that sense.
00:10:12.440 | We have to understand each other.
00:10:13.640 | And if, for example, you're giving me an explanation,
00:10:17.160 | and this is a very important point,
00:10:19.280 | you're giving me an explanation,
00:10:21.340 | and I'm not good at reasoning well
00:10:31.800 | and being objective and following logical paths
00:10:36.320 | and consistent paths,
00:10:37.440 | and I'm not good at measuring
00:10:39.680 | and sort of computing probabilities across those paths,
00:10:43.800 | what happens is collectively,
00:10:45.480 | we're not gonna do well.
00:10:48.400 | - How hard is that problem, the second one?
00:10:51.440 | So I think we'll talk quite a bit about the first
00:10:56.240 | on a specific objective metric benchmark performing well.
00:11:01.240 | But being able to explain the steps, the reasoning,
00:11:07.120 | how hard is that problem?
00:11:08.840 | - I think that's very hard.
00:11:10.080 | I mean, I think that that's,
00:11:11.560 | well, it's hard for humans.
00:11:16.440 | - The thing that's hard for humans, as you know,
00:11:19.240 | may not necessarily be hard for computers
00:11:21.200 | and vice versa.
00:11:22.720 | So, sorry, so how hard is that problem for computers?
00:11:27.720 | - I think it's hard for computers,
00:11:30.920 | and the reason why I related it to saying
00:11:32.880 | that it's also hard for humans
00:11:34.700 | is because I think when we step back
00:11:36.620 | and we say we wanna design computers to do that,
00:11:40.220 | one of the things we have to recognize
00:11:44.760 | is we're not sure how to do it well.
00:11:48.800 | I'm not sure we have a recipe for that,
00:11:51.220 | and even if you wanted to learn it,
00:11:53.600 | it's not clear exactly what data we use
00:11:56.680 | and what judgments we use to learn that well.
00:12:01.980 | And so what I mean by that is,
00:12:03.720 | if you look at the entire enterprise of science,
00:12:07.760 | science is supposed to be about objective reason, right?
00:12:11.960 | So we think about, gee, who's the most intelligent person
00:12:15.940 | or group of people in the world?
00:12:18.780 | Do we think about the savants who can close their eyes
00:12:22.320 | and give you a number?
00:12:23.800 | We think about the think tanks,
00:12:25.960 | or the scientists or the philosophers
00:12:27.760 | who kind of work through the details
00:12:30.960 | and write the papers and come up
00:12:32.560 | with the thoughtful, logical proofs
00:12:35.360 | and use the scientific method,
00:12:36.880 | and I think it's the latter.
00:12:38.940 | And my point is that, how do you train someone to do that?
00:12:44.040 | And that's what I mean by it's hard.
00:12:45.880 | What's the process of training people to do that well?
00:12:49.060 | That's a hard process.
00:12:50.660 | We work, as a society, we work pretty hard
00:12:54.300 | to get other people to understand our thinking
00:12:57.520 | and to convince them of things.
00:13:00.500 | Now we could persuade them,
00:13:02.300 | obviously we talked about this,
00:13:03.580 | like human flaws or weaknesses,
00:13:05.780 | we can persuade them through emotional means,
00:13:10.460 | but to get them to understand and connect to
00:13:14.420 | and follow a logical argument is difficult.
00:13:18.240 | We try it, we do it as scientists,
00:13:20.720 | we try to do it as journalists,
00:13:22.480 | we try to do it as even artists in many forms,
00:13:25.540 | as writers, as teachers.
00:13:28.040 | We go through a fairly significant training process
00:13:31.200 | to do that, and then we could ask,
00:13:33.500 | well, why is that so hard?
00:13:36.180 | But it's hard, and for humans, it takes a lot of work.
00:13:41.220 | And when we step back and say,
00:13:44.240 | well, how do we get a machine to do that?
00:13:47.420 | It's a vexing question.
00:13:48.900 | - How would you begin to try to solve that?
00:13:53.540 | And maybe just a quick pause,
00:13:55.660 | because there's an optimistic notion
00:13:58.100 | in the things you're describing,
00:13:59.300 | which is being able to explain something through reason.
00:14:03.360 | But if you look at algorithms that recommend things
00:14:06.920 | that we'll look at next,
00:14:08.060 | whether it's Facebook, Google,
00:14:10.060 | advertisement-based companies,
00:14:12.860 | you know, their goal is to convince you to buy things
00:14:17.660 | based on anything.
00:14:20.660 | So that could be reason,
00:14:23.740 | 'cause the best of advertisement
00:14:25.460 | is showing you things that you really do need
00:14:27.900 | and explain why you need it.
00:14:30.300 | But it could also be through emotional manipulation.
00:14:33.960 | The algorithm that describes why a certain reason,
00:14:39.100 | a certain decision was made,
00:14:42.080 | how hard is it to do it through emotional manipulation?
00:14:46.480 | And why is that a good or a bad thing?
00:14:50.220 | So you've kind of focused on reason, logic,
00:14:55.160 | really showing in a clear way why something is good.
00:14:59.780 | One, is that even a thing that us humans do?
00:15:04.220 | And two, how do you think of the difference
00:15:08.180 | in the reasoning aspect and the emotional manipulation?
00:15:11.700 | - So you call it emotional manipulation,
00:15:15.620 | but more objectively, it's essentially saying,
00:15:18.460 | there are certain features of things
00:15:20.920 | that seem to attract your attention.
00:15:22.700 | I mean, it kind of gives you more of that stuff.
00:15:25.080 | - Manipulation is a bad word.
00:15:26.540 | - Yeah, I mean, I'm not saying it's right or wrong.
00:15:29.400 | It works to get your attention,
00:15:31.240 | and it works to get you to buy stuff.
00:15:32.700 | And when you think about algorithms
00:15:34.240 | that look at the patterns of features
00:15:38.280 | that you seem to be spending your money on,
00:15:40.200 | and say, I'm gonna give you something
00:15:41.540 | with a similar pattern,
00:15:43.080 | so I'm gonna learn that function,
00:15:44.360 | because the objective is to get you to click on it
00:15:46.480 | or get you to buy it or whatever it is.
00:15:48.500 | I don't know, I mean, it is what it is.
00:15:51.680 | I mean, that's what the algorithm does.
00:15:54.120 | You can argue whether it's good or bad.
00:15:55.720 | It depends what your goal is.
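
What is being described here is essentially content-based filtering: represent each item by surface features, summarize the user by the features of what they already bought or clicked on, and rank the remaining items by similarity. A rough, self-contained sketch under those assumptions (the catalog, features, and scoring are invented for illustration and are not any particular company's system):

```python
import numpy as np

# Hypothetical item features: [is_gadget, is_outdoors, is_luxury]. These are
# surface attributes only, with no notion of whether anyone needs the item.
catalog = {
    "camping stove":   np.array([0.0, 1.0, 0.0]),
    "trail backpack":  np.array([0.0, 1.0, 0.0]),
    "gps watch":       np.array([1.0, 1.0, 0.0]),
    "designer jacket": np.array([0.0, 0.0, 1.0]),
}

# "The patterns of features you seem to be spending your money on":
# the user profile is just the average feature vector of past purchases.
purchases = ["camping stove", "trail backpack"]
profile = np.mean([catalog[item] for item in purchases], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# "Give you something with a similar pattern": rank the items you haven't
# bought by similarity to that profile, closest matches first.
candidates = [item for item in catalog if item not in purchases]
for item in sorted(candidates, key=lambda i: cosine(profile, catalog[i]),
                   reverse=True):
    print(f"{item}: similarity {cosine(profile, catalog[item]):.2f}")
```
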
00:15:58.720 | - I guess this seems to be very useful for convincing,
00:16:02.440 | for telling a story.
00:16:03.280 | - I think for convincing humans, it's good,
00:16:05.960 | because again, this goes back to,
00:16:07.880 | what is the human behavior like?
00:16:10.360 | How does the human brain respond to things?
00:16:15.280 | I think there's a more optimistic view of that, too,
00:16:17.640 | which is that if you're searching
00:16:20.280 | for certain kinds of things,
00:16:21.400 | you've already reasoned that you need them.
00:16:24.440 | And these algorithms are saying, look, that's up to you
00:16:28.320 | to reason whether you need something or not.
00:16:30.460 | That's your job.
00:16:31.960 | You may have an unhealthy addiction to this stuff,
00:16:35.200 | or you may have a reasoned and thoughtful explanation
00:16:40.200 | for why it's important to you,
00:16:42.800 | and the algorithms are saying, hey, that's whatever.
00:16:45.520 | That's your problem.
00:16:46.360 | All I know is you're buying stuff like that,
00:16:48.880 | you're interested in stuff like that.
00:16:50.200 | Could be a bad reason, could be a good reason.
00:16:52.220 | That's up to you.
00:16:53.240 | I'm gonna show you more of that stuff.
00:16:55.800 | And I think that that's, it's not good or bad.
00:17:00.520 | It's not reasoned or not reasoned.
00:17:01.840 | The algorithm is doing what it does,
00:17:03.200 | which is saying, you seem to be interested in this,
00:17:05.200 | I'm gonna show you more of that stuff.
00:17:07.640 | And I think we're seeing this not just in buying stuff,
00:17:09.520 | but even in social media.
00:17:10.480 | You're reading this kind of stuff.
00:17:12.280 | I'm not judging on whether it's good or bad.
00:17:14.040 | I'm not reasoning at all.
00:17:15.240 | I'm just saying, I'm gonna show you other stuff
00:17:17.500 | with similar features.
00:17:19.120 | And that's it, and I wash my hands from it,
00:17:21.840 | and I say, that's all that's going on.
00:17:24.240 | - People are so harsh on AI systems.
00:17:30.200 | So one, the bar of performance is extremely high,
00:17:33.200 | and yet we also ask them, in the case of social media,
00:17:37.840 | to help find the better angels of our nature,
00:17:41.200 | and help make a better society.
00:17:44.240 | So what do you think about the role of AI there?
00:17:46.640 | - I agree with you.
00:17:48.240 | That's the interesting dichotomy, right?
00:17:49.840 | Because on one hand, we're sitting there,
00:17:52.440 | and we're sort of doing the easy part,
00:17:54.160 | which is finding the patterns.
00:17:56.240 | We're not building, the system's not building a theory
00:18:00.080 | that is consumable and understandable by other humans
00:18:02.480 | that can be explained and justified.
00:18:04.640 | And so on one hand, to say, oh, AI is doing this.
00:18:09.640 | Why isn't it doing this other thing?
00:18:11.960 | Well, this other thing's a lot harder.
00:18:14.560 | And it's interesting to think about why it's harder.
00:18:18.440 | It's because you're interpreting the data
00:18:22.240 | in the context of prior models.
00:18:24.520 | In other words, understandings of what's important
00:18:27.200 | in the world, what's not important.
00:18:28.480 | What are all the other abstract features
00:18:30.280 | that drive our decision-making?
00:18:33.640 | What's sensible, what's not sensible, what's good,
00:18:35.680 | what's bad, what's moral, what's valuable, what isn't?
00:18:38.280 | Where is that stuff?
00:18:39.400 | No one's applying the interpretation.
00:18:41.520 | So when I see you clicking on a bunch of stuff,
00:18:44.880 | and I look at these simple features, the raw features,
00:18:48.040 | the features that are there in the data,
00:18:49.360 | like what words are being used,
00:18:51.600 | or how long the material is,
00:18:55.960 | or other very superficial features,
00:18:58.920 | what colors are being used in the material.
00:19:00.840 | Like I don't know why you're clicking
00:19:02.200 | on the stuff you're looking at,
00:19:03.280 | or if it's products, what the price is,
00:19:05.920 | or what the category is, and stuff like that.
00:19:07.880 | And I just feed you more of the same stuff.
00:19:09.880 | That's very different than kind of getting in there
00:19:12.080 | and saying, what does this mean?
00:19:14.320 | The stuff you're reading, like why are you reading it?
00:19:18.680 | What assumptions are you bringing to the table?
00:19:22.240 | Are those assumptions sensible?
00:19:24.720 | Does the material make any sense?
00:19:27.320 | Does it lead you to thoughtful, good conclusions?
00:19:32.320 | Again, there's interpretation and judgment involved
00:19:35.720 | in that process.
00:19:37.240 | That isn't really happening in the AI today.
00:19:41.120 | That's harder.
00:19:43.760 | Because you have to start getting at the meaning
00:19:46.840 | of the stuff, of the content.
00:19:50.320 | You have to get at how humans interpret the content
00:19:54.040 | relative to their value system
00:19:57.000 | and deeper thought processes.
00:19:58.880 | - So that's what meaning means,
00:20:00.440 | is not just some kind of deep, timeless, semantic thing
00:20:05.440 | that the statement represents,
00:20:09.240 | but also how a large number of people
00:20:11.680 | are likely to interpret it.
00:20:13.520 | So again, even meaning is a social construct,
00:20:17.380 | so you have to try to predict
00:20:19.800 | how most people would understand this kind of statement.
00:20:22.800 | - Yeah, meaning is often relative,
00:20:25.560 | but meaning implies that the connections
00:20:28.120 | go beneath the surface of the artifacts.
00:20:30.120 | If I show you a painting, it's a bunch of colors on a canvas,
00:20:33.760 | what does it mean to you?
00:20:35.400 | And it may mean different things to different people
00:20:37.680 | because of their different experiences.
00:20:40.520 | It may mean something even different
00:20:43.000 | to the artist who painted it.
00:20:44.720 | As we try to get more rigorous with our communication,
00:20:48.980 | we try to really nail down that meaning.
00:20:51.520 | So we go from abstract art to precise mathematics,
00:20:56.520 | precise engineering drawings and things like that.
00:20:59.800 | We're really trying to say,
00:21:01.680 | I wanna narrow that space of possible interpretations
00:21:06.560 | because the precision of the communication
00:21:09.000 | ends up becoming more and more important.
00:21:11.680 | And so that means that I have to specify,
00:21:16.160 | and I think that's why this becomes really hard.
00:21:19.640 | Because if I'm just showing you an artifact
00:21:22.440 | and you're looking at it superficially,
00:21:24.240 | whether it's a bunch of words on a page
00:21:26.480 | or whether it's brushstrokes on a canvas
00:21:30.200 | or pixels in a photograph,
00:21:31.880 | you can sit there and you can interpret
00:21:33.360 | lots of different ways at many, many different levels.
00:21:36.060 | But when I wanna align our understanding of that,
00:21:43.240 | I have to specify a lot more stuff
00:21:46.680 | that's actually not directly in the artifact.
00:21:50.600 | Now I have to say, well, how are you interpreting
00:21:54.280 | this image and that image?
00:21:55.560 | And what about the colors and what do they mean to you?
00:21:57.680 | What perspective are you bringing to the table?
00:22:00.880 | What are your prior experiences with those artifacts?
00:22:03.920 | What are your fundamental assumptions and values?
00:22:07.120 | What is your ability to kind of reason
00:22:09.160 | to chain together logical implication
00:22:11.940 | as you're sitting there and saying,
00:22:12.780 | well, if this is the case, then I would conclude this.
00:22:14.840 | If that's the case, then I would conclude that.
00:22:17.400 | So your reasoning processes and how they work,
00:22:20.800 | your prior models and what they are,
00:22:23.640 | your values and your assumptions,
00:22:25.480 | all those things now come together into the interpretation.
00:22:28.900 | Getting in sync on that is hard.
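
The "chain together logical implication" step is, by itself, easy to mechanize; the hard part the answer is pointing at is agreeing on the premises, the rules, and what they mean. For contrast, here is a tiny forward-chaining sketch over made-up if-then rules (facts and rules are purely illustrative):

```python
# Toy forward chaining: keep applying "if all premises hold, conclude X"
# until nothing new follows. The facts and rules are invented for this sketch.
facts = {"it is raining", "I am outside"}
rules = [
    ({"it is raining", "I am outside"}, "I am getting wet"),
    ({"I am getting wet"}, "I should find shelter"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the chained conclusions
```

Two people starting from different facts, different rules, or different readings of the same artifact will chain to different conclusions, which is why aligning on interpretation takes so much extra specification.
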
00:22:32.060 | - And yet humans are able to intuit some of that
00:22:35.880 | without any pre--
00:22:37.840 | - Because they have the shared experience.
00:22:39.840 | - And we're not talking about shared,
00:22:41.200 | two people having a shared experience.
00:22:42.680 | I mean, as a society--
00:22:43.840 | - That's correct.
00:22:44.880 | We have the shared experience and we have similar brains.
00:22:49.500 | So we tend to, in other words,
00:22:52.360 | part of our shared experience
00:22:53.400 | is our shared local experience.
00:22:54.780 | Like we may live in the same culture,
00:22:56.160 | we may live in the same society,
00:22:57.360 | and therefore we have similar educations.
00:23:00.320 | We have similar, what we like to call prior models
00:23:02.400 | about the prior experiences.
00:23:04.160 | And we use that as a,
00:23:05.680 | think of it as a wide collection of interrelated variables
00:23:09.240 | and they're all bound to similar things.
00:23:11.080 | And so we take that as our background
00:23:13.360 | and we start interpreting things similarly.
00:23:15.840 | But as humans, we have a lot of shared experience.
00:23:20.160 | We do have similar brains, similar goals,
00:23:23.280 | similar emotions under similar circumstances
00:23:26.360 | because we're both humans.
00:23:27.320 | So now one of the early questions you asked,
00:23:29.720 | how are biological and computer information systems
00:23:34.720 | fundamentally different?
00:23:36.280 | Well, one is humans come with a lot of pre-programmed stuff,
00:23:42.160 | a ton of programmed stuff,
00:23:44.240 | and they're able to communicate
00:23:45.520 | because they have a lot of,
00:23:46.640 | because they share that stuff.