What is Wolfram Language? (Stephen Wolfram) | AI Podcast Clips


Chapters

0:00 Intro
1:00 Symbolic language
3:45 Random sample
7:50 History of AI
11:30 The Dream of Machine Learning
13:45 The State of Wolfram Language
19:02 Wolfram Knowledge Base
26:16 Optimism
27:20 The Internet
30:57 Encoding ideologies
33:08 Different value systems

Whisper Transcript

00:00:00.000 | what is Wolfram language in terms of,
00:00:04.400 | sort of, I mean I can answer the question for you,
00:00:08.440 | but is it basically, not the philosophical,
00:00:12.040 | deep, the profound, the impact of it,
00:00:14.020 | I'm talking about in terms of tools,
00:00:15.800 | in terms of things you can download,
00:00:17.200 | in terms of stuff you can play with, what is it?
00:00:19.300 | Where does it fit into the infrastructure?
00:00:21.560 | What are the different ways to interact with it?
00:00:23.440 | - Right, so I mean the two big things
00:00:25.280 | that people have sort of perhaps heard of
00:00:27.840 | that come from Wolfram language,
00:00:29.280 | one is Mathematica, the other is Wolfram Alpha.
00:00:31.720 | So Mathematica first came out in 1988,
00:00:34.760 | it's this system that is basically
00:00:37.640 | an instance of Wolfram language,
00:00:40.800 | and it's used to do computations,
00:00:43.880 | particularly in sort of technical areas,
00:00:47.200 | and the typical thing you're doing
00:00:49.080 | is you're typing little pieces of computational language,
00:00:52.540 | and you're getting computations done.
00:00:54.720 | - It's very kind of, there's like a symbolic,
00:00:59.400 | yeah, it's a symbolic language.
00:01:00.600 | - It's a symbolic language, so I mean,
00:01:02.360 | I don't know how to cleanly express that,
00:01:04.120 | but that makes it very distinct
00:01:05.600 | from how we think about sort of,
00:01:08.000 | I don't know, programming in a language
00:01:10.640 | like Python or something.
00:01:11.680 | - Right, so the point is that
00:01:13.800 | in a traditional programming language,
00:01:15.600 | the raw material of the programming language
00:01:18.080 | is just stuff that computers intrinsically do,
00:01:21.300 | and the point of Wolfram language
00:01:23.800 | is that what the language is talking about
00:01:27.000 | is things that exist in the world
00:01:28.800 | or things that we can imagine and construct,
00:01:31.400 | not, it's not sort of,
00:01:34.040 | it's aimed to be an abstract language from the beginning,
00:01:37.560 | and so, for example, one feature it has
00:01:39.120 | is that it's a symbolic language,
00:01:41.000 | which means that the thing called,
00:01:43.280 | you have an X, just type in X,
00:01:46.400 | and Wolfram language will just say, oh, that's X.
00:01:49.020 | It won't say error, undefined thing.
00:01:51.520 | I don't know what it is, computation,
00:01:53.520 | but in terms of the internals of the computer.
00:01:55.780 | Now, that X could perfectly well be the city of Boston.
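A minimal Wolfram Language sketch of the symbolic idea being described here; the session itself isn't shown in the transcript, and the Boston entity specification below is an assumed illustration:

    x                      (* an undefined symbol just evaluates to itself *)
    boston = Entity["City", {"Boston", "Massachusetts", "UnitedStates"}]   (* a city as a symbolic entity *)
    boston["Population"]   (* symbolic entities carry computable properties *)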
00:02:00.720 | That's a thing, that's a symbolic thing,
00:02:03.240 | or it could perfectly well be the trajectory
00:02:06.720 | of some spacecraft represented as a symbolic thing,
00:02:09.880 | and that idea that one can work with,
00:02:14.480 | sort of computationally work with these different,
00:02:17.000 | these kinds of things that exist in the world
00:02:20.120 | or describe the world, that's really powerful,
00:02:22.700 | and that's what, I mean,
00:02:24.960 | when I started designing,
00:02:26.960 | well, when I designed the predecessor
00:02:28.600 | of what's now Wolfram language,
00:02:31.520 | which is a thing called SMP,
00:02:32.680 | which was my first computer language,
00:02:34.720 | I kind of wanted to have this sort of infrastructure
00:02:39.720 | for computation, which was as fundamental as possible.
00:02:42.360 | I mean, this is what I got for having been a physicist
00:02:44.680 | and tried to find fundamental components of things
00:02:48.260 | and wound up with this kind of idea
00:02:50.660 | of transformation rules for symbolic expressions
00:02:54.240 | as being sort of the underlying stuff
00:02:57.560 | from which computation would be built,
00:02:59.640 | and that's what we've been building from in Wolfram language
00:03:03.980 | and operationally what happens,
00:03:06.960 | it's, I would say, by far the highest level
00:03:10.240 | computer language that exists,
00:03:13.080 | and it's really been built in a very different direction
00:03:16.280 | from other languages.
00:03:17.440 | So other languages have been about,
00:03:20.400 | there is a core language.
00:03:22.180 | It really is kind of wrapped around the operations
00:03:24.520 | that a computer intrinsically does.
00:03:26.520 | Maybe people add libraries for this or that,
00:03:29.820 | but the goal of Wolfram language
00:03:31.240 | is to have the language itself be able to cover
00:03:35.280 | this sort of very broad range of things
00:03:37.000 | that show up in the world,
00:03:37.920 | and that means that there are 6,000 primitive functions
00:03:41.880 | in the Wolfram language that cover things.
00:03:44.680 | I could probably pick a random here.
00:03:46.980 | I'm gonna pick just for fun, I'll pick,
00:03:51.200 | let's take a random sample of all the things
00:03:56.200 | that we have here.
00:03:57.660 | So let's just say random sample of 10 of them,
00:04:00.080 | and let's see what we get.
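The on-screen input isn't captured in the transcript; a plausible sketch of this step in Wolfram Language is:

    Length[Names["System`*"]]             (* roughly how many built-in symbols there are *)
    RandomSample[Names["System`*"], 10]   (* pick 10 of them at random *)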
00:04:02.240 | Wow, okay, so these are really different things from-
00:04:05.600 | - Yeah, these are all functions.
00:04:07.120 | - These are all functions, Boolean convert.
00:04:09.360 | Okay, that's a thing for converting
00:04:11.560 | between different types of Boolean expressions.
00:04:14.880 | - So for people just listening,
00:04:17.160 | Stephen typed in a random sample of names,
00:04:21.400 | so this is sampling from all functions.
00:04:21.400 | How many you said there might be?
00:04:22.600 | - 6,000. - 6,000,
00:04:23.520 | from 6,000, 10 of them,
00:04:24.800 | and there's a hilarious variety of them.
00:04:27.960 | - Yeah, right, well, we've got things
00:04:29.480 | about $RequesterAddress
00:04:31.880 | that has to do with interacting
00:04:33.520 | with the world of the cloud and so on,
00:04:37.760 | DiscreteWaveletData, spheroid-
00:04:40.560 | - It's also graphical, sort of window-
00:04:42.720 | - Yeah, yeah, WindowMovable,
00:04:43.920 | that's a user interface kind of thing.
00:04:45.640 | I want to pick another 10, 'cause I think this is some,
00:04:48.200 | okay, so yeah, there's a lot of infrastructure stuff here
00:04:51.240 | that you see if you just start sampling at random,
00:04:53.720 | there's a lot of kind of infrastructural things.
00:04:55.480 | If you more look at the-
00:04:57.760 | - Some of the exciting machine learning stuff
00:04:59.240 | you showed off, is that also in this pool?
00:05:01.960 | - Oh yeah, yeah, I mean, so one of those functions
00:05:04.360 | is like image identify as a function here,
00:05:07.920 | we just say image identify, I don't know,
00:05:09.440 | it's always good to, let's do this,
00:05:11.440 | let's say current image, and let's pick up an image,
00:05:15.520 | hopefully.
00:05:16.360 | - So current image, accessing the webcam,
00:05:20.120 | took a picture of yourself.
00:05:21.440 | - Took a terrible picture, but anyway,
00:05:23.920 | we can say image identify, open square brackets,
00:05:27.040 | and then we just paste that picture in there.
00:05:29.800 | - Image identify function running on the picture.
00:05:32.160 | - Oh, and it says, oh wow, it says,
00:05:34.640 | I look like a plunger, because I got this great big thing
00:05:37.040 | behind my head.
00:05:37.880 | - Classify, so this image identify classifies
00:05:39.760 | the most likely object in the image,
00:05:42.000 | and it says it's a plunger.
00:05:44.280 | - Okay, that's a bit embarrassing,
00:05:45.920 | let's see what it does, let's pick the top 10.
00:05:48.720 | Okay, well it thinks there's a,
00:05:51.480 | oh, it thinks it's pretty unlikely
00:05:53.120 | that it's a primate, a hominid, a person.
00:05:55.040 | - 8% probability.
00:05:56.320 | - Yeah, that's bad.
00:05:57.160 | - Primate 8%, 57% is a plunger.
00:05:59.280 | - Yeah, well, so.
00:06:00.120 | - That hopefully will not give you an existential crisis,
00:06:02.280 | and then 8%, or I shouldn't say percent, but--
00:06:07.280 | - No, that's right, 8% that it's a hominid.
00:06:10.320 | And yeah, okay, it's really,
00:06:12.000 | I'm gonna do another one of these,
00:06:13.280 | just 'cause I'm embarrassed that it,
00:06:15.240 | oops, it didn't see me at all.
00:06:18.440 | There we go, let's try that, let's see what that did.
00:06:21.720 | - We took a picture with a little bit more of your--
00:06:24.120 | - A little bit more of me,
00:06:26.240 | and not just my bald head, so to speak.
00:06:28.520 | Okay, 89% probability it's a person,
00:06:31.240 | so then I would, but, you know,
00:06:34.000 | so this is image identify as an example of one--
00:06:36.640 | - Of just one of them.
00:06:37.680 | - Just one function out of 6,000.
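A sketch of the image-identification session being described, assuming the standard CurrentImage and ImageIdentify argument forms:

    img = CurrentImage[]                         (* capture a frame from the webcam *)
    ImageIdentify[img]                           (* the single most likely identification *)
    ImageIdentify[img, All, 10, "Probability"]   (* the top 10 candidates with their probabilities *)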
00:06:38.880 | - And that's part of the, that's like part of the language.
00:06:42.000 | - That's part of the core language, yes.
00:06:42.840 | - That's part of the core language.
00:06:43.680 | - And I mean, you know, something like,
00:06:45.240 | I could say, I don't know, let's find the geo nearest,
00:06:49.240 | what could we find?
00:06:50.840 | Let's find the nearest volcano.
00:06:53.080 | Let's find the 10, I wonder where it thinks here is.
00:06:59.400 | Let's try finding the 10 volcanoes nearest here, okay?
00:07:04.080 | - So geo nearest volcano here, 10 nearest volcanoes.
00:07:08.680 | - Right, let's find out where those are.
00:07:09.960 | We can now, we got a list of volcanoes out,
00:07:12.080 | and I can say geo list plot that,
00:07:15.200 | and hopefully, oh, okay, so there we go.
00:07:16.840 | So there's a map that shows the positions
00:07:19.680 | of those 10 volcanoes.
00:07:21.040 | - Of the East Coast and the Midwest, and it's the,
00:07:23.520 | well, no, we're okay, we're okay, it's not too bad.
00:07:26.240 | - Yeah, they're not very close to us.
00:07:27.400 | We could measure how far away they are.
00:07:29.600 | But, you know, the fact that right in the language,
00:07:33.000 | it knows about all the volcanoes in the world,
00:07:35.440 | it knows, you know, computing what the nearest ones are,
00:07:38.440 | it knows all the maps of the world, and so on.
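A sketch of the volcano example, assuming the standard GeoNearest and GeoListPlot forms:

    volcanoes = GeoNearest["Volcano", Here, 10]   (* the 10 volcanoes nearest the current location *)
    GeoListPlot[volcanoes]                        (* show them on a map *)
    GeoDistance[Here, First[volcanoes]]           (* measure how far away the nearest one is *)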
00:07:40.280 | - It's a fundamentally different idea
00:07:41.400 | of what a language is.
00:07:42.320 | - Yeah, right, that's why I like to talk about it
00:07:45.080 | as a full-scale computational language.
00:07:47.160 | That's what we've tried to do.
00:07:48.640 | - And just if you can comment briefly,
00:07:50.520 | I mean, this kind of, the Wolfram language,
00:07:54.040 | along with Wolfram Alpha, represents kind of
00:07:56.000 | what the dream of what AI is supposed to be.
00:07:58.680 | There's now sort of a craze of learning,
00:08:01.720 | kind of idea that we can take raw data,
00:08:04.560 | and from that extract the different hierarchies
00:08:08.480 | of abstractions in order to be able to,
00:08:11.480 | like in order to form the kind of things
00:08:13.240 | that Wolfram language operates with.
00:08:17.640 | But we're very far from learning systems
00:08:20.080 | being able to form that.
00:08:21.280 | The context of history of AI, if you could just comment on,
00:08:27.180 | there is, you said computation X.
00:08:30.280 | And there's just some sense where in the 80s and 90s,
00:08:33.360 | sort of expert systems represented
00:08:35.400 | a very particular computation X.
00:08:37.320 | - Yes.
00:08:38.160 | - And there's a kind of notion
00:08:39.640 | that those efforts didn't pan out.
00:08:43.400 | - Right.
00:08:44.240 | - But then out of that emerges kind of Wolfram language,
00:08:47.840 | Wolfram Alpha, which is the success, I mean.
00:08:50.480 | - Yeah, right.
00:08:51.320 | I think those are, in some sense,
00:08:52.600 | those efforts were too modest.
00:08:54.400 | - Right, exactly.
00:08:55.240 | - They were looking at particular areas,
00:08:57.640 | and you actually can't do it with a particular area.
00:08:59.960 | I mean, like even a problem
00:09:01.280 | like natural language understanding,
00:09:03.080 | it's critical to have broad knowledge of the world
00:09:05.320 | if you want to do good natural language understanding.
00:09:07.980 | And you kind of have to bite off the whole problem.
00:09:10.400 | If you say we're just gonna do the blocks world over here,
00:09:13.520 | so to speak, you don't really,
00:09:15.680 | it's actually, it's one of these cases
00:09:17.960 | where it's easier to do the whole thing
00:09:19.560 | than it is to do some piece of it.
00:09:21.040 | You know, one comment to make about,
00:09:22.600 | so the relationship between what we've tried to do
00:09:25.200 | and sort of the learning side of AI,
00:09:28.000 | you know, in a sense, if you look at the development
00:09:30.960 | of knowledge in our civilization as a whole,
00:09:33.400 | there was kind of this notion pre 300 years ago or so now,
00:09:37.360 | you want to figure something out about the world,
00:09:38.920 | you can reason it out.
00:09:40.200 | You can do things which are just use raw human thought.
00:09:44.160 | And then along came sort of modern mathematical science.
00:09:47.720 | And we found ways to just sort of blast through that
00:09:51.240 | by in that case, writing down equations.
00:09:53.720 | Now we also know we can do that with computation and so on.
00:09:57.180 | And so that was kind of a different thing.
00:09:59.120 | So when we look at how do we sort of encode knowledge
00:10:03.160 | and figure things out,
00:10:04.620 | one way we could do it is start from scratch,
00:10:06.760 | learn everything,
00:10:08.080 | it's just a neural net figuring everything out.
00:10:10.960 | But in a sense that denies the sort of knowledge
00:10:14.280 | based achievements of our civilization.
00:10:16.480 | Because in our civilization, we have learned lots of stuff.
00:10:19.640 | We've surveyed all the volcanoes in the world.
00:10:21.520 | We've done, you know, we figured out lots of algorithms
00:10:24.360 | for this or that.
00:10:25.700 | Those are things that we can encode computationally.
00:10:28.960 | And that's what we've tried to do.
00:10:30.640 | And we're not saying just,
00:10:32.420 | you don't have to start everything from scratch.
00:10:34.600 | So in a sense, a big part of what we've done
00:10:37.140 | is to try and sort of capture the knowledge of the world
00:10:40.940 | in computational form and computable form.
00:10:43.520 | Now, there's also some pieces
00:10:45.640 | which were for a long time undoable by computers
00:10:49.520 | like image identification,
00:10:51.300 | where there's a really, really useful module
00:10:54.100 | that we can add that is those things
00:10:56.620 | which actually were pretty easy for humans to do
00:10:59.220 | that had been hard for computers to do.
00:11:01.120 | I think the thing that's interesting that's emerging now
00:11:03.620 | is the interplay between these things,
00:11:05.140 | between this kind of knowledge of the world
00:11:07.220 | that is in a sense very symbolic
00:11:09.380 | and this kind of sort of much more statistical
00:11:13.060 | kind of things like image identification and so on.
00:11:17.660 | And putting those together
00:11:19.180 | by having this sort of symbolic representation
00:11:21.500 | of image identification,
00:11:23.660 | that that's where things get really interesting
00:11:25.980 | and where you can kind of symbolically represent patterns
00:11:28.540 | of things and images and so on.
00:11:30.900 | I think that's kind of a part of the path forward,
00:11:34.840 | so to speak.
00:11:35.680 | - Yeah, so the dream of, so the machine learning is not,
00:11:39.460 | in my view, I think the view of many people
00:11:41.520 | is not anywhere close to building the kind of wide world
00:11:46.520 | of computable knowledge that Wolfram Language has built.
00:11:50.280 | But because you have a kind of,
00:11:53.540 | you've done the incredibly hard work of building this world,
00:11:56.740 | now machine learning can serve as tools
00:12:01.060 | to help you explore that world.
00:12:02.620 | - Yeah, yeah.
00:12:03.460 | - And that's what you've added with version 12, right?
00:12:06.860 | You added a few, I was seeing some demos.
00:12:08.740 | It looks amazing.
00:12:10.500 | - Right, I mean, I think this,
00:12:12.400 | it's sort of interesting to see the sort of the,
00:12:17.100 | once it's computable, once it's in there,
00:12:19.020 | it's running in sort of a very efficient computational way,
00:12:22.220 | but then there's sort of things like the interface
00:12:24.100 | of how do you get there?
00:12:25.180 | How do you do natural language understanding to get there?
00:12:27.340 | How do you pick out entities
00:12:29.340 | in a big piece of text or something?
00:12:31.140 | That's, I mean, actually a good example right now
00:12:34.720 | is our NLP, NLU loop, which is,
00:12:37.440 | we've done a lot of stuff, natural language understanding,
00:12:40.700 | using essentially not learning-based methods,
00:12:43.360 | using a lot of little algorithmic methods,
00:12:47.180 | human curation methods, and so on.
00:12:48.700 | - Which is when people try to enter a query
00:12:51.400 | and then converting, so the process of converting,
00:12:54.300 | NLU defined beautifully as converting their query
00:12:59.300 | into a computational language,
00:13:02.140 | which is a very well, first of all,
00:13:04.340 | super practical definition, a very useful definition,
00:13:07.620 | and then also a very clear definition
00:13:10.460 | of natural language understanding.
00:13:12.100 | - Right, I mean, a different thing
00:13:13.500 | is natural language processing,
00:13:14.900 | where it's like, here's a big lump of text,
00:13:17.620 | go pick out all the cities in that text, for example.
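For the "pick out all the cities" kind of task, a small sketch of how the two sides look in Wolfram Language; the example sentence is just an illustration:

    TextCases["I flew from Boston to Chicago and then on to San Francisco.", "City"]   (* NLP: extract city mentions from free text *)
    Interpreter["City"]["boston"]   (* NLU: interpret a string as a computable city entity *)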
00:13:20.640 | And so a good example of, you know, so we do that.
00:13:22.980 | We're using modern machine learning techniques.
00:13:26.420 | And it's actually kind of an interesting process
00:13:29.780 | that's going on right now, is this loop between
00:13:32.460 | what do we pick up with NLP, we're using machine learning,
00:13:35.980 | versus what do we pick up with our more
00:13:38.180 | kind of precise computational methods
00:13:40.420 | in natural language understanding.
00:13:42.140 | And so we've got this kind of loop going between those,
00:13:44.120 | which is improving both of them.
00:13:45.760 | - Yeah, and I think you have some of the state-of-the-art
00:13:47.420 | transformers, like you have BERT in there, I think.
00:13:49.300 | - Oh, yeah.
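On the transformer side, pre-trained networks can be pulled in from the Wolfram Neural Net Repository; the exact model name below is an assumption and may differ:

    bert = NetModel["BERT Trained on BookCorpus and English Wikipedia Data"]   (* model name assumed; check the repository listing *)
    bert["A sentence to encode."]   (* contextual embedding vectors for the tokens *)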
00:13:50.140 | - So it's cool, so you have,
00:13:51.060 | you have integrating all the models.
00:13:52.500 | I mean, this is the hybrid thing that people
00:13:55.500 | have always dreamed about or talking about.
00:13:57.860 | That makes you just surprised, frankly,
00:14:01.240 | that Wolfram Language is not more popular
00:14:03.580 | than it already is.
00:14:05.580 | - You know, that's a, it's a complicated issue,
00:14:09.660 | because it's like, it involves, you know,
00:14:14.660 | it involves ideas, and ideas are absorbed
00:14:18.340 | slowly in the world.
00:14:19.620 | I mean, I think that's--
00:14:20.460 | - And then there's sort of, like what we're talking about,
00:14:22.340 | there's egos and personalities,
00:14:23.900 | and some of the absorption mechanisms of ideas
00:14:28.900 | have to do with personalities,
00:14:31.380 | and the students of personalities,
00:14:33.500 | and then a little social network.
00:14:35.680 | So it's interesting how the spread of ideas works.
00:14:38.660 | - You know what's funny with Wolfram Language
00:14:40.580 | is that we are, if you say, you know,
00:14:43.540 | what market, sort of market penetration,
00:14:46.040 | if you look at the, I would say, very high end of R&D
00:14:50.060 | and sort of the people where you say,
00:14:52.460 | "Wow, that's a really, you know, impressive, smart person,"
00:14:56.180 | they're very often users of Wolfram Language,
00:14:58.460 | very, very often.
00:14:59.860 | If you look at the more sort of, it's a funny thing,
00:15:02.560 | if you look at the more kind of, I would say,
00:15:05.120 | people who are like, "Oh, we're just plodding away
00:15:07.340 | "doing what we do," they're often not yet
00:15:11.100 | Wolfram Language users, and that dynamic,
00:15:13.300 | it's kind of odd that there hasn't been
00:15:14.780 | more rapid trickle down, because we really,
00:15:17.380 | you know, the high end, we've really been very successful
00:15:20.440 | in for a long time, and it's some,
00:15:23.340 | but with, you know, that's partly, I think,
00:15:26.780 | a consequence of, it's my fault in a sense,
00:15:29.740 | because it's kind of, you know, I have a company
00:15:32.420 | which is, really emphasizes sort of creating products
00:15:37.100 | and building a, sort of the best possible
00:15:41.700 | technical tower we can, rather than sort of
00:15:45.740 | doing the commercial side of things and pumping it out
00:15:48.460 | in sort of the most effective way.
00:15:50.180 | - And there's an interesting idea that, you know,
00:15:52.100 | perhaps you can make it more popular
00:15:53.580 | by opening everything up, sort of the GitHub model,
00:15:58.060 | but there's an interesting, I think I've heard you
00:16:00.200 | discuss this, that that turns out not to work
00:16:03.100 | in a lot of cases, like in this particular case,
00:16:05.640 | that you want it, that when you deeply care about
00:16:09.800 | the integrity, the quality of the knowledge
00:16:14.300 | that you're building, that unfortunately,
00:16:17.580 | you can't distribute that effort.
00:16:20.600 | - Yeah, it's not the nature of how things work.
00:16:24.820 | I mean, you know, what we're trying to do
00:16:27.040 | is a thing that for better or worse,
00:16:29.140 | requires leadership and it requires kind of
00:16:31.860 | maintaining a coherent vision over a long period of time,
00:16:35.480 | and doing not only the cool vision related work,
00:16:39.780 | but also the kind of mundane in the trenches
00:16:42.880 | to make the thing actually work well, work.
00:16:45.180 | - So how do you build the knowledge?
00:16:47.580 | Because that's the fascinating thing.
00:16:49.020 | That's the mundane, the fascinating and the mundane,
00:16:52.100 | is building the knowledge, the adding,
00:16:54.300 | integrating more data.
00:16:55.380 | - Yeah, I mean, that's probably not the most,
00:16:57.500 | I mean, the things like get it to work
00:16:59.660 | in all these different cloud environments and so on.
00:17:02.340 | That's pretty, you know, that's very practical stuff.
00:17:04.780 | You know, have the user interface be smooth
00:17:06.700 | and, you know, have there be, take only, you know,
00:17:09.420 | a fraction of a millisecond to do this or that.
00:17:11.700 | That's a lot of work.
00:17:13.000 | And it's, but, you know, I think my,
00:17:18.000 | it's an interesting thing over the period of time,
00:17:20.240 | you know, Wolfram Language has existed basically
00:17:23.420 | for more than half of the total amount of time
00:17:25.740 | that any language, any computer language has existed.
00:17:28.120 | That is, computer languages are maybe 60 years old,
00:17:31.480 | you know, give or take,
00:17:33.800 | and Wolfram Language is 33 years old.
00:17:36.360 | So it's kind of a, and I think I was realizing recently,
00:17:41.040 | there's been more innovation in the distribution of software
00:17:44.740 | than probably than in the structure
00:17:46.520 | of programming languages over that period of time.
00:17:49.340 | And we, you know, we've been sort of trying
00:17:52.680 | to do our best to adapt to it.
00:17:54.000 | And the good news is that we have, you know,
00:17:56.320 | because I have a simple private company and so on
00:17:59.080 | that doesn't have, you know, a bunch of investors,
00:18:01.560 | you know, telling us we're gonna do this or that,
00:18:04.040 | I have lots of freedom in what we can do.
00:18:05.880 | And so, for example, we're able to, oh, I don't know,
00:18:09.120 | we have this free Wolfram Engine for developers,
00:18:11.260 | which is a free version for developers.
00:18:13.120 | And we've been, you know, we've, there are site licenses
00:18:16.040 | for Mathematica and Wolfram Language
00:18:18.520 | at basically all major universities,
00:18:20.400 | certainly in the US by now.
00:18:22.520 | So it's effectively free to people
00:18:24.560 | and all universities in effect.
00:18:27.420 | And, you know, we've been doing a progression of things.
00:18:31.000 | I mean, different things like Wolfram Alpha, for example,
00:18:33.880 | the main website is just a free website.
00:18:36.720 | - What is Wolfram Alpha?
00:18:38.400 | - Okay, Wolfram Alpha is a system for answering questions
00:18:42.040 | where you ask a question with natural language
00:18:45.640 | and it'll try and generate a report
00:18:47.640 | telling you the answer to that question.
00:18:49.000 | So the question could be something like, you know,
00:18:52.760 | what's the population of Boston divided by
00:18:56.800 | the population of New York?
00:18:57.720 | And it'll take those words and give you an answer.
00:19:01.880 | And that have been--
00:19:02.720 | - Converts the words into computable, into--
00:19:06.520 | - Into Wolfram Language, actually.
00:19:07.800 | - Into Wolfram Language.
00:19:08.800 | - And then computational language
00:19:09.920 | and then computes the answer.
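The same pipeline is also callable from inside Wolfram Language via the WolframAlpha function; the query string here is just an illustration:

    WolframAlpha["population of Boston divided by population of New York"]             (* full Wolfram|Alpha report *)
    WolframAlpha["population of Boston divided by population of New York", "Result"]   (* just the computed result *)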
00:19:10.760 | - Do you think the underlying knowledge
00:19:12.720 | belongs to Wolfram Alpha or to the Wolfram Language?
00:19:15.600 | What's the--
00:19:16.440 | - We just call it the Wolfram Knowledge Base.
00:19:18.000 | - Knowledge Base.
00:19:18.840 | - I mean, that's been a big effort over the decades
00:19:22.520 | to collect all that stuff.
00:19:23.720 | And, you know, more of it flows in every second.
00:19:26.680 | - Can you just pause on that for a second?
00:19:28.560 | Like, that's one of the most incredible things.
00:19:31.640 | Of course, in the long-term,
00:19:33.760 | Wolfram Language itself is the fundamental thing.
00:19:37.560 | But in the amazing sort of short-term,
00:19:40.760 | the knowledge base is kind of incredible.
00:19:43.880 | So what's the process of building that knowledge base?
00:19:47.680 | The fact that you, first of all,
00:19:48.800 | from the very beginning,
00:19:49.720 | that you're brave enough to start
00:19:51.520 | to take on the general knowledge base.
00:19:54.520 | And how do you go from zero
00:19:58.540 | to the incredible knowledge base that you have now?
00:20:01.520 | - Well, yeah, it was kind of scary at some level.
00:20:03.420 | I mean, I had wondered about doing something like this
00:20:05.920 | since I was a kid.
00:20:07.200 | So it wasn't like I hadn't thought about it for a while.
00:20:09.240 | - But most of us,
00:20:10.220 | most of the brilliant dreamers give up
00:20:14.120 | such a difficult engineering notion at some point.
00:20:17.280 | - Right, right.
00:20:18.440 | Well, the thing that happened with me,
00:20:19.900 | which was kind of,
00:20:21.120 | it's a live-your-own-paradigm kind of theory.
00:20:24.880 | So basically what happened is,
00:20:26.800 | I had assumed that to build something like Wolfram Alpha
00:20:30.160 | would require sort of solving the general AI problem.
00:20:33.120 | That's what I had assumed.
00:20:34.720 | And so I kept on thinking about that,
00:20:36.480 | and I thought I don't really know how to do that,
00:20:38.120 | so I don't do anything.
00:20:39.840 | Then I worked on my new kind of science project
00:20:42.600 | and sort of exploring the computational universe
00:20:44.800 | and came up with things like
00:20:45.960 | this principle of computational equivalence,
00:20:47.960 | which says there is no bright line
00:20:50.260 | between the intelligent and the merely computational.
00:20:53.100 | So I thought, look, that's this paradigm I've built.
00:20:56.180 | Now I have to eat that dog food myself, so to speak.
00:21:00.800 | I've been thinking about doing this thing
00:21:02.400 | with computable knowledge forever,
00:21:04.480 | and let me actually try and do it.
00:21:07.120 | And so it was, if my paradigm is right,
00:21:10.640 | then this should be possible.
00:21:12.240 | But the beginning was certainly,
00:21:13.960 | it was a bit daunting.
00:21:14.800 | I remember I took the early team to a big reference library
00:21:19.480 | and we're looking at this reference library,
00:21:21.120 | and it's like, my basic statement is
00:21:23.720 | our goal over the next year or two
00:21:25.480 | is to ingest everything that's in here.
00:21:28.240 | And that's, it seemed very daunting,
00:21:31.520 | but in a sense I was well aware of the fact
00:21:34.120 | that it's finite.
00:21:35.360 | The fact that you can walk into the reference library,
00:21:36.920 | it's a big, big thing with lots of reference books
00:21:39.400 | all over the place, but it is finite.
00:21:41.960 | This is not an infinite,
00:21:43.760 | it's not the infinite corridor of, so to speak,
00:21:46.880 | of reference library.
00:21:47.820 | It's not truly infinite, so to speak.
00:21:49.880 | But no, I mean, and then what happened
00:21:52.760 | was sort of interesting there was,
00:21:54.520 | from a methodology point of view,
00:21:57.280 | was I didn't start off saying,
00:21:59.440 | let me have a grand theory
00:22:00.800 | for how all this knowledge works.
00:22:02.840 | It was like, let's implement this area, this area,
00:22:06.240 | this area, a few hundred areas and so on.
00:22:09.060 | That's a lot of work.
00:22:10.680 | I also found that,
00:22:12.000 | I've been fortunate in that our products
00:22:17.520 | get used by sort of the world's experts in lots of areas.
00:22:22.000 | And so that really helped
00:22:23.560 | 'cause we were able to ask people,
00:22:26.160 | the world expert in this or that,
00:22:27.920 | and we're able to ask them for input and so on.
00:22:30.380 | And I found that my general principle was
00:22:33.540 | that any area where there wasn't some expert
00:22:36.460 | who helped us figure out what to do wouldn't be right.
00:22:40.360 | 'Cause our goal was to kind of get to the point
00:22:42.300 | where we had sort of true expert level knowledge
00:22:45.100 | about everything.
00:22:46.560 | And so that the ultimate goal is
00:22:49.220 | if there's a question that can be answered
00:22:51.220 | on the basis of general knowledge in our civilization,
00:22:53.900 | make it be automatic to be able to answer that question.
00:22:57.000 | And now, well, Wolfram Alpha got used in Siri
00:23:01.060 | from the very beginning and it's now also used in Alexa.
00:23:03.840 | And so it's people are kind of getting more of the,
00:23:07.400 | they get more of the sense of this is
00:23:10.200 | what should be possible to do.
00:23:12.160 | I mean, in a sense, the question answering problem
00:23:15.080 | was viewed as one of the sort of core AI problems
00:23:17.680 | for a long time.
00:23:18.640 | I had kind of an interesting experience.
00:23:21.000 | I had a friend Marvin Minsky,
00:23:23.520 | who was a well-known AI person from right around here.
00:23:28.280 | And I remember when Wolfram Alpha was coming out,
00:23:30.720 | it was a few weeks before it came out, I think.
00:23:34.160 | I happened to see Marvin and I said,
00:23:36.520 | "I should show you this thing we have.
00:23:38.400 | "It's a question answering system."
00:23:40.360 | And he was like, "Okay."
00:23:42.880 | Typed something in, he's like, "Okay, fine."
00:23:45.040 | And then he's talking about something different.
00:23:47.080 | I said, "No, Marvin, this time it actually works.
00:23:51.340 | "Look at this, it actually works."
00:23:52.920 | He types in a few more things.
00:23:54.520 | There's maybe 10 more things.
00:23:56.160 | Of course, we have a record of what he typed in,
00:23:57.840 | which is kind of interesting.
00:23:59.240 | - Can you share where his mind was in the testing space?
00:24:06.920 | - All kinds of random things.
00:24:08.040 | He was just trying random stuff,
00:24:09.720 | medical stuff and chemistry stuff and astronomy and so on.
00:24:14.720 | And it was like, after a few minutes, he was like,
00:24:18.240 | "Oh my God, it actually works."
00:24:22.360 | But that was kind of told you something about the state,
00:24:25.720 | what happened in AI, because people had,
00:24:28.800 | in a sense, by trying to solve the bigger problem,
00:24:31.740 | we were able to actually make something that would work.
00:24:33.560 | Now, to be fair,
00:24:35.400 | we had a bunch of completely unfair advantages.
00:24:37.820 | For example, we had already built a bunch of Wolfram Language,
00:24:40.800 | which was a very high-level symbolic language.
00:24:44.220 | I had the practical experience of building big systems.
00:24:50.840 | I have the sort of intellectual confidence
00:24:53.280 | to not just sort of give up in doing something like this.
00:24:57.000 | I think that the,
00:24:58.120 | it's always a funny thing.
00:25:02.680 | I've worked on a bunch of big projects in my life.
00:25:04.680 | And I would say that the, you mentioned ego,
00:25:09.360 | I would also mention optimism, so to speak.
00:25:11.560 | I mean, if somebody said,
00:25:14.760 | "This project is gonna take 30 years,"
00:25:20.680 | it would be hard to sell me on that.
00:25:23.240 | I'm always in the,
00:25:25.000 | well, I can kind of see a few years,
00:25:27.440 | something's gonna happen in a few years.
00:25:29.680 | And usually it does, something happens in a few years,
00:25:32.040 | but the whole, the tail can be decades long.
00:25:35.200 | And that's, and from a personal point of view,
00:25:38.280 | always the challenge is you end up with these projects
00:25:41.000 | that have infinite tails.
00:25:42.840 | And the question is, do the tails kind of,
00:25:45.880 | do you just drown in kind of dealing
00:25:47.960 | with all of the tails of these projects?
00:25:50.320 | And that's an interesting sort of personal challenge.
00:25:54.560 | And like my efforts now to work
00:25:57.040 | on the fundamental theory of physics,
00:25:58.280 | which I've just started doing,
00:25:59.900 | and I'm having a lot of fun with it,
00:26:02.880 | but it's kind of making a bet that I can kind of,
00:26:07.880 | I can do that as well as doing the incredibly energetic
00:26:13.040 | things that I'm trying to do with Wolfram Language and so on.
00:26:16.000 | I mean, the vision, yeah.
00:26:17.280 | - And underlying that, I mean,
00:26:18.680 | I just talked for the second time with Elon Musk
00:26:21.800 | and that you two share that quality a little bit
00:26:24.280 | of that optimism of taking on basically the daunting,
00:26:29.280 | what most people call impossible.
00:26:32.880 | And he, and you take it on out of, you can call it ego,
00:26:37.440 | you can call it naivety, you can call it optimism,
00:26:39.980 | whatever the heck it is,
00:26:40.960 | but that's how you solve the impossible things.
00:26:43.200 | - Yeah, I mean, look at what happens.
00:26:45.200 | And I don't know, in my own case,
00:26:47.760 | it's been, I progressively got a bit more confident
00:26:52.160 | and progressively able to decide
00:26:55.040 | that these projects aren't crazy.
00:26:56.240 | But then the other thing is,
00:26:57.440 | the other trap that one can end up with is,
00:27:01.200 | oh, I've done these projects and they're big.
00:27:04.000 | Let me never do a project that's any smaller
00:27:05.960 | than any project I've done so far.
00:27:07.760 | (laughing)
00:27:08.600 | And that can be a trap.
00:27:10.960 | And often these projects are of completely unknown,
00:27:15.960 | that their depth and significance
00:27:17.800 | is actually very hard to know.
00:27:19.960 | - On the sort of building this giant knowledge base
00:27:25.240 | that's behind Wolfram Language, Wolfram Alpha,
00:27:28.260 | what do you think about the internet?
00:27:33.520 | What do you think about, for example, Wikipedia,
00:27:38.240 | these large aggregations of text
00:27:40.840 | that's not converted into computable knowledge?
00:27:43.680 | Do you think, if you look at Wolfram Language,
00:27:46.840 | Wolfram Alpha, 20, 30, maybe 50 years down the line,
00:27:51.040 | do you hope to store all of the,
00:27:55.600 | sort of Google's dream is to make
00:27:57.600 | all information searchable, accessible,
00:28:00.720 | but that's really, as defined,
00:28:03.360 | it doesn't include the understanding of information.
00:28:07.800 | - Right.
00:28:08.640 | - Do you hope to make all of knowledge
00:28:12.200 | represented within-- - Sure, I would hope so.
00:28:15.760 | That's what we're trying to do.
00:28:16.920 | - How hard is that problem, like closing that gap?
00:28:19.920 | What's your sense?
00:28:20.760 | - Well, it depends on the use cases.
00:28:21.880 | I mean, so if it's a question
00:28:23.320 | of answering general knowledge questions about the world,
00:28:25.760 | we're in pretty good shape on that right now.
00:28:28.000 | If it's a question of representing
00:28:31.000 | like an area that we're going into right now
00:28:34.320 | is computational contracts,
00:28:36.260 | being able to take something
00:28:38.200 | which would be written in legalese,
00:28:40.320 | it might even be the specifications for,
00:28:42.360 | what should the self-driving car do
00:28:44.000 | when it encounters this or that or the other?
00:28:45.960 | What should the, whatever.
00:28:48.040 | Write that in a computational language
00:28:52.040 | and be able to express things about the world.
00:28:55.040 | If the creature that you see running across the road
00:28:57.680 | is a thing at this point in the tree of life,
00:29:02.360 | then swerve this way, otherwise don't, those kinds of things.
00:29:05.920 | - Are there ethical components
00:29:08.300 | when you start to get to some of the messy human things,
00:29:10.620 | are those encodable into computable knowledge?
00:29:13.700 | - Well, I think that it is a necessary feature
00:29:17.480 | of attempting to automate more in the world
00:29:20.180 | that we encode more and more of ethics
00:29:23.020 | in a way that gets sort of quickly,
00:29:26.300 | you know, is able to be dealt with by computer.
00:29:28.460 | I mean, I've been involved recently,
00:29:30.180 | I sort of got backed into being involved
00:29:32.540 | in the question of automated content selection
00:29:36.020 | on the internet.
00:29:36.860 | So, you know, the Facebooks, Googles, Twitters,
00:29:39.980 | you know, how do they rank the stuff
00:29:42.200 | they feed to us humans, so to speak?
00:29:44.940 | And the question of what are, you know,
00:29:46.940 | what should never be fed to us?
00:29:48.460 | What should be blocked forever?
00:29:49.820 | What should be upranked, you know?
00:29:52.020 | And what are the kind of principles behind that?
00:29:55.100 | And what I kind of, well, a bunch of different things
00:29:58.100 | I realized about that, but one thing that's interesting
00:30:02.260 | is being able, you know, in fact,
00:30:03.740 | you're building sort of an AI ethics,
00:30:06.440 | you have to build an AI ethics module, in effect,
00:30:09.140 | to decide, is this thing so shocking
00:30:11.300 | I'm never gonna show it to people?
00:30:12.940 | Is this thing so whatever?
00:30:15.220 | And I did realize in thinking about that,
00:30:17.540 | that, you know, there's not gonna be one of these things.
00:30:20.180 | It's not possible to decide, or it might be possible,
00:30:23.380 | but it would be really bad for the future of our species
00:30:25.620 | if we just decided there's this one AI ethics module
00:30:29.580 | and it's gonna determine the practices
00:30:32.980 | of everything in the world, so to speak.
00:30:35.100 | And I kind of realized one has to sort of break it up,
00:30:37.100 | and that's an interesting societal problem
00:30:39.700 | of how one does that and how one sort of has people
00:30:43.220 | sort of self-identify for, you know,
00:30:45.340 | I'm buying in in the case of just content selection,
00:30:47.580 | it's sort of easier because it's like an,
00:30:49.940 | it's for an individual, it's not something
00:30:51.860 | that kind of cuts across sort of societal boundaries.
00:30:57.260 | - It's a really interesting notion of,
00:31:00.780 | I heard you describe, I really like it,
00:31:03.500 | sort of maybe in the, sort of have different AI systems
00:31:08.500 | that have a certain kind of brand
00:31:09.980 | that they represent, essentially.
00:31:11.740 | - Right. - You could have like,
00:31:12.860 | I don't know, whether it's conservative or liberal,
00:31:17.780 | and then libertarian, and there's an Ayn Randian objectivist
00:31:22.220 | AI ethics system, and different ethical,
00:31:24.900 | I mean, it's almost encoding some of the ideologies
00:31:28.140 | which we've been struggling with, I come from the Soviet Union,
00:31:31.140 | that didn't work out so well with the ideologies
00:31:33.780 | that worked out there, and so you have,
00:31:36.140 | but they all, everybody purchased
00:31:38.780 | that particular ethics system.
00:31:40.660 | - Indeed. - And in the same,
00:31:42.860 | I suppose, could be done, encoded,
00:31:45.540 | that system could be encoded into computational knowledge,
00:31:50.460 | and allow us to explore in the realm of,
00:31:53.220 | in the digital space, that's a really exciting possibility.
00:31:57.020 | Are you playing with those ideas in Wolfram Language?
00:32:00.380 | - Yeah, yeah, I mean, that's, Wolfram Language
00:32:03.700 | has sort of the best opportunity to kind of express
00:32:06.900 | those essentially computational contracts about what to do.
00:32:09.700 | Now, there's a bunch more work to be done
00:32:11.660 | to do it in practice for deciding,
00:32:15.140 | is this a credible news story, what does that mean,
00:32:17.580 | or whatever else you're gonna pick.
00:32:19.620 | I think that that's, the question of exactly
00:32:24.620 | what we get to do with that is,
00:32:27.460 | for me, it's kind of a complicated thing
00:32:31.260 | because there are these big projects that I think about,
00:32:34.340 | like, find the fundamental theory of physics,
00:32:36.380 | okay, that's box number one, right?
00:32:38.540 | Box number two, solve the AI ethics problem
00:32:41.820 | in the case of, figure out how you rank all content,
00:32:45.140 | so to speak, and decide what people see,
00:32:46.940 | that's kind of a box number two, so to speak.
00:32:49.700 | These are big projects, and I think--
00:32:51.740 | - What do you think is more important,
00:32:53.100 | the fundamental nature of reality, or--
00:32:56.460 | - Depends who you ask, it's one of these things
00:32:58.300 | that's exactly like, what's the ranking, right?
00:33:00.900 | It's the ranking system, it's like,
00:33:03.340 | whose module do you use to rank that?
00:33:05.780 | If you, and I think--
00:33:08.620 | - Having multiple modules is a really compelling notion
00:33:10.980 | to us humans, that in a world where it's not clear
00:33:14.260 | that there's a right answer,
00:33:16.220 | perhaps you have systems that operate under different,
00:33:21.220 | how would you say it, I mean--
00:33:26.100 | - It's different value systems, basically.
00:33:27.500 | - Different value systems.
00:33:28.460 | - I mean, I think, in a sense,
00:33:30.500 | I mean, I'm not really a politics-oriented person,
00:33:34.620 | but in the kind of totalitarianism,
00:33:37.380 | it's kind of like, you're gonna have this system,
00:33:40.740 | and that's the way it is.
00:33:42.260 | I mean, kind of the concept
00:33:44.540 | of sort of a market-based system where you have,
00:33:47.860 | okay, I as a human, I'm gonna pick this system,
00:33:50.700 | I as another human, I'm gonna pick this system.
00:33:53.060 | I mean, that's, in a sense,
00:33:54.980 | this case of automated content selection is a non-trivial,
00:33:59.980 | but it is probably the easiest of the AI ethics situations,
00:34:03.460 | because it is, each person gets to pick for themselves,
00:34:06.180 | and there's not a huge interplay
00:34:08.580 | between what different people pick.
00:34:10.420 | By the time you're dealing with other societal things,
00:34:13.820 | like what should the policy
00:34:15.780 | of the central bank be or something.
00:34:17.620 | - Or healthcare systems,
00:34:18.460 | and all those kind of centralized kind of things.
00:34:20.900 | - Right, well, I mean, healthcare, again,
00:34:22.500 | has the feature that at some level,
00:34:24.820 | each person can pick for themselves, so to speak.
00:34:27.100 | I mean, whereas there are other things
00:34:28.700 | where there's a necessary,
00:34:29.980 | public health is one example,
00:34:31.700 | where that's not, where that doesn't get to be
00:34:35.300 | something which people can, what they pick for themselves,
00:34:38.420 | they may impose on other people,
00:34:39.940 | and then it becomes a more non-trivial piece
00:34:41.980 | of sort of political philosophy.
00:34:43.540 | - Of course, the central banking systems,
00:34:45.020 | I would argue, we would move,
00:34:46.420 | we need to move away into digital currency and so on,
00:34:48.900 | and Bitcoin and ledgers and so on.
00:34:51.540 | So there's a lot of-
00:34:53.100 | - We've been quite involved in that.
00:34:54.460 | And that's where, that's sort of the motivation
00:34:56.780 | for computational contracts,
00:34:58.580 | in part, comes out of this idea,
00:35:01.700 | oh, we can just have this autonomously
00:35:03.220 | executing smart contract.
00:35:05.540 | The idea of a computational contract is just to say,
00:35:08.580 | have something where all of the conditions
00:35:12.620 | of the contract are represented in computational form.
00:35:15.100 | So in principle, it's automatic to execute the contract.
00:35:18.900 | And I think that's, that will surely be the future
00:35:22.620 | of the idea of legal contracts written in English
00:35:25.660 | or legalese or whatever,
00:35:27.260 | and where people have to argue about what goes on
00:35:30.700 | is surely not,
00:35:33.380 | we have a much more streamlined process
00:35:36.780 | if everything can be represented computationally
00:35:38.780 | and the computers can kind of decide what to do.
00:35:40.740 | I mean, ironically enough,
00:35:42.780 | old Gottfried Leibniz back in the 1600s
00:35:46.620 | was saying exactly the same thing,
00:35:48.780 | but he had, his pinnacle of technical achievement
00:35:52.220 | was this brass four-function mechanical calculator thing
00:35:56.020 | that never really worked properly, actually.
00:35:58.620 | And so he was like 300 years too early for that idea.
00:36:02.740 | But now that idea is pretty realistic, I think.
00:36:06.140 | And you ask how much more difficult is it
00:36:08.820 | than what we have now in Wolfram language to express,
00:36:11.580 | I call it symbolic discourse language,
00:36:13.980 | being able to express sort of everything in the world
00:36:16.700 | in kind of computational symbolic form.
00:36:19.100 | I think it is absolutely within reach.
00:36:22.700 | I mean, I think it's a, you know, I don't know,
00:36:24.540 | maybe I'm just too much of an optimist,
00:36:25.940 | but I think it's a limited number of years
00:36:28.580 | to have a pretty well-built out version of that
00:36:31.140 | that will allow one to encode the kinds of things
00:36:33.300 | that are relevant to typical legal contracts
00:36:37.580 | and these kinds of things.
00:36:39.060 | - The idea of symbolic discourse language,
00:36:43.020 | can you try to define the scope of what it is?
00:36:48.020 | - So we're having a conversation, it's a natural language.
00:36:52.540 | Can we have a representation of the sort of actionable parts
00:36:56.740 | of that conversation in a precise computable form
00:37:00.820 | so that a computer could go do it?
00:37:02.460 | - And not just contracts, but really sort of
00:37:04.780 | some of the things we think of as common sense, essentially,
00:37:07.700 | even just like basic notions of human life.
00:37:11.460 | - Well, I mean, things like, you know,
00:37:13.420 | I'm getting hungry and want to eat something, right?
00:37:17.620 | That's something we don't have a representation,
00:37:19.780 | you know, in Wolfram language right now,
00:37:21.580 | if I was like, I'm eating blueberries and raspberries
00:37:23.740 | and things like that, and I'm eating this amount of them,
00:37:25.980 | we know all about those kinds of fruits and plants
00:37:28.500 | and nutrition content and all that kind of thing,
00:37:30.660 | but the I want to eat them part of it is not covered yet.
00:37:34.500 | - And you need to do that in order to have
00:37:38.100 | a complete symbolic discourse language,
00:37:40.260 | to be able to have a natural language conversation.
00:37:42.820 | - Right, right, to be able to express the kinds of things
00:37:45.700 | that say, you know, if it's a legal contract,
00:37:48.540 | it's, you know, the party's desire to have this and that.
00:37:52.180 | And that's, you know, that's a thing like,
00:37:54.140 | I want to eat a raspberry or something.
00:37:55.900 | - But isn't that, isn't this, just to let you know,
00:37:58.980 | you said it's centuries old, this dream.
00:38:02.260 | - Yes.
00:38:03.860 | - But it's also the more near term,
00:38:06.540 | the dream of Turing and formulating the Turing test.
00:38:10.500 | - Yes.
00:38:11.340 | - So, do you hope, do you think that's the ultimate test
00:38:17.500 | of creating something special?
00:38:22.500 | 'Cause we said--
00:38:23.580 | - I don't know, I think by special,
00:38:25.900 | look, if the test is, does it walk and talk like a human?
00:38:30.180 | Well, that's just the talking like a human.
00:38:32.580 | But the answer is, it's an okay test.
00:38:36.500 | If you say, is it a test of intelligence?
00:38:39.380 | You know, people have attached the Wolfram Alpha API
00:38:42.660 | to, you know, Turing test bots.
00:38:45.020 | And those bots just lose immediately.
00:38:47.220 | 'Cause all you have to do is ask it five questions
00:38:49.740 | that, you know, are about really obscure,
00:38:52.060 | weird pieces of knowledge,
00:38:53.140 | and it'll just trot them right out.
00:38:55.060 | And you say, that's not a human.
00:38:56.900 | Right, it's a different thing.
00:38:58.620 | It's achieving a different--
00:39:00.460 | - Right now, but it's, I would argue not.
00:39:03.460 | I would argue it's not a different thing.
00:39:05.660 | It's actually legitimately, Wolfram Alpha is legitimately,
00:39:10.660 | or Wolfram Language, I think,
00:39:12.700 | is legitimately trying to solve the Turing,
00:39:14.660 | the intent of the Turing test.
00:39:16.620 | - Perhaps the intent.
00:39:17.860 | Yeah, perhaps the intent.
00:39:18.700 | I mean, it's actually kind of fun.
00:39:20.140 | You know, Alan Turing had tried to work out,
00:39:22.420 | he thought about taking Encyclopedia Britannica
00:39:25.340 | and, you know, making it computational in some way.
00:39:27.940 | And he estimated how much work it would be.
00:39:30.380 | And actually, I have to say,
00:39:31.740 | he was a bit more pessimistic than the reality.
00:39:34.020 | We did it more efficiently than that.
00:39:35.580 | - But to him, that represented--
00:39:37.220 | - So, I mean, he was on the same--
00:39:39.100 | - It's a mighty mental task.
00:39:40.420 | - Yeah, right, he had the same idea.
00:39:42.460 | I mean, it was, you know,
00:39:43.820 | we were able to do it more efficiently
00:39:45.340 | 'cause we had a lot, we had layers of automation
00:39:48.220 | that he, I think, hadn't, you know,
00:39:50.820 | it's hard to imagine those layers of abstraction
00:39:53.940 | that end up being built up.
00:39:55.700 | - But to him, it represented, like,
00:39:57.340 | an impossible task, essentially.
00:39:59.100 | - Well, he thought it was difficult.
00:40:00.420 | He thought it was, you know,
00:40:01.500 | maybe if he'd lived another 50 years,
00:40:02.860 | he would've been able to do it.
00:40:03.780 | I don't know.
00:40:04.620 | (upbeat music)