What is Wolfram Language? (Stephen Wolfram) | AI Podcast Clips


Chapters

0:00 Intro
1:00 Symbolic language
3:45 Random sample
7:50 History of AI
11:30 The Dream of Machine Learning
13:45 The State of Wolfram Language
19:02 Wolfram Knowledge Base
26:16 Optimism
27:20 The Internet
30:57 Encoding ideologies
33:08 Different value systems

Transcript

What is Wolfram Language in terms of, sort of, I mean, I can answer the question for you, but, not the philosophical, deep, profound impact of it; I'm talking about in terms of tools, in terms of things you can download, in terms of stuff you can play with. What is it?

Where does it fit into the infrastructure? What are the different ways to interact with it? - Right, so the two big things that people have perhaps heard of that come from Wolfram Language: one is Mathematica, the other is Wolfram Alpha. So Mathematica first came out in 1988. It's this system that is basically an instance of Wolfram Language, and it's used to do computations, particularly in sort of technical areas, and the typical thing you're doing is typing little pieces of computational language and getting computations done.

- It's very kind of, there's like a symbolic, yeah, it's a symbolic language. - It's a symbolic language, so I mean, I don't know how to cleanly express that, but that makes it very distinct from how we think about sort of, I don't know, programming in a language like Python or something.

- Right, so the point is that in a traditional programming language, the raw material of the programming language is just stuff that computers intrinsically do, and the point of Wolfram Language is that what the language is talking about is things that exist in the world or things that we can imagine and construct. It's aimed to be an abstract language from the beginning, and so, for example, one feature it has is that it's a symbolic language, which means that if you have a thing called X, just type in X, and Wolfram Language will just say, oh, that's X.

It won't say, error: undefined thing, I don't know what it is, can't do a computation with it in terms of the internals of the computer. Now, that X could perfectly well be the city of Boston. That's a thing, that's a symbolic thing. Or it could perfectly well be the trajectory of some spacecraft, represented as a symbolic thing. And that idea that one can computationally work with these kinds of things that exist in the world or describe the world, that's really powerful, and that's what, I mean, when I designed the predecessor of what's now Wolfram Language, which is a thing called SMP, which was my first computer language, I kind of wanted to have this sort of infrastructure for computation which was as fundamental as possible.

I mean, this is what I got for having been a physicist and tried to find fundamental components of things and wound up with this kind of idea of transformation rules for symbolic expressions as being sort of the underlying stuff from which computation would be built, and that's what we've been building from in Wolfram language and operationally what happens, it's, I would say, by far the highest level computer language that exists, and it's really been built in a very different direction from other languages.

So other languages have been about, there is a core language. It really is kind of wrapped around the operations that a computer intrinsically does. Maybe people add libraries for this or that, but the goal of Wolfram language is to have the language itself be able to cover this sort of very broad range of things that show up in the world, and that means that there are 6,000 primitive functions in the Wolfram language that cover things.

I could probably pick at random here. I'm gonna pick, just for fun, let's take a random sample of all the things that we have here. So let's just say RandomSample of 10 of them, and let's see what we get. Wow, okay, so these are really different things from- - Yeah, these are all functions.

- These are all functions. BooleanConvert, okay, that's a thing for converting between different types of Boolean expressions. - So for people just listening, Stephen typed in a random sample of names, so this is sampling from all functions. How many did you say there might be? - 6,000. - 6,000. From 6,000, 10 of them, and there's a hilarious variety of them.

- Yeah, right, well, we've got things like $RequesterAddress, that has to do with interacting with the world of the cloud and so on, DiscreteWaveletData, Spheroid- - It's also graphical, sort of window- - Yeah, yeah, WindowMovable, that's a user interface kind of thing. I want to pick another 10, 'cause I think this is some, okay, so yeah, there's a lot of infrastructure stuff here that you see if you just start sampling at random, a lot of kind of infrastructural things.
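The sampling demo above can be sketched in Wolfram Language; this is a hedged sketch, and the sampled names will of course differ on every run:

```wolfram
(* All built-in system symbols; RandomSample picks 10 of them at random *)
RandomSample[Names["System`*"], 10]

(* How many built-in names there are, the "about 6,000" mentioned above *)
Length[Names["System`*"]]
```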

If you more look at the- - Some of the exciting machine learning stuff you showed off, is that also in this pool? - Oh yeah, yeah, I mean, so one of those functions is ImageIdentify. As a function here, we just say ImageIdentify, I don't know, it's always good to, let's do this, let's say CurrentImage, and let's pick up an image, hopefully.

- So CurrentImage, accessing the webcam, took a picture of yourself. - Took a terrible picture, but anyway, we can say ImageIdentify, open square brackets, and then we just paste that picture in there. - The ImageIdentify function running on the picture. - Oh, and it says, oh wow, it says I look like a plunger, because I got this great big thing behind my head.

- Classify, so ImageIdentify classifies the most likely object in the image, and it says it's a plunger. - Okay, that's a bit embarrassing. Let's see what it does, let's pick the top 10. Okay, well, it thinks there's a, oh, it thinks it's pretty unlikely that it's a primate, a hominid, a person.

- 8% probability. - Yeah, that's bad. - Primate 8%, 57% it's a plunger. - Yeah, well, so. - That hopefully will not give you an existential crisis. And then 8%, or I shouldn't say percent, but-- - No, that's right, 8% that it's a hominid. And yeah, okay, I'm gonna do another one of these, just 'cause I'm embarrassed that, oops, it didn't see me at all.

There we go, let's try that, let's see what that did. - We took a picture with a little bit more of your-- - A little bit more of me, and not just my bald head, so to speak. Okay, 89% probability it's a person, so then I would, but, you know, so this is image identify as an example of one-- - Of just one of them.
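The webcam exchange above, sketched in Wolfram Language. A hedged sketch: CurrentImage needs a camera, and the identifications and probabilities will vary with the picture and the version of the classifier.

```wolfram
img = CurrentImage[];  (* capture a frame from the webcam *)

ImageIdentify[img]  (* most likely object in the image *)

(* Top 10 candidate identifications, with their probabilities *)
ImageIdentify[img, All, 10, "Probability"]
```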

- Just one function out of 6,000. - And that's part of the, that's like part of the language. - That's part of the core language, yes. - That's part of the core language. - And I mean, you know, something like, I could say, I don't know, let's find the GeoNearest, what could we find?

Let's find the nearest volcano. Let's find the 10, I wonder where it thinks here is. Let's try finding the 10 volcanoes nearest here, okay? - So GeoNearest, volcano, here: the 10 nearest volcanoes. - Right, let's find out where those are. We got a list of volcanoes out, and I can say GeoListPlot that, and hopefully, oh, okay, so there we go.

So there's a map that shows the positions of those 10 volcanoes. - Of the East Coast and the Midwest, and it's the, well, no, we're okay, we're okay, it's not too bad. - Yeah, they're not very close to us. We could measure how far away they are. But, you know, the fact that right in the language, it knows about all the volcanoes in the world, it knows, you know, computing what the nearest ones are, it knows all the maps of the world, and so on.
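The volcano lookup above is two function calls; a hedged Wolfram Language sketch (Here is resolved from your location, so the ten volcanoes you get depend on where you run it):

```wolfram
(* The 10 volcanoes nearest the current location *)
volcanoes = GeoNearest["Volcano", Here, 10];

(* Plot them on a map, with labels *)
GeoListPlot[volcanoes, GeoLabels -> True]
```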

- It's a fundamentally different idea of what a language is. - Yeah, right, that's why I like to talk about it as a full-scale computational language. That's what we've tried to do. - And just if you can comment briefly, I mean, this kind of, the Wolfram language, along with Wolfram Alpha, represents kind of what the dream of what AI is supposed to be.

There's now sort of a craze of deep learning, the kind of idea that we can take raw data and from that extract the different hierarchies of abstractions, in order to form the kind of things that Wolfram Language operates with. But we're very far from learning systems being able to form that.

In the context of the history of AI, if you could just comment on it: there is, as you've said, computation X. And there's just some sense where, in the '80s and '90s, sort of, expert systems represented a very particular computation X. - Yes. - And there's a kind of notion that those efforts didn't pan out.

- Right. - But then out of that emerges kind of Wolfram language, Wolfram Alpha, which is the success, I mean. - Yeah, right. I think those are, in some sense, those efforts were too modest. - Right, exactly. - They were looking at particular areas, and you actually can't do it with a particular area.

I mean, like even a problem like natural language understanding, it's critical to have broad knowledge of the world if you want to do good natural language understanding. And you kind of have to bite off the whole problem. If you say we're just gonna do the blocks world over here, so to speak, you don't really, it's actually, it's one of these cases where it's easier to do the whole thing than it is to do some piece of it.

You know, one comment to make about the relationship between what we've tried to do and sort of the learning side of AI: in a sense, if you look at the development of knowledge in our civilization as a whole, there was kind of this notion, pre 300 years ago or so: if you want to figure something out about the world, you can reason it out.

You can do things which are just use raw human thought. And then along came sort of modern mathematical science. And we found ways to just sort of blast through that by in that case, writing down equations. Now we also know we can do that with computation and so on.

And so that was kind of a different thing. So when we look at how do we sort of encode knowledge and figure things out, one way we could do it is start from scratch, learn everything, it's just a neural net figuring everything out. But in a sense that denies the sort of knowledge based achievements of our civilization.

Because in our civilization, we have learned lots of stuff. We've surveyed all the volcanoes in the world. We've, you know, figured out lots of algorithms for this or that. Those are things that we can encode computationally, and that's what we've tried to do. We're saying you don't have to start everything from scratch.

So in a sense, a big part of what we've done is to try and sort of capture the knowledge of the world in computational form and computable form. Now, there are also some pieces which were for a long time undoable by computers, like image identification: a really, really useful module that we can add, things which were actually pretty easy for humans to do but had been hard for computers to do.

I think the thing that's interesting that's emerging now is the interplay between these things, between this kind of knowledge of the world that is in a sense very symbolic and this kind of sort of much more statistical kind of things like image identification and so on. And putting those together by having this sort of symbolic representation of image identification, that that's where things get really interesting and where you can kind of symbolically represent patterns of things and images and so on.

I think that's kind of a part of the path forward, so to speak. - Yeah, so the dream of, so machine learning is not, in my view, and I think in the view of many people, anywhere close to building the kind of wide world of computable knowledge that Wolfram Language has built.

But because you've done the incredibly hard work of building this world, now machine learning can serve as a tool to help you explore that world. - Yeah, yeah. - And that's what you've added with version 12, right? You added a few, I was seeing some demos.

It looks amazing. - Right, I mean, I think this, it's sort of interesting to see the sort of the, once it's computable, once it's in there, it's running in sort of a very efficient computational way, but then there's sort of things like the interface of how do you get there?

How do you do natural language understanding to get there? How do you pick out entities in a big piece of text or something? That's, I mean, actually a good example right now is our NLP, NLU loop, which is, we've done a lot of stuff, natural language understanding, using essentially not learning-based methods, using a lot of little algorithmic methods, human curation methods, and so on.

- Which is when people try to enter a query and then converting, so the process of converting: NLU defined beautifully as converting their query into a computational language. Which is, first of all, a super practical definition, a very useful definition, and then also a very clear definition of natural language understanding.

- Right, I mean, a different thing is natural language processing, where it's like, here's a big lump of text, go pick out all the cities in that text, for example. And so a good example of, you know, so we do that. We're using modern machine learning techniques. And it's actually kind of an interesting process that's going on right now, is this loop between what do we pick up with NLP, we're using machine learning, versus what do we pick up with our more kind of precise computational methods in natural language understanding.
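The NLP task Wolfram describes, pick out all the cities in a lump of text, is one function call in Wolfram Language. A hedged sketch; the entity recognizer is statistical, so results can vary:

```wolfram
(* Extract city mentions from free text with the built-in, machine-learned entity recognizer *)
TextCases["I flew from Boston to San Francisco, with a stop in Chicago.", "City"]
```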

And so we've got this kind of loop going between those, which is improving both of them. - Yeah, and I think you have some of the state-of-the-art transformers, like you have BERT in there, I think. - Oh, yeah. - So it's cool; you're integrating all the models.

I mean, this is the hybrid thing that people have always dreamed about or talked about. It makes me just surprised, frankly, that Wolfram Language is not more popular than it already is. - You know, that's a, it's a complicated issue, because it involves, you know, it involves ideas, and ideas are absorbed slowly in the world.

I mean, I think that's-- - And then there's sort of, like what we're talking about, there's egos and personalities, and some of the absorption mechanisms of ideas have to do with personalities, and the students of those personalities, and then a little social network. So it's interesting how the spread of ideas works.

- You know what's funny with Wolfram Language is that we are, if you say, you know, what market, sort of market penetration, if you look at the, I would say, very high end of R&D and sort of the people where you say, "Wow, that's a really, you know, impressive, smart person," they're very often users of Wolfram Language, very, very often.

If you look at the more sort of, it's a funny thing, if you look at the more kind of, I would say, people who are like, "Oh, we're just plodding away doing what we do," they're often not yet Wolfram Language users, and that dynamic, it's kind of odd that there hasn't been more rapid trickle-down, because at the high end we've really been very successful for a long time. And that's partly, I think, a consequence of, it's my fault in a sense, because I have a company which really emphasizes creating products and building the best possible technical tower we can, rather than doing the commercial side of things and pumping it out in the most effective way.

- And there's an interesting idea that, you know, perhaps you can make it more popular by opening everything up, sort of the GitHub model, but there's an interesting, I think I've heard you discuss this, that that turns out not to work in a lot of cases, like in this particular case, that you want it, that when you deeply care about the integrity, the quality of the knowledge that you're building, that unfortunately, you can't distribute that effort.

- Yeah, it's not the nature of how things work. I mean, you know, what we're trying to do is a thing that, for better or worse, requires leadership, and it requires maintaining a coherent vision over a long period of time, and doing not only the cool vision-related work, but also the kind of mundane, in-the-trenches work to make the thing actually work well.

- So how do you build the knowledge? Because that's the fascinating thing, the fascinating and the mundane: building the knowledge, the adding, integrating more data. - Yeah, I mean, that's probably not the most, I mean, there are things like getting it to work in all these different cloud environments and so on.

That's pretty, you know, that's very practical stuff. You know, have the user interface be smooth, and, you know, have it take only a fraction of a millisecond to do this or that. That's a lot of work. But, you know, it's an interesting thing over this period of time: Wolfram Language has existed basically for more than half of the total amount of time that any computer language has existed.

That is, computer languages are maybe 60 years old, you know, give or take, and Wolfram Language is 33 years old. And I think I was realizing recently, there's been more innovation in the distribution of software than probably in the structure of programming languages over that period of time.

And we, you know, we've been sort of trying to do our best to adapt to it. And the good news is that we have, you know, because I have a simple private company and so on that doesn't have, you know, a bunch of investors, you know, telling us we're gonna do this or that, I have lots of freedom in what we can do.

And so, for example, we're able to, oh, I don't know, we have this free Wolfram Engine for developers, which is a free version for developers. And we've been, you know, we've, there are site licenses for Mathematica and Wolfram Language at basically all major universities, certainly in the US by now.

So it's effectively free to people and all universities in effect. And, you know, we've been doing a progression of things. I mean, different things like Wolfram Alpha, for example, the main website is just a free website. - What is Wolfram Alpha? - Okay, Wolfram Alpha is a system for answering questions where you ask a question with natural language and it'll try and generate a report telling you the answer to that question.

So the question could be something like, you know, what's the population of Boston divided by the population of New York? And it'll take those words and give you an answer. And that has been-- - Converts the words into computable-- - Into Wolfram Language, actually. - Into Wolfram Language.

- And then, into computational language, and then it computes the answer. - Do you think the underlying knowledge belongs to Wolfram Alpha or to Wolfram Language? What's the-- - We just call it the Wolfram Knowledge Base. - Knowledge Base. - I mean, that's been a big effort over the decades to collect all that stuff.
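That words-to-language-to-answer pipeline is callable from inside Wolfram Language; a hedged sketch (WolframAlpha[] and SemanticInterpretation both talk to Wolfram's servers, and the exact form of the results can vary):

```wolfram
(* Ask Wolfram|Alpha directly from the language *)
WolframAlpha["population of Boston divided by population of New York"]

(* Or just the natural-language-understanding step: words in, Wolfram Language out.
   The result is something like EntityValue[Entity["City", ...], "Population"] *)
SemanticInterpretation["population of Boston"]
```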

And, you know, more of it flows in every second. - Can you just pause on that for a second? Like, that's one of the most incredible things. Of course, in the long-term, Wolfram Language itself is the fundamental thing. But in the amazing sort of short-term, the knowledge base is kind of incredible.

So what's the process of building that knowledge base? The fact that you, first of all, from the very beginning, that you're brave enough to start to take on the general knowledge base. And how do you go from zero to the incredible knowledge base that you have now? - Well, yeah, it was kind of scary at some level.

I mean, I had wondered about doing something like this since I was a kid. So it wasn't like I hadn't thought about it for a while. - But most of us, most of the brilliant dreamers give up such a difficult engineering notion at some point. - Right, right. Well, the thing that happened with me, which was kind of, it's a live-your-own-paradigm kind of theory.

So basically what happened is, I had assumed that to build something like Wolfram Alpha would require sort of solving the general AI problem. That's what I had assumed. And so I kept on thinking about that, and I thought I don't really know how to do that, so I don't do anything.

Then I worked on my New Kind of Science project, sort of exploring the computational universe, and came up with things like this principle of computational equivalence, which says there is no bright line between the intelligent and the merely computational. So I thought, look, that's this paradigm I've built.

Now I have to eat that dog food myself, so to speak. I've been thinking about doing this thing with computable knowledge forever, and let me actually try and do it. And so it was, if my paradigm is right, then this should be possible. But the beginning was certainly, it was a bit daunting.

I remember I took the early team to a big reference library and we're looking at this reference library, and it's like, my basic statement is our goal over the next year or two is to ingest everything that's in here. And that's, it seemed very daunting, but in a sense I was well aware of the fact that it's finite.

The fact that you can walk into the reference library, it's a big, big thing with lots of reference books all over the place, but it is finite. This is not an infinite, it's not the infinite corridor of, so to speak, of reference library. It's not truly infinite, so to speak.

And then what happened, which was sort of interesting from a methodology point of view, was I didn't start off saying, let me have a grand theory for how all this knowledge works. It was like, let's implement this area, this area, this area, a few hundred areas and so on.

That's a lot of work. I also found that, I've been fortunate in that our products get used by sort of the world's experts in lots of areas. And so that really helped 'cause we were able to ask people, the world expert in this or that, and we're able to ask them for input and so on.

And I found that my general principle was that any area where there wasn't some expert who helped us figure out what to do wouldn't be right. 'Cause our goal was to kind of get to the point where we had sort of true expert level knowledge about everything. And so that the ultimate goal is if there's a question that can be answered on the basis of general knowledge in our civilization, make it be automatic to be able to answer that question.

And now, well, Wolfram Alpha got used in Siri from the very beginning, and it's now also used in Alexa. And so people are kind of getting more of the sense of what should be possible to do. I mean, in a sense, the question-answering problem was viewed as one of the sort of core AI problems for a long time.

I had kind of an interesting experience. I had a friend, Marvin Minsky, who was a well-known AI person from right around here. And I remember when Wolfram Alpha was coming out, it was a few weeks before it came out, I think, I happened to see Marvin, and I said, "I should show you this thing we have.

"It's a question answering system." And he was like, "Okay." Typed something in, he's like, "Okay, fine." And then he's talking about something different. I said, "No, Marvin, this time it actually works. "Look at this, it actually works." He types in a few more things. There's maybe 10 more things.

Of course, we have a record of what he typed in, which is kind of interesting. - Can you share where his mind was in the testing space? - All kinds of random things. He was just trying random stuff, medical stuff and chemistry stuff and astronomy and so on. And it was like, after a few minutes, he was like, "Oh my God, it actually works." But that kind of told you something about the state of what had happened in AI, because, in a sense, by trying to solve the bigger problem, we were able to actually make something that would work.

Now, to be fair, we had a bunch of completely unfair advantages. For example, we had already built a bunch of Wolfram Language, which was a very high-level symbolic language. I had the practical experience of building big systems. I had the sort of intellectual confidence to not just give up on doing something like this.

I think that the, it's always a funny thing. I've worked on a bunch of big projects in my life. And I would say that the, you mentioned ego, I would also mention optimism, so to speak. I mean, if somebody said, "This project is gonna take 30 years," it would be hard to sell me on that.

I'm always in the, well, I can kind of see a few years, something's gonna happen in a few years. And usually it does, something happens in a few years, but the whole, the tail can be decades long. And that's, and from a personal point of view, always the challenge is you end up with these projects that have infinite tails.

And the question is, do the tails kind of, do you just drown in dealing with all of the tails of these projects? And that's an interesting sort of personal challenge. And like my efforts now to work on the fundamental theory of physics, which I've just started doing, and I'm having a lot of fun with it, but it's kind of making a bet that I can do that as well as doing the incredibly energetic things that I'm trying to do with Wolfram Language and so on.

I mean, the vision, yeah. - And underlying that, I mean, I just talked for the second time with Elon Musk and that you two share that quality a little bit of that optimism of taking on basically the daunting, what most people call impossible. And he, and you take it on out of, you can call it ego, you can call it naivety, you can call it optimism, whatever the heck it is, but that's how you solve the impossible things.

- Yeah, I mean, look at what happens. And I don't know, in my own case, it's been, I progressively got a bit more confident and progressively able to decide that these projects aren't crazy. But then the other thing is, the other trap that one can end up with is, oh, I've done these projects and they're big.

Let me never do a project that's any smaller than any project I've done so far. (laughing) And that can be a trap. And often these projects are of completely unknown, that their depth and significance is actually very hard to know. - On the sort of building this giant knowledge base that's behind Wolfram Language, Wolfram Alpha, what do you think about the internet?

What do you think about, for example, Wikipedia, these large aggregations of text that's not converted into computable knowledge? Do you think, if you look at Wolfram Language, Wolfram Alpha, 20, 30, maybe 50 years down the line, do you hope to store all of the, sort of Google's dream is to make all information searchable, accessible, but that's really, as defined, it doesn't include the understanding of information.

- Right. - Do you hope to make all of knowledge represented within-- - Sure, I would hope so. That's what we're trying to do. - How hard is that problem, like closing that gap? What's your sense? - Well, it depends on the use cases. I mean, so if it's a question of answering general knowledge questions about the world, we're in pretty good shape on that right now.

If it's a question of representing like an area that we're going into right now is computational contracts, being able to take something which would be written in legalese, it might even be the specifications for, what should the self-driving car do when it encounters this or that or the other?

What should the, whatever. Write that in a computational language and be able to express things about the world. If the creature that you see running across the road is a thing at this point in the tree of life, then swerve this way, otherwise don't, those kinds of things. - Are there ethical components when you start to get to some of the messy human things, are those encodable into computable knowledge?
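The swerve example reads like one of these computational contracts. A heavily hedged Wolfram Language sketch; taxonGroup and the group names here are illustrative placeholders, not a real API and certainly not a real driving policy:

```wolfram
(* Hypothetical rule: what the car does based on where the creature sits in the tree of life *)
swervePolicy[creature_] :=
  Which[
    taxonGroup[creature] === "Mammal", "Swerve",
    taxonGroup[creature] === "Insect", "Continue",
    True, "Brake"  (* default when the creature can't be identified *)
  ]
```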

- Well, I think it is a necessary feature of attempting to automate more in the world that we encode more and more of ethics in a way that can, you know, quickly be dealt with by computer. I mean, I've been involved recently, I sort of got backed into being involved, in the question of automated content selection on the internet.

So, you know, the Facebooks, Googles, Twitters, you know, how do they rank the stuff they feed to us humans, so to speak? And the question of what are, you know, what should never be fed to us? What should be blocked forever? What should be upranked, you know? And what are the kind of principles behind that?

And what I, well, a bunch of different things I realized about that, but one thing that's interesting is that, in effect, you have to build an AI ethics module to decide: is this thing so shocking I'm never gonna show it to people?

Is this thing so whatever? And I did realize in thinking about that, that, you know, there's not gonna be one of these things. It's not possible to decide, or it might be possible, but it would be really bad for the future of our species if we just decided there's this one AI ethics module and it's gonna determine the practices of everything in the world, so to speak.

And I kind of realized one has to sort of break it up, and that's an interesting societal problem of how one does that, and how one sort of has people self-identify for, you know, "I'm buying in." In the case of just content selection, it's sort of easier, because it's for an individual; it's not something that kind of cuts across sort of societal boundaries.

- It's a really interesting notion, I heard you describe it, I really like it: sort of, maybe, have different AI systems that have a certain kind of brand that they represent, essentially. - Right. - You could have, like, I don't know, whether it's a conservative or a liberal, and then a libertarian, and there's an Ayn Randian objectivist AI ethics system, and different ethical, I mean, it's almost encoding some of the ideologies which we've been struggling with. I come from the Soviet Union; that didn't work out so well with the ideologies there, and there everybody, in effect, purchased that particular ethics system.

- Indeed. - And in the same, I suppose, could be done, encoded, that system could be encoded into computational knowledge, and allow us to explore in the realm of, in the digital space, that's a really exciting possibility. Are you playing with those ideas in Wolfram Language? - Yeah, yeah, I mean, that's, Wolfram Language has sort of the best opportunity to kind of express those essentially computational contracts about what to do.

Now, there's a bunch more work to be done to do it in practice for deciding, is this a credible news story, what does that mean, or whatever else you're gonna pick. I think that that's, the question of exactly what we get to do with that is, for me, it's kind of a complicated thing because there are these big projects that I think about, like, find the fundamental theory of physics, okay, that's box number one, right?

Box number two, solve the AI ethics problem in the case of, figure out how you rank all content, so to speak, and decide what people see, that's kind of a box number two, so to speak. These are big projects, and I think-- - What do you think is more important, the fundamental nature of reality, or-- - Depends who you ask, it's one of these things that's exactly like, what's the ranking, right?

It's the ranking system, it's like, whose module do you use to rank that? If you, and I think-- - Having multiple modules is a really compelling notion to us humans, that in a world where it's not clear that there's a right answer, perhaps you have systems that operate under different, how would you say it, I mean-- - It's different value systems, basically.

- Different value systems. - I mean, I think, in a sense, I mean, I'm not really a politics-oriented person, but in the kind of totalitarianism, it's kind of like, you're gonna have this system, and that's the way it is. I mean, kind of the concept of sort of a market-based system where you have, okay, I as a human, I'm gonna pick this system, I as another human, I'm gonna pick this system.

I mean, in a sense, this case of automated content selection is non-trivial, but it is probably the easiest of the AI ethics situations, because each person gets to pick for themselves, and there's not a huge interplay between what different people pick. By the time you're dealing with other societal things, like what should the policy of the central bank be or something--

- Or healthcare systems, and all those kind of centralized kind of things. - Right, well, I mean, healthcare, again, has the feature that at some level, each person can pick for themselves, so to speak. Whereas there are other things, public health is one example, where it doesn't get to be something people pick purely for themselves; what they pick for themselves, they may impose on other people, and then it becomes a more non-trivial piece of sort of political philosophy.

- Of course, the central banking systems, I would argue, we need to move away from, into digital currency and so on, Bitcoin and ledgers and so on. So there's a lot of-- - We've been quite involved in that. And that's where the motivation for computational contracts, in part, comes out of this idea: oh, we can just have this autonomously executing smart contract.

The idea of a computational contract is just to say: have something where all of the conditions of the contract are represented in computational form, so in principle it's automatic to execute the contract. And I think that will surely be the future; the idea of legal contracts written in English or legalese or whatever, where people have to argue about what goes on, surely is not. We have a much more streamlined process if everything can be represented computationally and the computers can kind of decide what to do.
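The idea being described can be sketched in ordinary code. This is a hypothetical illustration, not Wolfram Language's actual computational-contract machinery: a contract is just a set of machine-checkable conditions over some shared state, plus an action that executes automatically once they all hold.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# Hypothetical sketch of a "computational contract": every condition is a
# machine-checkable predicate over shared state, and the contract's action
# runs automatically once all conditions hold -- no arguing over legalese.

@dataclass
class ComputationalContract:
    conditions: List[Callable[[Dict], bool]]
    action: Callable[[Dict], str]
    executed: bool = False

    def try_execute(self, state: Dict) -> Optional[str]:
        """Run the action iff every condition holds and it hasn't run yet."""
        if not self.executed and all(cond(state) for cond in self.conditions):
            self.executed = True
            return self.action(state)
        return None

# Example: pay the seller once the goods are both delivered and inspected.
contract = ComputationalContract(
    conditions=[
        lambda s: s.get("delivered", False),
        lambda s: s.get("inspection_passed", False),
    ],
    action=lambda s: f"transfer {s['price']} to {s['seller']}",
)

# Delivered but not yet inspected: nothing happens.
print(contract.try_execute({"delivered": True, "price": 100, "seller": "alice"}))
# Both conditions satisfied: the contract executes itself.
print(contract.try_execute(
    {"delivered": True, "inspection_passed": True, "price": 100, "seller": "alice"}
))
```

The point of the sketch is only that once the conditions are computational, execution needs no human interpretation; real smart-contract systems add persistence, signatures, and dispute handling on top.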

I mean, ironically enough, old Gottfried Leibniz back in the 1600s was saying exactly the same thing, but he had, his pinnacle of technical achievement was this brass four-function mechanical calculator thing that never really worked properly, actually. And so he was like 300 years too early for that idea. But now that idea is pretty realistic, I think.

And you ask how much more difficult it is than what we have now in Wolfram Language to express what I call a symbolic discourse language: being able to express sort of everything in the world in kind of computational symbolic form. I think it is absolutely within reach. I mean, I don't know, maybe I'm just too much of an optimist, but I think it's a limited number of years to have a pretty well-built-out version of that, one that will allow one to encode the kinds of things that are relevant to typical legal contracts and these kinds of things.

- The idea of symbolic discourse language, can you try to define the scope of what it is? - So we're having a conversation, it's a natural language. Can we have a representation of the sort of actionable parts of that conversation in a precise computable form so that a computer could go do it?

- And not just contracts, but really sort of some of the things we think of as common sense, essentially, even just basic notions of human life. - Well, I mean, things like, you know, I'm getting hungry and want to eat something, right? That's something we don't have a representation of in Wolfram Language right now. If I say I'm eating blueberries and raspberries and things like that, and I'm eating this amount of them, we know all about those kinds of fruits and plants and nutrition content and all that kind of thing, but the I-want-to-eat-them part of it is not covered yet.
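The gap being described can be made concrete with a toy sketch (hypothetical names, not an actual Wolfram Language design): the facts about foods are easy to represent as structured data, but the "I want to eat them" part needs an explicit symbolic representation of an attitude like desire that a program can inspect and act on.

```python
from dataclasses import dataclass

# Toy sketch of a "symbolic discourse language": represent not just entities
# (which knowledge bases already cover) but also attitudes like wanting,
# as nested symbolic expressions a program can inspect.

@dataclass(frozen=True)
class Entity:
    name: str            # e.g. "raspberry" -- the knowledge-base part

@dataclass(frozen=True)
class Desire:
    agent: str           # who wants it
    act: str             # e.g. "eat" -- the part knowledge bases don't cover
    obj: Entity

def render(d: Desire) -> str:
    """Flatten the nested symbolic expression into a readable head[args] form."""
    return f"Desire[{d.agent}, {d.act}, {d.obj.name}]"

print(render(Desire("Stephen", "eat", Entity("raspberry"))))
# Desire[Stephen, eat, raspberry]
```

The `head[args]` rendering loosely mimics how symbolic expressions look in Wolfram Language, but the structures here are purely illustrative.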

- And you need to do that in order to have a complete symbolic discourse language, to be able to have a natural language conversation. - Right, right, to be able to express the kinds of things that say, you know, if it's a legal contract, it's, you know, the party's desire to have this and that.

And that's, you know, that's a thing like, I want to eat a raspberry or something. - But isn't this, you know, you said it's centuries old, this dream. - Yes. - But it's also, more near term, the dream of Turing in formulating the Turing test.

- Yes. - So, do you hope, do you think that's the ultimate test of creating something special? 'Cause we said-- - I don't know, I think by special, look, if the test is, does it walk and talk like a human? Well, that's just the talking like a human. But the answer is, it's an okay test.

If you say, is it a test of intelligence? You know, people have attached the Wolfram Alpha API to, you know, Turing test bots, and those bots just lose immediately, 'cause all you have to do is ask it five questions that, you know, are about really obscure, weird pieces of knowledge, and it just trots them right out.

And you say, that's not a human. Right, it's a different thing. It's achieving a different-- - Right now, but I would argue not. I would argue it's not a different thing. It's actually, Wolfram Alpha, or Wolfram Language, I think, is legitimately trying to solve the intent of the Turing test.

- Perhaps the intent. Yeah, perhaps the intent. I mean, it's actually kind of fun. You know, Alan Turing had tried to work out, he thought about taking Encyclopedia Britannica and, you know, making it computational in some way. And he estimated how much work it would be. And actually, I have to say, he was a bit more pessimistic than the reality.

We did it more efficiently than that. - But to him, that represented-- - So, I mean, he was on the same-- - It's a mighty mental task. - Yeah, right, he had the same idea. I mean, we were able to do it more efficiently 'cause we had layers of automation that he, I think, couldn't have anticipated; it's hard to imagine those layers of abstraction that end up being built up.

- But to him, it represented, like, an impossible task, essentially. - Well, he thought it was difficult. He thought it was, you know, maybe if he'd lived another 50 years, he would've been able to do it. I don't know. (upbeat music)